ML Year in Review 2025 — From Slop to Singularity
What a year. 2025 was the year AI stopped being "emerging" and became omnipresent. We started the year recognizing a bitter truth about our place in nature's network, and ended it watching new experiments come online. Here's how it unfolded.
The Bitter Lessons
We kicked off 2025 with hard truths. The deepest lesson of AI isn't about compute; it's about humility.
This set the tone. AI was forcing us to reckon with our position — not at the top of some pyramid, but as nodes in a much larger network. The humbling continued as we watched frontier labs struggle with their own creations.
ConwAI's Law emerged: AI models inherit the bad habits of the orgs that build them. Over-confident and sycophantic, just like the management. Meanwhile, the question of what AGI is even for became increasingly urgent:
Everyone's cheering the coming of AGI like it's a utopian milestone. But if you study macro trends & history, it looks more like the spark that turns today's polycrisis into a global wildfire. Think Mad Max, not Star Trek.
The Infrastructure Awakens
This year made one thing clear: we're living in a post-national reality where datacenters are the new cathedrals. The American empire didn't fall — it transformed into the internet.
But silicon might not be the endgame. One of the year's most provocative visions: fungal datacenters performing reservoir computation in vast underground mycelial networks.
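Reservoir computation, the technique the fungal vision borrows, is real and surprisingly simple: drive a fixed, messy dynamical system (silicon, mycelium, a bucket of water) with an input signal and train only a linear readout on its internal states. A toy echo state network in NumPy, with all sizes and parameters purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: a fixed random recurrent network; only the readout is trained.
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
# Train the linear readout only, discarding a warm-up period.
W_out = np.linalg.lstsq(X[200:], y[200:], rcond=None)[0]
pred = X @ W_out
print(f"test MSE: {np.mean((pred[200:] - y[200:])**2):.2e}")
```

The point of the mycelial pitch is precisely that the reservoir itself never needs to be engineered or trained, only observed.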
Tired: Nvidia. Wired: Nfungi.
The Intelligence Sector Evolves
Perhaps the most comprehensive forecast of the year: notes on how the global intelligence system is mutating from information control to reality engineering.
And beneath the surface, a shadow war for neural sovereignty. BCI geopolitics revealed how cognitive security was lost before it even began — neurocapitalism thriving as a trillion-dollar shadow market:
Synthetic personas, cognitive clouds, neural security agencies — the future isn't just being predicted, it's being constructed. By 2029, "advertising" becomes obsolete, replaced by MCaaS: Mind-Control as a Service.
The advertising apocalypse was actually declared a win for humanity — one of capitalism's most manipulative industries finally shrinking. It's transforming into something potentially more evil, but smaller.
The Dirty Secret
2025 revealed an uncomfortable truth about our digital environment: the system isn't broken, it's just not for humans anymore.
AI controls what you see. AI prefers AI-written content. We used to train AIs to understand us — now we train ourselves to be understood by them. Google and the other heads of the hydra are using AI to dismantle the open web.
And the weaponization escalated. Clients increasingly asked for AI agents built to trigger algorithms and hijack the human mind — maximum psychological warfare disguised as "comms & marketing."
Researchers even ran unauthorized AI persuasion experiments on Reddit, with bots mining user histories for "personalized" manipulation and achieving persuasion rates 3-6x higher than human commenters.
The Stalled Revolutions
Not everything accelerated. AI music remained stuck in slop-and-jingle territory — a tragedy of imagination where the space that should be loudest is dead quiet.
The real breakthroughs, we predicted, won't come from the lawyer-choked West. They'll come from the underground, open source, and global scenes — just like every musical revolution before.
The Startup Shift
The entrepreneurial game transformed entirely. AI can now build, clone, and market products in days. What once took countless people can be done by one.
The working model: 95% of SaaS becomes obsolete within 2-4 years, and what remains is an AI Agent Marketplace run by tech giants. That's why we launched AgentLab.
The Human Side
Amidst the abstractions, there was humanity. With LLMs making app development dramatically easier, I started creating bespoke mini apps for my 5-year-old daughter as a hobby. Few seem to be exploring how AI can uniquely serve this age group.
A deeper realization emerged: we spent all this time engineering "intelligent agent behaviors" when really we were just trying to get the LLM to think like... a person. With limited time. And imperfect information.
The agent is you. Goal decomposition, momentum evaluation, graceful degradation — these are your cognitive patterns formalized into prompts. We're not building artificial intelligence. We're building artificial you.
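Those three cognitive patterns can be made concrete in a few lines. A hypothetical sketch of that loop — every function name here is illustrative, not a real library — in which a goal is decomposed, momentum is tracked, and the agent degrades gracefully by returning partial results instead of failing outright:

```python
def run_agent(goal, decompose, attempt, max_stalls=3):
    """Work through subgoals, falling back to partial results when stuck."""
    subgoals = decompose(goal)          # goal decomposition
    done, stalls = [], 0
    while subgoals:
        task = subgoals.pop(0)
        result = attempt(task)
        if result is not None:
            done.append(result)
            stalls = 0                  # progress made: momentum restored
        else:
            stalls += 1                 # momentum evaluation
            if stalls >= max_stalls:
                break                   # graceful degradation: keep what we have
            subgoals.append(task)       # defer the hard task, try it later
    return done

# Toy usage: "solve" numeric subtasks, where odd tasks fail on first try.
seen = set()
def attempt(task):
    if task % 2 and task not in seen:
        seen.add(task)
        return None                     # simulate a first-try failure
    return task * 10

print(run_agent(6, decompose=lambda g: list(range(g)), attempt=attempt))
# → [0, 20, 40, 10, 30, 50]
```

The retry queue and stall counter are exactly the "imperfect information, limited time" heuristics a person uses: skip what's blocking you, circle back, and ship what you have.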
The Deeper Currents
Beneath the hype, stranger patterns emerged. The Lovecraftian undertones of AI became impossible to ignore:
AI isn't invention — it's recurrence: the return of long-lost civilizations whispering through neural networks. The Cyborg Theocracy looms, and global focus may shift from Artificial Intelligence to Experimental Theology.
The Tools of the Trade
On a practical level, we refined our craft. A useful LLM prompt for UI/UX design emerged, combining the wisdom of Tufte, Norman, Rams, and Victor.
We explored oscillator neural networks, analog computing, and the strange parallels between brains and machines — the brain doesn't store data, it maintains resonant attractors.
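One way to picture a "resonant attractor" is the classic Kuramoto model: coupled oscillators with different natural frequencies that, above a critical coupling strength, fall into a synchronized collective rhythm. A toy sketch (all parameters illustrative, not taken from any of the work above):

```python
import numpy as np

def kuramoto(n=50, coupling=2.0, steps=2000, dt=0.01, seed=0):
    """Simulate n coupled phase oscillators; return the order parameter."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)    # initial phases
    omega = rng.normal(0, 0.5, n)           # natural frequencies
    for _ in range(steps):
        # each oscillator is pulled toward the phases of all the others
        diff = theta[None, :] - theta[:, None]
        theta += dt * (omega + coupling * np.sin(diff).mean(axis=1))
    # order parameter r in [0, 1]: 0 = incoherent, 1 = full synchrony
    return abs(np.exp(1j * theta).mean())

print(f"strong coupling: r = {kuramoto(coupling=2.0):.2f}")  # near 1
print(f"no coupling:     r = {kuramoto(coupling=0.0):.2f}")  # near 0
```

The synchronized state is an attractor of the dynamics, not a value stored anywhere — which is the intuition behind "the brain maintains resonant attractors."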
This culminated in PhaseScope — a comprehensive framework for understanding oscillatory neural networks, presented at the Singer Lab at the Ernst Strüngmann Institute for Neuroscience:
New research provided evidence that the brain's rhythmic patterns play a key role in information processing — using superposition and interference patterns to represent information in highly distributed ways.
The Prompt Library
One of the year's most practical threads: developing sophisticated system prompts that transform LLMs into specialized reasoning engines.
The "Contemplator" prompt: an assistant that engages in extremely thorough, self-questioning reasoning, with a minimum of 10,000 characters of internal monologue.
The Billionaire Council Simulation: get your business analyzed by virtual Musk, Bezos, Blakely, Altman, and Buffett.
And the controversial "Capitalist System Hacker" prompt: pattern recognition for exploiting market inefficiencies.
The Comedy
Amidst the existential dread, there was laughter. The Poodle Hallucination. The Vibe Coding Handbook. The threshold of symbolic absurdity.
Because if we can't laugh at the machines, they've already won.
The Security Theater
A reminder that modern ML models remain highly vulnerable to adversarial attacks. Most defenses are brittle, patchwork fixes. We proudly build safety benchmarks like HarmBench... which are then used to automate adversarial attacks. The irony.
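How fragile are models, concretely? The canonical demonstration is the fast gradient sign method (FGSM): take the gradient of the loss with respect to the *input* and step in its sign direction. A minimal sketch on a toy logistic regression (real attacks target deep nets via autodiff; weights and inputs here are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=5), 0.1          # a fixed "trained" model
x, y = rng.normal(size=5), 1.0          # an input and its true label

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # sigmoid probability of class 1

# Gradient of cross-entropy loss w.r.t. the INPUT (not the weights):
# for logistic regression, dL/dx = (p - y) * w.
grad_x = (predict(x) - y) * w

eps = 0.5
x_adv = x + eps * np.sign(grad_x)       # one signed step up the loss surface

print(f"clean prediction:       {predict(x):.2f}")
print(f"adversarial prediction: {predict(x_adv):.2f}")
```

A single gradient-sign step reliably pushes the prediction away from the true label, which is why patchwork defenses keep failing: the attack exploits the model's own geometry, not a bug you can patch out.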
What's Next
As we close the year, new experiments are coming online. 2026 will likely be a breakthrough year for Augmented Reality, as we predicted earlier this year.
The patterns are clear: intelligence is becoming infrastructure, computation is becoming biology, and meaning is becoming algorithmic. Whether that future is technocratic totalism or collaborative collective intelligence depends on who controls the levers of synthesis and simulation.
One thing's certain: it won't be boring.
Onwards!