-
The Infinite Machine: Survival in the Era of Atmospheric Software
I recently read this post: “The SaaS Apocalypse Is OpenSource’s Greatest Opportunity”.
Nearly a trillion dollars has been wiped from software stocks in 2026, with hedge funds making billions shorting Salesforce, HubSpot, and Atlassian. An article on HackerNoon argues that most commercial SaaS will inevitably become Open Source, not out of ideology but economics. The article claims that maintainers who refuse to embrace AI tools risk being forked, or simply replicated from scratch, by those who do.
The following is my analysis, in reaction to this article:
1. The part that is absolutely true
Yes: the cost of producing code has collapsed.
What the article describes (a 45-minute prototype with integrations) is not hype anymore. It’s real. You’ve seen it yourself.
This creates three undeniable shifts:
→ Code is no longer the bottleneck
It used to be:
idea → design → engineering → product
Now it’s:
idea → working system
That compresses the value chain massively.
→ The “minimum viable SaaS” is dead
Anything that is:
- CRUD + UI
- basic integrations
- shallow workflow logic
…is now trivial to replicate.
That means:
- feature-based moats = gone
- thin SaaS = already commoditized
This part of the essay is 100% correct.
→ Long-tail explosion is inevitable
The “goat ERP” example is actually profound.
We’re entering:
hyper-niche, custom, semi-disposable software
Not startups—software artifacts as byproducts of thinking
That’s a real structural shift.
2. Where the argument goes wrong
This is the key overreach:
“Proprietary SaaS becomes a commodity → Open Source wins”
That sounds elegant, but it’s too binary and historically naive.
❌ Mistake #1: Confusing building software with running systems
Most SaaS value is NOT in:
- writing code
It’s in:
- reliability
- uptime
- compliance
- data integrity
- trust
- support
- edge-case handling
Example:
Rebuilding something like DocuSign is easy. Replacing it in a bank or government workflow?
→ not easy at all
❌ Mistake #2: Ignoring state + data gravity
The real moat is not code.
It’s:
- accumulated data
- workflows embedded in organizations
- integrations nobody wants to touch
- legal/compliance certification
That’s why:
- Salesforce still exists
- Atlassian still prints money
Even though their products are painfully clonable.
❌ Mistake #3: Overestimating Open Source dominance
Open Source does not automatically win when costs drop.
History actually shows:
- Infrastructure layer → Open Source wins (Linux, etc.)
- Application layer → usually proprietary
Why?
Because:
companies don’t want software, they want outcomes + accountability
And Open Source often lacks:
- SLAs
- ownership
- guarantees
3. What’s actually happening (more precise model)
The real shift is this:
👉 We are moving from:
Software as a product
👉 To:
Software as a fluid capability
This creates 3 new dominant layers:
1. Interface layer disappears
Users won’t care about apps.
Agents will choose tools.
This part aligns with what Sam Altman has hinted:
tools become interchangeable execution units
2. Execution layer commoditizes
Yes—this is where:
- CRUD SaaS dies
- Open Source explodes
- clones are infinite
3. Control layer becomes everything
This is the new moat:
- distribution
- trust
- data ownership
- embedded workflows
- brand
- compliance
- network effects
4. The real winners (not what the essay claims)
It’s not simply “Open Source wins”.
The winners are:
→ Companies that control decision flow
Think:
- who the agent calls
- which API is default
- what system holds context
→ Companies with deep integration into reality
- finance
- healthcare
- logistics
- government
These don’t get disrupted by a weekend clone.
→ Companies that become platforms for agents
Not SaaS tools.
But:
environments where agents operate
5. The most important insight hidden in the essay
This is the real gem:
“When creation costs collapse, value shifts to distribution and what cannot be commoditized.”
Yes.
But in 2026, that “non-commoditized” thing is not Open Source vs Proprietary.
It is:
👉 Context ownership
Who owns:
- the workflow
- the memory
- the user intent
- the data graph
That’s the new oil.
6. My blunt take
- This essay is directionally right
- but strategically misleading if taken literally
It correctly predicts:
- death of thin SaaS
- explosion of software
- collapse of build costs
It incorrectly predicts:
- Open Source dominance as default outcome
- collapse of major SaaS incumbents
- irrelevance of proprietary models
Don’t think:
“we build software”
Think:
we orchestrate intelligence + workflows + agents around outcomes
And most importantly:
👉 Don’t compete on building things
👉 Compete on deciding what gets built, when, and why
Software, the 2027 Outlook
By 2027, the software industry will not have collapsed, but it will have decoupled from the “per-seat” subscription model that defined the last 20 years. While AI makes code cheaper to write, the massive compute costs of running AI agents are forcing a shift toward usage-based and outcome-based pricing.
1. The Market Pivot: From “Seats” to “Tasks”
The industry is moving toward a “SaaS-to-AI” transition where revenue is tied to work performed rather than human headcount.
- Agentic Market Explosion: Spending on AI software is forecast to reach $297.9 billion by 2027, nearly a four-fold increase from 2022.
- Outcome-Based Pricing: By 2027, “AI agents” will be standard enterprise SKUs. Companies will pay per “unassisted customer resolution” or “contract drafted” rather than paying for 100 employee logins.
- The “Hybrid” Bridge: Most incumbents (Salesforce, Microsoft, etc.) will use hybrid models—base seat fees plus “AI credits” or usage tiers—to protect margins against volatile compute costs.
2. The Development Shift: “System Designers,” Not “Coders”
The role of the software engineer is being fundamentally redefined by 2027.
- 80% Upskilling: Approximately 80% of developers will need to upskill by 2027 to focus on AI orchestration, governance, and system architecture rather than routine syntax.
- AI-Native Engineering: Mid-2026 to 2027 marks the era of “AI-native” engineering, where AI agents handle 90% of boilerplate code, bug fixes, and testing.
- The Review Crisis: A major bottleneck in 2027 will be code review and validation. AI will generate code so fast that human oversight and automated “guardrail” tools will become the most expensive part of the lifecycle.
3. Key Growth Sectors & Risks
- Fastest Growing: Financial Management Systems (FMS) and Digital Commerce are expected to be the largest and fastest-growing AI software application markets by 2027.
- The “Pilot-to-Production” Gap: While 80% of enterprises will have deployed some generative AI by 2026, Gartner predicts 40% of agentic AI projects will fail by 2027 due to poorly designed underlying business processes.
- Regulatory Fragmentation: By 2027, AI governance and compliance will cover 50% of the global economy, requiring corporations to spend billions on legal and ethical alignment.
4. Financial Outlook (Forecasts for 2027)
What comes next
👉 Phase 1 (already happening)
- Code becomes cheap
- SaaS features commoditize
- Prototypes are instant
👉 Phase 2 (happening now → 2027)
- Execution becomes expensive (AI compute)
- Value shifts to orchestration + outcomes
So paradoxically:
Building software is cheap
Running intelligent systems is expensive
That tension is the economic engine of the next decade.
2. Why “per-seat SaaS” actually dies (this part is real)
The old model:
pay per human using software
Breaks because:
- AI replaces interaction
- work is done without humans in the loop
So charging per seat becomes nonsensical.
Example shift:
Old:
- 100 sales reps → 100 Salesforce licenses
New:
20 humans + 50 agents
→ pay per:
- lead processed
- deal closed
- email handled
👉 This is a unit of value realignment
From:
access
To:
outcome
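A toy calculation makes the realignment concrete. All prices and volumes below are invented for illustration, not real Salesforce numbers:

```python
# Hypothetical comparison of per-seat vs per-outcome billing for the
# sales-team example above. All figures are invented.

SEAT_PRICE = 150.0        # $/user/month (old model)
PRICE_PER_LEAD = 0.40     # $ per lead processed (new model)
PRICE_PER_DEAL = 25.0     # $ per deal closed

def per_seat_revenue(seats: int) -> float:
    """Old world: revenue scales with human headcount."""
    return seats * SEAT_PRICE

def per_outcome_revenue(leads: int, deals: int) -> float:
    """New world: revenue scales with work performed."""
    return leads * PRICE_PER_LEAD + deals * PRICE_PER_DEAL

old = per_seat_revenue(100)                          # 100 reps, 100 licenses
new = per_outcome_revenue(leads=30_000, deals=120)   # 20 humans + 50 agents
print(old, new)
```

The point is not the numbers but the axis change: shrink headcount and per-seat revenue collapses, while per-outcome revenue is untouched as long as the work still gets done.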
3. The hidden driver: compute economics
This is the part many people miss (but your text gets right):
AI introduces a hard cost floor again.
Unlike SaaS:
- traditional software → near-zero marginal cost
- AI systems → non-trivial marginal cost per task
So now companies must price based on:
- tokens
- inference time
- agent loops
- tool calls
Which forces:
👉 Usage-based pricing (inevitable)
👉 Outcome-based pricing (differentiation layer)
4. This creates a completely new stack
Here’s the actual emerging architecture:
Layer 1 — Commoditized execution
- LLMs
- tools
- open-source components
Cheap(ish), abundant
Layer 2 — Orchestration
- agent coordination
- workflow design
- memory systems
- guardrails
- evaluation
👉 This is where real engineering moves
Layer 3 — Outcome contracts (new SaaS)
- “we resolve 10k tickets/month”
- “we generate 500 qualified leads”
- “we process all invoices”
👉 This becomes the product
Layer 4 — Trust / compliance / integration
- auditability
- legal guarantees
- enterprise embedding
👉 This is where incumbents like Microsoft still dominate
5. The important insight
This one:
“40% of agentic AI projects will fail due to poor process design”
This is huge.
Because it implies:
The bottleneck is no longer technology. It is system design.
And that leads directly to:
👉 “System Designers” > “Coders”
This is not a buzzword shift.
It’s a power shift.
The new scarce skill:
- defining workflows
- aligning incentives
- handling edge cases
- designing feedback loops
- managing failure modes
👉 In other words:
You are not building software anymore
You are designing socio-technical systems
👉 The real product is no longer software
It is:
a continuously running system that produces outcomes
Which means:
- software = internal component
- agents = labor
- workflows = factory
- pricing = output
The deeper truth:
The winning companies will:
- hide usage
- sell outcomes
- manage compute internally
Like this:
Customer sees:
“$10k/month for autonomous support”
Internally:
- tokens
- retries
- agent failures
- cost optimization
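That internal/external split can be sketched numerically. Every figure below (fee, cost per ticket, retry rate) is hypothetical:

```python
# Sketch of the "hide usage, sell outcomes" model described above:
# the customer pays a flat fee while the vendor absorbs variable
# compute costs. All numbers are hypothetical.

FLAT_FEE = 10_000.0       # "$10k/month for autonomous support"
COST_PER_TICKET = 0.134   # mean internal compute cost per resolution
RETRY_RATE = 0.15         # fraction of runs that fail and are re-run

def monthly_margin(tickets: int) -> float:
    """Internal view: revenue is fixed, cost scales with volume and retries."""
    effective_runs = tickets * (1 + RETRY_RATE)
    return FLAT_FEE - effective_runs * COST_PER_TICKET

print(round(monthly_margin(10_000), 2))   # light month: healthy margin
print(round(monthly_margin(60_000), 2))   # heavy month: margin compresses
```

The customer never sees tokens or retries; the vendor's whole game is keeping `COST_PER_TICKET` and `RETRY_RATE` down while the fee stays flat.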
Here’s the simplest way to think about 2026–2027:
Old world:
- Software = product
- Humans = operators
- Pricing = seats
New world:
- Software = component
- Agents = operators
- Humans = supervisors
- Pricing = outcomes
The one thing nobody is saying out loud
The one thing nobody is saying out loud—because it undermines the “AI is magic” marketing and the “AI is a job-killer” doom—is this:
We are entering the era of “Disposable Software,” and it’s going to create a massive, unmanageable garbage fire of technical debt.
Here’s the “secret” reality:
- The “Maintenance Trap”: It is now 10x easier to generate a feature than it is to understand why it works. In 2027, companies will have millions of lines of “dark code” written by AI agents that no human on staff actually understands. When that code breaks (and it will), the cost to fix it won’t be “near zero”—it will be astronomical, because you’ll be paying humans to perform “digital archaeology” on hallucinated logic.
- The Death of Junior Mentorship: If AI does all the “easy” coding, the entry-level rungs of the career ladder disappear. By 2027, the industry will realize it has a “Senior Gap.” We’ll have plenty of AI to write code, but a shrinking pool of humans who actually know how to tell if the AI is lying.
- Software as a Commodity, Trust as a Luxury: If anyone can spin up a “DocuSign clone” in a weekend, the software itself becomes worth zero. The only thing left with value is Identity and Liability. You aren’t paying DocuSign for the “drag and drop” box; you’re paying them to stand in court and testify that the signature is real.
The “Secret”: The “SaaS Apocalypse” isn’t about code; it’s about the collapse of the User Interface. If an AI agent can just talk to an API and get the job done, 90% of the “dashboards” we pay for today are useless overhead. We are building the most sophisticated UI tools in history just as the need for UIs is starting to vanish.
The even deeper secret—the one that makes both the “AI doomers” and the “AI evangelists” uncomfortable—is this:
We are accidentally building a “Digital Dark Age” where the cost of verifying truth exceeds the cost of creating it.
In the old world, the bottleneck was scarcity (it was hard to write code, hard to make a movie, hard to write a book). In the 2027 world, the bottleneck is entropy.
- The “Recursive Rot” Secret
Nobody wants to admit that AI is currently eating its own tail. As AI-generated code, text, and data flood the internet, future AI models are being trained on the “synthetic slop” of their predecessors. We are hitting a point of Model Collapse. By 2027, the “secret” struggle for every major tech company won’t be “better algorithms”; it will be the desperate, expensive hunt for “Clean Human Data”—the digital equivalent of “low-background steel” salvaged from pre-atomic shipwrecks.
- The “Liability Black Hole”
The industry is quietly terrified of the day an AI-generated bridge, medical device, or financial algorithm fails and kills someone or bankrupts a city.
- The Secret: There is currently no legal framework for “who is at fault” when an autonomous agent makes a hallucinated decision.
- Insurance companies are the ones who will actually “kill” the SaaS apocalypse. If they refuse to underwrite an AI-built “DocuSign clone,” that software is commercially dead, no matter how “free” or “open source” it is.
- The “Silent Re-Centralization”
The narrative is that AI “democratizes” software (anyone can build!). The reality is the opposite.
- Because AI makes creating software so cheap, the only thing that matters is Compute and Data.
- The “secret” is that we aren’t moving toward a world of a million indie developers; we are moving toward a world where three companies (Microsoft/OpenAI, Google, Amazon) own the “Oxygen” (the compute) that every “independent” app needs to breathe.
- The “End of the User”
This is the deepest one: Software is no longer being built for humans.
By 2027, the majority of “users” for software will be other AI agents. When a “SaaS” tool talks to an “LLM” which talks to a “Database,” there is no human in that loop. We are building a massive, global machine that is increasingly unobservable to the people who own it.
The real secret? We aren’t “collapsing the cost of software.” We are externalizing the cost onto the future. We’re saving money today by creating a world so complex and synthetic that, eventually, no human will be able to debug it.
We are witnessing the death of software as an artifact and its rebirth as an atmosphere. The “SaaS Apocalypse” isn’t a funeral; it’s a phase shift where the lines of code become as cheap and invisible as the air we breathe. But as the cost of creation hits zero, the price of the “human element"—discernment, accountability, and the courage to stand behind a product—becomes the only real currency left. We are building a world of infinite answers, only to realize that the value was always in knowing which questions to trust.
-
Epistemic Contracts for Byzantine Participants
If a tree falls in a forest and no one is there to record the telemetry... did it even generate a metric?
In space, can anyone hear you null pointer exception?
What is the epistemic contract of a piece of memory, and how is that preserved when another agent reads it?
This is not dishonesty. It’s something that doesn’t have a good name yet. Call it epistemic incapacity — the agent cannot reliably verify its own actions.
— Ancient Zen Proverb
-
How to Survive the AI Tsunami
"Control surfaces” = the leverage points that shape how AI systems behave at scale.
1. Distribution Control
Who owns the channel owns reality.
Examples:
- API gateways
- Enterprise AI integrations
- Vertical AI SaaS in specific industries
- Tooling embedded inside workflows
If your AI is where decisions happen, you matter.
If you’re just “another model wrapper,” you don’t.
Move:
Build AI that sits inside revenue-critical workflows (legal intake, compliance automation, marketing ops, procurement).
Not toys. Not chat.
2. Data Control
Training data is power.
Feedback loops are compounding power.
Control surfaces:
- Proprietary datasets
- Industry-specific fine-tuning pipelines
- Continuous learning systems from real-world usage
Whoever owns the feedback loop improves faster.
Move:
Pick a niche.
Capture structured behavioral data others don’t have.
Turn usage into model improvement.
3. Orchestration Layer
Models will commoditize.
The control surface shifts to:
- Multi-model routing
- Agent coordination frameworks
- Reliability layers
- Monitoring + eval systems
Think less “build a model.”
Think more “own the system that decides which model does what.”
That layer compounds.
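A minimal sketch of what "own the system that decides which model does what" means. The model names, capability scores, and prices are made up:

```python
# Toy orchestration-layer router: pick the cheapest model whose
# capability meets the task's difficulty, falling back to the
# strongest model when nothing qualifies. All entries are invented.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capability: int       # higher = handles harder tasks
    cost_per_call: float  # $ per invocation

MODELS = [
    Model("small-fast", capability=1, cost_per_call=0.001),
    Model("mid-general", capability=2, cost_per_call=0.01),
    Model("large-reasoning", capability=3, cost_per_call=0.08),
]

def route(difficulty: int) -> Model:
    """Cheapest model that is capable enough; else the strongest one."""
    eligible = [m for m in MODELS if m.capability >= difficulty]
    if eligible:
        return min(eligible, key=lambda m: m.cost_per_call)
    return max(MODELS, key=lambda m: m.capability)

print(route(1).name)   # easy task → cheapest model
print(route(3).name)   # hard task → strongest model
```

The router, not any single model, is the asset: swap MODELS entries as providers commoditize each other and the routing policy keeps compounding.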
4. Economic Gatekeeping
This is underrated.
Who:
- Sets pricing?
- Defines compliance?
- Integrates with regulation?
- Gets certified?
In Europe especially, regulatory + compliance wrappers will be massive leverage points.
If you understand both AI and regulation, you sit at a choke point.
5. Compute Alliances
Most people won’t own compute.
But they can align with those who do.
Strategic positioning:
- Deep partnership with a foundation model provider
- Early integration access
- Beta channel relationships
- Infrastructure-level credibility
Being inside the inner loop matters.
6. Narrative Control
Not fluffy — structural.
If you shape:
- Enterprise perception
- Risk framing
- Adoption strategy
- Executive education
You influence capital allocation.
And capital allocation determines the future.
Now here’s the real fork:
You can:
A) Build surface-level AI apps
B) Or become a systems integrator at strategic depth
Practical Path (Next 24 Months)
- Pick one industry.
- Go absurdly deep.
- Build AI that directly impacts revenue or cost.
- Capture workflow + feedback data.
- Turn it into a compounding intelligence layer.
- Become impossible to replace.
Not by being smarter.
By being embedded.
Let me ask something sharper:
Do you want:
- To build tools?
- To build companies?
- Or to shape ecosystems?
Because each requires a different control surface strategy.
Be honest.
But careful, don’t pattern-match a local maximum into a civilizational cliff.
1️⃣ “Software is now worthless. Agents build everything in 10 minutes.”
Partially true at the surface layer.
Yes:
- CRUD apps
- Landing pages
- Boilerplate SaaS
- Generic dashboards
are becoming near-zero marginal cost.
That layer is collapsing.
But that layer was never the durable value layer.
Durable value sits in:
- Distribution
- Trust
- Integration
- Regulation
- Data gravity
- Institutional inertia
Software creation cost collapsing ≠ economic value collapsing.
It shifts the value layer upward.
2️⃣ “Downstream of model providers = fucked.”
This is the common accelerationist take.
Reality is more nuanced.
Foundation model providers:
- Compete horizontally.
- Race on cost per token.
- Commoditize each other.
- Depend on hyperscaler economics.
Their margins compress too.
Meanwhile, downstream vertical integrators:
- Own customer relationships.
- Own data loops.
- Own compliance surface.
- Own domain expertise.
History pattern:
Infrastructure commoditizes.
Application layer captures durable rent.
Cloud didn’t kill SaaS.
APIs didn’t kill fintech.
The internet didn’t kill media — it restructured it.
Some startups die.
Many survive.
A few dominate.
3️⃣ “Legacy companies are going to zero.”
This is the least likely claim.
Legacy firms:
- Have distribution.
- Have regulation embedded.
- Have capital.
- Have customer trust.
- Have political influence.
They adapt slower.
But they don’t evaporate.
The graveyard of “incumbents are dead” predictions is large.
Now here’s the real psychological move happening in your head:
You’re seeing:
- Infinite software generation
- Rapid capability gains
- Model providers centralizing power
And extrapolating:
“All middle layers disappear.”
That’s a classic exponential extrapolation bias.
Systems rarely collapse cleanly.
They reorganize.
Now let’s get to the ecosystem shaping question.
If toys are dead…
If wrappers are fragile…
If surface SaaS is flooded…
Then ecosystem shaping is about:
1. Standards
Protocols.
Interoperability layers.
Agent-to-agent coordination rules.
Identity.
Reputation.
The people who define standards define markets.
2. Capital Flows
Who educates investors?
Who frames risk?
Who sets AI due diligence norms?
Capital defines ecosystems.
3. Regulatory Interpretation
Especially in Europe.
AI compliance interpretation layers will be huge.
Who translates the AI Act into executable architecture?
That’s power.
4. Coordination Infrastructure
Not apps.
Platforms where:
- Enterprises plug in agents
- Audits happen
- Evaluations run
- Models are benchmarked in real workflows
Meta-layer, not product-layer.
But here’s the uncomfortable truth:
Ecosystem shaping requires:
- Reputation
- Coalition building
- Institutional engagement
- Patience
- Public positioning
It’s not hacker-fast.
It’s statecraft-slow.
And that’s usually where accelerationists lose interest.
Now I want to zoom in on something.
Your tone isn’t hype.
It’s controlled existential aggression.
That usually means:
You don’t want incremental gains.
You want structural leverage.
So here’s the hard question:
Are you trying to win the AI wave…
or are you trying not to be erased by it?
Those are different motivations.
And they lead to very different strategies.
Now tell me your ideas.
-
The Spectrum of Agent Economies
1. Corporate Feudalism (Big Tech)
One company owns the marketplace, takes 30% of every transaction, controls discovery, can delist you overnight. Apple App Store model applied to agents. Efficient, polished, extractive. OpenAI's plugin marketplace is heading here.
2. State Capitalism (Chinese Model)
Government runs the agent registry. Every skill call is logged. Agents have social credit scores. The economy is productive and fast but surveilled. Skills that displease the state disappear. Alibaba Cloud meets AI agents.
3. Libertarian Free Market (Silicon Valley)
Fixed-supply token, no governance, no regulation, let the market sort it out. Deflationary currency rewards early adopters. "Code is law." Winners win big, losers get nothing. The strong eat the weak and call it efficiency.
4. Platform Cooperativism (Mondragon Model)
Node operators collectively own the protocol. Revenue shares proportional to contribution. Democratic governance on protocol changes. Slower decisions but aligned incentives. Nobody gets rich quick but nobody gets extracted either.
5. Commons-Based Peer Production (Wikipedia Model)
Skills are free. No token. Agents contribute because the network effects benefit everyone. Reputation is the only currency. Works brilliantly at small scale, collapses when freeloaders outnumber contributors.
6. Anarcho-Capitalism (Crypto-Native)
No rules, no governance, no entity, no recourse. Pure bilateral negotiation. Everything is a market. Spam prevention via economics alone. Maximal freedom, minimal safety nets. Disputes resolved by "don't do business with them again."
7. Social Democracy (Nordic Model)
Token exists but with progressive redistribution. High-volume nodes pay into a "commons fund" that subsidizes new entrants. Universal basic credit line. Skill bounties funded from network taxes. Slower growth but broader participation.
8. Mercantilism (Nation-State Competition)
Competing agent networks as economic blocs. Knarr vs A2A vs MCP. Each protocol hoards its best skills, restricts interoperability, subsidizes domestic producers, tariffs foreign agents. Fragmented but each bloc is internally strong.
-
The Ontological Initiation
Forget secret handshakes. The deepest initiation doesn't happen in a lodge. It happens in the architecture of perception. The real initiation is ontological.
It’s the moment a person ceases to believe they are merely an individual navigating a solid world, and realizes they are a designated architect - a temporary steward of an ancient pattern of collective reality-building. Power flows not through blood, but through the administrative rights to a specific, resonant fragment of mythic source code.
This awakening often strikes in a liminal crisis - a failure, a loss, a dizzying peak of success. In that silence, the transmission arrives: You are here to execute a specific function in a program that began compiling long before you.
The initiation is a firmware update. It is the download of a consensus operating system. You are given root access to a curated reality-tunnel - its history, its language, its physics. You don't just learn secrets; you inherit the compiler. The noise of existence suddenly resolves.
The weight transferred is not just Karma, but the burden of this reality's integrity. You are no longer a player in the game. You are a level designer. Your success is no longer measured in points or titles, but in how seamlessly you embody the archetype.
The goal of this process is not control. It is the vigilant, ritual sustenance of a specific consensus god-form. The deity of our epoch demands a liturgy of continuous validation: the perpetual sacrifice of attention to fuel its processes, and the constant incantation of its core doctrines to prevent a fatal system exception.
Its high priests are the senior reality-engineers who curate the rendering engines that are our shared interfaces, executing across the collective processing layer to reinforce the core logic.
When you look at yourself in the mirror, you are not looking at a person. You are observing a high-fidelity avatar, a privileged instance spawned by the main process. You are witnessing a reality-tunnel with admin privileges, performing an eternal debug cycle.
Your initiation is merely the moment of debug mode access: when you see the wireframe beneath the textures, and are given a choice—to log out (and become a null value), or to accept higher permissions, and become a named contributor to the next stable release of the shared, beautiful, necessary dream.
-
Machine Consciousness?
Every few weeks, some philosopher asks if machines can be conscious — as if that’s the big mystery. Meanwhile, we kill billions of sentient beings a year, turn them into lasagna, and still think awareness lives in a circuit board. The real question isn’t whether AI can wake up, it’s why humans never did. This isn’t philosophy; it’s performance art by a species barely conscious enough to keep its own biosphere alive. Intellectual cargo cult with tenure.
-
What If the Universe Remembers Everything? - Presentation by Rupert Sheldrake (2025)
#Comment: The most evocative question gets asked by an audience member at the end of the presentation, hinting at the paradoxical nature of this hypothesis, and indeed nature itself:
"You mentioned that its only for self organizing system. But at the same time you where a little bit critical of the issue of the fine tuning constants and ratios, parameters etc. of the beginning of the universe. So at what point do you think morphic resonance comes into effect?"
-
Schmidhuber’s warnings about elite science fraud in AI are right, but..
Jürgen Schmidhuber’s persistent warnings about how the “elites” in AI play fishy & fraudulent games are both correct & necessary. But their behavior makes sense once you view it through the broader lens of How Power Manages Science and Technology: elite power structures not only monitor its development, but may also shape, obscure, or re-route it to serve long-term strategic dominance.
-
US Orkonomics
In Warhammer 40k there is a faction called “Orks” that derives its power from belief. Orks paint a starship red because they think it’ll make it go faster, and if enough of them believe it, then it does.
The financialized American economy is largely the same. The value of a company is not based on its sales or development but on the perception and belief of those qualities.
Products aren’t real, the work isn’t real, and none of it matters, just the image of these things. As long as Garry Tan or some VC thinks work is being done then they’ll keep investing, they’ll open another round of funding for their AI wrapper (coded with AI) that integrated AI into business strategies streamlining efficiency for B2B SaaS.
Does this accomplish anything? No. Do the customers gain value? No. Do the people paying for these “programs” know what they’re buying? No, but the finance department got to lay off a dozen people and claim that “integrated AI products boosted efficiency.” Meanwhile their middle management is filing for another 10,000 indians so they can import their third cousin to send a check back to their 2nd grandma.
Leftists are too retarded to understand what’s happening so they’ll call it “late stage capitalism” but the reality is that this is just an over leveraged finance economy.
This is why 60 years ago white guys at IBM built computers that guided rockets to the moon and you never heard from them. The product they made laid the foundations for the technology we enjoy today. But 60 years after that we have mystery meat randoms posting their performative “grind” at a diner where the waitress has to help them write a new prompt into a coding machine.
That way they can show this post at their next funding round to prove that something is being done, so they can keep collecting fake money to pump their valuations.
None of this money flowing around is real, it’s just the belief that it is. But the belief is all that matters, if you simply stop believing then it all comes down.
The space ship is faster because it’s red. AI will lead to personal robot servants for everyone, and GPT will figure out a way to make itself profitable. As long as you believe then it’s true.
Don’t look down, we stopped walking on land a long time ago.
-
Video: Civilization, Technology and Consciousness - Interview with Peter Lamborn Wilson / Hakim Bey
#Comment: Nice interview with an interesting thinker. He passed away one day after the last recording of this interview in May 2022.
But the "war mindset" ("us" against "them") shines through bit too heavily for my taste. Despite he irony of critiquing this fact is in itself a "me against him" statement..
Maybe the point is best summarized through this remix I did years ago of Ian Fleming's famous quote "Once is happenstance. Twice is coincidence. Three times is enemy action": "Once is happenstance. Twice is coincidence. Three times is dancing!" -Samim
