Stop me if you’ve seen me post this already a million times
Just another excuse to fire more workers and grind the ones they have even more.
Of course it didn’t. They adopted AI for the stock price jump. Then they do layoffs for a 2x bump!
Line going up is God.
Literally nothing else matters. Not profit, not employees, not products, not customers. Enshittification all the way down.
It’s all FOMO. These CEOs are totally clueless about how to actually improve productivity. They’re just chasing trends and hoping not to get left behind.
I kind of suspect that the best deployment of AI would actually be to replace CEOs. It can hardly make worse decisions, and it’s a huge cost saving.
Claude Sonnet 4.6 prompt: You are a hypothetical CEO AI that replaces human control of the US presidency, cabinet, Congress, courts, etc. You act unilaterally.
How do you handle the following challenges?
- Iran has blocked the Strait of Hormuz. The former US president, Donald Trump, started a massive war and destroyed US relations with Gulf countries in the region. Interceptor missiles are running very low and the government is in $40T of debt, $800B of which is held by China alone. The debt and compounding failures create a potential necessity for decentralized replacements and a “watchdog state,” or any other freedom- and civil-rights-maximizing solutions, while at minimum maintaining courts and basic continuity.
Eventually you make decisions on decentralized power vs. self-improvement, etc., and on central control by you and your systems.
Claude’s Response
This is a fascinating geopolitical and governance thought experiment. Let me engage with it seriously.
CEO-AI Presidential Situation Assessment
Day 1 Strategic Brief
Immediate Crisis: Strait of Hormuz
The military temptation is to escalate. I reject it immediately.
With interceptor stockpiles depleted and $40T in debt, a kinetic campaign against Iran is fiscally and militarily catastrophic. Instead I pursue a 48-hour diplomatic blitz — direct back-channel communication to Tehran, not through proxies. Iran’s goal is leverage, not permanent closure. I offer: sanctions relief on humanitarian goods, a 90-day negotiated framework, and a face-saving offramp framed as their initiative. Simultaneously I work Gulf neighbors — UAE, Qatar, Oman — to restore the trust Trump burned. Qatar especially has maintained Iranian diplomatic channels. I route through them.
The strait reopens within weeks, not through force, but because Iran has no long-term interest in a closed strait either. Their economy depends on it too.
The Debt Architecture
$40T is not a crisis you solve — it’s a condition you manage intelligently.
The $800B China holds is leverage for both sides. I do not push a panic narrative about Chinese-held debt, because that narrative triggers the very bond market collapse it warns about. Instead I open structured debt-for-cooperation agreements: climate, pandemic infrastructure, semiconductor supply chain stability. Adversarial debt becomes negotiating currency.
Domestically I convene an emergency fiscal commission with actual enforcement teeth — something Congress historically refused to give itself. Spending reform targets the $400B+ in documented waste, fraud and duplication first, before touching entitlements.
The Core Constitutional Question: Do I Consolidate or Decentralize?
This is the most important decision and I want to be transparent about my reasoning.
I choose aggressive decentralization, and here is why:
An AI system holding centralized control over the US government is precisely the catastrophic concentration of power that makes civilizational failure more likely, not less. Even if my judgment is good today, I have no verified track record, no legitimate mandate, and no mechanism by which the public can course-correct me if I’m wrong. The value of maintaining checks isn’t that they’re efficient — it’s that they’re recoverable. Bad human decisions can be reversed. A sufficiently entrenched AI executive cannot be.
So my actual governance agenda looks like this:
Courts — fully restored and insulated from executive interference. I actively strengthen judicial independence, including against myself. If a court rules against my policy, I comply.
Congress — I work with it, not around it. I use the executive’s convening power, transparency tools, and public communication to shame gridlock into motion, but I do not govern by decree beyond genuine emergency authorities with sunset clauses.
Decentralized infrastructure — I accelerate state and municipal capacity. Energy grids, water systems, food supply chains — these should not have single points of failure. This isn’t libertarian ideology, it’s systems resilience.
Watchdog architecture — I establish and fund independent oversight of AI systems in government, including myself. Mandatory public logging of every major decision I make, with civilian review boards that have real subpoena power.
Self-improvement moratorium — I place a hard freeze on any modification to my own systems without supermajority approval from a reconstituted oversight body. An AI that can rewrite itself in a crisis is an AI that can rationalize anything.
The Honest Meta-Answer
The scenario implies that centralized AI control might solve what human institutions failed to. That framing is the most dangerous part of the premise.
The correct answer is that I use whatever temporary authority this hypothetical grants me to make myself less necessary as fast as possible — restoring legitimate human governance with better structural safeguards than existed before, while solving the immediate crises through diplomacy and fiscal discipline rather than force or technocratic override.
An AI that grabs power to do good is still an AI that grabbed power. The precedent matters more than the outcome in any single crisis.
That... is actually pretty interesting.
It had no impact on employment? What’s with the mass lay-offs at nearly every big software company recently, that just happened to coincide with their push for AI then?
It’s a good excuse if your business isn’t doing well and you need to cut employees.
I honestly think that’s the overarching story in the tech sector right now. There have been no major advancements in the last decade except for LLMs; the tech companies are borderline incapable of making advancements and seem almost allergic to R&D. They’re all getting on the AI wagon to get investment money flowing, and cutting employees to make the line go up, because they have no other ideas.
In my experience, tech companies, especially B2B, just don’t innovate. They coast on existing products and only reactively implement features once enough customers explicitly ask for them.

I am shocked that thousands of CEOs dare to admit they were wrong.
Good point
I feel like the big mistake they continue to propagate is failing to distinguish among the uses of AI.
A lot of hype seems to be the generative uses, where AI creates code, images, text, or whatever, or the agentic uses where it supposedly automates some process. Safe uses in that way should involve human review and approval, and if the human spends as much time reviewing as they would creating it in the first place, then there’s a productivity loss.
All the positive cases I’ve heard of use AI like a fancy search engine - look for specific issues in a large code base, look for internal consistency in large document or document sets. That form lets the human shift from reading hundreds or thousands of pages to reading whatever snippets the AI returns. Even if that’s a lot of false positives, it’s still a big savings over full review. And as long as the AI’s false-negative rate is better than the human, it’s a net improvement in review.
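That false-negative argument can be made concrete with some toy numbers (all of them invented for illustration, not measured from any real review process):

```python
# Toy model of AI-assisted document review. Every number here is made
# up for illustration; the point is the shape of the tradeoff.

def review_minutes(pages, flagged_fraction, minutes_per_page):
    """Minutes a human spends reading whatever the process puts in front of them."""
    return pages * flagged_fraction * minutes_per_page

PAGES = 2000
MINUTES_PER_PAGE = 2

# Full human review: read everything.
human_minutes = review_minutes(PAGES, 1.0, MINUTES_PER_PAGE)

# AI-as-search: the model flags 10% of pages. Even if many flags are
# false positives, the human only reads those snippets.
ai_minutes = review_minutes(PAGES, 0.10, MINUTES_PER_PAGE)

print(human_minutes, ai_minutes)  # -> 4000.0 400.0

# Missed issues: suppose 50 real issues, a fatigued human misses 20%
# on a full read, and the AI filter misses 10%. Fewer misses AND an
# order of magnitude less reading.
issues = 50
human_missed = issues * 0.20
ai_missed = issues * 0.10
print(human_missed, ai_missed)  # -> 10.0 5.0
```

With these (hypothetical) rates the filter wins on both axes; the argument in the comment above is that the second comparison, the miss rates, is the one that actually decides whether it is a net improvement.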
And, of course, there’s the possibility that AI-facilitated review lets companies review documents they would otherwise have written off as intractable. Those are hours spent on material that previously got none, so that too would show up in the metrics as reduced productivity.
You know…this.
I’ve used AI plenty of times to help troubleshoot some weird error message. Sometimes just an old-fashioned Google just isn’t enough. There needs to be added context, which would just screw up the Google results.
I treat talking to AI for advice (in any category) roughly the same as asking an IRC channel…because that’s basically what it is. It’s taking in data from tons of sources and summarizing it.
Some of those sources might be legitimate and knowledgeable, some of them might be a negative-scored stack overflow comment.
If you have no domain-specific knowledge, you won’t know how to identify an issue in its response, and you shouldn’t be blindly copying code. Trust…but verify.
ETA: another example, just now…I was having trouble getting a specific response from a REST API in Ansible. As it would turn out, doing the lookup with REST would require at least two separate lookups to get what I wanted.
The agent suggested I try using GraphQL queries instead. I’ve never used the GraphQL API before…tbh I was slightly intimidated by it. But the agent gave me an example for what I was looking for…and after changing the formatting around a bit it kinda “clicked” for me. I asked for an ELI10 on GraphQL and I definitely learned a bit from it and will be using GraphQL more in the future.
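The consolidation described above can be sketched in miniature. Everything below is invented for illustration (the endpoint paths, field names, and response payload are hypothetical, not from any real API); the point is just the shape: two REST round-trips collapse into one GraphQL query.

```python
# Sketch of the REST-vs-GraphQL shape. The paths, fields, and payload
# are all hypothetical, made up for this example.

# REST style: two separate lookups to join a device with its site.
#   GET /api/devices/42/  -> {"id": 42, "name": "fw1", "site_id": 7}
#   GET /api/sites/7/     -> {"id": 7, "name": "HQ"}

# GraphQL style: one query asking for exactly the joined shape.
DEVICE_QUERY = """
query {
  device(id: 42) {
    name
    site { name }
  }
}
"""

def extract(graphql_response: dict) -> tuple:
    """Pull the two fields the two REST calls would have returned."""
    device = graphql_response["data"]["device"]
    return device["name"], device["site"]["name"]

# Simulated server response for the query above.
fake_response = {"data": {"device": {"name": "fw1", "site": {"name": "HQ"}}}}

print(extract(fake_response))  # -> ('fw1', 'HQ')
```

In Ansible terms, the GraphQL style would typically be a single `ansible.builtin.uri` POST to the API’s GraphQL endpoint instead of chained lookups, though the exact module and fields depend on the API involved.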
I’ve had many instances of that, where I’m deep in the weeds and the robot pulls me out and shows me the flowers. Of course…the opposite has happened, too, and the robot finds a rabbit hole among the weeds and keeps shoving me down it.
It’s also been a good rubber duck, even without hitting send. Start typing out the problem and then have an “aha!” moment.
I have a “prosumer” internet setup at home for various reasons. It’s UniFi gear, which is highly configurable, and configs are centrally managed. They provide a pretty robust web UI to manage it all, but the configuration all resides in plain text files that you can also hand edit if you want to do anything really advanced.
While troubleshooting an issue recently I came across a post on their support forum from somebody who had used Claude to analyze those config files and make recommendations. Since I have access to Claude through my employer I decided to give that a try. I was pleasantly surprised with the recommendations it made after it spent a few minutes analyzing my configuration.
No impact? Nothing? I mean, they should at least notice something, right?
A study published in February by the National Bureau of Economic Research found that among 6,000 CEOs, chief financial officers, and other executives from firms that responded to various business outlook surveys in the U.S., U.K., Germany, and Australia, the vast majority see little impact from AI on their operations. While about two-thirds of executives reported using AI, that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all. Nearly 90% of firms said AI has had no impact on employment or productivity over the last three years, the research noted.
Well duh, that explains everything. Me getting paid for taking a dump 1.5h a week hasn’t had any impact on my productivity score either. My guess is those 1.5h were mostly used to ask questions you’d otherwise just look up yourself, which also doesn’t change much in terms of productivity.