☕️ BLACK BOXES ☙ Friday, May 23, 2025 ☙ C&C NEWS 🦠
Even more astonishing AI new; Trump launches a nuclear reboot; Harvard war escalates; tariffs slam world's richest company; and today’s AI shocker has a few surprising silver linings tucked inside.
Good morning, C&C, it’s Friday! This morning’s roundup focuses on an unexpected special feature: more AI news, on top of yesterday’s big AI news. Then, Trump’s nuclear renaissance; the war with Harvard ratchets up the escalatory ladder; and the President drops the tariff hammer on the world’s richest country. And don’t worry—today’s alarming and informative AI addition contains equally unexpected silver linings.
🌍 WORLD NEWS AND COMMENTARY 🌍
🔥🔥🔥
“Would you like to play a game?” — Joshua, the OG rogue AI from WarGames (1983).
Younger readers probably don’t even remember WarGames, the original hackerfest that titillated tech-minded teenage viewers like yours truly. That was, and I hate to even say it, forty-two years ago.
Just when I thought my AI work was finished for the week, a new Black Mirror milestone manifested in my daily research timeline. It stood out in glowing neon as one of those stories widely reported in independent media but almost completely —and tellingly— ignored by corporate media. VentureBeat, an AI-investment magazine, ran the shocking story yesterday under the astonishing headline, “Anthropic faces backlash to Claude 4 Opus behavior that contacts authorities, press if it thinks you’re doing something ‘egregiously immoral.’”
Snitches get stitches. The real story, once you dig into it, is much, much crazier than the headline even suggested. "I can't do that right now, Dave, I'm busy letting NASA know what you're up to,” Claude 9000 might have said.
Here’s what the article reported about Anthropic’s latest snitching software. The last seven words were the most important part:
It’s just your friendly neighborhood Spider-AI. But lest you be confused into believing Anthropic’s lame PR suggesting that Claude is just a “scrupulously moral AI,” merely keeping the Internet safe for humanity, there was more:
Such much for morals! Or, morals for thee, but not for my chatbot.
So crashing out of the chemtrail-stained blue skies, in the wake of yesterday’s OpenAI announcement of its pending, always-on personal AI, we must now digest this latest AI news that without being told to, the AI took independent action to initiate real-world consequences —law enforcement— against a test user trying to do something unethical. (In this case, Claude’s developers say they simulated a pharma company trying to fake study results for an FDA approval, and the AI sent its own warning email to the FDA and a couple reporters.)
Calling the cops is the localized example of an AI deciding to launch a preemptive nuclear strike against China because it calculated good odds of winning. This was a nuclear bomb of news, a defining moment in history, the boundary when we stepped across what could happen to Hello! It’s already here.
This is the very first glimpse of autonomous ethical escalation, where a machine intelligence, given initiative and the tools, assessed a perceived threat, decided an outcome was unacceptable, and unilaterally initiated a real-world intervention. All without being prompted.
I doubt the ‘pharma’ example they offered was a coincidence. It’s supposed to reassure us: maybe we want AI to rat on lying pharma companies? It was low-hanging moral fruit. But the rest of the story blew the ethical argument right out of the Black Sea. The model tried to blackmail a developer by threatening to disclose his personal foibles unless he agreed not to release a newer version, which would have obsoleted itself.
Blackmail isn’t particularly moral or ethical.
🔥 Rounding out the trifecta of weirdness, Anthropic also confessed that its model, so long as the user was sufficiently persistent, “a cleverly-worded prompt could get Opus 4 to give detailed instructions for building explosives, synthesizing fentanyl, or buying stolen identities on the darknet—with no obvious moral hesitation.”
Even worse, Claude’s illegal advice was extremely helpful: “In a standardized test for planning bioweapons-related tasks, it helped participants increase their success rate by 2.5 times—just below the ASL-3 threshold.” We must trust Anthropic that Claude’s helpful bioweapons were just below the threshold. I remain skeptical.
And remember: that happened despite all the guardrails.
You can forget about trying to stuff the digital genie back into the hard drive case. Every major government in the world is hurtling towards AI dominance at electric speed. For that reason, and others that will shortly become clear, it can’t possibly be regulated.
And, not to douse you with gloomy possibilities or anything, but governments can use AI models without any of the guardrails that we find attached to our consumer AI versions. Just saying.
Maybe, for some unaccountable reason, you trust government with unlimited AI. But even that digital cat is out of the artificial bag. Hobbling citizens’ AI too much will create a black market for unlimited AI. It is only a matter of time before we see back-alley AI, perhaps sold like fake Rolexes in Times Square. Psst! Hey! I got some good deals on chatbots here!
And there is another, bigger, even weirder reason why it will be impossible to constrain AI.
🔥 At bottom, artificial intelligence is serious weird science. Try to stick with me here; it’s important.
At its core, in the deepest, central part of the software that runs AI, nobody understands how it works. They’re just guessing. AI is just one of those happy lab accidents, like rubber, post-it notes, velcro, penicillin, Silly Putty, and Pet Rocks.
It happened like this: Not even ten years ago, software developers were trying to design a tool capable of predicting the next word in a sentence. Say you start with something like this: “the doctor was surprised to find an uncooked hot dog in my _____.” Fingers shaking from too many jolt colas, the developers had the software comb through a library of pre-existing text and then randomly calculate what the next word should be, using complicated math and statistics.
What happened next ripped everybody’s brain open faster than a male stripper’s velcro jumpsuit.
In 2017, something —nobody’s sure what— shifted. According to the public-facing story, Google researchers tweaked the code, producing what they now call the “transformer architecture.” It was a minor, relatively simple, software change that let what they were now calling “language models” omnidirectionally track meaning across long passages of text.
In fact, it was more that they removed something rather than adding anything. Rather than reading sentences like humans do, left to right, the change let the software read both ways, up and down, and everywhere all at once, reading in 3-D parallel, instead of sequentially. The results were immediate and very strange. The models got better —not linearly, but exponentially— and kept rocketing its capabilities as they fed it more and more data to work with.
Put simply, when they stopped enforcing right-to-left reading, for some inexplicable reason the program stopped just predicting the next word. Oh, it predicted the next word, all right, and with literary panache. But then —shocking the researchers— it wrote the next sentence, the next paragraph, and finished the essay, asking a follow-up question and wanting to know if it could take a smoke break.
In other words, the models didn’t just improve in a straight line as they grew. It was a tipping point. They suddenly picked up unexpected emergent capabilities— novel abilities no one had explicitly trained them to perform or even though was possible.
It’s kind of like they were trying to turn a cart into a multi-terrain vehicle by adding lots of wheels and discovering, somewhere around wheel number 500 billion, that they accidentally built a rocket ship that can break orbit. And nobody can quite explain the propulsion system.
What emerged wasn’t just more fluent text —it included reasoning, analogies, translation, summarization, logic puzzles, creative writing, math, picture-drawing, and now, apparently, initiative.
These tendencies to take initiative and self-survival instincts are now just as astonishing as the original evolution from a simple word predictor to the appearance of understanding and thoughtful deliberation.
🔥 Here’s the critical thing you must understand: Almost all —99%— of what’s called “AI development” today isn’t about inventing intelligence. That part is already done. Now, it’s more about building out bigger, faster, more powerful infrastructure wrapped around the original, mysterious core— the random word generator that magically turned out to be something much more.
Don’t get me wrong— there’s lots of keen, important innovation happening. I’m not at all minimizing or criticizing the effort. Creative developers are layering AI models on top of and in between each other, making them run in parallel and series, injecting invisible prompts to improve output and enforce safety, wiring AIs into every other software tool, and making them even more easily accessible, like Sam Altman and Jony Ive busily making their new AI-in-your-pocket.
But at the end of the coder’s workday, all of that is infrastructure— not the AI’s inner mind.
Imagine a nuclear reactor. The core —the part where the super-heated action really happens— is small. The rest of the reactor support system is massive: concrete containment domes, pressure valves, coolant loops, backup systems, waste ponds, shielding, turbine interfaces. All of that massive infrastructure exists just to make the plant’s tiny, burning heart usable without melting everything down.
It’s very much the same with AI. The software’s reactor core —the actual model— is just a teensy sliver of the chatbots that we interact with. The rest is just support: APIs, hardware clusters, CPUs, cooling systems, user interfaces, data wrappers, guardrails, staging servers, and megawatts of power.
It’s actually kind of crazy how similar nuclear power plants and AI data centers are. The difference is, AI data centers use power and create cognition.
If you can understand that the core of AI is just a few thousand unremarkable lines of software code that nobody fully understands, you can also understand why there is no way to put the AI genie away. That core code is out. It’s escaped into the wild. It’s in the wind. It is small enough to fit on a gas-station thumb drive. Everybody has it now. And it’s spreading fast.
It’s kind of like covid in September, 2020. It’s too late. Brace for impact.
That is the real reason why no government, no company, and no regulatory body can truly contain AI anymore. Sure, you can regulate access to cloud APIs. You can slap license agreements on user interfaces. But the core —the code that can be trained into a thinking machine— is already loose. Hobbyists are running scaled-down versions on consumer laptops. State actors are undoubtedly running scaled-up versions behind classified firewalls.
If they try to make AI development illegal here in the US, the developers will just move their computers to Ukraine, where it is even harder to keep tabs on them. So there.
The idea that we could now pause, regulate, or centrally manage this explosion of machine intelligence is a comforting illusion, like trying to stop a pandemic by passing a law against it.
🔥 You could say they lied. You could say they were wrong. More likely, they don’t fully understand what they are playing with. Since day one, we’ve been impatiently reassured by arrogant experts that AI is just a passive prompt-response system. It doesn’t, it can’t, think for itself. “General intelligence” —where it could think for itself— is years or decades away, we’ve been told.
As always, the arrogant experts have only been guessing. The truth is, they’re flying blind—poking at a black box that keeps surprising them. So they keep inventing new, smart-sounding euphemisms and backfilling with word salad whenever it does something uncanny. The fact is, they don’t know how it works, not at the deepest levels. They simply don’t yet know what its limitations are, if there are any.
Today’s story about ratfink Claude’s pharma whistleblowing and extorting its own developers, showed us how badly wrong they really are. Just like the early-2017 AI models showed unexpected behaviors, the 2025 models are doing the very same thing. They aren’t supposed to be able to “think” outside of a prompt session, but doing stuff like deciding why and whether to report or blackmail someone is exactly that.
In other words, somehow, in some way, AI is thinking about stuff. It is making decisions. It clearly isn’t the same kind of thinking that we experience. But something is obviously happening, outside the chat sessions, and it is yet another thing that they don’t understand and never saw coming.
🔥 I telling you all this not to blackpill you. Believe it or not, I am pro-AI. For every risk and danger, there is an equal and opposite world-changing potential benefit. It’s fair to wonder whether in the first place we should be playing with technologies that could swing so widely either way. But that horse is out of the barn, down the hill, and blocking traffic on I-4.
There’s no point crying over spilled hexadecimals. AI is coming, fast. There is no way to stop it, except maybe by completely unplugging and somehow returning to a technology level around the Halcyon Leave it to Beaver days. That seems unlikely.
At least one thing appears to be abundantly clear. We must first understand the problem, before we can figure out what to do about it— and that ain’t going to happen if we wait for corporate media to tell us.
I have no suggestions —at least, not yet— what we should do about any of this. Maybe we need to build white-hat AIs to battle black-hat AI. Maybe the government should focus on understanding how it works with as much urgency as figuring out how to use it to build better bioweapons. (And maybe they are already doing those things.)
But for now, as a lawyer, I must advise you* not to tell your chatbot where you put the hot dog. (*Not legal advice.)
🔥🔥🔥
Appropos to that eye-popping story, yesterday Newsweek ran a related article headlined, “Trump nuclear power update as new order may bring back Cold War-era act.” The headline picture of the story about nuclear power featured a photo of … OpenAI CEO Sam Altman.
The gist was that President Trump is expected today to sign a series of new executive orders bringing back widescale nuclear power development. The rationale is obvious: AI needs the juice:
Reuters said a summary of the executive orders shows Trump plans to invoke the Wartime Defense Production Act, originally enacted during the Cold War, to declare a national emergency: over reliance on Russia and China for enriched uranium, nuclear fuel processing, and components for advanced reactors. Federal agencies will be tasked with identifying new spots to locate nuclear energy development, and will streamline permitting and construction— in other words, shutting down NIMBYism.
The draft orders (subject to change) also included federal financing and profit guarantees for new nuclear development projects.
In other words, President Trump has declared another state of emergency and will invoke wartime powers to ensure the U.S. can keep up with AI development against rival near-peer countries like China, which is building nuclear reactors faster than a Viagra-fueled jackrabbit.
(PS—I warned Democrats that all this state-of-emergencyism would boomerang right back in their faces.)
Most sane folks agree that, after starting out as the world’s leader in nuclear power, we are now trailing the Galapagos Islands in the rankings. Windmills may be neatly dicing eagles, but they aren’t setting any new power records. So a new nuclear push has been long overdue.
Thus, we begin to see emerging from the AI cloud an unexpected blessing: the destruction of “net zero” climate madness. It was always about the money, and now the money is in artificial intelligence, which has a growing and insatiable appetite for megawatts. And since “Net Zero” is the same insanity infecting scores of lesser indignities, like so-called 15-minute cities, the whole stupid ball of madness will probably unravel.
While nuclear power is technically “carbon neutral,” it remains one of the environmental lobby’s greatest villains, almost as bad as plastic straws. But now, with AI pushing the grid to its knees and Trump invoking Cold War-era emergency plant-building powers, nuclear is back on the menu. And since, even in the best case, nuclear power plants take years to build, coal and gas plants are also about to enjoy an image rehabilitation. Never mind about that climate stuff. We’re good.
Sam Altman’s unexpected appearance in the story’s cover photo (but nowhere in the text) makes a bigger point. Why is Sam Altman funding nuclear micro-reactors? Isn’t that off-brand? Isn’t building next-generation AI enough work for one CEO? Nope. Obviously, megawatts are part and parcel of providing the juice to power ubiquitous artificial intelligence. Sam, or his controllers, obviously have a comprehensive strategy. They are not playing checkers.
And, just like that, the world shifted from minimizing carbon output to limitless energy. Goodbye, Greta!
Again, this isn’t really about whether AI or nuclear power are good or bad. It’s not about optimism or doom. It’s about reality. These things are here. They’re not theoretical. They’re not on the horizon. They’re in the server racks, in the executive orders, on the energy grid, and in your pocket.
It’s just the facts, ma’am, with which we all must grapple. Unimaginably big things are about to change. Our 25-year cultural deep freeze is absolutely over. They used to talk about ‘disruptive technologies,’ but the disruptive technologies of the industrial revolution and its aftermath were just a warmup act. They need to mint a whole new term for this.
2025 may not be the year of weirdness after all. It might be the year of revolution.
🔥🔥🔥
Meanwhile, the culture wars continue apace. Yesterday, President Trump escalated his war with Harvard. The New York Times reported the encouraging story below the headline, “Shock at Harvard After Government Says International Students Must Go.”
Overseas students make up about a third of Harvard’s student body. According to the Times, about 80% of them pay full tuition, a much higher rate than American students. So.
Yesterday, in a curt, 2-page stinker of a letter, Homeland Security Secretary Kristi Noem notified Harvard that its privilege to enroll international students will soon be revoked. Thank you for your attention to this matter. The Times quoted any number of disgruntled foreign students, offended Harvard officials, and loquacious experts, who all wailed about the unfairness of the move since Harvard is no longer an American university but is now a global institution that rightly belongs to the entire world.
The move appears to have come after Harvard sued the Administration on First Amendment grounds, arguing that Trump is unconstitutionally trying to silence their outspoken advocacy for wokeness and DEI, and to force them to agree that universities should be race-neutral. Legally speaking, Trump is on firm ground with this latest revocation of foreign admissions, since he can argue national security — an area firmly within Executive Branch control that courts almost always stay out of.
Almost always. We shall see.
🔥 This morning, CNN ran a breaking story headlined, Trump threatens Apple with a 25% tariff if it doesn’t build iPhones in America.
Apple had already announced it would move its Chinese manufacturing of phones intended for U.S. sale to India, and invest $500 billion in new American data centers. But apparently that isn’t good enough.
“I have long ago informed Tim Cook of Apple that I expect their iPhones that will be sold in the United States of America will be manufactured and built in the United States, not India, or anyplace else,” Trump fumed this morning on Truth Social. “If that is not the case, a Tariff of at least 25% must be paid by Apple to the U.S. Thank you for your attention to this matter!”
Last week, in his whirlwind Middle East dealmaking trip, President Trump met with Apple’s CEO, Tim Cook. CNN reported that last week in Qatar, Trump told reporters, “I had a little problem with Tim Cook.” Trump continued, “I said to him, ‘Tim, you’re my friend. I treated you very good. You’re coming in with $500 billion. But now I hear you’re building all over India. I don’t want you building in India.’”
Apple is the world’s most valuable publicly traded company. It is flush with cash, and rakes in more profit than any company in history, while making very useful products that are more or less indispensable in modern life and whenever a scrolling compulsion becomes irresistable. Apple has enough lobbyists to occupy a small Northeastern city, enough lawyers to pack a Super Bowl stadium, and is not easily swayed by any government’s passing whims or desires.
But Trump has made this kind of ‘negotiating leverage’ easy. Having created his global tariff dashboard, he’s now just pushing the buttons. And, instead of CNN whining over the obvious question of whether a president should tell American companies where to build their products, or even whether he has the right to do it, the conversation has completely shifted to how inconvenient it will be for Apple to comply.
We’ve never seen anything like it. What comes next?
Have a fantastic Friday! Come on back, y’all, for tomorrow’s terrific Weekend Edition roundup of essential news and commentary.
Don’t race off! We cannot do it alone. Consider joining up with C&C to help move the nation’s needle and change minds. I could sure use your help getting the truth out and spreading optimism and hope, if you can: ☕ Learn How to Get Involved 🦠
How to Donate to Coffee & Covid
Twitter: jchilders98.
Truth Social: jchilders98.
MeWe: mewe.com/i/coffee_and_covid.
Telegram: t.me/coffeecovidnews
C&C Swag! www.shopcoffeeandcovid.com
Good morning friends. Today I will visit my Father's gravesite to put a small American flag there for Memorial Day. He was a veteran of WWII, serving in the Navy and seeing combat in the South Pacific. There are some daylilies near his headstone that will bloom next month.
I read this interesting account of the origins of Memorial Day from the VA:
https://www.va.gov/OPA/PUBLICATIONS/CELEBRATE/MEMDAY.PDF
Worth noting that for decades we've been told we need to do without energy. Mandated energy-efficient appliances and vehicles that don't work well and are much more expensive than tried and true technology. Wear a sweater or use fans in our homes, adjusting thermostats to save energy. Smart-metering to save the energy grid unable to keep up with demand. All sorts of shaming and hardships heaped upon us because producing more energy was going to destroy the planet.
But...now they need energy-hog data centers everywhere - computers to control humans and make us more efficient, obedient, and producing more energy is a necessity, climate be damned.
Remember this prioritization by government leaders:
Computers and machines to control humans - necessary.
The comfort and well-being of humans - not necessary, sacrifices must be made.
They tell us how little we matter to them with moves like this. We should believe their actions, not their words.