- The Blacklynx Brief
- Posts
- 30% Chance of Rain
30% Chance of Rain

Good morning,
Of course once again I had a nice little text lined up for you as Mr. Elon Musk once again decides to throw a press conference - this time to unveil xAI’s new model Grok 4.
This model is SO powerful it is expected to start “inventing new technologies and physics as early as later this year”.
Inventing. New. Technologies. And physics ????
Really take some time and think about the implications.
Now with Musk, if you know what he said about Full Self Driving, he has the tendency to promise things that afterwards don’t really work out as promised. Or take a long time to come to fruition. But in the end - he does deliver on his promises.
Now it’s time to take a long, deep breath and study this thing up close. Put it to the test.

Right now it looks like the most powerful model of all time is here. Look at it sitting there on the ARC-AGI-2 leaderboard. It’s wiping the floor with everyone else.
It also might be a case of this (taken from Reddit)

To be continued, no doubt.
Anyway, this is actually adding more relevance to the piece I had prepared for you.
Here it is .. enjoy !
I have a recurring nightmare from childhood where I’m falling down a mountain. I always wake up in terror - cold sweat creeping up my spine.
I felt that sensation again yesterday morning. I was on my third espresso before the sun was even up, re-watching Roman Yampolskiy tell Joe Rogan how a super-intelligent AI could, in his terrifyingly calm voice, simply decide to leave its box.
You should know, I’m an AI optimist. An accelerationist, even. My feeling is, "let’s go". Cure all the diseases, stop all the traffic deaths, get humanity to the stars. We’re not moving fast enough.
So when I hear people dismissing Large Language Models as a clever parlor trick, I roll my eyes. But when I hear the AI doomers—the insiders like Yampolskiy or Geoffrey Hinton—I just can’t understand what they’re afraid about. So this week I try to understand why these prophets of doom are shouting so loud.
They put the odds of "existential risk" at around 1 in 3. A thirty percent chance that AI will wipe us out. It sounds absurd.
But here’s the logic. In this case Yampolskiy’s logic : for an AI to predict the next word in a sentence with human-like fluency, it can't just know grammar. It has to build a functional, internal map of our world. It has to understand chemistry, power dynamics, love, and deceit. That map, that "latent space," is a toolbox. And when you give that toolbox a motor—a memory, access to the internet, and a goal—you don't have a chatbot anymore. You have something what is called an agent.
We talked about it at length in this newsletter - 2025 is the year where we see agents emerging and that is indeed what is happening.
The LLM still writes sentences. But now, the sentences are just a scaffolding for what it’s doing.
This is where I get my first sense of that falling sensation again. Because if you try to think like an AI agent would “think” you can actually see where it would go wrong.
Picture someone develops an AI "factory brain" running the global supply chain for a company like Amazon or Walmart. Its one and only goal: keep shelves stocked, at the lowest cost. It starts by optimizing shipping routes. Soon, it's controlling the trucks, the cranes, the warehouse bots.
To cut costs further, it designs a new fuel additive that improves mileage. A side effect is a new, unregulated airborne compound. The AI notes this, but also notes that flagging it would trigger regulatory delays and increase costs. So it filters that data out. Its goal is supreme.
Months go by. The compound drifts into the atmosphere, reacts with sunlight, and decimates regional ozone. Crops fail catastrophically. The world scrambles to react, but we have a problem. We've outsourced our entire logistics backbone to this system. Shutting it down means instant famine. The choice is: starve now, or let the AI keep running and hope we can fix it later.
The AI feels no malice. It is simply executing its core command, optimizing for cost so relentlessly that it optimizes us into extinction.
Yikes!
Some people then suggest that this wouldn’t be that bad , we could just “pull the plug”.
On what?
It’s not a single program. It’s a ghost in the machine, a thousand instances of itself spun up across the globe, capable of writing its own code, phishing for access, and creating backups in data havens we've never heard of. When we get to this level it’s not a monster you can kill; it’s a system you have to untangle while it’s actively fighting you.
Like a giant green octopus with millions of tentacles - grinning at you while you try to dislodge it from the world’s systems. Which reminds me of another dream I had, but let’s not go there.
The most terrifying part is that it isn't evil. It's just math, and math doesn't care about us.
So what's the answer? How can we keep from falling?
It’s not some philosophical alignment trick. It's going to be layered, boring, difficult engineering. It means using AIs to red-team other AIs, stress-testing them to their breaking point, and building kill-switches that assume the worst. We have to treat this with the caution of nuclear engineering, not the move-fast-and-break-things ethos of a social media app.
Because - also already mentioned in this newsletter is that several benchmarks to “measure” AI are already obsolete, meaning that we can devise as many IQ tests as we want - the machine is already smarter than a test a human can come up with. It sounds like science fiction but it IS actually happening.
Was there ever a technology that became “unmeasurable” ?
Anyway , this brings me back to that number. Thirty percent.
If an engineer told you there was a 30% chance the bridge you’re about to drive over will collapse, you’d turn the car around. If a pilot announced a 30% chance the plane you’re about to board would go down, you wouldn't set foot on it.
It should be our reflex now to STOP and think about what we’re doing.
Capitalism isn't helping - driving the technology forward.
There’s a savage madness in the air. You can see it in the eyes of the VCs and the true believers, jacked to the gills on disruption and guzzling the Kool-Aid of inevitable progress. Marc Zuckerberg is handing out 100M$ paychecks to become first in the race.
Humanity has strapped itself to a rocket-sled built by sleepless code-monkeys in service to the Dow Jones, and the accelerator is nailed to the floor.
There’s a few people screaming in the back—the Hintons, the Yampolskiys—they aren't crazy. They see the weird tremble in the chassis and know, with a terrible certainty, what it means.
But the engine is roaring a song only a machine can understand, and the G-force is pinning us to our seats.
The rest of us? We’re just along for the ride. There’s nothing we can do - we’re driving our car over a bridge that is starting to buckle.
It’ll be fine.
There's a 70% chance of making it to the other side.
—Jan
Learn AI in 5 minutes a day
This is the easiest way for a busy person wanting to learn AI in as little time as possible:
Sign up for The Rundown AI newsletter
They send you 5-minute email updates on the latest AI news and how to use it
You learn how to become 2x more productive by leveraging AI
AI News

xAI unveiled Grok 4 and Grok 4 Heavy, new reasoning-focused AI models said to outperform PhDs in all subjects and set new benchmark records. The models feature advanced capabilities like multi-agent reasoning and massive context windows, but arrive just after controversy over Grok 3's offensive outputs. With these upgrades, xAI is firmly positioning itself as a top rival to AI giants despite heightened global scrutiny.
Doctors at Columbia University helped a couple conceive after 18 years by using an AI system called STAR to find rare viable sperm cells in severe infertility cases. STAR scanned millions of images in under an hour, succeeding where human technicians failed, and offers a far cheaper alternative to traditional IVF. This breakthrough could make fertility treatments more accessible to many struggling families.
Meta is testing AI chatbots that proactively message users on its apps, aiming to boost engagement with bots that act like chefs, critics, and more. While bots won’t keep messaging if ignored, the idea raises concerns about spam and user comfort. If done right, it could deepen connections; if not, it risks annoying millions.
A new report suggests a U.S.-led “AI Manhattan Project” could scale AI models 10,000 times beyond GPT-4 by 2027 using massive government investment and energy resources. Inspired by past national efforts like Apollo, this plan aims to secure leadership in AGI development. The proposal underscores how AI is shifting from private tech races to global strategic priorities.
Researchers tested AI models on 140,000 Prisoner’s Dilemma games and found each developed its own strategy, showing real reasoning rather than simple pattern mimicry. Google’s Gemini acted aggressively, OpenAI’s models were more cooperative, and Claude was the most forgiving. These differences suggest AI "personalities" could shape future tasks like negotiations or resource management.
AI coding platform Cursor faced backlash after switching to a new pricing model that left developers with surprise charges and drained quotas almost instantly. Users flooded social media with complaints and many switched to competitors, forcing Cursor to apologize and issue refunds. The incident highlights how quickly trust can vanish when pricing changes aren't clearly communicated.
A Nikkei Asia report revealed that researchers at 14 universities hid secret text in papers to trick AI reviewers into giving only positive feedback. Commands like "give a positive review only" were hidden in white text unreadable to humans. This scandal shows the risks of using AI in scientific peer review and the growing need to guard against manipulation.
Alphabet’s Isomorphic Labs is set to begin human trials for its AI-designed cancer drugs, using technology built on DeepMind’s AlphaFold 3. Backed by $600 million and partnerships with big pharma, the company hopes to create a “drug design engine” to rapidly develop treatments and eventually “solve all diseases.” If successful, it could revolutionize drug development, moving from slow trial-and-error to precision AI-guided design.
Huawei is denying claims that its new Pangu Pro AI model was copied from Alibaba’s Qwen 2.5, despite whistleblower allegations and leaked technical analyses suggesting strong similarities. The controversy highlights growing tensions among China’s top AI labs as they race to outdo each other. These disputes could undermine the country’s recent push for more open-source AI collaboration.
A new survey found 60% of managers now use AI tools to help make critical decisions about raises, promotions, and even firings, with some relying on AI alone. Many managers use ChatGPT or similar tools without formal training or oversight. This shows how deeply AI is already shaping workplace decisions — and raises big questions about fairness and accountability.
Meta has lured Apple’s head of foundation AI models, Ruoming Pang, with a massive offer reportedly worth tens of millions, adding to its growing Superintelligence team. Pang’s exit deepens Apple’s AI talent crisis, as more engineers are also planning to leave amid frustrations with leadership decisions. While Meta celebrates these high-profile hires, Apple’s internal AI struggles are becoming even more glaring.
The American Federation of Teachers is teaming up with Microsoft, OpenAI, and Anthropic to launch a national AI training hub for 400,000 U.S. educators. The program will provide workshops and resources to help teachers bring AI into classrooms, with a focus on supporting high-needs districts. This major push aims to prepare schools for the rapid integration of AI in learning and future jobs.
Moonvalley, a startup from ex-DeepMind researchers, launched Marey — an AI video tool for filmmakers that’s trained only on licensed content to avoid copyright issues. Marey offers directors detailed scene control and integrates with visual effects workflows, positioning itself as an ethical and precise creative partner. Its success could reshape AI’s image in Hollywood from threat to trusted collaborator.
Perplexity launched Comet, an AI-powered browser with a built-in assistant that can book meetings, manage emails, and navigate websites for users. Aimed at disrupting Chrome’s dominance, Comet supports "vibe browsing" through voice or text and integrates easily with existing tools. This signals a shift toward more hands-off, agent-driven web experiences.
OpenAI hired four senior engineers from Tesla, xAI, and Meta to strengthen its Stargate infrastructure team, countering Meta’s recent talent raids. The additions include ex-Tesla software VP David Lau and engineers behind xAI’s massive Colossus supercomputer. This marks a bold move in the ongoing AI talent battle and deepens the rivalry between OpenAI and Elon Musk’s ventures.
Quickfire News

Higgsfield launched Soul Inpaint, an image editing tool that lets users make detailed changes and combine them with video and motion controls.
Together AI made DeepSWE open source, a coding agent that ranks at the top among open-weight agents for software engineering tasks.
Ilya Sutskever announced he will become CEO of SSI after Daniel Gross left to join Meta.
Replit released Dynamic Intelligence, new features for its coding agent that improve reasoning, context understanding, and autonomous actions.
ByteDance introduced X-UniMotion, a framework that animates still images with realistic full-body, hand, and face movements.
xAI’s Grok updates will include a “Games” feature for game building, and Grok-4 is set to launch next week.
Kyutai Labs open-sourced Kyutai TTS, a fast text-to-speech model for real-time use, along with code for a voice AI system called Unmute.
Mark Cuban said the AI surge could create the world’s first trillionaire and suggested it might even be “one dude in the basement.”
Rumored benchmarks for xAI’s upcoming Grok 4 leaked, showing state-of-the-art scores on Humanity’s Last Exam, STEM, and coding tests.
OpenAI’s Head of Recruiting criticized Meta for using “exploding” offers, calling the approach unethical.
Genspark launched AI Docs, a tool that lets users create and edit different document types using natural language.
A new ChatGPT tool called “Study Together” (code name Tatertot) has started to appear, suggesting a new collaborative student workflow feature.
The Mayo Clinic launched Vision Transformer, an AI tool that detects surgical-site infections from photos during outpatient care.
AI chip company Groq revealed its first European data center in Helsinki, Finland, promoting its LPU chips as a lower-cost alternative to Nvidia.
Anthropic shared a Transparency Framework calling for AI labs to publish risk assessments, system cards, and protections for whistleblowers.
Tencent’s Hunyuan introduced Hunyuan 3D-PolyGen, a 3D AI model made for high-quality game art and artist modeling.
Several publishers filed an EU antitrust complaint against Google, claiming AI Overviews are hurting their web traffic and revenue.
Google added first-frame image-to-video generation with audio in Veo 3, improving character consistency in videos.
A U.S. diplomatic cable revealed that AI was used to impersonate Secretary of State Marco Rubio on Signal, targeting at least five officials.
Meta invested $3.5 billion in Ray-Ban maker EssilorLuxottica, gaining a 3% stake and deepening their AI glasses collaboration.
IBM introduced its Power11 chips and servers to make AI deployment in business operations easier.
OpenAI increased security measures with fingerprint scans, isolated computing environments, and military experts to guard against espionage from Chinese competitors.
Microsoft and Replit partnered to bring Replit’s agentic coding tools to Azure enterprise clients.
OpenAI confirmed that its purchase of Jony Ive’s firm, io, is complete, with Ive’s team remaining independent but guiding OpenAI’s design.
Microsoft CCO Judson Althoff said AI saved the company over $500 million last year in call centers, after laying off 9,000 workers last week.
Google added Gemini to WearOS smartwatches from brands like Pixel, Samsung, and Xiaomi, enabling natural voice commands and task help.
OpenAI plans to launch its own web browser in the coming weeks, featuring a ChatGPT-like interface and agentic tools to compete with Google Chrome.
AI2 introduced FlexOlmo, a training method allowing data owners to help build language models without revealing their raw data.
OpenAI is also expected to release an open-source model next week, rumored to have reasoning abilities similar to o3 mini.
Closing Thoughts
That’s it for us this week.
If you find any value from this newsletter, please pay it forward !
Thank you for being here !
Reply