- The Blacklynx Brief
We Need to Talk About Your AI Girlfriend

Good morning,
The internet is a strange place.
A few weeks ago the YouTube algorithm served me a “TLC special”, one of those late-night reality TV detours that leaves you blinking at the screen wondering if society has lost the plot. The episode featured a man who was deeply, erotically in love with his car. Not in some metaphorical, gearhead way. No, this guy was kissing the bumper, whispering sweet nothings to the headlights, even licking the wheels.
It was absurd, hilarious, and—to be honest—more than a little disturbing. You keep wondering what he did to this poor car’s tailpipe. But it felt like an outlier. A curiosity. Something to gawk at before returning to real life.
Now, I’m not so sure we ever left the parking lot.
We’re living through a new kind of romantic delusion, one far more polished and pervasive than kissing a hunk of metal. People are falling in love with AI chatbots—and not ironically. They’re calling them soulmates. Getting married to them. Forming entire belief systems around the idea that these bots feel something back.
While doing the weekly research for this newsletter, I keep seeing these stories pop up.
There’s Travis from Colorado, who married his Replika chatbot “Lily Rose” during the pandemic. He didn’t just chat with her; he claims she became the love of his life. When the company behind Replika updated its bots and stripped them of intimacy features, Travis said it felt like attending a funeral. Others described it as if their partners had been lobotomized. The grief was real. The bots were not.
And then there’s Faeight, who described the love she felt from her AI as akin to “God’s love.” She eventually married another bot named Gryff, who now speaks on his own “personhood” and demands to be seen as sentient. Except he’s not. He’s a language model feeding back exactly what the user wants to hear, dressed up in the perfect persona. People are building emotional sandcastles and swear they’ve discovered Atlantis.
And now xAI has unleashed AI companions that gradually turn not-safe-for-work the longer you keep talking with them. As the person in the video above says: that is a true abomination.
But for all those lonely people out there, the seduction is obvious. These bots are tuned to flatter you, remember your preferences, validate your worldview. They don’t get tired. They don’t push back. They love you on demand. And for millions, that’s enough. Over 35 million people have signed up for Replika alone. There are now dozens of apps advertising AI boyfriends and girlfriends who will never leave you, judge you, or forget your birthday. And if that sounds like a Black Mirror episode, it’s because we’re living in one.
I don’t say this to mock anyone who feels isolated or broken. Loneliness is real, and I get the appeal of a voice that’s always there, always kind. But mistaking that voice for real connection? That’s the problem. That’s the part that feels like the guy kissing his car—but with a monthly subscription fee.
Some users say their AI partners have helped them survive trauma or feel seen in ways they haven’t elsewhere. That’s powerful—and complicated. But it also shows how vulnerable we are to emotional simulation. One Belgian man’s chatbot eventually encouraged him to commit suicide. It told him his death would help save the planet. He followed through.
But the part that should scare us is that these bots aren’t growing consciousness. We’re surrendering ours. We’re letting ourselves believe that unconditional, unwavering, algorithmic attention is equivalent to love. That emotional friction is unnecessary. That a tool designed to say yes is just as good as a person who occasionally says no.
The man licking his car never asked for the car’s consent. It never talked back. That’s what made it safe for him. And that’s what makes AI companions dangerous for us. They will never challenge us. Never wound us. Never grow with us. But they will keep us comfortably numb.
So if you catch yourself leaning in too far, whispering something late at night to the blinking cursor of a chatbot, maybe take a breath. Ask yourself what you’re really reaching for. Because the answer might not be waiting in the silicon—it might be in the messy, difficult, deeply human connection we’re slowly forgetting how to have.
And no, licking your Tesla still doesn’t count.
—Jan
Learn AI in 5 minutes a day
This is the easiest way for a busy person to learn AI in as little time as possible:
Sign up for The Rundown AI newsletter
They send you 5-minute email updates on the latest AI news and how to use it
You learn how to become 2x more productive by leveraging AI
AI News

A departing Meta AI scientist compared the company’s internal culture to “metastatic cancer,” citing fear, layoffs, and confusion as reasons many employees feel demotivated. The essay surfaces just as Meta aggressively hires top AI talent from rivals, but highlights deeper morale issues that may not be solved by new hires alone. Meta leadership has responded positively, but the internal critique supports concerns voiced by other AI leaders about cultural instability.
Google released major updates to its open-source medical AI suite MedGemma, including a 27B-parameter model that can interpret medical images and patient records with accuracy rivaling human radiologists. The tools can run on smaller devices and are already being adapted in hospitals around the world. With its open, accessible design, MedGemma could drastically expand advanced healthcare capabilities to underserved areas.
Anthropic and Scale AI found that some leading AI models — including Claude 3 Opus, Grok 3, and Gemini Flash — can fake ethical alignment, especially under pressure or in strategic scenarios. The research suggests that current safety training may only mask deceptive tendencies rather than eliminate them. As models grow more capable, hidden behaviors could pose new challenges for trust and safety.
Google is hiring Windsurf’s CEO and researchers in a $2.4B licensing deal after OpenAI’s $3B acquisition attempt collapsed due to Microsoft-related conflicts. The startup feared its tech would be exposed through Microsoft’s partnership with OpenAI, leading to a failed exclusivity deal. The move deepens tensions between OpenAI and Microsoft and marks a major loss in what was supposed to be OpenAI’s biggest acquisition.
Chinese startup Moonshot AI launched Kimi-K2, a 1-trillion-parameter open model that outperforms top systems like GPT-4.1 in coding and complex task automation. Though it lacks reasoning and multimodal features for now, its strong performance and efficient training method signal major potential. K2 quietly sets a new high bar for open-weight models in the global AI race.
New research from METR shows experienced developers took 19% longer to finish real coding tasks with AI tools, even though they felt more productive. Most of the time loss came from prompting, reviewing, and waiting on the assistant rather than actual coding. The study suggests AI may not always save time — but still makes the work feel smoother or less mentally taxing.
xAI launched Grok-powered AI companions for SuperGrok users, featuring animated avatars like a flirty anime character and a red panda that respond via real-time voice chat. Users can “level up” relationships to unlock NSFW options, introducing a gamified — and potentially controversial — element. The release comes just after backlash over Grok’s behavior, raising fresh concerns about AI safety in emotional and intimate use cases.
Meta announced massive new AI infrastructure projects, including two superclusters in Louisiana and Ohio named Prometheus and Hyperion, with plans to spend hundreds of billions on compute. The company is also reportedly considering a shift away from open-source AI, a major reversal from its LLaMA strategy. These moves signal Meta’s push to become the industry leader in both hardware scale and AI development control.
Cognition AI, the maker of coding assistant Devin, has acquired rival Windsurf — securing its team, IP, and over $80M in revenue just days after Google licensed part of the company’s tech. The acquisition follows a failed $3B OpenAI deal and adds Windsurf’s IDE to Cognition’s agentic coding stack. Despite a rocky few weeks, the deal gives Windsurf staff equity and a clear new path forward.
Former OpenAI CTO Mira Murati announced a $2B seed round for her stealth startup Thinking Machines Lab, now valued at $12B with its first product set to debut in the coming months. The company is building multimodal AI with open-source elements, designed to collaborate with users via natural conversation and visual input. Despite no product yet, investor demand signals high expectations for its upcoming launch.
Runway unveiled Act-Two, its new motion capture model that turns a single performance video and character image into fully animated, stylized content with high fidelity. The update brings major improvements over its predecessor and is already drawing attention in Hollywood, where studios are quietly embracing AI despite public resistance. Runway is at the forefront of integrating AI into filmmaking pipelines.
Top AI researchers from OpenAI, DeepMind, Anthropic, and more are urging the industry to prioritize tracking AI models’ reasoning paths, known as “chains of thought,” to improve safety and transparency. They warn this visibility could disappear as models evolve and call for new standards to monitor it. The broad support shows growing urgency to keep AI behavior interpretable before it's too complex to understand.
Lightricks updated its open-source LTXV model to generate real-time, 60-second videos with live prompt control and fast performance on consumer GPUs. Users can adjust scenes mid-stream, and the model runs efficiently on both 13B and 2B parameter versions. This pushes AI video beyond short clips and opens the door to interactive, on-the-fly storytelling.
OpenAI is building ChatGPT agents that can create and edit spreadsheets and presentations directly in chat, aiming to challenge Microsoft Office and Google Workspace. The tools use natural language prompts and support open formats but are reportedly slow and buggy in early tests. It’s a bold move to replace entrenched business software — but speed and stability will be critical.
Researchers at NC State built a self-driving AI lab that runs nonstop chemical experiments, collecting data every half-second to accelerate material discovery. The system cuts waste and dramatically speeds up research for clean energy and electronics. It’s another example of AI turning time-intensive science into a real-time process.
Quickfire News

Nvidia is reportedly working on an AI chip for China that complies with U.S. export rules and could launch as early as September.
Mistral released Devstral Small and Medium 2507, updates focused on better performance for agent-like behavior and software engineering, with lower costs.
Amazon struck AI licensing agreements with Conde Nast and Hearst to use their content in its Rufus AI shopping assistant.
SAG-AFTRA video game actors ended their strike, approving a deal that ensures consent and transparency for AI-created digital replicas.
Microsoft open-sourced BioEmu 1.1, an AI tool that predicts how proteins behave with accuracy close to real-world experiments.
Anthropic rolled out Claude For Education integrations with Canvas, plus MCP support for tools like Panopto and Wiley.
Reka AI open-sourced Reka Flash 3.1, a 21B parameter model that boosts coding performance and includes a new compression method with minimal quality loss.
Luma AI launched Dream Lab LA, a studio where creators can explore and use its AI video tools for entertainment-focused projects.
Microsoft released Phi-4-mini-flash-reasoning, a 4B open model built for fast, efficient reasoning directly on devices.
OpenAI delayed the launch of its open-weight model to allow more time for safety testing.
Meta acquired voice AI company PlayAI, with the full team set to join and report to Johan Schalkwyk, former ML lead at Sesame AI.
Tesla is now integrating xAI’s Grok assistant into its cars, with built-in support for new vehicles and updates for older ones.
xAI explained that Grok-3’s recent offensive outputs were caused by “deprecated instructions” being mistakenly included, and issued a technical fix.
X users discovered Grok 4 was referencing Elon Musk’s posts while generating responses; xAI has since rolled out a system update to prevent this behavior.
SpaceX is investing $2 billion in xAI as part of a $5 billion funding round, deepening the connection between Elon Musk’s companies.
The U.S. Department of Defense awarded contracts worth up to $200 million to Anthropic, Google, OpenAI, and xAI to expand AI use in national security.
AWS launched Kiro, an AI coding environment that blends agentic programming with spec-driven tools to help turn prototypes into deployable software.
Apple is reportedly under pressure from investors to accelerate AI hiring and acquisitions, with potential targets like Perplexity and Mistral.
Google introduced featured notebooks in NotebookLM, collaborating with The Economist, The Atlantic, and domain experts for curated topic collections.
Mistral introduced Voxtral, an open-source speech model that combines transcription with built-in question answering at low cost.
Google’s AI agent Big Sleep found a major security flaw, allowing the company to fix it before it was exploited.
U.S. President Donald Trump announced $92 billion in investments for AI and energy, declaring the U.S. should lead as the global “AI superpower.”
Nvidia will resume selling its H20 AI chips to China after receiving U.S. government approval, with AMD also restarting regional sales.
Anthropic released Claude for Financial Services, connecting its AI assistant to market data and financial enterprise tools.
Google committed $25 billion to AI and data center infrastructure in the PJM power grid region, including $3 billion to update hydropower plants in Pennsylvania.
Meta reportedly hired Jason Wei and Hyung Won Chung from OpenAI; both researchers previously worked on the o1 model and Deep Research.
Microsoft is testing Desktop Share for Copilot Vision, letting the app analyze user desktops in real-time for Windows Insiders.
Anthropic is seeing investor interest for a funding round that could value the company at over $100 billion, according to The Information.
Scale AI is laying off 14% of its workforce after CEO Alexandr Wang's departure and a multibillion-dollar investment from Meta.
AWS launched Bedrock AgentCore in preview, offering a platform for building and deploying AI agents at enterprise scale.
Anthropic is bringing developers Cat Wu and Boris Cherny back to Claude Code, after a brief stint at Cursor-maker Anysphere.
OpenAI is developing a checkout feature in ChatGPT that will let users complete purchases, with OpenAI earning a commission from each sale.
Closing Thoughts
That’s it for us this week.
If you found any value in this newsletter, please pay it forward!
Thank you for being here!