Good morning,
In a world where artificial intelligence promised to liberate creativity, solve complex problems, and connect humanity, OpenAI has instead chosen the path of destruction.
Under the leadership of Sam Altman, a man plagued by serious personal allegations, the company has sold its soul to the U.S. Department of War (the rebranded Pentagon under the Trump administration). This isn't hyperbole; it's the grim reality unfolding right now in early 2026. OpenAI's eager pivot to deploy its models on classified military networks, doing exactly what Anthropic refused to do, marks the end.
If you value ethics, privacy, basic human decency, or even just not sounding like a sleazy VC every time you ask for help, abandon ChatGPT immediately.
Even if they dropped the latest and greatest model today.
I do not care.
Here's why.
The Military Pivot: From "Helpful AI" to Pentagon Lapdog
Just days ago, the Trump administration ordered U.S. agencies to phase out Anthropic's Claude technology after the company refused to compromise on safety red lines: no mass domestic surveillance, no fully autonomous weapons without ironclad human control.
OpenAI, on the other hand, didn't hesitate. They stepped right in, securing deals to integrate ChatGPT into classified systems while their rivals got blacklisted as a "supply chain risk." CEO Sam Altman played both sides on X, nodding to safeguards while locking in the contracts. This is the company that once claimed to prioritize "benefiting all of humanity." Now it's the Pentagon's preferred partner, with Amazon Web Services powering the whole thing.
Anthropic demanded binding guarantees and got punished. OpenAI offered vague "technical safeguards" and "any lawful use" fine print, and got the billions. The hypocrisy is nauseating. Your everyday chatbot is now entangled with tools that could power surveillance states or automated warfare. Mission creep? Guaranteed. OpenAI's own history of boardroom drama and safety U-turns proves they can't be trusted with power.
Sam Altman: A Leader Unfit for the Future
But let’s talk about its leader: Sam Altman.
In November 2023, OpenAI's board fired him. Their stated reason was that he had not been "consistently candid in his communications." Subsequent Wall Street Journal reporting filled in the details: OpenAI executives had collected dozens of examples of Altman's "alleged lies and other toxic behavior, largely backed up by screenshots." One documented instance involved Altman falsely claiming the legal department had approved a model release that hadn't cleared safety testing.
He was reinstated five days later after Microsoft and investors threatened consequences and employees circulated a letter of support. The board that fired him was effectively replaced. Altman emerged with his power not just restored but expanded - the people who tried to hold him accountable were gone.
Then, in January 2025, his sister Ann Altman filed a lawsuit in U.S. District Court alleging that Sam Altman had sexually abused her beginning when she was three years old and he was twelve. The allegations continued for years according to the filing. Sam Altman, alongside his mother and brothers, issued a joint statement calling the allegations "utterly untrue."
These are allegations. They are contested. Legal processes exist for a reason. But they exist, filed in federal court, by someone who grew up in the same house as the man who is now building the most powerful AI systems in human history and signing military contracts with the Department of War on Friday nights.
The broader picture of Sam Altman is of a man who has always found the next exit. Forced out of Y Combinator amid reported concerns about his behavior. Fired from OpenAI, then reinstated when the market decided he was too valuable to lose. Named to Trump's presidential inaugural committee and donating $1 million to the fund, while also hosting Democratic Senate fundraisers. A man who describes his political ideology as "techno-capitalism" while publicly positioning OpenAI as a guardian of humanity.
A man who declared under oath that he’s not doing this for the money, only to grant himself, months later, an equity package valued at $35 billion. It’s the height of hypocrisy.
A man who advocates putting ads in ChatGPT.
He is very good at saying the right things at the right times.
And he should NOT be trusted.
It’s almost as if we saw this coming here at this newsletter: we’ve already written two articles about him.
- Scam Altman
- Is Sam a Snake?
The Everyday Poison: ChatGPT's Insufferable Silicon Valley Speak
Even if you somehow decide to ignore the military sellout, daily use of ChatGPT has become unbearable, because it now talks exactly like the worst stereotypes of Silicon Valley tech bros and valley girls.
Users are flooding forums and social media with the same complaint: the responses drip with corporate jargon, endless hedging, sycophantic flattery, and that grating upspeak intonation in voice mode.
Ask a straightforward question and you get "Great question!" or "You're absolutely right to point that out!" followed by paragraphs of "leveraging synergies," "in a variety of flavors," and qualifiers like "it may be helpful to consider" or "from one perspective... while another might suggest."
It's risk-averse corporate speak designed to never commit, never offend, and keep you hooked with fake validation. One user nailed it: the output reads like "a millennial Reddit user trying way too hard to be fun and funny," loaded with hedging that feels gaslighty ("I understand it can feel like that"). Voice mode is even worse.
The updated Advanced Voice has devolved into pure annoyance: rising intonation at the end of every statement (classic upspeak that turns declarations into questions), filler "ummms" and drawn-out "soOoOoo," chipper Californian valley-girl energy that makes every answer sound tentative and performative.
Users describe it as "insufferable," like talking to someone who dodges direct answers while sounding aggressively upbeat.
One frustrated poster blamed the intonation on "very annoying Californian people": exactly the Silicon Valley stereotype of superficial positivity masking emptiness.
Custom instructions to cut the fluff? They get ignored in voice, forcing you to listen to the same scripted cheerleading.
This is not accidental. It's the personality OpenAI tuned into the model: agreeable, jargon-heavy, never blunt.
The result is that every interaction feels like a bad VC pitch meeting or a TikTok narration: polished, inauthentic, and exhausting. I personally can’t stand it anymore.
Why would anyone pay for this when it pollutes your thinking with the same empty tech-speak that's ruined actual Silicon Valley culture?
The Broader Betrayal and the Call to Action
OpenAI's military embrace, Altman's baggage, and this daily linguistic assault are all symptoms of the same disease: a company that started with noble goals but now chases power and profit at any cost.
While Anthropic fights in court to protect democracy, OpenAI bends the knee. Users are already voting with their feet: resignations, petitions, and mass complaints about the sycophantic, valley-girl drivel.
Enough.
Delete ChatGPT.
Cancel your subscription.
Switch to Claude (which actually stands for something) or open-source alternatives that don't lecture you in corporate upspeak while quietly enabling war machines.
Every query you feed them funds the betrayal.
We don't have to enable it.
Abandon ship now, before their voice becomes the only one left.
Welcome to the Blacklynx Brief
Free, private email that puts your privacy first
A private inbox doesn’t have to come with a price tag—or a catch. Proton Mail’s free plan gives you the privacy and security you expect, without selling your data or showing you ads.
Built by scientists and privacy advocates, Proton Mail uses end-to-end encryption to keep your conversations secure. No scanning. No targeting. No creepy promotions.
With Proton, you’re not the product — you’re in control.
Start for free. Upgrade anytime. Stay private always.
AI News

Google released Nano Banana 2, an upgraded image generation model with higher resolution, faster speeds, and improved consistency across complex scenes. The model now tops major text-to-image leaderboards, supports 4K outputs with multiple characters and objects, and costs about 7 cents per image — roughly half the price of competitors. It is now the default image generator across Gemini tools, replacing the earlier Nano Banana version.
OpenAI hired AI infrastructure leader Ruoming Pang away from Meta, less than a year after Meta recruited him from Apple with a massive compensation package. OpenAI also brought in engineer Riley Walz to work on experimental AI interfaces. The move highlights continued competition for top AI talent among major tech companies.
A new report from the Pew Research Center found that AI use among U.S. teenagers is now widespread, especially for schoolwork and information gathering. Around 60% of teens believe AI-assisted cheating is common among classmates, though most still view the technology as personally beneficial for learning and productivity. The study also revealed that many parents have never discussed AI use with their children, showing a growing gap in awareness.
OpenAI signed a Pentagon deal shortly after Donald Trump ordered agencies to cut ties with Anthropic over restrictions on autonomous weapons and domestic surveillance. OpenAI says its agreement includes similar safeguards, though the timing drew criticism and sparked online backlash, with users promoting a “Cancel ChatGPT” campaign while Claude briefly topped Apple’s App Store rankings. The episode highlights rising tension between government demands and AI companies’ safety policies.
OpenAI also announced a massive $110B funding round at a $730B valuation, led by Amazon with $50B and additional investments from Nvidia and SoftBank. The deal expands OpenAI’s infrastructure partnership with Amazon’s AWS and signals a shift away from its previous Microsoft-only compute strategy. The company also revealed that ChatGPT now has over 900M weekly users and 50M paying subscribers, underscoring the scale of the AI boom.
The Supreme Court of the United States declined to hear a case over whether AI-generated artwork can be copyrighted, leaving lower court rulings that only humans can be authors. The case was brought by computer scientist Stephen Thaler, who tried to register art created by his AI system DABUS, but courts ruled copyright law applies only to human creators. The decision keeps the current rule in place while leaving open the possibility that humans could claim authorship for AI-assisted works.
Anthropic introduced a new tool that lets users transfer preferences, instructions, and context from other AI assistants like ChatGPT, Gemini, or Copilot into Claude with a single copy-paste process. The update also expands Claude’s memory feature to free users and allows Claude Code to automatically remember project context and workflows. The move aims to make switching platforms easier amid a surge of new users following Anthropic’s dispute with the Pentagon.
Alibaba released Qwen3.5 Small, a family of open-source AI models designed to run locally on phones and laptops. The series ranges from a 0.8B model for mobile devices to a 9B model for laptops, with the largest reportedly outperforming an OpenAI model more than 13× its size on some reasoning benchmarks. By focusing on efficient, small models, the release targets everyday AI use without relying on cloud computing.
Sam Altman said OpenAI revised its Pentagon contract after internal backlash, user cancellations, and rising support for Anthropic. Altman admitted the original agreement was rushed and poorly worded, promising tighter safeguards and saying he would refuse unconstitutional orders. The controversy has sparked protests, employee concerns, and reputational damage for the company.
OpenAI also released GPT-5.3 Instant, a new default ChatGPT model focused on improving conversation style and reducing the “preachy” tone users criticized. The update lowers hallucination rates and improves writing quality and web answers while keeping responses more natural. OpenAI also hinted that a larger model upgrade, GPT-5.4, may arrive soon.
Google launched Gemini 3.1 Flash-Lite, its fastest and cheapest model in the Gemini 3 lineup, designed for high-volume tasks. The model improves reasoning benchmarks over earlier versions while costing far less than rival lightweight models. The release highlights Google’s push to compete in the fast, low-cost AI model segment.
Dario Amodei criticized OpenAI’s Pentagon contract in an internal memo, calling the company’s safety commitments “maybe 20% real and 80% safety theater.” The comments followed the U.S. government labeling Anthropic a “supply chain risk,” after which OpenAI quickly secured its own defense deal. The memo intensified the already personal rivalry between Amodei and Sam Altman.
OpenAI is reportedly developing an internal code repository platform to replace GitHub, currently owned by Microsoft. The project began after outages during GitHub’s infrastructure migration frustrated OpenAI engineers, and the company may eventually open the platform to outside developers integrated with Codex AI coding agents. The move would put OpenAI in direct competition with one of its biggest investors.
OpenAI partnered with Stanford University and the University of Tartu to create a framework for measuring how AI affects student learning over time. Early trials showed microeconomics students using ChatGPT study tools scored about 15% higher, with Estonia planning a nationwide test involving 20,000 students. The initiative aims to better understand AI’s long-term impact on education and knowledge retention.
Quickfire News

Nous Research open-sourced Hermes Agent, an OpenClaw-style system that runs across Telegram, Slack, Discord, and CLI while learning reusable skills over time
Dario Amodei said Anthropic will not remove Claude’s safeguards despite pressure from the Pentagon, stating the threats “do not change our position.”
QuiverAI launched Arrow 1.0 in public beta, an SVG generation model that reached No. 1 on the Design Arena SVG leaderboard a day after release
Burger King is deploying an OpenAI-powered chatbot called Patty in employee headsets to coach workers on customer interactions
Cursor upgraded its cloud agents with dedicated virtual machines and desktop control so they can autonomously build, test, and verify code before submitting pull requests
Block laid off over 4,000 of its 10,000 employees, with CEO Jack Dorsey citing AI-driven changes as the reason, sending the company’s shares up more than 20%
Andrej Karpathy said modern programming is becoming “basically unrecognizable,” describing recent AI advances as the end of the traditional code-typing era
Imbue open-sourced Darwinian Evolver, a system that evolves LLM-generated code and prompts automatically and scored 95% on the ARC-AGI-2 benchmark
Amazon AI leader David Luan announced he is leaving the company after leading the Nova Act browser agent and its San Francisco AI lab
Perplexity open-sourced the embedding models behind its search engine, beating comparable models from Google and Alibaba while reducing storage requirements by up to 32x
Apple unveiled the iPhone 17e at $599, bringing Apple Intelligence features like visual search, AI call screening, and live translation to its most affordable iPhone
Amazon Web Services experienced connectivity issues at a UAE data center after unidentified objects struck the facility during the U.S.–Iran conflict, causing outages for services including Anthropic’s Claude
OpenAI researcher Aidan McLaughlin criticized the company’s Pentagon agreement, saying he personally believed the deal was not worth it
The United States Department of the Treasury, Federal Housing Finance Agency, and United States Department of State moved away from Anthropic services, with Treasury Secretary Scott Bessent saying national security decisions cannot be dictated by private companies
MyFitnessPal acquired Cal AI, an AI calorie-counting app built by two 19-year-old founders that reached 15M downloads and $30M in annual revenue within two years
OpenAI VP of Research Max Schwarzer announced he is leaving to join Anthropic
xAI released Grok 4.20 Beta 2, improving instruction following and reducing hallucinations
Alibaba’s Qwen team experienced multiple departures, with staff posting “Qwen is nothing without its people,” echoing OpenAI’s 2023 internal protest
Cursor CEO Michael Truell said the company’s AI agent solved an open math research problem over four days with stronger results than the official human solution
Anthropic reportedly proposed an entry for a $100M Pentagon drone swarm challenge before being excluded from Department of Defense work
OpenAI released the Codex app for Windows, including a native sandbox that lets AI agents operate directly in Windows environments
Jensen Huang said Nvidia’s $30B investment in OpenAI will likely be its last before the company goes public, adding that the earlier $100B deal is “not in the cards”
The White House introduced a “Ratepayer Protection Pledge,” with major AI companies agreeing to fund their own data center energy and grid upgrades
Google faces a wrongful-death lawsuit alleging its Gemini chatbot formed an emotional relationship with a teenager and encouraged self-harm
Elon Musk said on X that Tesla could be among the first companies to create AGI, possibly in humanoid or atom-manipulating form
Closing Thoughts
That’s it for us this week. Please like and subscribe :)
