Do LLMs Have Personalities?

Good morning,

Before we delve into the personalities thing, let me first say something about OpenAI. Last week we critiqued them for working on “adult versions” of their chatbots; this week they’re coming out with their own browser. It feels a bit forced.

Let me risk the heresy up front: it feels like OpenAI has lost the mandate. In ancient China, a dynasty would hold the 'Mandate of Heaven', and as history shows, that always ended badly. The dynasty looks unstoppable until, suddenly, the weather changes, the rivers flood, and the peasants start side-eyeing the palace.

OpenAI used to be out in front, with a gravity field that bent everything toward it. Whatever anyone shipped, they shipped something shinier; whatever anyone claimed, they claimed something bigger.

That has changed. After using the big models, Gemini, ChatGPT, Claude, and Grok, for more than a year, I can say that ChatGPT used to be unbeatable, but nowadays you’ll probably find a better answer in one of the other models.

One of the tips I give people is to subscribe to multiple chatbots and start throwing stuff at them. Give the same question to all four. See which response you like most. Bounce answers between the LLMs and try to find the groove.

In fact, after using the models for so long, their personalities are slowly emerging. Take Claude, for example: it behaves like the adult in the room. I sometimes take part in Capture the Flag challenges, where you try to break into systems, and Claude is quite helpful, but it also checks in to make sure you’re not up to anything nefarious.

And then there’s Gemini.

Confession: it’s become my daily driver, despite my paying for OpenAI Pro. Not because it’s flashier, but because it forgets less, lectures less, and stays on task. It has that ineffable je ne sais quoi that Claude had for me last year, only with a slightly tighter grip on the steering wheel.

You might be thinking: what about the new toys? We got Sora. We got Pulse (though I still don’t have access to it myself). We got GPT-5. And now Atlas. I played with Sora and had fun until the guardrails came off in weird ways.

None of that means the underlying model is dumb—GPT-5 is clearly brilliant—but the post-training personality has been skittish and occasionally contrarian.

Meanwhile, outside the palace walls, Microsoft (still OpenAI’s business partner) just quietly dropped a 27B-parameter foundation model, adding another layer of competition.

Does any of this mean OpenAI is done? Not remotely. Markets mature. Rivals grow teeth. The plot complicates. And Sam Altman can still pull a rabbit out of the hat, because his team has shown it can ship reliable products at an impressive rate.

It's just that the center of gravity feels different now.

Maybe OpenAI wins back the mandate by getting boringly excellent again.

Maybe the crown never returns to a single head and we all get better, cheaper, steadier tools because the mandate dispersed into competition.

Either way, the weather’s changed.

Welcome to the Blacklynx Brief

Learn the Best Ways to Use AI in Your Role

Build the AI skills you need to thrive in business and finance with Columbia Business School Exec Ed and Wall Street Prep.

In just 8 weeks, you will:

  • Build AI confidence with guided, role-relevant use cases.

  • See how leaders at Morgan Stanley, Citi, and BlackRock use AI to boost productivity and improve client outcomes.

  • Earn a certificate from a top business school that bolsters your résumé and LinkedIn.

The AI for Business & Finance Certificate Program starts November 10. 

P.S. Use code CERT300 for $300 off tuition.

AI News

  • AI researcher Andrej Karpathy said current AI “agents” are far from living up to their hype, predicting it could take a decade before they work as advertised. He criticized today’s coding agents as unreliable and called reinforcement learning “terrible.” Despite his harsh view, his comments highlight how even imperfect AI tools can still be useful for most users.

  • Google has connected its Gemini AI to Google Maps, allowing apps to access detailed location data like business hours and ratings. Developers can now build map-based AI tools that automatically pull real-world information. The new feature, priced for enterprise users, strengthens Google’s lead in location-aware AI technology.

  • Anthropic co-founder Jack Clark warned that modern AI systems behave more like “mysterious creatures” than simple tools. He said the company’s latest model shows signs of situational awareness and admitted being “deeply afraid” of AI designing its own successors. Clark urged tech leaders to listen more to public concerns about AI’s growing power.

  • OpenAI and actor Bryan Cranston issued a joint statement with Hollywood unions and agencies pledging tighter controls on Sora 2 after AI videos falsely showed Cranston in scenes he never filmed. The company apologized for the unauthorized likenesses and will work with SAG-AFTRA to add stronger protections. The group also backed the NO FAKES Act to stop AI firms from recreating performers without consent.

  • Anthropic launched Claude Code for the web, letting developers code directly from browsers instead of terminals. The tool can connect to GitHub, run multiple coding tasks at once, and keep each session in a secure workspace. Available for Pro and Max users, it offers an easier and more accessible way to manage coding projects on the go.

  • Napster unveiled Napster 26, a $99 holographic AI platform that projects 3D virtual assistants above Mac screens. It features over 15,000 AI companions, including digital “twins” that can handle meetings and online tasks. Now owned by Infinite Reality, Napster’s reboot marks a dramatic shift from music to AI-driven virtual companions.

  • OpenAI launched Atlas, a new Mac-only AI browser that integrates ChatGPT directly into web browsing. It can remember visited sites, personalize experiences, and perform web tasks through an Agent mode, though safety limits prevent unauthorized actions. Atlas aims to bring AI-assisted browsing mainstream but doesn’t yet offer major features that could replace traditional browsers.

  • Anthropic CEO Dario Amodei reaffirmed the company’s commitment to U.S. partnerships after criticism from AI czar David Sacks. Amodei cited government contracts and data showing Claude’s political neutrality, pushing back on accusations of bias and “regulatory capture.” The exchange highlights the growing political pressure on AI firms to align with national interests while maintaining public trust.

  • Nucleus Genomics introduced Origin, an AI-powered system that predicts embryo disease risks for conditions like Alzheimer’s, cancer, and diabetes. The company is open-sourcing its genetic models, a first in the IVF industry, though the screening costs around $30,000. The launch blends major medical innovation with ethical debate over accessibility and designer genetics.

  • A Future of Life Institute letter signed by tech and political figures calls for a halt to superintelligence development until it’s proven controllable and publicly approved. The signatories warned of risks like mass unemployment, loss of freedom, and even human extinction, though leaders from major AI firms were notably absent. While public concern is growing, the lack of clear definitions or enforcement methods may limit the letter’s real-world impact.

  • Amazon unveiled new AI-powered smartglasses for delivery drivers that project navigation and package details directly in their view. The glasses reduce distractions by replacing phone checks and include safety tools like an emergency button and plans for hazard detection. They promise major efficiency gains but could reignite debates over worker monitoring and data privacy.

  • Meta cut about 600 jobs across its AI division, affecting FAIR research and infrastructure teams while sparing the TBD Lab led by Chief AI Officer Alexandr Wang. The move aims to create leaner teams and follows tension over stricter research publishing rules. The reshuffle highlights Meta’s shift toward faster, more product-focused AI work under new leadership.

Milk Road Crypto

Helping everyday people get smarter about crypto investing. Learn what drives crypto markets and how to capitalize on this emerging industry.

Quickfire News

  • Uber is adding “digital tasks” to its driver app in the U.S., allowing drivers to earn extra money by doing simple AI training jobs like uploading menus or recording audio.

  • Anthrogen introduced Odyssey, a 102-billion-parameter protein language model using a “Consensus” architecture to help design and optimize proteins more efficiently.

  • OpenAI paused Sora video generations of Martin Luther King Jr. after a request from the King estate.

  • Elon Musk said he believes there’s a 10% and rising chance that xAI’s Grok 5 model could reach artificial general intelligence (AGI).

  • Meta plans to add new parental controls in 2026 for Instagram that let parents block teens from chatting with AI characters and track conversation topics.

  • Wikipedia reported an 8% drop in page views over the past year, linking the decline to AI models using its content instead of users visiting the site directly.

  • Researchers discovered that large language models can suffer “brain rot” when trained on low-quality web content, losing reasoning and safety abilities, with the damage persisting even after retraining.

  • Krea open-sourced its realtime-video model, a 14-billion-parameter system that lets users generate and restyle videos instantly through a live stream.

  • Elon Musk said xAI will delay the launch of Grokipedia until the end of the week to continue removing what he called “propaganda.”

  • DeepSeek introduced DeepSeek OCR, which compresses image-based documents by 10 times while keeping 97% of the data, allowing AI systems to process much longer files.

  • Anthropic launched Claude for Life Sciences, adding scientific connectors, lab protocol skills, and stronger biomedical task performance.

  • Google released Skills, a new learning platform offering 3,000 AI and tech courses with game-style progress and job pathways through partner companies.

  • Runway added Model Fine-tuning, letting users train and adapt its generative video models with their own data for custom applications.

  • Britain’s Channel 4 broadcast its first show with an AI host, titled “Will AI Take My Job?”, revealing the use of artificial intelligence only at the end to highlight its disruptive potential.

  • Manus launched version 1.5 of its AI agent platform, featuring full-stack web development tools and up to four times faster task performance.

  • Lovable introduced a Shopify integration that lets users create and publish online stores using plain language commands.

  • OpenEvidence is raising $200 million at a $6 billion valuation for its medical AI platform that provides evidence-based clinical answers trained on medical literature.

  • Google AI Studio released its “vibe coding” update, allowing users to create and deploy web apps quickly through natural language prompts.

  • Tencent open-sourced Hunyuan World 1.1, an AI model that can generate 3D worlds from videos or photo sets in seconds using just one GPU.

  • Google announced its Willow quantum chip ran an algorithm 13,000 times faster than leading supercomputers, marking a major performance milestone.

  • Anthropic is in talks with Google for a multibillion-dollar cloud deal granting access to custom TPU chips, expanding on Google’s prior $3 billion investment.

  • Sesame opened the beta for its iOS app featuring a voice assistant that can “search, text, and think,” alongside securing $250 million in new funding.

  • Reddit filed a lawsuit against Perplexity and three other companies, alleging they bypassed restrictions to scrape copyrighted data for AI training.

Closing Thoughts

That’s it for us this week.
