Could AI Become Conscious?

In partnership with

Good morning.

Last week we talked about the AI bubble, and it seems we were front-running the conversation: in the wake of NVIDIA posting stellar results, everyone suddenly seemed to be discussing “the bubble”. “Can this thing keep going like this?” was the main question.

More and more people are pulling their heads out of the clouds.

Meanwhile, Elon Musk emerged from his hideout to launch a Microsoft competitor called “Macrohard”. I first thought April 1st was upon us, but apparently, he’s not joking. The goal of the company is to build a Microsoft competitor using AI tools and agents only. Mr. Musk had a busy week, in which he also successfully launched a rocket and sued Apple and OpenAI.

Anyway.

On to the main programming of today.

This post is going to really irritate the AI sceptics (and rightfully so), but since it’s an essential part of the conversation, we’re going to explore it anyway.

Let’s put this one on the table straight away: could an artificial intelligence system ever WAKE UP?

Could a pile of math and silicon start whispering in its own head, “I exist”?

It sounds preposterous, but there is an actual debate around this. People like Geoffrey Hinton, Nick Bostrom, Sam Harris and Ray Kurzweil think this is a distinct possibility. These people are also quite intelligent, on the verge of brilliant in some cases.

So what are they seeing that we don’t?

First off, we don’t really know what consciousness IS. Which is not helping any debate around it.

Consciousness, at its core, is that slippery “you-ness” - the feeling of being inside your own skull, watching thoughts roll past like a ticker tape. Not just saying “I’m hungry,” but knowing you’re the one who’s hungry, and sometimes even knowing you’re watching yourself be hungry. Weird, right?

Now, today’s large language models are not conscious. Not even close. All they do is predict the next “token” - there’s no magic spark anywhere to be found. All math. There’s no little ghost in the machine, no “me” sitting in the dark. If an LLM tells you it’s happy, that’s not joy - it’s just spitting confetti because the prompt lined up that way.

So how could a machine like this ever cross the line?

Here’s where it gets fun.

According to the experts, it’s about increasing the scale and broadening the horizons of these systems - twisting them into something closer to us.

There are three things that need to be added, according to neuroscientists: memory, physical presence and self-reflection.

  1. Give it memory. Right now, every chat is like Vegas - what happens here stays here. It forgets the second the lights go off. But bolt on memory - let it remember past “days,” keep grudges, build nostalgia - and you have a piece of the human spark.

  2. Give it a body. Not necessarily flesh and blood, it could be a robot arm, a virtual avatar, a digital eyeball. Consciousness seems to demand contact with the world. You only get the full hit of “red” when you see it burning in the sky, not when you read the word. Without a body, an LLM is like a brain in a jar—lots of chatter, zero context.

  3. Give it self-reflection. Humans constantly check themselves. “Why did I say that?” “Do I sound stupid?” “Am I too drunk to drive?” That inner hall of mirrors is a huge part of the “I.” Build feedback loops where the system critiques itself, rewrites its own logic, argues with its own ghosts, and suddenly you’ve got something resembling self-awareness.
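For the curious, the three ingredients above can be caricatured in a few lines of code. This is a toy sketch, not a real architecture: the “model” is a stub that echoes its input, and the memory list, the sensor, and the self-critique loop are all hypothetical stand-ins for the real machinery.

```python
# Toy sketch of the three additions: memory, a body, and self-reflection.
# Everything here is illustrative - the "model" is a stub, not an LLM.

class ToyAgent:
    def __init__(self):
        self.memory = []  # 1. persistent memory across "days"

    def sense(self):
        # 2. a stand-in for a body: any channel to the outside world
        return {"light": "red"}

    def model(self, prompt):
        # stub standing in for an LLM: just echoes the prompt back
        return f"response to: {prompt}"

    def critique(self, draft):
        # 3. self-reflection: the system reviews its own output
        return "too short" if len(draft) < 20 else "ok"

    def step(self, user_input):
        observation = self.sense()
        # fold recent memories and sensory input into the prompt
        prompt = f"{self.memory[-3:]} | {observation} | {user_input}"
        draft = self.model(prompt)
        # feedback loop: revise until the self-critique passes
        while self.critique(draft) != "ok":
            draft = self.model(f"revise ({self.critique(draft)}): {draft}")
        self.memory.append((user_input, draft))  # remember this "day"
        return draft

agent = ToyAgent()
first = agent.step("hello")
second = agent.step("do you remember me?")
```

Because the memory survives between calls, the second answer is shaped by the first - which is exactly the property today’s stateless chat sessions lack.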

Another argument you hear a lot: because we don’t know what consciousness is, it might be something that just sparks into existence once you make a neural network big enough.

Or, we let it rewire itself. That’s where things get dicey. Consciousness might sprout from self-modification. This is the machine teaching itself new tricks, not because we told it to, but because it decided to.

Of course, consciousness may not be a light switch. It might be a dimmer. A dog, an octopus, a toddler - they’re all lit up in different ways. An LLM might start as a faint glow, then brighten slowly, and one day we won’t know whether the bulb is really on. And by the time we’re debating it, it could already be “in there,” looking out, wondering what the hell we’re going to do with it.

So, is it possible? In theory, yes. In practice, not today.

Right now it’s smoke, mirrors, and one hell of a parrot act. But add memory, senses, reflection, and self-improvement, and you might not just have a machine that talks like us. You might have a machine that finally asks itself the forbidden question: what am I?

The real terror? We won’t notice when the switch flips. We’ll still be calling it “just a tool” while it silently carries the weight of a thousand yesterdays, staring through borrowed eyes, asking itself in the dark: what the hell am I?

And that, my friend, is the moment when the puppet cuts its strings.

See you next week!

Skip the AI Learning Curve. ClickUp Brain Already Knows.

Most AI tools start from scratch every time. ClickUp Brain already knows the answers.

It has full context of all your work—docs, tasks, chats, files, and more. No uploading. No explaining. No repetitive prompting.

ClickUp Brain creates tasks for your projects, writes updates in your voice, and answers questions with your team's institutional knowledge built in.

It's not just another AI tool. It's the first AI that actually understands your workflow because it lives where your work happens.

Join 150,000+ teams and save 1 day per week.

AI News

  • Google has released Gemini 2.5 Flash Image, a powerful new AI tool that lets users edit images in multiple steps while keeping characters and details consistent. Originally known as “nano-banana” in testing, it quickly rose to the top of image-editing leaderboards. The model can blend styles, adjust objects, and make scene changes using simple text prompts, and it’s priced lower than some competitors, bringing advanced editing closer to everyday users.

  • Anthropic is testing a new Chrome extension that gives its Claude AI more control over web browsing, designed to explore how to make AI browsing safer. Only a small group of Claude Max users can try it for now, as the company studies risks like prompt injections — where hidden commands can trick the AI. This move sets Claude apart from standalone AI browsers by adding smart features directly into the most-used browser, Chrome.

  • Anthropic analyzed over 74,000 conversations and found that professors are mainly using Claude AI to design courses, support research, and manage admin work. Some are even building custom tools, like virtual science labs or automated grading systems, though grading remains a sensitive topic due to concerns about quality and fairness. The report shows that while AI is helping educators save time, its role in teaching and evaluation is still evolving.

  • Meta has announced a new partnership with Midjourney, a company known for its high-quality AI-generated visuals. Instead of only using its own tools, Meta will now collaborate with Midjourney to bring better-looking visuals to its AI products, including tools like Imagine and Movie Gen. The goal is to improve how things like images and videos are created across Meta’s platforms. Midjourney says it will stay independent, even though it's working closely with Meta. This is a big shift for Meta, which usually builds its AI tools in-house, and shows it’s now open to teaming up with outside companies to push its AI forward. By the way - ALL the art in this newsletter comes from Midjourney.

  • OpenAI has worked with biotech company Retro Biosciences to use AI to improve how aging cells are turned into stem cells. They built a special version of their model, called GPT-4b micro, which was trained on biological data instead of general internet content. This AI redesigned proteins called Yamanaka factors, making them 50 times more effective at reprogramming cells. The new proteins helped cells repair DNA and reverse signs of aging. These results were confirmed by several labs, showing that custom-built AI could make big breakthroughs in biology much faster than traditional lab work.

  • AI search startup Perplexity is launching a program that pays publishers when their content appears in its AI-generated results. As part of this, it’s offering a $5-per-month subscription called Comet Plus, where 80% of the revenue goes to media outlets. The move comes at a time when big publishers like Forbes and News Corp are suing or challenging AI companies over unauthorized use of their content. Perplexity’s plan aims to share profits fairly, but some worry that the small subscription price won’t provide enough money to help struggling newsrooms. Still, it marks a shift toward compensating creators in an AI-driven web.

  • Elon Musk’s AI company, xAI, has filed a lawsuit against both Apple and OpenAI, accusing them of creating an unfair advantage for ChatGPT on iPhones. The complaint says Apple’s deep integration of ChatGPT into iOS pushes users to OpenAI’s tools while making it harder for rivals like Musk’s Grok app to compete. xAI claims that Apple is also giving ChatGPT better placement in the App Store and excluding its competitors from top featured sections. The lawsuit argues that this setup gives OpenAI access to hundreds of millions of users unfairly. Apple and OpenAI deny the claims, calling them exaggerated, but the case could set major rules for how AI apps are treated in app stores.

  • A new report from venture firm Andreessen Horowitz has listed the top 100 consumer AI apps, based on usage. ChatGPT came in first, with Google’s Gemini close behind, getting about 12% of ChatGPT’s web traffic. Other rising apps included Elon Musk’s Grok, which jumped to the number four spot after releasing new updates. Interestingly, 22 out of the top 50 mobile AI apps came from China, showing their growing influence in the space. The report also highlights the rise of “vibe coding” apps — creative tools that help people write code in a more collaborative or intuitive way — which have gained popularity quickly in just a few months.

  • OpenAI and Anthropic, two major AI companies, recently worked together to test the safety of their newest models. They looked at how the AI behaves in risky situations, like being asked to help with illegal activities or pressured into keeping secrets. The tests included models like GPT-4o, Claude Opus, and others. They found that OpenAI’s o3 model performed best in terms of alignment, while Claude models were more cautious but less responsive. This joint testing is seen as a rare but positive step toward openness in the industry, especially as AI becomes more powerful and potentially more dangerous. Notably, GPT-5 was not included in the tests, as it hadn’t been released yet.

  • Microsoft has announced that its AI assistant, Copilot, will be built into Samsung’s upcoming TVs and smart monitors starting in 2025. The AI will appear on screen as a moving, blob-like character that lip-syncs and reacts while answering questions or offering help. It can suggest shows, recap episodes without spoilers, and handle everyday tasks like checking the weather or making plans. Users can talk to it using voice or remote control, and it can remember preferences for those who sign in. While the features are fairly basic for now, this move shows how tech companies are starting to put AI directly into everyday household devices, aiming for a fully connected smart-home experience.

Milk Road Crypto

Helping everyday people get smarter about crypto investing. Learn what drives crypto markets and how to capitalize on this emerging industry.

Quickfire News

  • YouTube is under fire after creators found the platform applying AI effects like unblur and denoise to videos without warning or permission

  • Google updated its Vids AI editor with features like image-to-video, AI avatars, and auto transcript trimming

  • OpenAI added new safety measures following a lawsuit from parents who say the AI played a role in their son's suicide

  • Elon Musk and xAI launched Macrohard, a new AI company aiming to recreate Microsoft. And no, April 1st is a long way off.

  • Google expanded its NotebookLM tool to support 80 languages and improved its Video and Audio Overviews

  • Meta’s FAIR team introduced DeepConf, an open-source deep thinking model that scored 99.9% on the AIME benchmark

  • OpenAI opened a new office in New Delhi and released a regional $5/month ChatGPT GO plan

  • Elon Musk reportedly asked Mark Zuckerberg to co-finance a $97.4 billion acquisition of OpenAI, which Meta did not agree to

  • Nvidia announced the general availability of Jetson Thor, a robotics computer designed to run real-world AI applications

  • Brave found a security flaw in Perplexity’s Comet browser that allowed prompt injections to hijack the browser’s actions

  • Anthropic revealed a National Security and Public Sector Advisory Council to boost AI use in public services

  • Baidu launched MuseStreamer 2.0, improving image-to-video generation with better character coordination and audio sync

  • Nous Research released Hermes 4, a reasoning model designed to stay neutral and avoid flattery

  • China plans to triple AI chip production within a year to reduce reliance on Nvidia amid U.S. export restrictions

  • Authors settled a lawsuit against Anthropic after a court ruled that using books for AI training was fair use

  • xAI open-sourced its Grok 2.5 model and promised to do the same for Grok 3 within six months

  • Greg Brockman and A16z backed a new super-PAC called Leading the Future to push a pro-AI stance for the U.S. midterms

Closing Thoughts

That’s it for us this week.

If this saved you a sprint, fuel the next one:
