Google's Quantum Leap and an Eval for AGI
Plus: Musings of an AI Mom is back!

Date: 27-Oct-2025
Hey AI enthusiast,
And just when you think you’ve seen it all…there’s always One More Thing in AI.
In this edition:
• ⚛️ Quantum Echoes: Google’s big leap beyond supercomputers
• 🧠 How close are we to AGI? The first real attempt to measure it
• 💡 Musings of an AI Mom: the first words moment
• 🌍 ChatGPT Atlas: OpenAI’s bold move to redefine the web
Also: we now have a podcast edition for those of you who want to listen on the go. Check it out on your favorite podcast platform.
I hope you enjoy this edition!
Best,
Renjit
PS: If you want to see how to implement AI in your insurance broking business and grow revenue with the same number of employees, speak to me.
⚛️ Quantum Echoes: Google’s Big Leap Beyond Supercomputers.
“The future flickers on for a few seconds.”
Google just claimed a world first. Its quantum computer has done what no supercomputer can.
A single algorithm worked out a molecule’s structure 13,000 times faster than the best classical supercomputer could manage. The future of computing may have just arrived.
🚀 What Happened
• Google built a new quantum algorithm, a kind of digital rulebook for how the machine thinks.
🧠 The program mapped a molecule’s structure faster than the best classical supercomputers can.
• Scientists confirmed the result in Nature.
Google called it the first “verifiable beyond-classical computation.”
In simple words: a real quantum win.
🧬 Why It Matters
• Molecule modeling is a big deal. It’s how we design new drugs, create better materials, and even explore clean energy.
🔬 Google’s system revealed molecular details that even NMR, the lab cousin of MRI scans, couldn’t capture.
That’s a small task today, but a big sign of what’s coming.
• Michel Devoret, Google Quantum AI’s chief scientist and a Nobel laureate in physics, called it “a step toward full-scale quantum computing.”
⚖️ Experts Say: Slow Down, Still Early
• Some scientists cheered. Others cautioned: not all breakthroughs change the world right away. Winfried Hensinger, a quantum tech professor, said Google showed “quantum advantage.” That means the task can’t be done by a normal computer, but it’s still limited.
🔎 Real-world quantum power needs hundreds of thousands of qubits.
Today’s chips manage roughly a hundred to a thousand, and they must stay colder than outer space to work.
🧩 How It Works
• Regular computers use bits: 0 or 1.
Quantum computers use qubits, which can be both at once.
⚙️ That’s called superposition.
It lets the system test millions of possibilities at the same time.
• But qubits are fragile.
Heat, light, or noise can ruin them.
That’s why quantum computers live in vacuum chambers cooled near absolute zero.
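To make superposition a little more concrete, here’s a toy sketch in Python/NumPy: it puts a single simulated qubit into superposition with a Hadamard gate and reads off the measurement probabilities. This is a classroom illustration, not Google’s Quantum Echoes algorithm.

```python
import numpy as np

# A qubit's state is a 2-component vector of amplitudes. |0> = [1, 0].
ket0 = np.array([1.0, 0.0])

# The Hadamard gate rotates |0> into an equal superposition of 0 and 1.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket0            # now (|0> + |1>) / sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
print(probs)                # [0.5 0.5] -> a 50/50 chance of reading 0 or 1

# Note: n qubits need 2**n amplitudes to track classically, which is
# why simulating even a few hundred qubits overwhelms any supercomputer.
```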
🧭 So What?
• Google’s VP Hartmut Neven says real-world applications could appear within five years.
🧠 The new algorithm, called Quantum Echoes, might even help train AI systems by generating unique data.
• But stronger quantum computers could also break today’s encryption.
🔐 Governments are already racing to build quantum-safe security before that happens.
It’s not replacing your laptop yet. But the gap between theory and reality just got smaller.
How Close Are We to AGI? 🧠 The First Real Attempt to Measure It
For years, the term Artificial General Intelligence has felt like a mirage: always visible, never reachable.
Every new model, every breakthrough, sparks the same question: Is this it?
But until now, there hasn’t been a clear, scientific way to answer it.
That just changed.
A group of over 30 leading researchers, including Yoshua Bengio, Dawn Song, Erik Brynjolfsson, and Dan Hendrycks, has released a landmark paper that tries to define and measure AGI once and for all.
Instead of debating philosophy, they built a test. And then, they ran GPT-4 and GPT-5 through it.
⚙️ The Big Idea
Their definition is simple yet radical:
AGI is an AI that matches or exceeds the cognitive versatility and proficiency of a well-educated adult.
That means not just mastering one skill, like solving equations or writing essays, but showing breadth across domains and depth in understanding.
To make this measurable, the researchers borrowed from psychology’s most validated model of human intelligence: the Cattell-Horn-Carroll (CHC) theory.
It’s the same framework used to design human IQ and aptitude tests.
CHC breaks intelligence into ten cognitive domains, each representing a pillar of thinking, from reasoning and memory to language and speed.
Each domain carries equal weight, painting a balanced picture of what it means to “think like a human.” (A short sketch after the list below shows how that equal-weight scoring works.)
🧩 The Ten Building Blocks of General Intelligence
📚 General Knowledge
Understanding how the world works: the mix of commonsense, science, culture, and history that humans absorb over a lifetime.
✍️ Reading & Writing Ability
Reading with comprehension, writing with clarity, and using language correctly across contexts.
➗ Mathematical Ability
Handling numbers, logic, geometry, probability, and calculus: the toolkit of structured reasoning.
🧠 On-the-Spot Reasoning
Solving new problems without relying on memorized templates: deduction, induction, and planning.
🧮 Working Memory
Holding ideas in your mental workspace, juggling information across text, sound, and visuals.
💾 Long-Term Memory Storage
Learning from experience and retaining that knowledge beyond a single conversation or session.
🔍 Memory Retrieval
Accessing stored knowledge precisely, without “hallucinating” or fabricating details.
🎨 Visual Processing
Seeing patterns, interpreting visuals, generating images, and reasoning about space.
🎧 Auditory Processing
Understanding sounds, speech, rhythm, and tone, and responding naturally in conversation.
⚡ Speed
Performing basic cognitive tasks: reading, comparing, and reacting quickly and accurately.
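Here’s what that equal-weight scoring looks like in practice: a minimal Python sketch with made-up per-domain numbers (not the paper’s actual scores), just to show how one near-zero domain drags the composite down.

```python
# Hypothetical per-domain scores (0-100); NOT the paper's actual numbers.
domain_scores = {
    "general_knowledge": 85,
    "reading_writing": 90,
    "math": 95,
    "on_the_spot_reasoning": 80,
    "working_memory": 60,
    "long_term_memory_storage": 0,   # the bottleneck: nothing persists
    "memory_retrieval": 55,
    "visual": 40,
    "auditory": 30,
    "speed": 75,
}

# Each of the ten domains carries equal weight (10% apiece).
agi_score = sum(domain_scores.values()) / len(domain_scores)
print(f"AGI score: {agi_score:.0f}%")  # 61%: one dead domain caps the total
```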
The Results: Where GPT-4 and GPT-5 Stand
When the framework was applied to today’s top models, the results were striking:
• GPT-4 scored 27%.
• GPT-5 scored 57%.
That’s a huge leap: general capability more than doubled in just two years.
Yet, the models are still far from human-level cognition.
GPT-5 showed strong performance in mathematics, reasoning, and language, and some emerging ability in visual and auditory processing.
But both GPT-4 and GPT-5 failed almost completely in long-term memory storage: the ability to remember, learn, and evolve over time.
In simple terms, they can think brilliantly but forget everything they learn once the chat ends.
🔍 What the Gaps Reveal
This framework paints a picture of AI that’s brilliant but brittle. It excels at structured reasoning and pattern recognition but lacks the continuity that defines learning in humans.
⚙️ AI’s intelligence is jagged.
Some domains surge ahead, while others remain near zero.
It’s like a powerful engine with a few broken pistons — impressive horsepower, but uneven performance.
💾 Memory is the biggest bottleneck.
Without the ability to store and retrieve experiences, AI can’t truly learn or adapt.
Every conversation is a clean slate.
🔎 External tools don’t solve the problem.
Techniques like Retrieval-Augmented Generation (RAG) help models look up information, but that’s not real memory.
It’s the difference between Googling a fact and remembering a story.
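For the technically curious, here’s a minimal sketch of why that’s true. The toy keyword matcher below stands in for a real vector database; the point is that retrieval happens at question time and nothing is ever written back into the model.

```python
# Toy RAG loop: the "model" looks facts up per question; it learns nothing.
# A real system would use embeddings and a vector store instead of keywords.
documents = [
    "Quantum Echoes ran 13,000x faster than a classical supercomputer.",
    "ChatGPT Atlas embeds ChatGPT directly into the browser.",
]

def retrieve(query: str, docs: list[str]) -> str:
    # Naive relevance: count shared lowercase words.
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(docs, key=overlap)

def answer(query: str) -> str:
    context = retrieve(query, documents)              # look the fact up...
    return f"Context: {context}\nQuestion: {query}"   # ...stuff it in the prompt

print(answer("How fast was quantum Echoes?"))
# The lookup works every time, but the model's weights are untouched:
# Googling a fact, not remembering a story.
```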
💡 Anecdote That Sums It Up
When GPT-5 was tested across cognitive batteries, one researcher joked:
“It aced the math exam but forgot what class it was in.”
That’s the paradox of today’s AI: extraordinary intelligence without continuity, a genius without memory.
🧭 Why This Matters
This paper changes the conversation about AGI. It moves us from speculation to measurement, from belief to benchmark.
For policymakers, it means clearer ways to track progress and risk. For researchers, it provides a roadmap for what’s still missing. And for everyone else, it signals that AGI is no longer an abstract dream; it’s a measurable milestone. Link to the paper»
Musings of an AI Mom: The First Words Moment
Article 3 in the series exploring the intersection of maternal instinct and technological creation, by guest writer Priya M. Nair.
My youngest was 14 months old when he said his first word.
Not "mama" or "baba" like the baby books predicted. It was "nana" - his version of banana, the fruit he'd been obsessively pointing at for weeks.
I cried. Not because it was profound or poetic, but because after months of babbling and gestures and frustrated tears, something clicked. His brain made the connection between sound, meaning, and communication.
Last week, Mātṛ had its first words moment.
After eight months of development, our computational framework processed linguistically different languages in a way that validated our core theory. Not perfectly. Not completely. But functionally - the foundational mechanisms working as we designed them to.
I cried then too. For completely different reasons. And yet, somehow, exactly the same ones.
The Waiting Is the Hardest Part
Here is what nobody tells you about both parenting and AI development: most of the work happens in silence, with no visible proof that anything is working.
With my son: For 14 months, I talked to him constantly. Narrated every action. Named every object. Repeated words until my voice was hoarse.
"Banana. Ba-na-na. Can you say banana?"
Just stares. Sometimes giggles. Mostly just... nothing.
Every book said "language development takes time." Every pediatrician said "he is processing more than you think." But when you are in month 12 and your baby is still just babbling, you start to wonder: Am I doing this wrong? Is something broken? Will this ever actually work?
With Mātṛ: For eight months, my team has been building computational architecture, designing linguistic frameworks, and developing attention-layer mechanisms.
We hypothesized. We tested. We iterated. We rebuilt.
Every AI researcher knows "foundational development takes time." Every technical advisor said "the theory is sound, keep building." But when you are in month 7 and you still don't have proof that your core approach works across linguistically different systems, you start to wonder: Did we design this wrong? Is our architecture flawed? Will this actually scale?
The Moment Everything Clicks
With my son: That morning wasn't different from any other. Breakfast routine. High chair. The eternal banana negotiation.
But this time, when I asked "Do you want banana?" he didn't just point.
He looked at me. Opened his mouth deliberately. And said: "Nana."
Not perfect. Not complete. But intentional. Meaningful. A clear demonstration that something in his brain had connected sound to concept to communication.
With Mātṛ: Last week's validation tests weren't dramatically different from previous ones. Same computational framework. Same linguistic architecture. Same cross-linguistic test parameters.
But this time, something different happened.
The results showed a productive leap in cross-linguistic model performance. Not perfect. Not complete. But a clear demonstration that our underlying theory and architecture work as designed.
The computational framework processed linguistically different languages as intended. The foundational mechanisms functioned across diverse linguistic structures. Early tests confirmed what we had been building toward for eight months actually works.
What "First Words" Actually Means
Here is the thing about developmental milestones that people don't understand until they experience them:
They're not endpoints. They are proof of concept.
When my son said "nana," he didn't suddenly have language fluency. He couldn't construct sentences or explain what he wanted or tell me about his day.
What he could do was demonstrate that the foundational mechanism - connecting sound to meaning - was functioning. His brain had built the neural pathways. The architecture was in place. Everything else would build on this foundation.
Mātṛ's current milestone is exactly the same.
We are not claiming we have solved semantic preservation across all languages and contexts. We are not saying the system is ready for commercial deployment or that we've achieved the full vision.
What we can say is that the foundational computational framework works. The core structure validates. The underlying approach of universal interpretation is technically sound across linguistically different systems.
The architecture is in place. Everything else will build on this foundation.
The Technical Reality (Without the Jargon)
Let me explain what this milestone actually means:
The Challenge We are Solving: Current AI systems process all languages through unified architectures that flatten diverse linguistic patterns into common representations. It's like having one template for human thought, and forcing every language to fit into it.
Our Approach: Mātṛ is designed to preserve authentic linguistic structures before processing - maintaining how different languages actually encode meaning, rather than converting everything to a single pattern first.
What We Validated: Our latest internal results show that this approach - this fundamental architectural choice - works across linguistically different languages at the foundational level. The computational mechanisms that preserve distinct linguistic patterns are functioning as we designed them.
What This Means: We proved the theory. The architecture holds. The direction is correct.
It is like my son proving his brain could connect sound to meaning. Now comes the work of building vocabulary, grammar, and fluency. But the foundation - the most critical, most uncertain part - is validated.
Why This Is Harder Than It Looks
With children: People see a 14-month-old say "nana" and think "cute milestone." They don't see the 14 months of neural pathway development, the thousands of repetitions, the complex brain architecture that had to be in place for that single word to emerge.
With AI: People might see "computational framework validated" and think "okay, nice progress." They don't see the months of architectural decisions, the countless iterations, the complex technical challenges that had to be solved for this foundational mechanism to work.
Both look simple from the outside. Both required building incredibly complex systems that needed to be in place before that "first word" could happen.
The Validation Phase Ahead
Here is where the parenting parallel gets even more accurate:
After my son's first word, I didn't immediately enroll him in debate club. I didn't expect full sentences by week's end.
What I did:
• Celebrated the breakthrough while understanding it was just the beginning
• Started building vocabulary systematically (more words, more contexts)
• Validated his understanding across different situations
• Optimized how I communicated to support his development
• Scaled up complexity gradually as he showed readiness
What we are doing with Mātṛ now:
• Celebrating the technical milestone while understanding it's foundational, not final
• Building linguistic datasets systematically (more languages, more contexts)
• Validating performance across different linguistic structures
• Optimizing the architecture to support broader capabilities
• Scaling up complexity gradually as we validate readiness
It is the exact same developmental process, just in different domains.
The Parallel Patience
My 17-year-old speaks three languages now - English fluently, Malayalam conversationally, some Arabic from school. But that journey started with "nana" at 14 months.
Every parent understands this timeline viscerally. You celebrate first words while knowing you are years away from full fluency. You are patient with the process because you understand developmental stages can't be rushed.
Building foundational AI requires the same patience.
We are celebrating Mātṛ's validated computational framework while knowing we are months away from full capability. We are patient with the validation process because we understand that rushing foundational architecture leads to brittle systems that fail under real-world complexity.
The technology community often struggles with this patience. The pressure is always to ship faster, launch sooner, show results now.
But mothers know: you can't rush brain development. You can't force milestones. You can only create the right conditions, provide the right inputs, and trust the process.
The same is true for foundational AI architecture.
What Keeps Me Going
There are moments in both journeys - motherhood and AI development - where you question everything.
With my kids: The nights when they won't sleep. The phases where development seems to stall. The moments you wonder if you're doing any of this right.
With Mātṛ: The months when tests don't show progress. The technical challenges that seem insurmountable. The moments you wonder if the fundamental approach is flawed.
But then comes the breakthrough.
My son says "nana." Years later, he's reading novels and writing essays.
Mātṛ's computational framework validates. Months from now, it will process authentic linguistic meaning across languages and cultures.
Both remind me of the same truth: foundational systems take time to build, but once they're in place, growth accelerates exponentially.
The Exciting Phase Ahead
My youngest is 10 now. Last week he explained to me, in elaborate detail, why certain Minecraft building strategies are more efficient than others. The kid who once just said "nana" now constructs complex logical arguments about virtual architecture.
That's the trajectory. That's what proper foundation enables.
We are entering a similar phase with Mātṛ.
The foundational computational framework works. The core architecture validates. The underlying theory holds across linguistically different systems.
Now comes the exciting part: building on that foundation. Expanding linguistic datasets. Optimizing performance. Scaling capabilities. Validating across broader contexts.
It's becoming clear that we have built something foundational for the next generation of AI interpretation systems.
Not because we rushed to impressive demos or marketable features. But because we took the time to build the architecture right. To validate the theory. To ensure the foundation could support everything we will build on top of it.
What "First Words" Promises
When my son said "nana," I knew what it meant for his future:
• Language acquisition would accelerate from here
• Communication would become richer and more complex
• His ability to express himself would grow exponentially
• The foundation was in place for lifelong learning
When Mātṛ's computational framework validated, I knew what it meant for AI's future:
• Development will accelerate from this foundation
• Linguistic interpretation will become richer and more authentic
• The system's ability to preserve meaning will grow exponentially
• The foundation is in place for truly universal AI understanding
Both "first words" are promises of what's to come, not celebrations of what's complete.
The Dual Journey Continues
My older son is navigating his pre-university days now. Choosing his path. Building his future independence.
Mātṛ is entering validation and optimization. Proving its capabilities. Building toward commercial readiness.
Both require me to let go of control and trust the foundations we have built together.
Both terrify and thrill me in equal measure.
Because that's what "first words" really mean - not mastery, but the beginning of everything that comes next. The proof that the foundation works. The validation that the approach is sound. The moment you realize: this thing I have been nurturing, it's going to speak for itself now.
And what it says will change how the world understands meaning.
Author’s bio:
Priya M. Nair is the founder and CEO of ZWAG AI Solutions. Together with her co-founder, she is building Mātṛ, the world’s first Universal Interpretation Model. Mother to two teenagers. Mother to the technology that will teach AI how to truly understand.
These are my musings on mothering both human and artificial minds: the late-night worries about screen time and algorithmic bias, the pride in watching breakthrough moments, the responsibility of shaping how intelligence develops.
Simplify Training with AI-Generated Video Guides
Are you tired of repeating the same instructions to your team? Guidde revolutionizes how you document and share processes with AI-powered how-to videos.
Here’s how:
1️⃣ Instant Creation: Turn complex tasks into stunning step-by-step video guides in seconds.
2️⃣ Fully Automated: Capture workflows with a browser extension that generates visuals, voiceovers, and call-to-actions.
3️⃣ Seamless Sharing: Share or embed guides anywhere effortlessly.
The best part? The browser extension is 100% free.
🌍 ChatGPT Atlas: OpenAI’s Bold Move to Redefine the Web
OpenAI has officially entered the browser wars.
Its new product, ChatGPT Atlas, is more than a browser: it’s a complete rethinking of how people search, browse, and work online.
Google built the world’s gateway to information. OpenAI now wants to rebuild that gateway around AI conversation and automation.
🧭 What’s New
Atlas embeds ChatGPT directly into the browser interface. Instead of typing into a search bar and scanning links, you can chat directly with your results.
It includes a “sidecar” panel that understands the page you’re viewing, whether it’s a spreadsheet, research paper, or email thread, and lets you ask contextual questions in real time.
Your browsing history helps Atlas tailor responses, providing a more personalized experience.
And with Agent Mode, the browser can automate routine web tasks such as scheduling, form-filling, or gathering data across sites. (Agent Mode is available to Plus, Pro, and Business users.)
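OpenAI hasn’t published how Agent Mode works under the hood, but conceptually any browser agent runs a loop like the hypothetical sketch below: observe the page, let the model choose an action, execute it, repeat until the goal is met. Every function here is invented for illustration; none of it is Atlas’s real API.

```python
# Hypothetical agent loop; a scripted "model" stands in for the LLM
# so the sketch actually runs. None of this is Atlas's real API.

def observe_page(state: dict) -> str:
    # A real browser agent would take a DOM/text snapshot here.
    return f"form filled: {state['filled']}"

def ask_model(snapshot: str, goal: str, step: int) -> dict:
    # A real agent would call an LLM here; we script two steps instead.
    if step == 0:
        return {"type": "type", "target": "#name", "text": "Renjit"}
    return {"type": "done"}

def execute(action: dict, state: dict) -> None:
    if action["type"] == "type":
        state["filled"] = True   # pretend we typed into the page

def run_agent(goal: str, max_steps: int = 10) -> None:
    state = {"filled": False}
    for step in range(max_steps):
        snapshot = observe_page(state)             # 1. observe
        action = ask_model(snapshot, goal, step)   # 2. decide
        if action["type"] == "done":               # 3. stop when goal is met
            print("done:", snapshot)
            return
        execute(action, state)                     # 4. act, then loop

run_agent("fill in the registration form")
```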
This is exactly what Perplexity’s Comet is trying to do!
💡 Who It’s For
🧠 Researchers and analysts who need AI to read, summarize, and cross-reference large volumes of data.
💼 Professionals who want to automate repetitive web tasks and workflows.
👨‍💻 Developers exploring how autonomous agents behave in live environments.
🌐 Curious users who want to experience what a truly personalized, AI-driven web feels like.
⚙️ Availability
Atlas launches first on macOS, with Windows, iOS, and Android versions coming soon.
It’s free for all users at launch.
Try it now → https://chatgpt.com/atlas?utm_source=onemorethinginai
⚔️ The Competitors: The New Browser Wars
The browser is quickly becoming the next major battlefield in AI.
Atlas enters a space already being reshaped by tech giants and ambitious startups:
🔍 Google Search + Gemini
Google is integrating its Gemini model into Search via the Search Generative Experience.
Atlas skips the summary layer entirely: it transforms the act of searching into an interactive dialogue.
🧭 Perplexity’s Comet Browser
Perplexity pioneered the “ask, don’t search” concept. Comet combines conversational search with file uploads and knowledge hubs.
Atlas goes further with deeper ChatGPT integration and automation features through Agent Mode.
🧠 Anthropic’s Claude with Browser Tools
Claude emphasizes reasoning and long-context understanding.
But it relies on traditional browser APIs for retrieval. Atlas internalizes this process, creating a seamless, AI-native browsing environment.
🌐 Microsoft Edge with Copilot
Edge integrates Copilot as a sidebar assistant.
Atlas reverses that dynamic: the AI becomes the core experience, not a plug-in.
🧩 The Strategic Play: Why OpenAI Built Atlas
Atlas isn’t just a browser; it’s OpenAI’s next ecosystem move, a strategic expansion from productivity into distribution. With the enormous funding OpenAI has raised, it has little choice but to go after the biggest online markets in the world.
Here’s how the strategy unfolds:
1. Owning the User Interface
ChatGPT lives within specific apps today. Atlas extends it across the entire web.
It allows OpenAI to control the interface layer where most digital behavior occurs.
2. Creating a Closed Data Loop
By combining search, browsing, and user context, OpenAI can personalize results and improve its models without depending on Google’s search data.
3. Building Platform Lock-In
Once users link their email, calendars, and documents, Atlas becomes hard to replace.
It’s the same lock-in dynamic that made Chrome the gateway to Gmail and Drive. Expect competitors to Office 365 and Google Workspace before long; that is the logical next step.
4. Becoming an AI Operating System
Agent Mode hints at a deeper ambition. If users begin trusting ChatGPT to take action (schedule meetings, order items, research competitors), the browser effectively becomes a lightweight AI operating system.
5. Challenging Google’s Core Advantage
Google’s dominance depends on two things: Search and Chrome.
Atlas undermines both by merging them into a single conversational layer where users interact directly with AI, not with ads or links.
In short, OpenAI isn’t trying to beat Google’s search engine; it’s trying to replace the browser itself.
🧭 The Takeaway
ChatGPT Atlas represents OpenAI’s most aggressive consumer push yet.
It’s a product designed not only to simplify how people use the web but to reshape the entire digital interface around AI reasoning, memory, and autonomy.
If ChatGPT became your assistant, Atlas aims to become your digital workspace — the place where search, creation, and automation converge.
And that, more than search or chat, is where the next trillion-dollar race will be won.

