
The Singularity Monthly: Chasing AGI

We’re now several years into the generative AI era. AI companies are attracting billions of dollars in investment, have hundreds of millions of users, and are experiencing rapid revenue growth. Though corporate profitability and the technology’s ultimate impact on the economy are uncertain, it’s clear that many people are using AI in a variety of ways.

But selling a new software product isn’t the industry’s end game. The companies at the heart of the AI boom have publicly stated they're aiming for artificial general intelligence. So it’s worth asking, “What comes next?” If you had surveyed the comments of AI CEOs a few years ago, the answer would have been, “Nothing. This technology will get us to AGI. It could happen soon. We just need more scale.”

That belief now seems less prevalent.

Several CEOs, including Microsoft’s Satya Nadella and OpenAI’s Sam Altman, have been downplaying the term AGI—which has, admittedly, always been vaguely defined—when talking about the technology’s future. More importantly, AI luminaries, such as Ilya Sutskever and Yann LeCun, say they’re now looking to make breakthroughs that take us beyond the large language models (LLMs) central to generative AI.

Sutskever, who cofounded OpenAI and has been deeply involved in the most consequential AI developments of the last two decades, is perhaps the most notable defector from that view. But LeCun, who shared the Turing Award with Geoffrey Hinton and Yoshua Bengio for pioneering work on deep learning, has also long said LLMs won’t yield human-level AI. Both have founded new startups on their convictions.

In a fascinating podcast discussion with Dwarkesh Patel published at the end of last year, Sutskever noted that today’s models still don’t generalize well, and there’s a large gap between impressive paper evaluations—like an AI model passing an advanced medical exam—and real-world performance—a model encouraging a man to ingest dangerous amounts of bromide when asked for health advice. “I think what people are doing right now will go some distance and then peter out,” he said. “It will improve, but it will also not be ‘it.’”

According to Sutskever, continual learning will be a critical aspect of what “it” is. Foundation models undergo a huge amount of training, both the automated kind, where they’re stuffed with data, and the hands-on kind, where engineers and experts later fine-tune them. But the core algorithms are largely static after launch. This is, of course, nothing like the constantly rewiring connections in the human brain. We’re not born knowing calculus; we steadily gain such skills as we grow.
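The contrast can be caricatured in a few lines of code. In the toy sketch below, every class and number is invented for illustration: a frozen model keeps its launch-time weights forever, while a continual learner nudges them after each interaction it observes.

```python
# Toy contrast between a launch-frozen model and a continual learner.
# All names and numbers here are illustrative, not any real system.

class FrozenModel:
    """Weights are fixed at launch, like today's foundation models."""
    def __init__(self, weight):
        self.weight = weight

    def predict(self, x):
        return self.weight * x


class ContinualModel(FrozenModel):
    """Also nudges its weight after every interaction it observes."""
    def __init__(self, weight, lr=0.1):
        super().__init__(weight)
        self.lr = lr

    def observe(self, x, target):
        # Simple online gradient step toward the observed target.
        error = target - self.predict(x)
        self.weight += self.lr * error * x


frozen = FrozenModel(weight=0.5)
learner = ContinualModel(weight=0.5)

# A stream of experience: the true relationship is target = 2 * x.
for x in [1.0, 2.0, 3.0, 1.5]:
    learner.observe(x, target=2.0 * x)

# The frozen model's error never shrinks; the learner's keeps falling.
print(abs(4.0 - frozen.predict(2.0)))   # stays at 3.0
print(abs(4.0 - learner.predict(2.0)))  # already much smaller
```

The point of the sketch is only the shape of the loop: experience arrives after deployment, and the model's parameters keep moving in response, the way Sutskever suggests a superintelligent “15-year-old” might learn a trade on the job.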

Sutskever declined to say what his company, Safe Superintelligence (SSI), is working on—such is the state of an industry locked in fierce competition—but he did sketch out a vision of what a completed product might be like. Instead of a fully formed, superintelligent agent, he suggested it might be more like a superintelligent 15-year-old with a basic toolset but little experience in the world. Then, through a process of continual learning, the AI might go out and learn trades, say doctor or lawyer, the way we do. In this scenario, AGI wouldn’t explode on the scene fully formed.

LeCun, meanwhile, is focused on world models. These algorithms develop an internal representation of the world and use it to reason about their surroundings and plan actions. JEPA, the framework LeCun built at Meta, does this by scarfing down video data. “It learns the underlying rules of the world from observation, like a baby learning about gravity,” LeCun told MIT Technology Review in an interview. “This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world.” LeCun left Meta in November and founded his new company, Advanced Machine Intelligence (AMI), shortly thereafter.
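The core idea is simple enough to caricature: watch transitions, fit an internal model of the dynamics, then roll that model forward to predict states you have never observed. The sketch below is a toy illustration of that loop with made-up numbers, not JEPA itself.

```python
# Toy world-model loop: learn the dynamics of a falling object from
# observed transitions, then use the learned model to predict beyond
# the data, the "baby learning about gravity" picture in miniature.

G = 9.8  # true (hidden) gravitational acceleration, m/s^2

# Observed velocity trajectory, sampled once per second.
observations = [G * t for t in range(6)]  # [0.0, 9.8, 19.6, ...]

# "Training": estimate the per-step change from adjacent observations.
deltas = [b - a for a, b in zip(observations, observations[1:])]
learned_accel = sum(deltas) / len(deltas)

# "Planning": roll the internal model forward from the last observed state.
def predict(steps, v0):
    v = v0
    for _ in range(steps):
        v += learned_accel
    return v

print(round(learned_accel, 1))                 # recovers ~9.8
print(round(predict(4, observations[-1]), 1))  # velocity 4 seconds on
```

Real world models learn far richer representations from video rather than a single scalar, but the division of labor is the same: an internal model of how the world evolves, reused for prediction and planning.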

Both researchers also seem to agree on timing: Human-level, human-like AI isn’t going to arrive this year or next. In his podcast interview, Sutskever suggested such AI is probably something like 5 to 20 years out. LeCun agrees. “It’s going to take a while. There are major conceptual breakthroughs that have to happen before we have AI systems that have human-level intelligence,” he said in the interview.

SSI and AMI are just two of the most prominent examples of a crop of research-first AI startups called neolabs. These companies number in the dozens, according to the Wall Street Journal, and are attracting billion-dollar valuations with no product or business in sight. Rather, investors are hoping to get in early on the next OpenAI. “I am very interested in today’s 22-year-old who’s going to spend the next 10 years trying to find AGI,” Sequoia Capital partner David Cahn told the Wall Street Journal.

This doesn’t mean the next advance won’t arise at one of the big players. Google, where the transformer architecture was invented, is well positioned to push the field forward with Google DeepMind, even as it develops generative AI as a product too. Further, generative AI, as it stands, will continue to have a major impact. Indeed, with AI’s coding prowess, it seems plausible the technology may even speed up AI research.

Take the ruckus over Claude Code as an example. Since Anthropic updated the tool last year, developers and hobbyists alike have been singing its praises. Anthropic engineers say they already use it to produce a notable fraction of their code. Of course, AI isn’t doing the work alone; we’re not there yet. If you want polished products, you need qualified developers holding the reins and keeping a close eye on the end result. But with the guidance of these top coders, it’s possible AI coding tools might speed up experimentation and iteration. Using the latest generation of tools to bring the next generation into existence is a very old story in technology.

As the field enters a new “era of research,” as Sutskever calls it, LLMs might not themselves evolve into AGI, but they will quite possibly be an essential tool in the hands of those racing to take the next steps toward it.

MORE NEWS

Early error-corrected quantum computers are on the way.

For years, we’ve been stuck in the NISQ era of quantum computers: Noisy, intermediate-scale, quantum machines. These computers boast as many as 1,000 physical qubits but are too delicate for practical work of commercial value. Soon, publicly available quantum computers may enter the next phase, in which they correct errors. Two of these, both neutral-atom machines, are expected from Atom Computing and QuEra. Atom’s Magne quantum computer will boast 50 logical (or error-corrected) qubits made from around 1,200 physical qubits and arrive in early 2027. QuEra will deliver a 37-logical-qubit computer to Japan’s AIST later this year.

Can CRISPR knock out any flu, anywhere, for all time?

The gene-editing tool CRISPR comes in a variety of flavors depending, in part, on the type of protein scissors it employs. One of these proteins, known as Cas13, snips RNA instead of DNA. Last October, virologist Wei Zhao suggested we might co-opt this strategy to fight the flu. A hypothetical treatment would deliver CRISPR-Cas13 to respiratory cells, where it would knock out a conserved region of the flu virus’s RNA, preventing the virus from multiplying. If we choose an RNA region critical to flu’s survival, it would likely be shared across strains and resist evolution, making the treatment effective today and into the future. It’s just a concept with plenty of open questions—but one that hints at a universal treatment.
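The targeting logic is easy to sketch: a guide aimed at a subsequence present in every strain would, in principle, work against all of them. In the toy example below, the strain sequences are made up purely for illustration.

```python
# Toy illustration of why a conserved target generalizes across strains.
# The RNA sequences below are invented, not real flu genomes.

strains = {
    "strain-A": "AUGGCCCGAUGGAACUUU",
    "strain-B": "AUGACCCGAUGGAGCUUA",
    "strain-C": "AUGUCCCGAUGGAUCUUG",
}

def conserved_kmers(seqs, k):
    """Return every length-k subsequence shared by all sequences."""
    kmer_sets = []
    for s in seqs:
        kmer_sets.append({s[i:i + k] for i in range(len(s) - k + 1)})
    return sorted(set.intersection(*kmer_sets))

targets = conserved_kmers(strains.values(), k=9)
print(targets)  # ['CCCGAUGGA'] — the one region all strains share
```

A guide matching that shared region hits every strain at once; and if the region is critical to the virus’s survival, mutations that escape the guide would tend to break the virus itself, which is the argument for durability.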

Scientists sequence a full woolly rhino genome found in a mummified wolf.

Researchers extracted the 14,400-year-old genome from a partially digested slab of meat in a frozen wolf pup’s stomach. The team faced the tricky task of separating the rhino DNA from the wolf’s, as well as piecing the age-shattered genome back together. But once complete, the genome extended the tale of an embattled species several thousand years closer to its extinction. Surprisingly, just 400 years before the woolly rhino disappeared from the fossil record, the breeding population, though shrunken, showed little sign of inbreeding. That means the end came relatively fast, perhaps in the span of a few hundred years or less.

Upcoming Events

APRIL 26-30 | Singularity Executive Program | Silicon Valley, California Apply here

MAY 11-13 | Middle East Executive Program | Dubai, UAE Apply here

Thanks for reading. We hope you enjoyed this month's updates and found something to inspire you on your exponential journey.

See you next month!

The Singularity Team

Singularity’s internal thought leadership team develops resources, articles, and insights about our core areas of expertise, programs, and global community.
