Anthropic, the prominent generative AI startup co-founded by OpenAI veterans, has raised $450 million in a Series C funding round led by Spark Capital.
Anthropic wouldn’t reveal what the round valued its business at. But The Information reported in early March that the company was looking to raise at a valuation of more than $4.1 billion, and it wouldn’t be surprising if the final figure ended up in that ballpark.
Notably, tech giants including Google (Anthropic’s preferred cloud provider), Salesforce (through its Salesforce Ventures wing), and Zoom (through Zoom Ventures) participated in the round, alongside Sound Ventures and other, undisclosed VC backers. Their participation signals a strong belief in the promise of Anthropic’s technology, which can perform a wide variety of conversational and text-processing tasks.
“We are excited to have these leading investors and technology companies support Anthropic’s mission: AI research and products that put safety first,” CEO Dario Amodei said in a statement. “The systems we build are designed to provide reliable AI services that can positively impact businesses and consumers now and in the future.”
In fact, Zoom recently announced a partnership with Anthropic to “build customer-centric AI products focused on reliability, productivity, and security,” following a similar partnership with Google. Anthropic claims to have more than a dozen clients across industries including healthcare, HR, and education.
Perhaps not coincidentally, the Series C also comes after Spark Capital hired Fraser Kelton, the former head of product at OpenAI, as a venture partner. Spark was an early investor in Anthropic. But the VC firm has redoubled its efforts to look for early-stage AI startups, particularly in the generative AI space, which remains red hot.
“All of us at Spark are excited to partner with Dario and the entire Anthropic team on their mission to build reliable and fair AI systems,” said Yasmin Razavi, a general partner at Spark Capital who joined Anthropic’s board of directors in connection with the Series C, according to a press release. “Anthropic has assembled a world-class engineering team dedicated to building secure and capable AI systems. The overwhelmingly positive response to Anthropic’s products and research points to the broader potential of AI to unlock a new paradigm of thriving in our societies.”
With the new $450 million tranche, Anthropic’s total raised stands at a whopping $1.45 billion. That puts it near the top of the list of the best-funded startups in AI, eclipsed only by OpenAI, which has raised more than $11.3 billion to date, according to Crunchbase. Competitor Inflection AI, a startup building an AI-powered personal assistant, has raised $225 million, while another Anthropic rival, Adept, has raised about $415 million.
Amodei, the former vice president of research at OpenAI, launched Anthropic in 2021 as a public benefit corporation, bringing with him a number of OpenAI employees, including OpenAI’s former policy chief Jack Clark. Amodei split from OpenAI after a disagreement over the company’s direction, namely the startup’s increasingly commercial focus.
Anthropic now competes with OpenAI as well as startups such as Cohere and AI21 Labs, all of which are developing and producing their own text-generating – and in some cases image-generating – AI systems. But it has bigger ambitions.
As BlogRanking previously reported, Anthropic plans to create what it describes in an investor pitch deck as a “next-gen AI self-learning algorithm.” Such an algorithm could be used to build virtual assistants that answer emails, conduct research, and generate art, books, and more, a taste of which we’ve already gotten with the likes of GPT-4 and other large language models.
The next-generation algorithm is the successor to Claude, Anthropic’s chatbot, which is still in preview but available through an API. Claude can be instructed to perform a range of tasks, including searching across documents, summarizing, writing, coding, and answering questions on particular topics. In these ways it’s similar to OpenAI’s ChatGPT. But Anthropic claims that Claude, released in March, is “much less likely to produce harmful output,” “easier to talk to,” and “[far] more controllable” than the alternatives.
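For a sense of what instructing Claude over the API looks like, here’s a minimal sketch using Anthropic’s Python SDK. The interface shown is the SDK’s later Messages-style API rather than the preview-era endpoint the article refers to, and the model name is a placeholder, so treat the details as illustrative.

```python
# Minimal sketch of calling Claude via Anthropic's Python SDK.
# Assumes the Messages-style interface; the model name is a placeholder.
from anthropic import Anthropic

client = Anthropic()  # reads the ANTHROPIC_API_KEY environment variable

message = client.messages.create(
    model="claude-v1",  # placeholder; substitute a currently available model
    max_tokens=300,
    messages=[
        {"role": "user", "content": "Summarize this document in three bullet points: ..."},
    ],
)
print(message.content[0].text)
```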
Why is Claude superior, according to Anthropic? In the pitch deck, Anthropic argues that its technique for training AI, called “constitutional AI,” makes systems’ behavior both easier to understand and easier to adjust by infusing them with “values” defined by a “constitution.” Constitutional AI essentially aims to align AI with human intentions, letting systems respond to queries and perform tasks according to a simple set of guiding principles.
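To make the idea concrete, here’s a toy sketch of the self-critique-and-revision loop that constitutional AI is built around. The principles and prompt templates below are illustrative stand-ins, not Anthropic’s actual constitution, and `generate` is a placeholder for any text-generating model call.

```python
# Toy sketch of the critique-and-revise loop behind constitutional AI.
# The principles and prompts are illustrative stand-ins, not Anthropic's
# actual constitution; `generate` is any text-generating model call.
from typing import Callable

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that is most honest about its own uncertainty.",
]

def constitutional_revision(query: str, generate: Callable[[str], str]) -> str:
    response = generate(query)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Principle: {principle}\nResponse: {response}\n"
            "Identify any way the response conflicts with the principle."
        )
        # ...then rewrite the draft so it complies.
        response = generate(
            f"Principle: {principle}\nCritique: {critique}\n"
            f"Original response: {response}\n"
            "Rewrite the response to follow the principle."
        )
    return response
```

In Anthropic’s published research, transcripts from loops like this are used as training data, via supervised fine-tuning and reinforcement learning from AI feedback, rather than being run on every user query.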
In its quest for generative AI superiority, Anthropic recently expanded Claude’s context window, essentially the model’s “memory,” from 9,000 tokens to 100,000 tokens, where “tokens” represent parts of words. With perhaps the largest context window of any public AI model, Claude can converse relatively coherently for hours, even days, rather than minutes, and can process and analyze hundreds of pages of documents.
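Some quick, back-of-the-envelope arithmetic shows what that jump means in practice. The words-per-token and words-per-page figures below are common rules of thumb, not numbers from Anthropic.

```python
# Rough capacity of the old vs. new context window.
# ~0.75 words per token and ~500 words per page are rule-of-thumb
# approximations, not Anthropic's figures.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

for tokens in (9_000, 100_000):
    words = tokens * WORDS_PER_TOKEN
    pages = words / WORDS_PER_PAGE
    print(f"{tokens:,} tokens ~= {words:,.0f} words ~= {pages:.0f} pages")
# 9,000 tokens ~= 6,750 words ~= 14 pages
# 100,000 tokens ~= 75,000 words ~= 150 pages
```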
That progress does not come cheap.
Anthropic estimates that the next-gen model will require on the order of 10^25 FLOPs, or floating point operations, several orders of magnitude more compute than even today’s largest models. Of course, how that translates into wall-clock time depends on the speed and scale of the system doing the computing. But Anthropic implies in the deck that it will rely on clusters with “tens of thousands of GPUs” and that it will need roughly a billion dollars over the next 18 months.
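To put 10^25 FLOPs in perspective, here’s a rough estimate of wall-clock training time. The GPU throughput, utilization, and cluster size are assumptions for illustration (A100-class peak and one reading of the deck’s “tens of thousands of GPUs”), not figures Anthropic has published.

```python
# Back-of-the-envelope wall-clock time for a 1e25-FLOP training run.
# Throughput, utilization, and cluster size are assumptions, not Anthropic's figures.
TOTAL_FLOPS = 1e25
PEAK_FLOPS_PER_GPU = 312e12  # A100-class peak for 16-bit matrix math
UTILIZATION = 0.4            # optimistic sustained fraction of peak
N_GPUS = 20_000              # one reading of "tens of thousands of GPUs"

cluster_flops_per_sec = PEAK_FLOPS_PER_GPU * UTILIZATION * N_GPUS
seconds = TOTAL_FLOPS / cluster_flops_per_sec
print(f"~{seconds / 86_400:.0f} days")  # ~46 days under these assumptions
```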
In fact, Anthropic is aiming to raise as much as $5 billion over the next two years.
“With our Series C funding, we hope to grow our product offerings, support companies that will responsibly deploy Claude, and further AI safety research,” the company wrote in a press release this morning. “Our team is focused on AI alignment techniques that allow AI systems to better handle adversarial conversations, follow precise instructions, and generally be more transparent about their behavior and limitations.”