Goodfire AI Inc., a startup that helps developers understand how their large language models work, has raised $50 million in funding to support research initiatives.
The company announced the Series A round on Thursday. Menlo Ventures led the investment with participation from Anthropic PBC, Lightspeed Venture Partners and other backers. The cash infusion comes six months after Goodfire’s initial $7 million raise and less than a year after its launch.
An LLM comprises a large number of simple computational units called artificial neurons. Each neuron performs a tiny portion of the calculations involved in processing prompts. Historically, identifying which neurons are involved in answering a given prompt, and how they interact with one another, has been a challenge for developers.
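For intuition, an artificial neuron is just a weighted sum of its inputs passed through a nonlinearity. The sketch below (plain NumPy, purely illustrative and not tied to any particular model) shows the computation a single neuron performs; the input values and weights are arbitrary examples.

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """A single artificial neuron: weighted sum of inputs, then a ReLU nonlinearity."""
    pre_activation = np.dot(weights, inputs) + bias
    return max(0.0, pre_activation)  # ReLU: negative sums become zero

# Example: a neuron with three inputs (values chosen arbitrarily)
x = np.array([0.2, -1.0, 0.5])
w = np.array([0.7, 0.1, -0.3])
print(neuron(x, w, bias=0.05))  # prints a single scalar activation
```

A full model stacks millions or billions of these units into layers, which is why tracing any one behavior back to specific neurons is hard without tooling.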
San Francisco-based Goodfire offers a platform called Ember that promises to ease the task. Using the tool, developers can enter a prompt into an LLM and map out which of the model’s components are involved in processing it. That kind of visibility lends itself to a wide range of tasks.
If an LLM generates an inaccurate response to a given prompt, developers can use Ember to identify which model components generated the response and disable them. Without visibility into the LLM’s inner workings, fixing such errors is considerably more difficult. Developers can likewise use Ember to disable LLM components that are susceptible to prompt injection attacks, or malicious prompts designed to generate harmful output.
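Goodfire has not published how Ember implements this internally. As a generic illustration of the underlying idea, the PyTorch snippet below zeroes out selected neurons in one layer of a toy model using a forward hook; the model, layer and neuron indices are all hypothetical stand-ins, not Ember's API.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM block; real models expose analogous submodules.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

neurons_to_disable = [3, 7, 12]  # hypothetical indices flagged as problematic

def ablate(module, inputs, output):
    # Zero the activations of the selected neurons so they no longer
    # contribute to downstream computation.
    output[..., neurons_to_disable] = 0.0
    return output

handle = model[1].register_forward_hook(ablate)  # hook the ReLU output
logits = model(torch.randn(1, 16))               # forward pass runs with the neurons disabled
handle.remove()
```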
“Nobody understands the mechanisms by which AI models fail, so no one knows how to fix them,” said Goodfire co-founder and Chief Executive Officer Eric Ho. “Our vision is to build tools to make neural networks easy to understand, design, and fix from the inside out.”
Model customization is another task that Goodfire promises to ease. Using Ember, a company developing a customer support chatbot could take an open-source LLM and identify the parts of the LLM that aren’t needed for the project. The company could then remove those components to create a new, more efficient model.
To use Ember, developers enter a prompt describing how they wish to modify an LLM. An engineer could, for example, request that the model incorporate puns into all of its responses. Ember then finds and automatically updates the model components that must be changed to produce that behavior.
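Goodfire has not detailed the mechanism behind this workflow. One common technique in the interpretability field is activation steering, which adds a learned feature direction to a layer's activations to push the model toward a behavior. The sketch below is a generic illustration of that technique with a hypothetical steering vector and toy model; it does not represent Ember's implementation.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

steering_vector = torch.randn(32)  # stand-in for a learned behavior direction
scale = 2.0                        # how strongly to push the behavior

def steer(module, inputs, output):
    # Nudge the layer's activations along the chosen direction.
    return output + scale * steering_vector

handle = model[1].register_forward_hook(steer)
logits = model(torch.randn(1, 16))  # forward pass with steering applied
handle.remove()
```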
The platform also provides a number of other features. A capability called conditionals makes it easier to implement RAG, or retrieval-augmented generation. RAG is a machine learning method that enables LLMs to incorporate data from external systems into their prompt responses. Another Ember feature helps developers map out their LLMs’ capabilities.
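RAG itself is a standard pattern independent of Ember: retrieve text relevant to the user's question from an external source and include it in the prompt. The self-contained toy below illustrates the idea; the corpus, keyword-overlap retriever and stubbed model call are illustrative assumptions, not Ember features.

```python
# Toy retrieval-augmented generation (RAG) pipeline.
CORPUS = [
    "Goodfire builds interpretability tools for large language models.",
    "Retrieval-augmented generation injects external documents into prompts.",
    "Sparse autoencoders help decompose model activations into features.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    # Naive keyword-overlap scoring; production systems use vector similarity.
    q_words = set(question.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[model answer based on a prompt of {len(prompt)} characters]"

def answer_with_rag(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return fake_llm(prompt)

print(answer_with_rag("What is retrieval-augmented generation?"))
```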
Alongside Ember, Goodfire has released a number of open-source sparse autoencoders, or SAEs. An SAE is a specialized AI model that is used to understand how another AI model works. The technology automates much of the manual work involved in mapping out a neural network’s inner workings.
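Goodfire's released SAEs are more elaborate, but the core idea is a small network trained to reconstruct a model's internal activations through a wide, sparse bottleneck, so that individual bottleneck features correspond to interpretable concepts. The PyTorch sketch below shows the basic architecture and training objective; all dimensions and the sparsity coefficient are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: encode activations into a wide, sparse feature space, then reconstruct them."""

    def __init__(self, d_model: int = 512, d_features: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

sae = SparseAutoencoder()
acts = torch.randn(8, 512)  # stand-in for activations captured from an LLM layer
recon, feats = sae(acts)

# Objective: reconstruct the activations well while keeping features sparse (L1 penalty).
loss = torch.nn.functional.mse_loss(recon, acts) + 1e-3 * feats.abs().mean()
loss.backward()
```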
Last year, Goodfire developed an SAE for Meta Platforms Inc.’s Llama 3.3 70B. It followed up the release by open-sourcing two SAEs for DeepSeek’s R1 reasoning model earlier this month. Goodfire says the latter project shed light on the steps R1 takes to reduce errors in its output.
The company will use its newly announced funding round to enhance its Ember platform. Additionally, it will develop new methods of understanding how reasoning and image processing models work. Goodfire plans to carry out the research through partnerships with AI model providers.