Amazon invests US$50 billion in major OpenAI AWS deal
OpenAI and Amazon have struck a multi-year strategic partnership that combines new product development on Amazon Web Services (AWS), a major Amazon investment in OpenAI, and a significant expansion of OpenAI's spending commitment on AWS infrastructure.
The companies will co-create a Stateful Runtime Environment using OpenAI models, to be offered through Amazon Bedrock. AWS will also become the exclusive third-party cloud distribution provider for OpenAI Frontier, a platform aimed at organisations building and running teams of AI agents.
Amazon is investing US$50 billion in OpenAI, starting with an initial US$15 billion. A further US$35 billion is expected in the coming months once certain conditions are met.
OpenAI is also expanding its commercial relationship with AWS. The companies outlined an additional US$100 billion of spending over eight years, on top of an existing US$38 billion multi-year agreement. Under the arrangement, OpenAI expects to consume around 2 gigawatts of Trainium capacity through AWS infrastructure.
Stateful runtime
The Stateful Runtime Environment sits at the centre of the partnership's product roadmap. It is designed to retain context across ongoing work, remember prior steps, and operate across software tools and data sources, while also providing access to compute resources. It is intended for long-running projects and workflows.
The companies plan to make the environment available through Amazon Bedrock, AWS's managed platform for accessing foundation models. It will integrate with Amazon Bedrock AgentCore and other AWS infrastructure services.
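For readers unfamiliar with how customers reach foundation models on Bedrock, the sketch below assembles a request body in the general shape of Bedrock's Converse API. The model ID is a hypothetical placeholder (no OpenAI model IDs on Bedrock are named in this article), and a real invocation would send these fields through boto3's `bedrock-runtime` client rather than this offline helper.

```python
import json


def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    # Assemble a request in the general shape Bedrock's Converse API expects:
    # a model identifier, a list of role-tagged messages, and inference settings.
    return {
        "modelId": model_id,  # hypothetical placeholder, not a real Bedrock model ID
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }


# Build (but do not send) a sample request; an actual call would pass these
# fields to the bedrock-runtime client with valid AWS credentials.
request = build_converse_request("openai.example-model-v1", "Summarise this report.")
print(json.dumps(request, indent=2))
```

The point of the managed-platform model is visible in the shape of the request: customers address any hosted model through one common interface, swapping only the model identifier.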
The Stateful Runtime Environment is expected to launch in the next few months. It is positioned as a way for customers to run AI applications and agents in a manner consistent with other applications on AWS.
Frontier distribution
AWS will serve as the exclusive third-party cloud distribution provider for OpenAI Frontier, an enterprise platform for building, deploying, and managing teams of AI agents. The companies said the agents can operate across business systems and share context, with governance and security features built in.
The deal expands AWS's role in how OpenAI packages and sells products to business customers. It also adds to a market where major cloud providers increasingly differentiate on how they host and integrate leading model providers, not just on access to raw compute.
Trainium capacity
OpenAI's commitment to consume around 2 gigawatts of Trainium capacity is unusually specific. The company said the capacity will support demand for the Stateful Runtime Environment, Frontier, and other advanced workloads.
The expanded infrastructure agreement spans Trainium3 and the next-generation Trainium4 chips. Trainium is Amazon's in-house AI chip line for training and inference on AWS, which Amazon has promoted as a way to lower costs and reduce reliance on third-party silicon.
Trainium4 is expected to begin delivery in 2027. The companies said it will offer higher FP4 compute performance, expanded memory bandwidth, and greater high-bandwidth memory (HBM) capacity. These factors matter for workloads that move large model parameters and intermediate data quickly through memory.
Custom models
The partnership also covers development of customised models for Amazon's own use. The companies said they will collaborate on models for Amazon developers and customer-facing applications.
Amazon said its teams will be able to tailor OpenAI models across AI products and agents that serve customers directly. This work is positioned as complementary to the models already available to Amazon developers, including Amazon's Nova model family.
For Amazon, the arrangement signals a dual-track approach: continuing to build its own model portfolio while deepening relationships with external model developers through Bedrock and new co-development agreements.
For OpenAI, the partnership adds a major distribution channel through AWS and a large source of long-term compute capacity, while aligning OpenAI products more closely with AWS agent tooling and managed services.
Sam Altman, Co-Founder and CEO, OpenAI, said: "OpenAI and Amazon share a belief that AI should show up in ways that are practical and genuinely useful for people. Combining OpenAI's intelligence with Amazon's infrastructure and global reach helps us put powerful AI into the hands of businesses and users at real scale."
Andy Jassy, President and CEO, Amazon, said: "We have lots of developers and companies eager to run services powered by OpenAI models on AWS, and our unique collaboration with OpenAI to provide stateful runtime environments will change what's possible for customers building AI apps and agents. We continue to be impressed with what OpenAI is building, and we're excited not only about their choosing to go big on our custom AI silicon (Trainium), but also our opportunity to invest in the company and partnership over the long-term."