The Evolution of ‘YC’s Manhattan Project’: From Nonprofit to Profit-Driven AI Leader

OpenAI has undergone a remarkable transformation in its corporate structure, evolving from an altruistic research nonprofit into a profit-driven enterprise. This shift has profound implications not only for OpenAI’s own operations and mission but also for how the broader AI industry organizes and governs cutting-edge AI development. OpenAI’s trajectory is therefore central to understanding how AI companies are governed, funded, and opened or closed as they compete in one of the most explosive industries of our time.

OpenAI’s Transformation: From Nonprofit to AI Powerhouse

OpenAI was founded in December 2015 as a nonprofit research lab with a singular mission: ensure that artificial general intelligence (AGI) benefits all of humanity. The organization was built on the belief that AI should not be controlled by private entities with profit motives but rather developed transparently and safely to avoid catastrophic risks. Co-founded by Sam Altman, Elon Musk, Greg Brockman, and others, OpenAI launched as a 501(c)(3) nonprofit, believing that a structure free from shareholder pressures would allow it to focus solely on AI safety and broad societal benefit. Its name and practices reflected this commitment, with early projects like OpenAI Gym and various AI research papers being made publicly available.

At its inception, OpenAI announced a $1 billion funding commitment, but the actual funds raised in its first years were far lower, around $130 million. Musk, despite his early involvement, ultimately contributed under $45 million of his pledged amount. From the start, OpenAI’s charter emphasized safety, vowing to publish research and open-source code unless doing so posed security risks. However, these ideals would soon collide with the staggering financial demands of the compute, bleeding-edge GPUs, and massive data centers required to build cutting-edge AI. By 2017, OpenAI’s leadership realized that reaching AGI required far greater resources than anticipated. The cost of computational power and AI research was climbing into the billions per year, making reliance on donations unsustainable. The organization faced a dilemma: how to secure massive funding without compromising its mission. This prompted a re-evaluation of OpenAI’s corporate structure. Sam Altman, then president of Y Combinator, led discussions on alternative funding models, aiming to balance investment capital with safety-focused oversight. However, these conversations soon revealed deep internal disagreements, particularly with Elon Musk.

Musk, who had grown concerned that Google’s DeepMind was advancing faster than OpenAI, proposed a dramatic restructuring in early 2018. His solution? Take control of OpenAI, either by merging it with Tesla or by making himself CEO with majority ownership and board control. His rationale was that Tesla’s resources would give OpenAI a fighting chance against DeepMind, but his demand for absolute control clashed with OpenAI’s commitment to multi-stakeholder governance. OpenAI’s leadership rejected Musk’s proposal, fearing that ceding power to a single individual would undermine its mission-driven oversight; instead, they reportedly offered him three board seats representing about 25% ownership. Frustrated, Musk resigned from OpenAI’s board in early 2018, stating that the company had ‘zero chance’ of success without him. His departure created an immediate funding gap; at one point, he even withheld expected funding, forcing other donors, such as LinkedIn co-founder Reid Hoffman, to step in to cover payroll. In hindsight, Musk’s exit was a turning point, freeing OpenAI’s leadership to chart its own course but leaving it scrambling for capital.

In March 2019, OpenAI announced a landmark restructuring: it would become a “capped-profit” company, a hybrid model between nonprofit and for-profit. OpenAI itself declared the structure “unprecedented”; according to primary sources, even within Y Combinator, the nexus of risk-taking startup culture where OpenAI originated, no similar company had existed before or since. Under this structure, OpenAI Inc. (the original nonprofit) would retain control, but a new for-profit subsidiary, OpenAI LP, would be created to commercialize its research.

To address concerns about runaway profiteering, OpenAI introduced a 100x cap on investor returns, meaning that early backers could earn at most 100 times their investment before excess profits revert to the nonprofit. The goal was to align investor incentives with OpenAI’s mission, ensuring that even as money flowed in, profit maximization wouldn’t take precedence over AI safety. Crucially, the nonprofit still owned a controlling stake in OpenAI LP, ensuring that governance remained mission-focused rather than shareholder-driven. Even Altman, now OpenAI’s full-time CEO, held no personal equity in the company at the time, reinforcing the idea that OpenAI’s leadership was accountable to humanity, not shareholders.
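To make the arithmetic concrete, here is a minimal sketch of how such a capped-return split works, assuming a flat 100x multiple on a single investment. The actual OpenAI LP terms were negotiated per round, with later backers reportedly subject to lower multiples, so this is an illustration rather than the real contract.

```python
def capped_payout(invested: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a hypothetical gross return between an investor and the nonprofit.

    Illustrative simplification: the investor keeps everything up to
    cap_multiple * invested; any overage reverts to the nonprofit.
    """
    cap = invested * cap_multiple                    # most the investor can receive
    investor_share = min(gross_return, cap)          # payout is capped
    nonprofit_share = max(gross_return - cap, 0.0)   # excess reverts to the nonprofit
    return investor_share, nonprofit_share

# Example: a hypothetical $10M early stake that eventually returns $1.5B gross.
investor, nonprofit = capped_payout(10e6, 1.5e9)
print(f"Investor receives ${investor / 1e9:.2f}B (the 100x cap)")
print(f"Nonprofit receives the remaining ${nonprofit / 1e9:.2f}B")
```

In this hypothetical, the investor’s take stops at $1 billion (100x of $10 million), and the remaining $500 million flows back to the nonprofit, which is the alignment mechanism the cap was designed to create.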

The restructuring seemed to pay off immediately. Just months later, Microsoft invested $1 billion into OpenAI LP, providing both funding and computing resources via its Azure cloud platform. The deal deepened OpenAI’s commercialization strategy, with Microsoft gaining exclusive rights to OpenAI’s AI models for enterprise use. Over the next few years, OpenAI’s hybrid structure appeared to work as intended. It secured capital without completely relinquishing nonprofit oversight, allowing it to develop GPT-3, Codex, and DALL-E, all while maintaining some alignment with its original mission. However, as OpenAI’s technology advanced and commercialization accelerated, tensions between research priorities and profit incentives began to emerge.

The launch of ChatGPT in November 2022 became the largest catalyst yet. OpenAI had built an extraordinarily successful consumer product, propelling the company into mainstream recognition. By 2023, its annual revenue was scaling toward $1 billion, and its valuation had skyrocketed past $80 billion.

As investment interest surged, venture capitalists began pressuring OpenAI to remove its profit cap entirely and free itself from nonprofit restrictions. By late 2024, OpenAI was considering restructuring once again, this time into a fully profit-driven entity. The biggest hurdle? Its nonprofit board still controlled the organization. This governance model, once seen as a safeguard, was now viewed by investors and leadership alike as an obstacle to growth.

These tensions had already erupted in November 2023, when OpenAI’s board abruptly fired Sam Altman as CEO, reportedly over concerns that commercialization was overtaking the organization’s mission. The move triggered an immediate backlash from OpenAI employees and investors, leading to Altman’s reinstatement just days later. The board was then restructured to include more commercially aligned members, paving the way for OpenAI’s final evolution into a fully for-profit entity. Soon after, reports surfaced that OpenAI was planning to remove its profit cap and transition into a for-profit public benefit corporation (PBC), a move expected to push its valuation past $150 billion.

OpenAI’s Evolving Structure: Governance, Funding, and the Future of AI Development

As of early 2025, OpenAI continues to operate under the hybrid structure established in 2019, but significant changes are underway. Currently, OpenAI’s governance is overseen by a nonprofit board of directors, while the for-profit subsidiary, OpenAI Global, LLC, handles research, product development, and commercial operations. The nonprofit entity, OpenAI Inc., retains a controlling stake through a special general-partner LLC, ensuring mission alignment rather than pure profit-seeking.

In practice, however, OpenAI functions much like a venture-backed startup, with the major distinction that board members, except for the CEO, hold no equity. Following the late-2023 leadership crisis, OpenAI’s board was revamped to balance AI safety oversight with commercial growth objectives. The current board includes Sam Altman, Bret Taylor (chair), Adam D’Angelo (Quora CEO), and retired Gen. Paul Nakasone, the former NSA director, among others with finance and policy backgrounds.

However, OpenAI’s hybrid structure, designed to ensure that mission-driven oversight remains intact, is now under serious reconsideration as the company moves toward a conventional for-profit model.

OpenAI’s transformation is being driven by massive investment inflows, particularly from Microsoft and top venture firms. Microsoft’s $10 billion investment in 2023 was reportedly structured so that Microsoft receives 75% of OpenAI’s profits until it recoups its investment, after which it would hold a significant equity stake, eventually reported to be around 49%. This arrangement indicates that OpenAI’s profit cap was already being stretched through creative financial structuring.
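To see how such a recoupment waterfall behaves, the toy model below takes the reported figures at face value: a hypothetical $10 billion investment, a 75% profit share until recoupment, and a 49% share thereafter, evaluated at yearly granularity. The real contract terms are not public and are certainly more intricate, so treat this purely as an illustration.

```python
def profit_waterfall(annual_profits: list[float],
                     investment: float = 10e9,
                     recoup_share: float = 0.75,
                     post_share: float = 0.49) -> list[float]:
    """Toy model of the reported recoupment terms (illustrative only).

    Assumption: the backer takes recoup_share of each year's profit until
    its cumulative take reaches `investment`, then post_share thereafter.
    The switch happens at a year boundary, so the crossover year is coarse.
    """
    recovered = 0.0
    payouts = []
    for profit in annual_profits:
        share = recoup_share if recovered < investment else post_share
        cut = profit * share
        if share == recoup_share:
            recovered += cut        # count toward recouping the investment
        payouts.append(cut)
    return payouts

# Hypothetical profit trajectory, in dollars per year.
profits = [1e9, 3e9, 6e9, 10e9, 15e9]
for year, cut in enumerate(profit_waterfall(profits), start=1):
    print(f"Year {year}: backer takes ${cut / 1e9:.2f}B")
```

Even this crude model shows why the arrangement stretches the spirit of a profit cap: the backer’s take scales with profits indefinitely rather than stopping at a fixed multiple.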

In addition, a late-2023 share sale at an $86 billion valuation allowed investors like Thrive Capital and Founders Fund to secure stakes in OpenAI, further shifting the balance of power toward private investors. These venture backers, eager for substantial returns, are pushing OpenAI to remove its remaining profit limitations, setting the stage for a transition into a fully profit-maximizing corporation.

As early as mid-2024, OpenAI had signaled to investors that it was planning to convert into a for-profit benefit corporation, effectively abandoning nonprofit control. Reports suggest that OpenAI’s astronomical $150 billion valuation hinges on eliminating the profit cap, as investors buying in at that valuation expect OpenAI to operate like a conventional high-growth tech company rather than a quasi-nonprofit. Should OpenAI fail to remove the cap, it may have to renegotiate its valuation downward, jeopardizing major financing deals. This creates significant pressure on OpenAI’s board to approve structural changes; leadership understood that failing to lift the cap could derail a critical $6.5 billion funding round.

This internal tension has divided the board, with some members advocating for AI safety measures while others, led by Altman and backed by investors, argue that full commercialization is necessary to fund OpenAI’s pursuit of AGI. Given the financial stakes, OpenAI is expected to finalize this transformation within the next two years. OpenAI is reportedly already in the process of filing to become a for-profit benefit corporation, a model used by Anthropic and xAI. That structure would make the company fully accountable to shareholders, though with a formal commitment to social benefit.

Under this restructuring, OpenAI’s nonprofit entity would become a passive minority shareholder or grant-making organization with no direct control over the company’s governance. Meanwhile, Sam Altman is expected to receive a significant equity stake, aligning his personal financial interests with OpenAI’s commercial success, a major shift from OpenAI’s previous ethos, where leadership deliberately avoided personal ownership to maintain mission focus. Additionally, OpenAI’s staff, who were previously given profit-unit stakes under the old LP system, stand to gain substantial financial benefits if the profit cap is lifted and the company’s valuation soars.

Without nonprofit oversight, OpenAI’s decision-making will likely become faster and more aggressive in commercialization. The board will transition into a typical corporate board, dominated by investor representatives and industry experts, rather than individuals with an explicit AI safety mandate.

To address concerns about AI safety governance, OpenAI may establish alternative oversight mechanisms, such as an independent ethics board or advisory council. For instance, as part of its Microsoft partnership, OpenAI already co-established a joint Safety Board to review powerful model deployments. However, the effectiveness of these safeguards in a for-profit setting remains uncertain.

One of the biggest challenges facing OpenAI today is the ongoing debate between open-source and proprietary AI development. Initially, OpenAI championed open-source AI, but as it transitioned into a commercial powerhouse, it tightly guarded its most advanced models, such as GPT-4.

This closed approach has faced significant criticism and competitive pressure, especially after DeepSeek’s “R1” AI model disrupted the market in January 2025. DeepSeek’s open-source model matched OpenAI’s top-tier performance, demonstrating that open collaboration can rival or even surpass proprietary AI.

Meta’s chief AI scientist, Yann LeCun, remarked that open-source models are now outpacing closed alternatives, reinforcing the idea that transparency fuels faster innovation. Given that DeepSeek’s breakthroughs built on earlier OpenAI research, the episode highlighted how OpenAI’s pivot to closed-source AI could backfire, enabling others to outmaneuver it using publicly available knowledge.

DeepSeek’s emergence has sparked internal reflection at OpenAI. In a candid moment during a Reddit AMA, Sam Altman admitted that OpenAI “may have been on the wrong side of history” regarding open-source models and that a strategic shift might be necessary. Chief Product Officer Kevin Weil later confirmed that OpenAI is exploring open-sourcing older models, such as GPT-3, as a compromise. This strategy mirrors Meta’s approach, where older models (e.g., LLaMA 1) are open-sourced once they are no longer cutting-edge. However, when it comes to flagship models like GPT-5 and beyond, OpenAI is likely to remain highly protective, citing safety concerns and competitive risks.

Microsoft has been OpenAI’s key financial backer and technology enabler since 2019, and its influence on OpenAI’s trajectory is undeniable. Though Microsoft does not formally own OpenAI outright, it holds a major economic stake and exclusive commercialization rights. The two companies have a deeply intertwined partnership, with OpenAI relying on Microsoft’s supercomputing infrastructure to train models and Microsoft integrating OpenAI’s technology into its products (e.g., Bing Chat, GitHub Copilot, and Microsoft 365 Copilot). During the November 2023 leadership crisis, Microsoft CEO Satya Nadella played a pivotal role by offering Altman and his team jobs at Microsoft, effectively pressuring OpenAI to reinstate him. This episode underscored Microsoft’s de facto influence over OpenAI’s future. As OpenAI transitions to a full for-profit model, Microsoft’s investment may convert into direct equity, making it the largest shareholder. While Microsoft has respected OpenAI’s autonomy thus far, the restructuring could bring greater formal control, including board seats or direct governance influence.

Meanwhile, OpenAI’s pivot to full commercialization will likely accelerate its product roadmap, potentially including AI-powered enterprise tools, consumer applications, and even hardware. The challenge will be balancing rapid innovation with responsible AI deployment.

OpenAI’s Influence on the AI Industry: Governance, Funding, and the Open-Source Debate

OpenAI’s rise and transformation into a dominant, venture-backed AI company has fundamentally reshaped the AI industry’s governance models, funding strategies, and approach to openness. Though OpenAI was initially conceived as a nonprofit research lab, its eventual shift toward commercialization has forced AI startups, big tech firms, and policymakers to reconsider how AI should be developed, controlled, and monetized. It has sparked a proliferation of hybrid and for-profit models, the emergence of new governance frameworks for AI safety, and an intensifying debate between open-source and proprietary AI development.

OpenAI’s reported decision to abandon its nonprofit structure demonstrated to the industry the enormous capital required to compete in AI, making traditional academic or nonprofit research models infeasible for cutting-edge AI development. This realization has led new AI ventures to launch as for-profit or hybrid entities from the outset, securing billions in funding to stay competitive. A notable example is Anthropic, founded in 2021 by ex-OpenAI executives who were concerned about OpenAI’s growing focus on commercialization. Instead of replicating OpenAI’s early nonprofit model, Anthropic was structured as a PBC, a for-profit entity with a legally defined social mission. This model allowed Anthropic to raise significant funding (over $1 billion from Google, Amazon, and others) while embedding AI safety commitments into its governance. To further ensure that profit motives wouldn’t override safety considerations, Anthropic introduced the Long-Term Benefit Trust (LTBT), an independent oversight body composed of AI safety and policy experts. This trust holds non-financial stock in Anthropic, granting it the power to intervene if the company attempts to deploy AI systems deemed too risky. This was a direct response to concerns that a purely commercial AI lab might one day cut safety corners to maximize returns.

While Anthropic sought to balance funding with safety protections, xAI, founded by Elon Musk in 2023, took a more direct stance against OpenAI’s evolution. Musk, a co-founder of OpenAI, had publicly criticized its shift toward proprietary AI, and with xAI, he sought to return to the organization’s original ideals. However, rather than launching as a nonprofit, Musk also registered xAI as a benefit corporation, acknowledging the need for strong financial backing while maintaining a commitment to “truth-seeking” AGI development. Musk positioned xAI as a counterpoint to OpenAI’s closed model, vowing to prioritize open-source releases. In 2024, he made good on that promise by open-sourcing xAI’s Grok models, challenging OpenAI’s assertion that keeping AI proprietary is necessary for safety. The move was widely interpreted as an attempt to demonstrate that OpenAI’s closed-source pivot was a mistake.

While Musk’s open-source strategy reflects a belief that transparency prevents AI centralization, others have chosen a completely different route: avoiding commercialization altogether. Safe Superintelligence Inc. (SSI), founded in 2024 by OpenAI’s former chief scientist Ilya Sutskever, embodies this approach.

SSI has committed to an all-or-nothing approach, focusing solely on superintelligence research without releasing intermediate commercial models, thereby avoiding the risks and distractions of interim monetization. To maintain focus, SSI has no revenue model and is entirely dependent on long-term investors, who are betting that a successful AGI breakthrough will generate immense financial and strategic value. While this model insulates SSI from short-term profit pressures, its viability remains untested, hinging entirely on the patience of its investors; in effect, it is a “do-over” of OpenAI’s original vision.

The most contentious issue in AI today is whether cutting-edge models should be open-source or proprietary. OpenAI’s transition from early openness to strict commercialization has fueled this debate, which intensified when DeepSeek, an open-source AI lab, released its “R1” model. The model’s performance rivaled OpenAI’s and Google’s most advanced systems but was trained at a fraction of the cost and made entirely open-source. This event shook the AI industry, demonstrating that high-performance models could be developed without billion-dollar budgets and then freely shared. DeepSeek’s success triggered a broader conversation about whether openness accelerates AI innovation or poses security risks. Advocates of open-sourcing AI, including Meta’s chief AI scientist Yann LeCun, argue that open-source models enhance safety by allowing researchers to inspect, test, and improve them collaboratively. Meta itself has leaned into this philosophy, open-sourcing its LLaMA models in partnership with Microsoft, explicitly distancing itself from OpenAI’s closed model. Meta’s approach is built on the idea that AI should be accessible to all, with safeguards implemented through licensing agreements rather than secrecy.

Meanwhile, OpenAI faces growing internal pressure to revisit its approach. Sam Altman has publicly acknowledged that OpenAI may need to recalibrate its stance on open-source AI. Chief Product Officer Kevin Weil suggested that OpenAI might release older models, such as GPT-3, to appease the open-source community while keeping its latest advancements proprietary. The challenge OpenAI and its competitors face is finding a balance between openness, safety, and commercial viability. While open-source AI fosters innovation and accessibility, it also raises concerns about misuse, security risks, and loss of competitive advantage.

OpenAI’s ability to secure billions in funding from Microsoft set a precedent for how AI startups finance their ambitions. Previously, AI research was largely funded through academia, government grants, or philanthropy. Now, AI companies are expected to raise venture capital or secure strategic partnerships with big tech firms.

Anthropic’s $4 billion deal with Amazon, Google’s $300 million stake, and Microsoft’s exclusive investment in OpenAI reflect this shift. These deals demonstrate that big tech is actively acquiring stakes in AI startups, ensuring their cloud platforms and ecosystems remain central to AI development.

This consolidation trend raises questions about whether AI development is becoming too centralized. With OpenAI deeply integrated into Microsoft, Anthropic linked to Google and Amazon, and xAI rumored to align with Tesla, the AI landscape is increasingly dominated by a few major players.

While smaller AI labs, such as Mistral AI, have sought to differentiate themselves through open-source development, the industry is trending toward an ecosystem where a handful of well-funded firms dictate AI progress.

At the same time, OpenAI’s profit-driven evolution has normalized the idea that AI labs can pursue both financial success and societal benefit. Many startups now embed mission-driven commitments into their corporate structures, but the challenge remains to ensure those commitments hold as financial stakes grow.

In conclusion, OpenAI’s journey from a nonprofit lab to a powerhouse straddling profit and purpose has profoundly shaped the AI industry’s evolution. It demonstrated the feasibility of pumping venture-scale funding into safety-conscious AI research, but it also highlighted the tensions between ethics and commerce. Competing organizations have adopted hybrid models like PBCs and devised novel governance structures to try to capture the best of both worlds: the agility and funding of the private sector with the caution and altruism of the public sector. These organizations are experimenting with their structures in real time, and they are watching each other closely: if OpenAI successfully navigates its profit transition and still delivers aligned AI, others may follow suit even more readily. If it stumbles (technically or in public trust), there may be a pullback toward more conservative models (e.g., more oversight, more openness, or even government intervention).

One thing is clear: OpenAI’s choices have ensured that no serious AI lab today operates in a vacuum of oversight. Whether through a nonprofit board, a benefit charter, a safety trust, or community scrutiny via open source, everyone acknowledges the need to build in accountability beyond pure profit. In that sense, OpenAI’s legacy of emphasizing AI safety lives on, albeit realized through a variety of corporate forms. OpenAI’s change of course also continues to fuel the open-source vs. closed-source debate, an ongoing experiment in the dissemination and ownership of AI intellectual property. As we move into the next phase of AI development, finding the right balance between competitive drive and collaborative stewardship will be an enduring challenge for OpenAI and its peers, a challenge directly shaped by the corporate structures they choose.