by Ankit Bisht and Roger Roberts, with Brittany Presten and Katherine Ottenbreit
This post is part of a research collaboration between McKinsey, the Mozilla Foundation, and the Patrick J. McGovern Foundation.
Over the past two years, we have seen rapid growth in AI awareness, usage, and investment, particularly in gen AI and large language models (LLMs). As enterprises experiment with and evaluate solutions to incorporate AI into their businesses, they are exploring an array of tools across multiple layers of the AI technology stack and data architecture. In this quest to learn and iterate quickly, it is no surprise that many developers are turning to open source AI technologies.
Interest in open source AI is growing as the performance of open foundation models closes the gap with proprietary AI platforms. Models such as Meta’s Llama family; Google’s Gemma family; and, most recently, DeepSeek-R1 and Alibaba’s Qwen 2.5-Max have drawn attention for their broad distribution and competitive performance on industry benchmarks. These platforms offer a spectrum of open source and “partially open” capabilities to help enterprises build their solutions (see sidebar, “What is open source AI?”). The question at hand is how organizations are adopting a mix of open and proprietary technologies as they expand their AI exploration and move from pilots toward capturing value at scale from their investments.
A recent survey of more than 700 technology leaders and senior developers across 41 countries by McKinsey, the Mozilla Foundation, and the Patrick J. McGovern Foundation provides the largest and most detailed analysis of how enterprises are thinking about and using open source AI. While the AI landscape is constantly changing, the survey provides a snapshot of how technology leaders are thinking about open source within their AI strategy. This article is a preview of our findings; a forthcoming full report will dive deeper into the usage trends and preferences of enterprise users and offer detailed suggestions, based on interviews with AI experts, for how leaders can better evaluate integrating open source AI in their technology stacks.
Our research shows that enterprises are using open source AI more than one might expect. Across several areas of the AI technology stack, more than 50 percent of respondents’ organizations report using open source AI technologies, often alongside proprietary tools from players such as OpenAI, Google, and Anthropic. Organizations that place a high priority on AI are the most likely to use open source technologies: those that view AI as important to their competitive advantage are more than 40 percent more likely to use open source AI models and tools than other organizations. The technology industry is leading the way, with 72 percent of respondents’ organizations using an open source AI model, compared with 63 percent of respondents’ organizations overall.
Organizations are attracted to open technologies in AI because of potential cost savings, an often deeper understanding of the underlying model, and interest from their developer communities. In our survey, 60 percent of decision makers reported lower implementation costs with open source AI compared with similar proprietary tools. And 81 percent of developers and technologists surveyed reported that experience with open source tools is highly valued in their field. AI leaders interviewed as part of our research also indicated that open source models are particularly attractive in situations that require full visibility into a model when modifying it for a specialized use case.
Leaders are also clear-eyed and pragmatic about the potential trade-offs of using open source tools. When asked to cite the leading barriers to adopting open source AI, respondents most often selected “security and compliance” (56 percent) and “uncertainty about long-term support and updates” (45 percent). When the leaders of responding organizations expressed a strategic preference for proprietary AI tools, “security, risk, and control over system” was selected as a top reason 72 percent of the time (exhibit). Decision makers also more frequently reported faster time to value and greater ease of use with proprietary AI technologies than with open source ones.
Nuances will, of course, vary across organizations, and leadership perceptions differ on the risks involved and the kinds of use cases best suited to open source. The AI leaders we interviewed shared that both proprietary and open source AI tools present distinct sets of risks, from software maintenance to protecting organizations’ intellectual property.
Overall, 76 percent of respondents expected their organizations to increase their use of open source AI technologies over the next several years. This may be in part because open source software has been a vibrant part of many categories of enterprise software, as well as a foundational resource for developer communities, for decades. As AI continues to improve, business and technology leaders should pay close attention to the opportunities and innovations that emerge. Much as in the cloud and broader software markets, a multimodel approach will likely prevail at many companies, with open source and proprietary technologies coexisting across multiple areas of the AI technology stack.
For more on these and other trends, stay tuned for our upcoming full report on enterprise use of open source AI, which will be available in March.
Ankit Bisht is a partner in McKinsey’s Dubai office, and Roger Roberts is a partner in the Bay Area office, where Brittany Presten is an associate partner and Katherine Ottenbreit is a consultant.
We wish to thank our research partners at the Mozilla Foundation and the Patrick J. McGovern Foundation; our colleagues in QuantumBlack Labs, the software development and R&D arm of QuantumBlack, AI by McKinsey, who bring AI innovations to clients; Cayla Volandes in McKinsey’s New York office; and our external academic collaborators for their insights and perspectives on the survey draft and analysis, including Knut Blind at Fraunhofer ISI, Luca Vendraminelli at Stanford University, and Sayash Kapoor at Princeton University.