After a flurry of initial investments in artificial intelligence (AI) projects, including generative and agentic AI implementations, many organizations are facing mixed results and coming to hasty conclusions about AI’s utility. The harsh reality of early experimentation has blunted expected productivity gains and new revenue streams. A recent MIT report suggests that despite investments of $30 billion to $40 billion in generative AI, 95 percent of organizations are realizing zero returns. It is unsurprising, therefore, that Gartner placed generative AI in the Trough of Disillusionment in its 2025 Hype Cycle. When organizations fail to see immediate ROI from a technology investment, the cause is often not the technology itself but a mix of mismatched expectations, misaligned applications, and poorly executed or untested implementation practices. Failures often arise when organizations expect the technology to be a “magic bullet” that pays off in a very short time. Conclusive judgments of success or failure require identifying feasible use cases, defining appropriate scope, specifying what ROI means, and assessing progress against that definition.
The fast-evolving advances in AI, including machine learning (ML) and generative AI, have been challenging organizations to rethink how they conduct their business and where they can take advantage of AI to increase efficiency, productivity, and value while reducing costs. However, merely integrating AI into organizational practices is not enough to achieve these goals.
The SEI is examining how organizations adopt AI and what methods they can use to measure and improve their adoption for long-term success. Some of the primary questions we are asking organizations to consider in their AI adoption journeys include “What defines success in adopting AI?” “What kind of competencies do I need to develop?” and “What roadmap should I follow to reach these goals?” We explore some ways organizations can start to answer questions like these in greater detail in this post.
Rethinking AI Adoption: Knowing Where to Take Advantage
While there are many practices and assumptions we could point to when explaining the gap between AI’s promise and performance, it’s clear that given where many organizations are in their AI-adoption journey, they need to shift from hype-driven experimentation to a focus on foundational capabilities and practical, measurable outcomes. The aspiration to take advantage of AI must mature into a structured roadmap for implementing effective AI technologies, often by examining and reinventing workflows on a deeper level. Organizations that do not know how to use AI as an innovation tool risk ending up with the same inefficient (and expensive) processes, now infused with AI. For example, preliminary findings on the use of generative AI assistants in software engineering suggest that while these tools can help experienced developers, tool use alone is unlikely to deliver the desired improvements in productivity and quality. Rather than applying AI solutions to existing tasks, meaningful progress will come from rethinking workflows and reengineering processes. Applying AI to tasks and workflows beyond software engineering raises similar questions: What supporting tools can enhance the process? Where does AI add the most value? How might rethinking workflows, artifacts, and processes amplify its impact?
Organizational and Engineering Competencies
Today, nearly all organizations are software- and IT-intensive. Adopting or developing AI-enabled systems and workflows is not purely an AI model selection or tool problem but an engineering challenge that requires the application of robust software development and systems engineering principles and cybersecurity practices. The engineering practices that have matured over decades must be embraced and applied to AI systems development and deployment to make them reliable, trustworthy, and scalable for mission-critical use.
Remember that an AI-enabled system is still a software-intensive system at its core. Successful AI-enabled systems must be iteratively designed, built, tested, and continuously maintained with engineering discipline. There needs to be confidence that the engineering capabilities are sufficient to integrate, test, and monitor AI components as well as manage the needed data. Additionally, existing technologies and infrastructure in the technology stack must be updated in a way that ensures continued operations.
Application of certain traditional software and system engineering practices takes center stage in developing AI-enabled systems. For example,
- Engineering teams need to architect AI systems for inherent uncertainty in their components, data, models, and output, especially when incorporating generative AI.
- The user experience with AI systems is dynamic. Interfaces must clearly show what the system is doing (e.g., turn-taking), how it generates outputs (e.g., data sources), and when it’s not behaving as expected.
- Engineering teams need to account for different rhythms of change, including change in data, models, systems, and the business.
- Verifying, validating, and securing AI systems needs to account for ambiguity as well as increased attack surface due to frequently changing data and to the underlying nature of models.
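The first practice above, architecting for inherent uncertainty, can be made concrete with a small sketch. This is an illustrative pattern only, not prescribed SEI guidance; `call_model` is a hypothetical stand-in for a real generative model API, and the JSON contract is an assumption for the example:

```python
import json
import random

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a generative model call. Occasionally
    returns free text instead of JSON to mimic model nondeterminism."""
    if random.random() < 0.3:
        return "Sorry, I can't help with that."
    return json.dumps({"summary": f"Response to: {prompt}", "confidence": 0.9})

def validated_generate(prompt: str, max_retries: int = 3) -> dict:
    """Treat model output as untrusted: validate its structure, retry on
    failure, and fall back to an explicit error object rather than pass
    unchecked text downstream."""
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output; try again
        if {"summary", "confidence"} <= parsed.keys():
            return parsed
    return {"summary": None, "confidence": 0.0, "error": "validation failed"}
```

The point is architectural: downstream components consume a validated structure with an explicit failure mode, so the system’s behavior remains predictable even when an individual model response is not.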
A focus on organizational characteristics is also key to success. Organizations have to ask themselves how their values, strategy, culture, and structure will be aligned with the changes AI will bring. They also need to put in place the training and development that employees will need to succeed in integrating or using AI appropriately.
Regardless of the phase an organization is in during their adoption journey, risk and governance are always critical considerations when adopting AI. This is especially true in high-risk industries or organizations where managing risk and security issues in a responsible and sustainable way is mandatory.
In addition, critical information could be compromised at any stage of adoption. The SEI recently hosted an AI Acquisition workshop with invited participants from defense and national security organizations to explore both the promise and the confusion surrounding AI in these high-risk domains. The workshop highlighted challenges in these domains, including higher risks and consequences of failure: a mistake in a commercial chatbot might cause confusion, but a mistake in an intelligence summary could lead to mission failure.
A Roadmap to Determine Your Organization’s Path Forward
Creating a roadmap for AI adoption depends on first evaluating an organization’s needs, capabilities, and goals. The roadmap an organization develops will depend on many factors, such as its technology domain, governance structure, software competency, technical approach, and risk profile. Organizations adopting AI generally fall into a set of basic archetypes based on their business focus, core software, AI and cybersecurity competencies, the governance policies they need to follow, and their AI application focus. For example, a product organization that does not have software as a core competency (a domain-centric organization) but would benefit from AI will follow a very different adoption path and have different needs than a software-first technology company. Figure 1 illustrates example characteristics of these two archetypes, which would help guide their respective adoption paths.
Figure 1: An organizational emphasis on software versus one where AI drives the competencies to be developed.
Although the organizations above have very different profiles, in developing a roadmap both need to achieve the following goals:
- Identify alignment between AI initiatives and business goals and ROI.
- Identify and clearly communicate risks and risk tolerance measures.
- Identify relevant data and gaps in providing an appropriate solution.
- Verify that the effort will have the necessary leadership support to be successful.
- Determine what, if any, additional skills or individuals are needed to support the solution.
- Identify technology that will be needed to provide an appropriate solution.
However, some of the key competencies they need to develop will likely vary, from how much infrastructure to invest in to how to shape the workforce. The ROI of AI adoption hinges on these seemingly simple but subtle variations; there is no one-size-fits-all solution. Broad generalizations mislead organizations: while not every use case is a fit for AI, the right scope and a realistic roadmap can unlock immense opportunities to enhance capabilities and realize meaningful benefits through AI adoption.
Increasing Emphasis on AI Maturity
Assessing the maturity of key capabilities needed is one way to create a roadmap for successful AI adoption. An organization’s capability refers to the resources it possesses to perform its work, including expertise, processes, workflows, computational resources, and workforce practices. Its maturity reflects how well these capabilities are supported, planned, managed, standardized, and improved. Assessing an organization’s readiness for AI adoption requires evaluating both its current practices and its ability to adapt them, while also identifying weaknesses and tracking progress as improvements are made.
A maturity model provides a framework that helps assess an organization’s or function’s ability to perform and sustain specific technical practices in order to achieve its goals. Maturity models outline stages of development and organizational competence, with each stage representing a higher level of organizational capability in a specific area. As such, they highlight key practice areas and provide a roadmap for improvement. A maturity model is only as effective as the data and theory underpinning its structure and the evidence of its use in practice.
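As an illustration, a maturity assessment can be reduced to a gap analysis between current and target capability levels, which then drives the improvement roadmap. The capability areas and numeric levels below are assumptions chosen for the example, not drawn from any specific published maturity model:

```python
# Illustrative five-level scale; real models define their own stages.
LEVELS = ["initial", "managed", "defined", "measured", "optimizing"]

def maturity_gaps(current: dict, target: dict) -> dict:
    """Return the per-area gap between current and target maturity levels,
    keeping only areas where improvement is needed. Areas absent from the
    current profile are treated as level 0."""
    return {
        area: target[area] - current.get(area, 0)
        for area in target
        if target[area] > current.get(area, 0)
    }

# Hypothetical assessment of one organization (levels are 1-5 indexes
# into LEVELS above).
current = {"data": 2, "governance": 1, "engineering": 3, "workforce": 2}
target = {"data": 4, "governance": 3, "engineering": 3, "workforce": 3}
gaps = maturity_gaps(current, target)  # largest gaps get roadmap priority
```

Here engineering drops out of the result because it already meets the target, while data and governance emerge as the areas where the roadmap should concentrate investment.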
Organizational leaders are clearly looking for guidance on how to overcome the significant adoption and maturity challenges that arise as they try to take full advantage of AI and achieve the expected ROI. A number of models and frameworks have been proposed in this rapidly evolving field. SEI researchers surveyed current AI maturity assessment practices, challenges, and needs to understand the state of the practice.
We identified 115 information sources published between 2018 and May 2025 that were related to AI maturity models in development. The models were in various stages of completion and were published in various forms, including peer-reviewed journals, blog posts, and white papers.
The SEI’s review aimed to provide a comprehensive overview of existing research and practices on AI maturity models and to identify frameworks developed by commercial organizations or governments, with particular attention to those addressing or referencing generative AI. Searching on keywords including AI maturity framework, AI maturity assessment, AI maturity model, AI readiness assessment, and AI capability model, the team identified 57 sources promising enough for a detailed review. Expert judgment and additional internet searches surfaced 58 more sources from grey literature, including AI maturity models proposed by commercial organizations such as consulting companies and models introduced by government organizations worldwide that were available in English. Items that were obviously marketing pieces were excluded. Of the total 115,
- 58 were determined to explicitly contain a maturity model while the rest were high-level discussions about AI maturity and adoption without an explicit model.
- 40 of these maturity models focused on AI in general, 7 on generative AI, 5 on responsible AI, and the rest were one-offs that focused on very specific topics such as blockchain.
Our findings suggest that while there are a number of efforts to develop AI maturity models, they share common drawbacks: the lack of a clear measurement approach for assessing maturity, limited evidence of effective use in practice, and little indication of how they will address emerging needs and practices as the technology rapidly evolves. The maturity models the SEI studied mostly focused on common capability areas related to ethics, responsible AI, strategy, innovation, talent, skill sets, people, governance, organization, technology, and data. In this climate, organizations also need to be cognizant of a growing number of standards and guidance documents intended to ensure safety, security, and privacy as they adopt AI and lead their organizational AI transformation charters.
The SEI will share the detailed results of the review in a future report.
Tell Us About Your Organization’s AI Efforts
The SEI continues to gather insights from organizations on their AI adoption journeys. We invite you to participate in a survey about the challenges and successes your organization is experiencing as you adopt AI technologies, particularly generative AI. The survey focuses on the practice areas most relevant to maturing AI applications and their use within your organization. By taking it, you’ll help shape a clearer understanding of how organizations like yours can mature their AI adoption, share insights into current practices, and document ongoing challenges, advancing the responsible and effective use of AI with expected ROI. Please take the survey at this link: https://sei.az1.qualtrics.com/jfe/form/SV_b73XP0pFAythvqS