
Report

AI’s Trillion-Dollar Opportunity
Executive Summary
  • The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
  • Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
  • Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.

This article is part of Bain's 2024 Technology Report.

The pace of technological change has never been faster, and senior executives are looking to understand how these disruptions will reshape the sector. Generative AI is the prime mover of the current wave of change, but it is complicated by post-globalization shifts and the need to adapt business processes to deliver value.

Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.” Bain estimates that the total addressable market for AI-related hardware and software will grow between 40% and 55% annually for at least the next three years, reaching between $780 billion and $990 billion by 2027 (see Figure 1). Fluctuations in supply and demand will create volatility along the way, but the long-term trajectory appears durable.

Figure 1

The AI market could reach $780 billion to $990 billion by 2027

Three centers of innovation. So far, the largest cloud service providers (CSPs), or hyperscalers, have led the market in R&D spending, talent deployed, and innovation. They’ll continue to lead but will look to the next tier of CSPs, software-as-a-service providers, sovereigns, enterprises, and independent software vendors for more of the innovation that fuels the next wave of growth.

  • High end: bigger models, better intelligence, more compute. The big players will push ahead, developing larger and more powerful models and continuous gains in performance and intelligence. Their larger models will require more computational power, infrastructure, and energy, pushing the scale of data centers from today’s high end (around 100 megawatts) to much larger data centers measured in gigawatts. This will strain the power grid and create readiness and resilience challenges in the supply chain for a wide spectrum of inputs, including graphics processing units (GPUs), substrates, silicon photonics, and power generation equipment.
  • Enterprises and sovereigns: smaller models, RAG implementations, devices, tailored silicon. Generative AI inference is set to become the killer app for edge computing as enterprises try to manage suppliers, protect data, and control total cost of ownership. Latency, security, and cost become increasingly relevant for inference workloads that need real-time processing and use owned data sets. Algorithms that use RAG (retrieval-augmented generation) and vector embeddings (numeric representations of data) handle a lot of the computing, networking, and storage tasks close to where the data is stored. This can reduce latency, lower costs, and keep data private and secure. Small language models that have been trained or tuned for a specific domain or task will become increasingly important in this context, as they can be less costly and more energy efficient to run than large general-purpose language models. The rapid growth of new models, both open-source (Meta’s Llama, Mistral, TII’s Falcon) and proprietary (Anthropic’s Claude, Google AI’s Gemini), is extending the range of cost- and energy-efficient options.  
  • Independent software vendors (ISVs): racing to incorporate AI capabilities. Large language model (LLM)-enabled software as a service is already providing AI-powered applications at Adobe, Microsoft, Salesforce, and many other companies. This will create a flood of new capabilities in the coming years, giving enterprises the option to deploy generative AI as part of their existing application suite rather than develop custom applications.
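The RAG-plus-vector-embeddings pattern described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not production code: the sample corpus, the bag-of-words `embed` function, and `build_rag_prompt` are hypothetical stand-ins. Real deployments use learned dense embeddings, a vector database, and an actual model call.

```python
import math
import re
from collections import Counter

# Hypothetical private corpus, standing in for an enterprise's owned data set.
DOCUMENTS = [
    "Edge inference reduces latency for real-time workloads.",
    "Small language models can be tuned for a specific domain.",
    "Object storage scales better than file storage for AI training data.",
]

def embed(text: str) -> Counter:
    # Toy embedding: a sparse bag-of-words count vector.
    # Real systems use dense vectors from a learned embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank stored documents by similarity to the query embedding;
    # this step runs close to where the data lives, keeping it private.
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_rag_prompt(query: str) -> str:
    # Augment the prompt with retrieved context; a real system would then
    # pass this prompt to a (small or large) language model for generation.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Because retrieval narrows the input to a handful of relevant passages, the generation step can often be served by a smaller, domain-tuned model, which is the cost and energy advantage the bullet above describes.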

Disrupted industry structure with more verticalization. The AI workload is challenging and will continue to grow (see Figure 2). The underlying matrix algebra and data-heavy computation strain parallelism, memory and system bandwidth, networking, infrastructure, and application software. Technology vendors are responding by optimizing the technology stack vertically to deliver more efficiencies. For example, most hyperscalers have developed their own silicon for training and inference, such as Amazon’s Trainium and Graviton, Google’s TPU, and Meta’s MTIA. Nvidia has expanded its “unit of compute” beyond the GPU alone, now integrated with fabrics, hybrid memory, DGX, and cloud offerings. Nvidia is also enhancing its software stack and offering hosted services, providing tailored solutions that leverage its hardware and create a more efficient ecosystem for developers and users. Apple is developing its own on-device LLM and already has its own silicon.

Figure 2

AI workloads could grow 25% to 35% per year through 2027

Other segment-specific disruptions include:

  • Large language models (LLMs): The underlying models are proliferating. OpenAI’s ChatGPT held a near monopoly among production-grade generative AI solutions until 2023. Since then, a wave of open-source and proprietary models has broadened the field with many more diverse options, including segmented versions of OpenAI’s offerings.
  • Storage: Storage technology will advance to accommodate the needs of generative AI, including accelerated consolidation of data silos, increasing use of object storage over file and block storage, and selected upgrades to highly vectorized database capabilities.
  • Data management and virtualization: The growing need for data preparation and mobility will spur growth in data management software. This will be particularly important as data-hungry AI apps mobilize data stored in public clouds with ingress and egress fees.
  • Tech services: In the medium term, tech services will be in high demand while customers lack the skills and expertise needed for AI deployment and data modernization. Over time, significant portions of tech services themselves will be replaced by software. Providers in these domains are racing to design new services to sustain their growth trajectories.

AI’s disruptive growth will continue to reshape the tech sector, as innovation spreads beyond the hyperscalers (where it is centered today) to smaller CSPs, enterprises, sovereigns, software vendors, and beyond. Bigger models will continue to push the boundaries, while smaller models will create new, more focused opportunities in specific verticals and domains. AI’s workload demands will also spark innovation in storage, compute, memory, and data centers. As the market becomes more competitive and complex, companies will need to adapt rapidly to capture their share of this potential trillion-dollar market.



