On 4 May 2023, the CMA launched an ‘initial review’ of AI foundation models.[1]  The review follows the UK government’s request in its AI White Paper[2] for UK regulators to consider their role in the development and deployment of AI.  The review is intended to develop competition and consumer protection guidance/principles that will “best guide the development of these markets going forward”.

The CMA’s readiness to carry out an initial review is consistent with its widely stated ambition to be at the forefront of digital regulation, and with the recent explosion of interest in AI globally (including from antitrust regulators, such as the US FTC’s Lina Khan, who penned an article calling for AI regulation only the day before the launch of the review[3]).

Sarah Cardell, CMA CEO, commented that “AI has burst into the public consciousness over the past few months but has been on our radar for some time … It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information”.[4]

What will the initial review involve?

The review will focus on three core themes:

  • Competition and barriers to entry in the development of foundation models, including in relation to firms’ access to data, computational resources, talent, funding and “ways in which foundation models could disrupt or reinforce the position of the largest firms”. The CMA is interested in potential economies of scale and network effects that could lead to consolidation.
  • The impact foundation models may have on competition in other markets. The CMA states that foundation models are “likely to become an input to other markets”. It notes concerns that access to AI tools may become “necessary to compete” but may be “restricted unduly or controlled by a few large … companies facing insufficient competitive constraint”. 
  • Consumer protection. Although the CMA considers that AI tools “could empower consumers and increase their welfare”, it will consider “potential risks to consumers … including from false and/or misleading information”, and assess AI providers’ compliance with UK consumer protection law.

The CMA has stressed that its remit is to consider the impact of AI on competition, not other policy areas, and has identified a number of areas that are explicitly outside the scope of its review.  These include risks of AI “general intelligence”; labour market impacts; compliance with the future UK online safety regime; compliance with IP, data protection, and privacy rules; investment in public cloud infrastructure; and the supply of semiconductors and advanced chips.

The CMA will gather evidence by drawing on existing research; issuing short information requests to key stakeholders (it specifically mentions leading technology firms); and holding bilateral meetings with key interested parties.

What are the next steps?

The CMA will start evidence gathering and publish a short report in early September 2023.

The deadline for interested parties to submit their views is 2 June 2023.

The CMA has stated that it will use its findings to: (i) inform its implementation of the government’s approach to AI regulation and any recommendations the CMA may make to the government or other regulators, as well as guidance to businesses and users; and (ii) decide whether further consideration of an issue is appropriate (e.g., under its markets, competition, and/or consumer law powers).

[1]    See CMA, AI Foundation Models: Initial Review, 4 May 2023.

[2]    See Department for Science, Innovation & Technology, A pro-innovation approach to AI regulation, March 2023.

[3]    See Lina Khan, The New York Times, We Must Regulate A.I. Here’s How., 3 May 2023.

[4]    See CMA, Press release: CMA launches initial review of artificial intelligence models, 4 May 2023.