The explosion in the development of generative AI has been referred to as an “Oppenheimer” moment.  Just last week, a group of more than 350 executives and scientists jointly stated that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”  And more than 1,000 tech leaders have called for a moratorium on AI development until regulations governing its safe use are devised. 

Governments, authorities, and companies globally are therefore moving quickly to agree principles to regulate AI.  The UK is positioning itself as a torchbearer for initiatives that balance the need for regulation and international alignment against the promotion of innovation that can ultimately benefit consumers, businesses, and society.  Prime Minister Rishi Sunak has said that AI should be regulated in a way that makes innovations “safe and secure”, while embracing potentially revolutionary applications of AI such as helping paralyzed people regain physical movement and discovering new antibiotics. 

In March 2023, the Government published a White Paper setting out its proposed approach for regulating AI and launched a consultation (see our previous blog post here).  On 1 June 2023, the Competition and Markets Authority (CMA) responded to the consultation, adding its views from a competition and consumer law perspective.  In parallel, the CMA has launched its own study of AI foundation models (see our previous blog post here), commenting that the study may “result in recommendations to other regulators, or to government, with respect to its approach to AI regulation as outlined in the White Paper”. 

This blog post summarizes the CMA’s response, which, among other things, provides useful insights into how the CMA might use its expected new powers under the Digital Markets, Competition and Consumers (DMCC) Bill, published on 25 April 2023 (see our previous blog post here), to regulate AI. 

CMA’s approach to the UK Government’s proposed AI principles

The UK Government does not propose introducing AI-specific formal regulation like the EU’s Artificial Intelligence Act.  Rather, it envisages “tailored, context-specific approaches” by existing expert regulators such as the CMA.  To guide and inform the consistent regulation of AI, the White Paper sets out five cross-sectoral principles to be implemented by existing regulators.[1]  The CMA’s response considers how these principles could apply to its current and future work, recognizing that some are “more directly relevant to other regulatory bodies”, e.g., the Information Commissioner’s Office (ICO) and the Office for Product Safety and Standards.

  1. Safety, security, and robustness.  The CMA notes that, in well-functioning markets, firms should “face the correct incentives to determine and implement the appropriate level of security and testing to ensure that their systems function robustly.”  In other words, in an ideal market, customers would discipline firms by taking their business elsewhere if a firm’s AI products fell short of the required standards.  But AI safety suffers from a collective action problem: individual firms may lack sufficient market incentives to develop and deploy AI safely and securely on their own.  Authorities therefore recognize the need for regulation, standardization, or other forms of industry cooperation.

    The CMA notes, in this connection, that it may need to intervene to protect consumers’ interests as consumers “may not [be] in a position to assess technical functioning or security” of an AI product (e.g., where a website’s AI detection system fails to prevent bad actors from posting fake and misleading reviews).  The DMCC Bill provides for a significant strengthening of the CMA’s consumer protection enforcement powers, including the power to impose fines of up to 10% of worldwide turnover.
  2. Appropriate transparency and explainability.  The CMA highlights the relevance of this principle to the “trust and transparency” objective under the DMCC Bill.  This is one of three overarching objectives that the CMA’s Digital Markets Unit must, under the Bill, pursue when it imposes conduct requirements on firms designated as having “Strategic Market Status”.  The CMA comments that consumers “are entitled to be informed of how companies’ use of AI influences their decision-making when making choices and decisions about products and services online” and “should not be misled,” giving the topical example of an AI language model providing false and misleading information[2] in a context where the consumer is making an economic decision.

    Similar transparency obligations could apply under the EU’s forthcoming AI Act.  Under the European Parliament’s recent proposed amendments, generative foundation models would have to comply with additional transparency measures, such as disclosing that content is generated by AI, designing models to prevent them from generating illegal content, and publishing summaries of copyrighted data used for training.

    The CMA also notes that, for digital platforms, transparency “could take the form of guarantees that no self-preferencing or undue discrimination is occurring against competitors, or that provided data is being used only for certain purposes”.  Significantly, the CMA recognizes that certain considerations may limit the extent of transparency that is appropriate, such as the need to protect confidential information and intellectual property rights, as well as the risk of “gaming, manipulation, or facilitation of collusion”.

    Interestingly, the CMA also refers to AI technology as being relevant to remedies, such as using AI to spot when social media users have not disclosed paid endorsements (a measure included in undertakings given by Meta).  The CMA’s consumer and competition enforcement tools, in addition to its forthcoming new functions under the DMCC Bill, allow it to order firms to disclose information about their AI systems or conduct algorithmic risk assessments, among other things. 
  3. Fairness.  The CMA notes the “considerable overlap” between this principle and its remit, giving the example of AI-powered recommender engines that might give rise to unfair hindrances to the extent their operators engage in self-preferencing.  The CMA also calls out AI systems that produce discriminatory outcomes, such as in pricing, and the attendant risks of exploitation of vulnerable consumers and exclusionary practices (e.g., offering selective prices to potential customers of smaller rivals to drive them out of business).
  4. Accountability and governance.  The CMA refers to the ex ante functions under the digital markets regime proposed in the DMCC Bill as an example of how it could hold firms accountable, in addition to its current competition and consumer law tools.  The CMA also notes that novel challenges might arise regarding the accountability of certain AI systems that learn to reach collusive outcomes “without any explicit coordination, information sharing or intention by human operators”.
  5. Contestability and redress.  Owing to “the opacity of algorithmic systems and the lack of operational transparency”, the CMA considers that it would be hard for customers effectively to discipline firms, either through private actions or complaints to regulators.  In the CMA’s view, it is therefore “essential that regulators are adequately equipped with the resources and expertise to monitor potential harms in their remits, and the powers to act where necessary”.

Next steps

As previously noted, the Government is consulting on its proposed AI framework until 21 June 2023.  In addition to the CMA, the ICO has published its response to the White Paper (see our IP and Technology blog post here).  Both the CMA and the ICO highlight the value of coordination among regulators to foster greater coherence and certainty for businesses developing and using AI, including through the Digital Regulation Cooperation Forum.[3]  This is welcome and consistent with the Government’s wish to support coherent implementation of the principles, and could help avoid the risk of overlapping (and irreconcilable) guidance being issued by different regulators over the coming months.

Meanwhile, the UK Government is equally focused on building international consensus on AI regulation.  Rishi Sunak is due to meet President Joe Biden this week to discuss regulatory “guardrails” and is reported to be organizing a global summit this autumn aimed at formulating international rules on AI.  Sunak is also rumored to be considering setting up a global AI watchdog in London, modelled on the International Atomic Energy Agency, founded in 1957 and based in Vienna.  The quick succession of major developments in AI regulation highlights a rapidly evolving regulatory landscape against the backdrop of seemingly inexorable technological advancement and explosive global growth of interest in AI.  The rest of 2023 can be expected to brim with further fast-moving technological innovations, with regulatory developments to match.


[1] According to the White Paper, these principles are issued on a “non-statutory” basis during an “initial period of implementation”.  Nevertheless, the UK Government anticipates introducing a statutory duty on regulators requiring them to have due regard to the principles in future.

[2] It has been widely documented that many AI language models are prone to “hallucinations”, i.e., the fabrication of facts, references, and links.  See, e.g., Roula Khalaf, “Letter from the editor on generative AI and the FT”, Financial Times, 26 May 2023.

[3] In addition to the CMA and the ICO, the Financial Conduct Authority and the Office of Communications are also members of the Digital Regulation Cooperation Forum (DRCF).  The DRCF was “established to ensure a greater level of cooperation, given the unique challenges posed by regulation of online platforms”.