Rapidly emerging artificial intelligence (AI) technology is poised to transform how businesses operate across almost all sectors, from social media to education to healthcare. Globally, governments and regulators are starting to react to the potential risks, but also the opportunities, that AI and machine learning models can bring. Earlier this month, data protection authorities in Italy, Canada and South Korea opened a series of investigations into data privacy issues related to OpenAI’s ChatGPT, with the Italian agency temporarily banning the use of ChatGPT in the country.
Against this backdrop, on 29 March 2023, the UK Department for Science, Innovation and Technology (DSIT) published a white paper setting out its proposed approach for regulating artificial intelligence. The white paper outlines five principles that should guide the use of AI in the UK and should be taken into account by regulators, including the Competition and Markets Authority (CMA). In the DSIT’s view, AI regulation should be “context-specific” because AI technology can be deployed in many different ways with varying degrees of risk. For instance, the risks and implications associated with AI being used in chatbot applications will be different from those associated with medical use-cases. The government therefore concluded that a one-size-fits-all approach would not be appropriate. Instead, existing regulatory bodies with long-standing expertise in different sectors and areas of regulation should lead on the implementation of the government’s framework by using existing regulatory tools and issuing guidance to the industry. The DSIT is seeking feedback on its proposed framework by 21 June 2023.
The white paper does not propose setting up a dedicated regulatory body for AI or introducing AI-specific formal regulation like the EU’s Artificial Intelligence Act. In the government’s words, it “will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI.” Instead, the white paper envisages “regulating the use [of AI] – not the technology” through “tailored, context-specific approaches” by existing expert regulators such as the CMA.
The white paper sets out five principles to guide consistent regulation of AI:
- Safety, security and robustness: AI applications should function in a secure, safe and robust way, with risks carefully managed
- Transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and explain decision-making processes at a level of detail that matches the risks posed by the use of AI
- Fairness: AI should be used in a way that complies with the UK’s existing laws, for example the Equality Act 2010 or the UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
- Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
- Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI
The UK government’s approach to digital sector regulation, including emerging AI regulation, has differed from the approach followed in other jurisdictions such as the EU and Germany, which have passed new rules to govern competition in digital markets. Those rules set fairly rigid dos and don’ts that apply to certain digital platforms (see Digital Markets Act: What to expect as the new act comes into force?). The UK government has instead opted for a more flexible approach (see Digital Markets Regulation Handbook: United Kingdom), with the CMA still waiting to receive statutory powers for its Digital Markets Unit that will regulate digital markets in the UK.
The new ‘pro-competition’ regime for digital firms in the UK is unlikely to come into force before 2023/2024, as the government indicated in the Queen’s Speech in May 2022 that only a draft bill would be published in the 2022/2023 Parliamentary session. New competition law rules for AI are not yet imminent in the UK.
In line with the government’s white paper, the CMA and other regulators (such as the Information Commissioner’s Office, the data protection regulator) are expected to use their current toolboxes and issue practical guidance to organisations over the next 12 months. The government will also consult until 21 June 2023 on its proposed AI framework, including new processes to improve coordination between regulators and monitor and evaluate the regulatory framework. Finally, the government has announced it will create a regulatory sandbox “where businesses can test how regulation could be applied to AI products and services, to support innovators bringing new ideas to market without being blocked by rulebook barriers.” Overall, the government’s package of proposals indicates a willingness to take a flexible and measured approach to regulating emerging digital markets, such as AI.
See UK unveils world leading approach to innovation in first artificial intelligence white paper to turbocharge growth (29 March 2023); Policy Paper – AI regulation: a pro-innovation approach (29 March 2023).
The European Commission proposed a draft AI regulation in April 2021 that is currently being negotiated by EU legislators. According to press reports, the latest rounds of proposed amendments include obligations for “general purpose” AI, such as ChatGPT. See Euractiv, Leading EU lawmakers propose obligations for General Purpose AI (14 March 2023).
The EU Digital Markets Act came into force on 1 November 2022 and its behavioural obligations for digital gatekeepers will start applying in early 2024. See Cleary Gottlieb, Digital Markets Act: What to expect as the new act comes into force? (27 September 2022). Similarly, in Germany, the 10th Amendment of the Act against Restraints of Competition came into effect in January 2021, introducing new rules for undertakings with “paramount cross-market significance.” See Cleary Gottlieb, Digital Markets Regulation Handbook: Germany. The head of the German competition authority has recently indicated that AI tools warrant a “very close look” from a competition law perspective and that his agency is well-equipped under Germany’s new digital markets rules to do so. See MLex, March of AI poses serious challenges for competition, Germany’s Mundt says (29 March 2023).
See Cleary Gottlieb, UK Government Confirms Plans for Digital Markets Regulatory Regime (May 10, 2022), and Queen’s Speech 2022, background briefing notes (May 10, 2022).