While the United Nations hashes out regulations, the UK’s ‘context-based’ approach is intended to spur innovation but may cause uncertainty in the industry.
Attempts to create standards and regulations for the way generative AI intersects with society are underway across the world. In March, for instance, the UK government released a white paper promoting the country as a place to “turbocharge growth” in AI. According to the white paper, 500,000 people in the UK are employed in the AI industry, which contributed £3.7 billion ($4.75 billion) to the national economy in 2022.
In response, on July 18, the independent research body the Ada Lovelace Institute published a lengthy report calling for a more “robust domestic policy” — legislation that clarifies and organizes the UK’s effort to promote AI as an industry.
Lovelace Institute cautions government
“The UK’s diffuse legal and regulatory network for AI currently has significant gaps. Clearer rights and new institutions are needed to ensure that safeguards extend across the economy,” Matt Davies and Michael Birtwistle of the Ada Lovelace Institute wrote.
Both groups are essentially calling for more clarity around AI regulation, but the UK government is focused on being “pro-innovation,” while the Ada Lovelace Institute emphasizes oversight. The UK government is also gradually shifting away from the GDPR as part of its post-Brexit reshuffling.
What are the Lovelace Institute’s recommendations?
The Ada Lovelace Institute’s recommendations include:
- Taking another look at the UK’s adoption of GDPR and the proposed Data Protection and Digital Information Bill, which could replace GDPR in the country.
- Publishing a statement of citizens’ rights and protections as related to AI.
- Clarifying laws and creating new government positions around AI.
- Supporting the development of standards.
- Establishing funds and government support for consumer groups, trade unions and advisory organizations that might want to hold AI makers accountable.
Meanwhile, the UK prefers to let existing governmental bodies decide how to handle AI on a case-by-case basis. Specifically, the white paper recommends the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority work on their own “context-specific approaches” to generative AI.
The art of balancing regulation and innovation
Gerald Kierce Iturrioz, co-founder and chief executive officer at AI governance management platform Trustible, said his organization agrees with many of the Ada Lovelace Institute’s recommendations.
Governments that want to be pro-innovation should “clarify the legal gray areas such as use of data for training, how bias and fairness should be evaluated, and what the burden of proof standards should be,” he said in an email to TechRepublic.
“The UK must swiftly establish guardrails to ensure that AI systems are developed and used responsibly within the public sector,” Iturrioz said.
If the government doesn’t establish guardrails, more risks could arise. For example, Iturrioz pointed out the use of automated facial recognition by the UK police, which a human rights study from the University of Cambridge last year found to be ethically and legally dubious.
UK stands in contrast to EU security concerns
The UK’s relatively laissez-faire approach stands in contrast to the European Union’s focus on regulation. The EU is drafting an AI law with a risk-based approach aimed at curbing harms such as bias, coercion and biometric identification, including automated facial recognition. In June, the European Parliament approved draft legislation for the AI Act, which establishes guidelines for the use of AI and prohibits some uses, including real-time facial recognition in public places.
Representatives from countries around the world, along with many of the leading AI makers, raised similar concerns at the first United Nations Security Council meeting on AI.
“The UK seems to be waiting to see how implementation and reception of the EU’s AI Act should influence their approach towards AI regulations,” said Iturrioz. “While this makes sense on the surface, there are risks to sitting back while others move ahead on AI regulation.”