UK will ‘do own thing’ on AI regulation — what could that mean?


Jacques Silva Nurfoto | Getty Images

LONDON – The United Kingdom says it wants to do its “own thing” when it comes to regulating artificial intelligence, hinting at a possible divergence from approaches taken by its main Western peers.

“It’s really important that we, as the UK, do our own thing in terms of regulation,” Feryal Clark, Britain’s minister for AI and digital government, told CNBC in an interview that was aired on Tuesday.

She added that the government already has a “good relationship” with AI companies such as OpenAI and Google DeepMind, which have voluntarily opened up their models to the government for safety testing purposes.

“It’s really important that we bake in that security at the beginning when the models are developed … and that’s why we have to work with the sector on any security measures that come forward,” Clark added.

The UK can do its thing

Her comments came after Prime Minister Keir Starmer said on Monday that, post-Brexit, Britain has “the freedom now in relation to regulation to do it in a way that we think is best for the United Kingdom.”

“You have different models around the world, you have the EU approach and the U.S. approach – but we have the ability to choose the one that we think is in our best interest, and we intend to do so,” Starmer said in response to a reporter’s question after announcing a 50-point plan to make the UK a global leader in AI.

Divergence from the US, EU

So far, Britain has refrained from introducing formal laws to regulate AI, instead leaving it to individual regulatory bodies to enforce existing rules on companies when it comes to the development and use of AI.

This is unlike the EU, which has introduced comprehensive pan-European legislation aimed at harmonizing rules for technology across the bloc with a risk-based approach to regulation.

The United States, meanwhile, lacks any AI regulation at the federal level and has instead adopted a patchwork of regulatory frameworks at the state and local level.

During Starmer’s election campaign last year, the Labour Party pledged in its manifesto to introduce regulation focused on so-called “frontier” AI models – referring to large language models such as OpenAI’s GPT.

However, so far the UK has yet to confirm details on the proposed AI safety legislation, instead saying it will consult with industry before proposing formal rules.

“We’re going to work with the sector to develop that and bring that in line with what we said in our manifesto,” Clark told CNBC.

Chris Mooney, partner and head of commercial at London law firm Marriott Harrison, told CNBC that the UK has taken a “wait and see” approach to AI regulation even as the EU moves forward with its AI Act.

“While the UK government says it has taken a ‘pro-innovation’ approach to AI regulation, our experience working with clients is that they find the current position uncertain and therefore unsatisfactory,” Mooney told CNBC via email.

One area where the Starmer government has signaled it may reform the rules for AI is copyright.

At the end of last year, the UK opened a consultation reviewing the country’s copyright framework to assess possible exceptions to existing rules for AI developers who use the works of artists and media publishers to train their models.

Companies left uncertain

Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC that while the government’s AI action plan “shows ambition,” proceeding without clear rules is “borderline reckless.”

“We’ve already missed crucial regulatory windows twice — first with cloud computing and then with social media,” Duggal said. “We cannot afford to make the same mistake with AI, where the stakes are exponentially higher.”

“UK data is our crown jewel; it must be harnessed to build sovereign AI capabilities and create British success stories, not just feed overseas algorithms that we cannot effectively regulate or control,” he added.

Details of planned AI legislation were originally expected to appear in King Charles III’s speech opening the UK Parliament last year.

However, the government committed only to establishing “appropriate legislation” for the most powerful AI models.

“The UK government needs to provide clarity here,” John Buyers, international head of AI at law firm Osborne Clarke, told CNBC, adding that he has learned from sources that a consultation on formal AI safety laws is “waiting to be released.”

“By issuing consultations and short-term plans, the UK has missed an opportunity to provide a holistic view of where its AI economy is headed,” he said, adding that the lack of clarity on the forthcoming AI laws leads to investor uncertainty.

Still, some figures in the UK tech scene think a more relaxed and flexible approach to regulating AI may be the right thing to do.

“From recent discussions with the government, it’s clear that considerable efforts are underway to safeguard AI,” Russ Shaw, founder of the advocacy group Tech London Advocates, told CNBC.

He added that the UK is well-positioned to adopt a “third way” on AI safety and regulation – “sector-specific” regulations governing different industries such as financial services and healthcare.


