UK AI Copyright Rules May Backfire, Causing Biased Models & Low Creator Returns

by Adrian Russell


Big Ben in London.
Image: pichetw/Envato Elements

Barring companies like OpenAI, Google, and Meta from training AI on copyrighted material in the UK may undermine model quality and economic impact, policy experts warn. They say that it will lead to bias in model outputs, undermining their effectiveness, while rightsholders are unlikely to receive the level of compensation they anticipate.

The UK government opened a consultation in December 2024 to explore ways to protect the rights of artists, writers, and composers when creative content is used to train AI models. It outlined a system that permits AI developers to use online content for training unless the rightsholder explicitly opts out.

Bodies representing the creative industries largely rejected this proposal, as it put the onus on creators to exclude their content rather than requiring AI developers to seek consent. Tech companies didn’t like it either, arguing that the system would make it difficult to determine which content they could legally use, restrict commercial applications, and demand excessive transparency.

During a recent webinar hosted by the Centre for Data Innovation think tank, three policy experts explained why they believe any solution short of a full text and data mining exemption in UK copyright law risks producing ineffective AI systems and stalling innovation.

Opt-out regimes may result in poorly trained AI and minimal income for rightsholders

Benjamin White, the founder of copyright reform advocacy group Knowledge Rights 21, argued that regulations on AI training will affect more than just the creative industries. Since copyright serves to stimulate investment by protecting intellectual property, he said, the broader economic impact of any restrictions should also be taken into account. “The rules that affect singers affect scientists, and the rules that affect clinicians affect composers as well. Copyright is sort of a horizontal one-size-fits-all,” he said.

He added that the scientific community is “very concerned at the framing of the consultation,” noting that it overlooks the potential benefits of knowledge sharing in advancing academic research, which, in turn, offers widespread advantages for society and the economy.

White said: “The existing exception doesn’t allow universities to share training data or analysis data with other universities within proportionate partnerships, doesn’t allow NHS trusts to share training data derived from copyright materials like journal articles or materials scraped off the web.”

SEE: Why Artists Hate AI Art

Bertin Martens, senior fellow at economic think tank Bruegel, added: “I think media industries want to have their cake and eat it at the same time. They’re all using these models to increase their own productivity already at this moment, and they benefit from good quality models, and by withholding their data for training, they reduce the quality… so it cuts into their own flesh.”

If AI developers signed licensing agreements with just the consenting publishers or rightsholders, then the data their models are trained on would be skewed, according to Martens. “Clearly, even big AI companies are not going to sign licenses along that long tail of small publishers,” he said. “It’s far too costly in terms of transaction costs, it’s not feasible, and so we get biased models with partial information.”

Julia Willemyns, the co-founder of tech policy research project UK Day One, stated that the opt-out regime is unlikely to be effective in practice, as jurisdictions with less restrictive laws will still allow access to the same content for training. Blocking access to outputs from those jurisdictions would ultimately deprive the UK of the best available models, she warned. She said this “slows down technology diffusion” and has “negative productivity effects.”

SEE: UK Government Releases AI Action Plan

Furthermore, artists are unlikely to earn meaningful income from AI licensing deals. “The problem is that every piece of data isn’t worth very much to the models, these models operate at scale,” said Willemyns. Even if licensing regimes were enforced globally and rightsholders’ content could only be used with explicit legal consent, the economic benefit for creators would still be “likely very, very minimal.” “So, we’re trading off countrywide economic effects for a positive that seems very negligible,” she said.

Willemyns added that overcomplicating the UK’s copyright approach by, say, requiring separate regimes for AI training on scientific and creative materials, could create legal uncertainty. This would overburden courts, deter business adoption, and risk losing out on AI’s productivity gains. A text and data mining exemption would ensure simplicity.

ChatGPT’s Ghibli controversy underscores blurred lines in AI creativity

The debate over artistic protection versus innovation also surfaced last month during a controversy involving AI-generated art in the style of Studio Ghibli, the Japanese animation house behind ‘Spirited Away’ and ‘My Neighbor Totoro.’ Critics argued it risked appropriating a distinctive artistic style without permission, and OpenAI eventually introduced a refusal mechanism that activates when users attempt to generate images in the style of a living artist.

The panel disagreed with this approach. Willemyns said that the stock of Studio Ghibli’s parent company “clearly upticked” as increased attention drove more people to watch its films. “I feel like the arguments that AI slop is not going to actually take over content were kind of reaffirmed by the instance,” she said. Martens agreed, arguing that “if there are many Ghibli lookalikes that are being produced it increases competition around a popular product, and that’s something that we should welcome.”

SEE: UK Pledges Public Sector AI Overhaul

White added that cartoons with Ghibli’s art style are produced by lots of different Japanese studios. “They’re all people with big eyes, Western-looking, that’s the style,” he said. “That’s not protected by copyright, what copyright law protects is substantial similarity.”

Martens noted that how close a particular AI-generated work can come to an original is “up to the courts,” but this can only be determined on a case-by-case basis. Ultimately, the panel agreed that models should not be able to directly reproduce training content, but that training on publicly available material should remain permissible. “Having flexibility on how the systems are built and how technology learns from content that’s publicly available is most likely the best way forward,” said Willemyns.
