People need stronger protection from the effects of artificial intelligence, the EU’s rights agency argued in a report Monday, as one expert warned against the “blind adoption” of such technology.

Much of the attention on developments in AI “focuses on its potential to support economic growth”, said the report by the EU’s Fundamental Rights Agency (FRA). But it added: “How different technologies can affect fundamental rights has received less attention.” There is a risk that “people are blindly adopting new technologies without assessing their impact before actually using them”, David Reichel, one of the experts who worked on the report, told AFP.

“There are many people who think that when you don’t have any data linked to gender or ethnic origin in your data set then you’re fine and that is not discriminating,” Reichel added.

On the contrary, he argued, caution was needed as “there is a lot of information that can be linked to protected attributes”.

In August, for example, the Court of Appeal in London found that the use of facial recognition by police in Cardiff was unlawful, in part because not enough had been done to ensure the technology was not prone to bias.

“Technology moves quicker than the law,” FRA director Michael O’Flaherty said in the report.

“We need to seize the chance now to ensure that the future EU regulatory framework for AI is firmly grounded in respect for human and fundamental rights.” More research funding was needed into the “potentially discriminatory effects of AI”, the agency added.

“Any future AI legislation has to consider” possible discriminatory effects and impediments to justice “and create effective safeguards”, the agency said in a statement accompanying the report.

The issue was all the more pressing given that the “Covid-19 pandemic has potentially quickened acceptance of innovative technologies”, particularly in improving healthcare and helping track the spread of disease, the report noted.

Already at the start of 2020, 42 percent of companies were using AI-dependent technologies, the report said, citing recent research.

Organisations using such technology needed to be more transparent and more accountable, said the report.

“People need to know when AI is used and how it is used, as well as how and where to complain,” it added.

Originally published at Urdu Point