The European Union is creating a new research unit to support oversight of large platforms under the Digital Services Act (DSA). The European Centre for Algorithmic Transparency (ECAT) is expected to play a major role in interrogating the algorithms of mainstream digital services.
The EU’s Joint Research Centre (JRC) is a long-established science facility that conducts research in support of EU policymaking. ECAT, by contrast, has a dedicated focus on the Digital Services Act, supporting enforcers in gathering evidence to build cases against platforms that don’t take their obligations seriously.
The function of ECAT is to identify “smoking guns” to drive enforcement of the DSA, such as an AI-based recommender system serving discriminatory content despite the platform claiming to have taken steps to “de-bias” output.
The EU prioritized a major retooling of its approach to regulating digital services and platforms in 2019, leading to the adoption of the Digital Services Act (DSA) and its sister regulation, the Digital Markets Act (DMA). Both regulations come into force in stages over the coming months, but a subset of so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs) face imminent oversight.
The Commission is set to designate which platforms will be subject to the special oversight regime, which requires them to assess systemic risks their algorithms may pose.
It is not yet confirmed which platforms will get the designation, but set criteria, such as having 45 million+ regional users, encourage educated guesses. Twitter may have painted a DSA-shaped target on its feathered back; we should find out in the coming weeks.
Tech giants will have four months to comply with the DSA’s requirements, including producing their first risk assessment reports.
Risks to be assessed include the distribution of disinformation and illegal content and negative impacts on freedom of expression and users’ fundamental rights. The regime also imposes limits on profiling-driven content feeds and on the use of personal data for targeted advertising.
EU lawmakers are already claiming credit for certain shifts in platform behavior, such as Twitter’s open sourcing of its recommendation algorithm.
The DSA is the bloc’s bid to set new standards in online safety by using mandatory transparency as a flywheel for driving algorithmic accountability. It aims to force tech giants to open up about the workings of their AI “black boxes” and take a more proactive approach to addressing data-driven harms.
Big Tech has been accused of profiting off of toxicity and/or irresponsibility, and mainstream marketplaces and social media giants have been accused of failing to meaningfully address myriad harms attached to how they operate their powerful sociotechnical platforms.
ECAT is being positioned as the jewel in the crown of the Commission’s DSA toolbox, bringing scientific rigor, expertise, and human insight to the complex task of understanding AI effects and auditing their immediate impacts.
It should help end the era of platforms’ PR-embellished self-regulation, as tech giants will have to show their workings in arriving at such statements.
The EU hopes ECAT will become a hub for world-leading research in algorithmic auditing and help regional researchers unpick longer term societal impacts of mainstream AIs. However, the mission is complex and poor results could make the bloc a lightning rod for anti-innovation criticism.
Brussels is working to shape a digital decade marked by strong, human-centric regulation and strong innovation, as Renate Nikolay, deputy director-general for Communications Networks, Content and Technology, said when she cut ECAT’s virtual ribbon today.
OpenAI’s ChatGPT got a mention at the ECAT launch event, dubbed “one more reason” to set up the unit. Mikel Landabaso, a director at the JRC, argued that algorithmic transparency is a timely mission to take on as generative AI stirs fresh concerns, describing ECAT as a world-leading research effort in the field and an opportunity for the European research scene.
The EU’s Nikolay highlighted the importance of the DSA mission, arguing that it will protect users and citizens as they navigate the online environment. She also suggested that the world is watching, looking to the European model as a reference point when designing approaches to the digital economy.
Nikolay addressed doubters by stressing that the EU is ready to be Big Tech’s algorithmic watchdog and that the Commission is getting ready for the role, adding that it is not acting alone but together with important partners, including ECAT.
The ECAT is a research team on artificial intelligence with a “unique” focus tied to policy enforcement. The plan is for a team of 30-40 people to staff the unit, possibly reaching full capacity by the end of the year.
The initial recruitment drive attracted significant interest, with over 500 applications following job ads posted last year. Funding for the unit comes from the existing budget of the JRC, but a supervisory fee levied on VLOPs/VLOSEs (capped under the DSA at 0.05% of annual worldwide net income) will be used to finance ECAT’s staff costs.
ECAT staff presented four projects, including examining racial bias in search results, designing voice assistant technology for children, and researching social media recommender systems. Other early areas of research include facial expression recognition algorithms and algorithmic ranking and pricing.
ECAT staff have developed a data analysis tool to help the Commission parse the risk assessment reports platforms must submit, a safeguard against tech giants attempting to flood the channel with noise. The unit also aims to shine a light on societal impact by studying longer-term effects of interactions with algorithmic technologies, including on gender-based violence, child safety and mental health.
The JRC is taking a multidisciplinary approach to hiring talent, including computer and data scientists, social and cognitive scientists, and other types of researchers.
They want to be able to apply a broad variety of expertise and perspectives to interrogate AI impacts, and to partner with the wider European research community. The future home for ECAT has been designed as a visual metaphor for the spirit of openness they are aiming to channel.
ECAT is aiming to catalyze the academic community in Europe to focus on AI impacts. It is working to build bridges between research institutions, civil society groups and others to create a regional ecosystem dedicated to unpicking algorithmic effects.
One early partnership is with France’s PEReN, which has devised a tool to study how quickly the TikTok algorithm latches on to a new target when a user’s interests change.
The DSA is a new EU regulation that, for these largest platforms, takes a centralized approach to enforcement, with the Commission itself in the oversight role. It provides for penalties of up to 6% of global annual turnover for tech giants that don’t take transparency and accountability requirements seriously, and it puts legal obligations on platforms to cooperate with regulators, such as providing data to support Commission investigations.
The GDPR also carries large penalties, but its application against Big Tech has been stymied by forum shopping. The hope is that the DSA’s centralized enforcement structure will deliver more robust and reliable enforcement, pushing platforms to focus on the common good.
The debate over how to measure AI impacts on subjective considerations like well-being and mental health is ongoing. Whether the Commission can hold its own against Big Tech’s policy staffers will depend on the tone it sets on enforcement. If it doesn’t come out swinging early, Big Tech may be able to set the timeline, shape the narrative, and engage in other bad-faith tactics.
The Commission faced questions from the press about its preparedness to crack open Big Tech’s algorithmic black boxes. It responded by professing confidence in its abilities and in the Digital Services Act as the enabling framework for pulling off this massive, public service-focused reverse engineering mission.
One official noted that the Digital Services Act already imposes transparency obligations on platforms, so the deeper challenge lies with the algorithmic and recommender systems themselves.
Rumman Chowdhury, the former head of Twitter’s AI ethics team, predicted that the EU’s quasi-Sisyphean task would mean a “very messy 3-5 years” but prove beneficial in the end. She has since established a consultancy firm focused on algorithmic auditing and has been co-opted into the DSA effort, working with the EU on research and implementation for the regulation and sharing her take on how to devise algorithmic assessment methodology.
In her public remarks, Chowdhury praised the Digital Services Act for working to move concepts of benefit to humanity and society from research and application into tangible requirements.
She also hit out at the latest AI hype cycle driven by generative AI tools like ChatGPT, warning that the same bogus claims are being recycled for human-programmed technologies with known flaws even as platforms dismantle their internal ethics teams, a display of cynical opportunism as tech giants attempt to reboot the same old cycle and keep ducking responsibility.
The slow demise of internal accountability teams at most technology companies has coincided with the rushed launch of generative AI algorithms and products. But Chowdhury’s presence at the EU event implied one tangible upside: insider talent is being freed up to take jobs working in the public good. She hopes that laws and methodologies like the DSA’s can appeal to the conscience of the many people who want to be doing this kind of work, because there is a gap that needs to be filled.