CEU was home to a unique workshop on Monday, September 16, titled ‘Populism, Technology and Law’. The event brought together CEU faculty, international scholars and representatives of NGOs, and was organized by the Center for Ethics and Law in Biomedicine in collaboration with the Comparative Populism project. Participants sought to explore technological challenges to the rule of law and to analyze the contribution of new digital technologies, such as artificial intelligence and machine learning, to the rise of populism around the world.
The workshop’s opening speech was delivered by Pro-Rector Zsolt Enyedi, who emphasized the novelty, timeliness and importance of considering the role and impact of new technologies in the study of populism. In her welcome address, Professor Judit Sándor, director of the Center for Ethics and Law in Biomedicine, highlighted how the combined study of populism, technology and law has been quite rare until now. However, she argued, human rights lawyers and other scholars need to turn to the scrutiny of the most recent advances in robotics, artificial intelligence and various surveillance technologies to reflect on how these developments transform our world, including our work, our private life, and the political landscape.
The first session was devoted to assessing the stakes involved in the dangerous liaison between populism and new technologies. Paul Nemitz, Principal Advisor in the Directorate-General for Justice and Consumers of the European Commission, centered his keynote speech around the idea of approaching the topic from the perspective of a power analysis. It is vital, argued Mr Nemitz, to recognize that the forces of technology combined with new modes of capitalism represent the most important factors shaping contemporary societies. Therefore, the key challenge of our time is to find ways of containing and controlling that power to ensure the primacy of democracy and the rule of law.
In her presentation, Ms Fanny Hidvegi, European Policy Manager at the Brussels-based digital rights NGO AccessNow, reflected on the limitations of ongoing efforts at the European level to address the challenges of AI. Instead of focusing on AI uptake and ethics guidelines, Ms Hidvegi emphasized the need to create legally binding instruments based on the protection of human rights. This approach is necessary to curb possible abuses of new technologies by autocratic and populist regimes in Europe and elsewhere.
The second session was devoted to two specific areas that are of central importance in current discussions, namely, misinformation networks, and technology-based proposals to address data privacy.
Mr Marius Dragomir, Director of the Center for Media, Data and Society at CEU, presented preliminary findings from an ongoing study of the ownership structure and funding behind misinformation networks, especially in the CEE region. While there is a lack of transparency around ownership and considerable differences exist between countries, advertising appears to be the dominant revenue stream of such websites. Importantly, misinformation networks are enabled by social media platforms, which serve as anchors for fake news pages that persist even after particular domains have been taken down. This highlights the role internet intermediaries play in shaping public discourse, which was a recurrent theme during the workshop.
Dr Michael Veale, lecturer in Digital Rights and Regulation at University College London, introduced emerging challenges related to cryptographic solutions to data privacy. While these technologies may guarantee confidentiality and appear to embody the ideals of ‘privacy by design’, they nevertheless preserve the problems associated with existing business models and informational power. Thus, rather than treating data as the locus of attention, Dr Veale suggested that regulation and governance should consider optimization infrastructures, which determine the inferences that are drawn on the basis of our data.
The third session looked at policy options for the future and was kicked off with a talk by CEU alumna Mackenzie Nelson, who currently oversees the Governing Platforms project at AlgorithmWatch. Ms Nelson gave an overview of the ways in which online platforms have reshaped the traditional media landscape and presented examples of current regulatory attempts. As she argued, such attempts are mostly constrained by a lack of evidence on platforms, the persistence of policy silos, and overall information asymmetry, whereby platforms know a lot about us but we know very little about how they operate. Finally, Ms Nelson introduced the Governing Platforms project, which uses a multi-stakeholder approach to come up with a rights-respecting and evidence-based proposal for platform governance.
The last speaker was Harry Armstrong, Head of Technology Futures at Nesta, a UK-based innovation foundation. Mr Armstrong gave an overview of how technology regulation has changed over recent decades and outlined the novel set of challenges that regulators face in this space. Specifically, Mr Armstrong described a new approach called ‘anticipatory regulation’, which relies on a set of principles meant to address the limitations of current technocratic solutions to technology governance. According to this model, regulation should be inclusive and collaborative, future-facing, proactive, iterative, outcomes-based, and experimental. However, Mr Armstrong also emphasized that in order to curb the rise of populism, policies must address underlying structural issues that exacerbate inequalities and cause a sense of alienation among many.
The panels were followed by lively discussions with members of the audience, and some preliminary conclusions began to take shape. The time appears ripe to regulate online platforms, automated decision-making processes and AI-based tools that might affect the public sphere. Europe should take the lead in crafting this regulation, which must follow a human rights-based approach.
The event proved to be highly stimulating and successful, and we look forward to continuing our engagement with questions around the ethics and governance of artificial intelligence.
photos: Kinga Lakner