“Human Rights in the Era of AI: Europe as International Standard Setter for Artificial Intelligence” (Conference Report)

As courts and authorities around the world try to strike a balance with Big Tech, the German Presidency of the Council of Europe (CoE) hosted an online conference titled “Human Rights in the Era of AI: Europe as International Standard Setter for Artificial Intelligence” on January 20, 2021, with a diverse and distinguished list of speakers. This article summarizes some key findings and observations on AI and human rights.

A conference report by Idil Kula

“What sort of society do we want to live in?” This quote from the documentary iHuman, directed by Tonje Hessen Schei, echoed throughout the event as the debate scrutinized the relationship between AI and human rights, democracy, and the rule of law. In his opening speech, Heiko Maas (Federal Minister for Foreign Affairs of Germany) pointed to autonomous weapons in warfare, authoritarian regimes’ mass surveillance policies, and the radicalization of people on social networks as current challenges. He underlined the widely acknowledged need for public regulation when it comes to defining the thin line between freedom of speech and hate crime. All regulation should, according to Maas, be pursued through multilateral cooperation and international collaboration free from geopolitical bipolarity. His words clearly alluded to the Chinese model of digitalization and to Silicon Valley’s profit-prioritizing approach, which cannot be expected to uphold such values where monopolies prevail. He called for expanding Europe’s AI capabilities as a showcase of good privacy practice and concluded with a call to set standards for human-centered AI.

The next speaker, CoE Commissioner for Human Rights Dunja Mijatović, underlined how unwanted outcomes can arise when a society relies too heavily on calculations. She gave examples of errors and failures in algorithmic decision-making (ADM) systems, in social payments in public services, and in tracking and tracing devices used to enforce lockdowns and measure the spread of infections. She thus raised discrimination and data protection concerns that arise when governments delegate decisions to ADM or AI. Alongside last year’s Recommendation on the human rights impacts of algorithmic systems by the CoE Committee of Ministers, she pointed to the Recommendation on AI and human rights published by her office in 2019. Among its ten areas of action, she deemed transparency rules and business compliance with human rights standards to be the most important. Mijatović crowned her speech with a call for more inclusive and interdisciplinary cooperation among all stakeholders, from governments to civil society.

The first of the three dedicated panel debates scrutinized AI’s impact on human rights, democracy, and the rule of law, as well as possible ways out of problematic cases. Following the presentation on CAHAI’s feasibility study and a prospective CoE legal framework for AI, the next panel debate examined the likelihood of new international frameworks as well as the concerning uses and consequences of AI. The final high-level panel debate questioned how and why the proceedings of the CoE and the EU should be further enhanced and developed. With human rights, democracy, and the rule of law as core indicators, the most remarkable takeaways from the debates were:

  • Public use of AI, mostly automated decision-making systems, can result in a potential denial of services. It can cause people to lose out on salary promotions, to be denied loans, and to receive poor rates on insurance packages. More importantly, it can even be used (though on a questionable statistical foundation) to measure whether citizens are prone to commit a crime, resulting in unfair outcomes.
  • AI use in social welfare programs can lead to the increasing exclusion of vulnerable parts of society. Underrepresented people, such as youth, indigenous communities, and people of color, can be affected by unchecked decisions even if they cannot access the Internet. Two concrete examples are the UK’s A-level algorithm scandal last year and an algorithm named ‘Amigo-boras’ used in Dutch migration control.
  • In the long run, algorithmic systems can damage deep societal infrastructures such as human rights, the rule of law, and democracy. These negative consequences may manifest differently with regard to human dignity (by physical and mental means), human autonomy, and freedom.
  • With political and social debates shifted into cyberspace, AI use in social networks can cause radicalization and polarization by “being a safe haven” for those who exacerbate hate speech and extremism. The latest example of this is how social media was used to motivate the stormers in the attack on the US Capitol in Washington on January 6, 2021. In such circumstances, polarization easily amplifies its effects in scale and scope. This also shows that the border between real life and cyberspace is subtle, with reciprocal and irreversible consequences regardless of time and space.
  • One of the frequently raised dilemmas was the broad proliferation of smart cities. These developments can lead to function creep, as in San Diego, or, more explicitly, to a Chinese-style surveillance panopticon.
  • With regard to facial recognition technologies (FRTs) and automated gender recognition, the use of AI in smart public billboards, public toilets, and a vast range of other applications such as Giggles can cause many discriminatory problems, e.g. by excluding trans* people, and violate the very right to equality and non-discrimination.
  • Accountability and transparency of AI were also among the most notable considerations. Since many manifestos on responsibility are being put forward, it may be time to embed the responsible deployment of these technologies into innovation policies through participatory co-design. Here, Microsoft can serve as an example of incorporating ethical considerations and human rights sensitivity into engineering processes: with an in-house department developing responsible and transparent technology, the corporation commits itself to respecting the rule of law.
    On the flip side, each stakeholder’s attitude toward accountability in AI issues can make a difference. To prevent legal gaps in the context of responsibility, it is crucial for policymakers to diligently regulate accountability within jurisprudence, and for society to take a more demanding attitude toward human responsibility, fully aware that AI is not magic but human-made.
  • “Explainable AI” was another critical issue throughout the discussion. Given the non-linearities and high-dimensional correlations in the calculations of artificial intelligence algorithms, there is no ‘one size fits all’ solution. Effective remedies for the so-called black-box problem may lie in socio-technical approaches and good practices that diminish unconscious bias among developers, deployers, and recruiters. Explainability can therefore be fostered through education and awareness training right from the start of the process.

Grey lines, red lines and hybrid legal solutions

Having AI everywhere in our lives requires diligent inspection and perhaps a new and different kind of legal solution within a hybrid ecosystem. In accordance with CAHAI’s feasibility study, a mixture of binding and non-binding rules might provide an acceptable and reasonable solution. The majority of stakeholders reiterated and underlined a ‘risk-based approach’ to legislative processes. However, governments’ reliance on the data and algorithms provided by private companies, from which they purchase algorithmic systems and deploy them in the public sphere, can be an issue because of the grey lines between these two poles, each with its very own dynamics.

The problem of defining AI is another question at stake. The Oviedo Convention on Human Rights and Biomedicine can serve as an example here: as was stated, no thorough definition was needed to prohibit human cloning and designer babies. The same attitude can be adopted in AI practice. Without agreeing on a single definition, anyone can agree that injustices may appear in the outputs when an algorithm is trained on biased and imbalanced data sets, or on data sets that do not fit the purposes of the system’s application (e.g. data collected with selective or purposeful bias, or poor data visualization).

Overall, the concerns raised in the talks should be understood not only through the lens of the European values of modern democratic society, but also from an international humanitarian angle, considering how the technology affects every one of us as citizens of the biosphere. Given that ethical sensitivity is embedded in human rights, it would be exciting to witness the creation of new frameworks that put the human at the center and respect the universal values of the humanist hive mind, with a collaborative, participatory approach that involves all diverse and valuable stakeholders. A broad mapping of AI applications seems a challenging but rewarding path to unveil even the least risky uses of AI, with the aim of keeping the technology traceable, verifiable, and under human control. There is reason to hope that the policymaking processes will reach fruitful conclusions in an ecosystem of trust and excellence.

Published under licence CC BY-NC-ND. 

This article is a shortened version of a full conference report that was first published by the Information Technology Law Institute of Bilgi University (Istanbul).

This blog post was written by

  • Idil Kula

    Idil is a legal trainee and an MSc student in the Software Management Program at Middle East Technical University (METU) in Ankara. She assists with projects within the Internet Society Turkey Chapter (ISOC-TR). Her studies cover information technologies and cyber law. She is amazed by science, space, and nature.
