Introducing the Symposium on AI and Human Rights



The use of AI is advancing at an almost incomprehensible pace, driving decision making that has broad impacts on society and the potential to dramatically affect human rights. Indeed, AI has the potential to affect nearly every recognized human right, including the rights to freedom of expression, thought, assembly, association and movement; the right to privacy and data protection; the rights to health, education, work and an adequate standard of living; and the rights to non-discrimination and equality. AI may also give rise to the need to instantiate new forms of rights, such as the right to a human decision.

The human rights abuses that can occur through the implementation of AI systems include facilitating mass surveillance as well as perpetuating bias in the criminal justice system, healthcare, education, the job market, access to housing, and access to banking, thereby exacerbating discrimination against already marginalized groups. AI is also having a negative impact on democracies by facilitating the spread of disinformation, the creation of deepfakes and synthetic media to sow chaos and confusion, and the removal of content documenting human rights abuses. At the same time, AI has the potential to benefit human rights, from facilitating advances in healthcare to tracking supply chain compliance.

Both governments and corporations have a duty to respect human rights. The international human rights regime is an ecosystem of established laws, frameworks, and institutions at the international, regional, and domestic levels within which individuals can seek respect for their human rights as well as remedies for human rights violations. Although government and industry leaders often affirm the centrality of human rights in the development and deployment of “responsible” AI systems, all too often this takes the form of general principles or statements that are either difficult to implement in practice or neglect to consider the full range of potential use cases. As advances in AI accelerate, human rights need to be integrated into every level of AI governance regimes.

In February 2024, the Promise Institute for Human Rights at UCLA School of Law convened a symposium on _Human Rights and Artificial Intelligence_, bringing together leading experts to examine some of the critical questions arising from the rapid expansion of AI and the lagging governance models. The purpose of this symposium – a collaboration between the Promise Institute and _Just Security_ – is to share some of the insights captured by our speakers with a broader audience, and to elevate some of the most pressing questions about the relationship between AI and human rights. New articles in the series will run each week, tying together the following themes:

GENERATIVE AI AND HUMAN RIGHTS


  * _RAQUEL VAZQUEZ LLORENTE AND YVONNE MCDERMOTT REES, “TRUTH, TRUST, AND AI: JUSTICE AND ACCOUNTABILITY FOR INTERNATIONAL CRIMES IN THE ERA OF DIGITAL DECEPTION.”_ Drawing on their deep expertise in synthetic media – artificially produced or manipulated text, image, audio or video – and its impact, Vazquez Llorente and McDermott Rees highlight how the use of deepfakes has the potential to undermine trust in the online information ecosystem. They point to concerns about the “liar’s dividend,” which creates an environment where deepfake content can both make it easier to question the veracity of all content and act as a mechanism to further entrench beliefs and narratives. They explore what impact AI-generated content may have on justice and accountability for human rights violations and suggest some ways in which we can prepare for a hybrid AI-human media ecosystem.
  * _SHANNON RAJ SINGH, “WHAT HAPPENS WHEN WE GET WHAT WE PAY FOR: GENERATIVE AI AND THE SALE OF DIGITAL AUTHENTICITY.”_ Singh writes about how, coupled with the rise of deepfakes, the degeneration of verified accounts – a visual indicator historically used to show that an account is from a trusted source such as a legitimate media outlet or public figure – is leading to an overall crisis of legitimacy in our information environment. She stresses that social media account verification should be considered a public good, as it allows us to know what information to trust, and that under the current pay-to-play verification system social media users are not well equipped for the “coming wave of AI-generated misinformation.”
  * _NATASHA AMLANI, “AI EXPLOITATION AND CHILD SEXUAL ABUSE: THE NEED FOR SAFETY BY DESIGN.”_ Continuing our exploration of the impact of AI-generated materials on human rights, Amlani’s piece challenges us to think about the ways in which deepfake child sexual abuse imagery affects child safety and the child safety reporting system. To mitigate some of these harms, she implores tech companies to start thinking about safety by design when launching new products and features that have the potential to impact children.

THE IMPACT OF AI ON MARGINALIZED COMMUNITIES

  * _REBECCA HAMILTON, “THE MISSING AI CONVERSATION WE NEED TO HAVE: ENVIRONMENTAL IMPACTS OF GENERATIVE AI.”_ Hamilton reveals hard truths about the environmental impact of generative AI, which consumes vast quantities of energy and water. She notes that the communities most likely to be impacted are those already most marginalized, particularly in the Global South, and that the narrative of AI development needs to be rewritten to ensure that these high environmental costs are understood by the global community.
  * _S. PRIYA MORLEY, “AI AT THE BORDER: RACIALIZED IMPACT AND IMPLICATIONS.”_ Morley examines how AI is being used as the latest tool of U.S. border externalization policies that impede migrants from reaching U.S. territory and seeking asylum, as well as a tool for continued surveillance at and within borders. She argues that AI is exacerbating and compounding the racial discrimination already driving these policies, with a particularly harmful impact on Black migrants.

AI GOVERNANCE, RIGHTS, AND INTERNATIONAL LAW

  * _MICHAEL KARANICOLAS, “GOVERNMENTS’ USE OF AI IS EXPANDING: WE SHOULD HOPE FOR THE BEST BUT SAFEGUARD AGAINST THE WORST.”_ Karanicolas examines how the expansion of AI across U.S. government agencies has the potential to chip away at fundamental rights. He underscores the need to ensure appropriate oversight of AI tools and suggests a couple of different models for how that oversight could be structured. In particular, he recommends the creation of a specialized, independent, multi-stakeholder body that can push back against poor decision making in order to ensure transparency and increase public trust in AI systems.
  * _SARAH SHIRAZYAN AND MIRANDA SISSONS, “HOW CAN AI EMPOWER PEOPLE TO EXERCISE RIGHTS AND FREEDOMS GUARANTEED UNDER INTERNATIONAL HUMAN RIGHTS LAW?”_ Writing from the perspective of team members at Meta, Shirazyan and Sissons discuss some of the ways in which AI can empower the rights and freedoms guaranteed under international human rights law. In particular, they explore how AI can strengthen freedom of opinion and expression by improving access to information, skills, and knowledge and by empowering expression; equality and non-discrimination by increasing accessibility and language inclusivity; and freedom from physical and psychological harm through consistent AI-driven content moderation and the protection of human moderators from the most harmful content.
  * _MARLENA WISNIAK AND MATT MAHMOUDI, “BEYOND AI SAFETY NARRATIVES: HOW TO CRAFT TECH-AGNOSTIC AND NEO-LUDDITE FUTURES.”_ Human rights advocates Wisniak and Mahmoudi argue that our future must be centered on social justice and the realization of rights, rather than on the pursuit of techno-solutionism – the idea that technology can solve any problem – driven by AI. They underscore how approaches to AI governance must be grounded in the existing international human rights law framework, which provides substantive and procedural rights that can protect individuals from some of the worst potential impacts of AI.

Taken together, this symposium provides a rich picture of how AI can be used both to uphold and to violate human rights. Widespread AI use is inevitable – policymakers must move quickly to ensure fundamental rights and freedoms are protected around the world.

_IMAGE: Abstract rendering of AI (via Getty Images)._

