Veronica Maimescu

Peace, Conflict and Diplomacy

AI, Privacy, and Consent


Implications for Human Rights

Artificial Intelligence (AI) is a powerful technological advancement, one to be both praised and feared. It is defined as “the simulation of human intelligence by a system or a machine” (Xu et al., 2021). For AI to work, huge amounts of data are collected as the foundation for training algorithms to recognise patterns, make predictions, and generate insights.

AI models are categorised into predictive models, designed to make predictions or forecasts based on input data, and generative models, capable of creating new content that resembles the data they were trained on. Both categories have grown rapidly and their usage has increased exponentially, which also implies unpredictable capabilities.

This unpredictability presents opportunities but also major challenges and risks. On one hand, it can enable AI systems to adapt, generate creative solutions, and handle complex or uncertain environments. On the other hand, it introduces biases into decision-making, unexpected errors and failures, and, most importantly, the potential for misuse.

Without doubt, technological progress is crucial and brings numerous benefits to various fields, and human rights is no exception. However, sweeping the risks and failures under the rug is not a viable approach.

Photo by ThisIsEngineering on Pexels.com

AI companies succeed in reiterating the technology's transformative power but fail to consider its impact on human rights, negative implications in particular, such as privacy violations, deepfakes, and manipulation. As Jones (2023) argues, organisations and individuals must place human rights at the foundation of AI governance, yet many companies overlook the significance of these rights in the development of their technologies.

The truth is that companies focus on innovation and on capitalising on these advancements but lack ethical consideration. Moreover, weak legal protection, or no legal protection at all, allows AI violations to flourish. Take, for example, the AI-generated fake nude images of Taylor Swift that went viral on X (Tenbarge, 2024). These highlight the harmful and growing possibilities presented by artificial intelligence, particularly in sexualising women and girls.

Once shared, forever stored.

The prevalence of deepfakes in pornography, social media, and politics is disturbing, as it can be very difficult to distinguish fiction from reality. Through a lack of regulation and accountability, and an inability to develop effective risk management strategies, these companies increasingly infringe on human rights, and on women's rights in particular (Singh, 2020). Are we neglecting to address gender disparities once more? It is not a new revelation that significant gender imbalances in the field have left existing biases unexamined, leading to the circulation of women's and girls' nude photos across the internet. See the Spanish town case (Hedgecoe, 2023).

To make things worse, the increased capabilities of AI technology exacerbate the already concerning issues of online harassment, cyberbullying, and sexual abuse. As Bazile (2023) states, AI-generated images of child sexual abuse are proliferating widely on the internet. While these crimes increase, the laws to prevent them and protect victims remain dormant. The uncertainty surrounding the legality of this matter risks irreparable damage.

“In short, you type in what you want to see; the software generates the image. These images can be so convincing that they are indistinguishable from real images.” IVF Report

Besides that, another major gap within AI algorithms is existing bias. When algorithms are trained on skewed or incomplete datasets, they produce discriminatory outcomes in decision-making processes (Rijmenan, 2023). For example, AI-powered recruitment tools intended to remove gender and race from the screening process may inadvertently perpetuate gender or racial biases by favouring candidates from certain demographic groups or penalising others based on irrelevant criteria (Drage & Mackereth, 2022).

Photo by Atypeek Dgn on Pexels.com

Additionally, AI technology is increasingly being used in surveillance systems for monitoring public spaces, tracking individuals’ movements, or analysing behaviour patterns. It offers benefits such as automatically detecting threats, identifying criminal activity, and safeguarding public property. While these systems may have legitimate security or law enforcement purposes, the concerns about privacy violations and civil liberties must not be ignored.

As mentioned previously, AI systems may make decisions or predictions that have significant implications for individuals’ rights and interests. In these contexts, obtaining informed consent from affected individuals is essential to respect their autonomy, dignity, and right to self-determination. For example, in healthcare settings, patients should have the opportunity to provide explicit consent for the use of AI technology in medical diagnosis, treatment planning, or decision support (NHS, 2023). Similarly, in employment or criminal justice contexts, individuals should be informed about the use of AI-driven algorithms in decision-making processes and have the opportunity to challenge or contest automated decisions that may affect their rights or opportunities (Fair Trials, 2020).

Informed consent is essential in AI systems, particularly regarding decisions with significant implications for individuals’ rights.

Artificial Intelligence has significant implications for human rights, which are already fragile. Privacy and consent stand as fundamental human rights principles, essential to the ethical and responsible development and deployment of AI technology. As the use of AI continues to grow, it is questionable whether organisations are actually capable of forecasting and addressing the potential risks associated with AI, including but not limited to threats to equal protection, economic rights, and basic human freedoms.

Despite AI's multiple benefits and the good intentions behind it, there is no guarantee that users will not continue to misuse it. Without urgent and adequate measures to combat potential violations, including robust safeguards for data privacy, transparency in AI decision-making processes, and mechanisms for accountability, the door remains wide open to abuse, exploitation, or manipulation by malicious actors such as authoritarian regimes, criminal organisations, or unethical businesses.

