
Justice in the Shadow of Algorithms: Discrimination by Artificial Intelligence


Artificial Intelligence: A New Tool for Discrimination?

As artificial intelligence (AI) rapidly integrates into our lives, concerns about its potential to exacerbate inequalities are growing. Allegations of discrimination in AI systems used by major corporations highlight the urgency of this issue.

Prof. Dr. Mehmet Rıfat Tınç, a faculty member at Yeditepe University Faculty of Law, analyzed the role of AI in discrimination and discussed the measures being taken both in Türkiye and globally to address this issue.

Prof. Dr. Tınç explained that AI algorithms can discriminate based on personal attributes such as gender, age, and ethnicity, stating, "When a technological tool engages in discriminatory behavior, it can legally be considered an instrument of crime. Understanding how and why AI might discriminate is crucial."

Prof. Tınç highlighted that because algorithms lack both legal knowledge and social awareness, discriminatory behavior remains possible. He continued:
"Even if software systems fully adopt a legal framework, disputes involving allegations of discrimination are inevitable. Neither software nor robots can halt the evolution of society and law."

"Lawyers Must Develop Solutions"

Prof. Dr. Mehmet Rıfat Tınç emphasized that lawyers, in particular, must address the new and critical questions that AI-driven discrimination will raise. According to Prof. Dr. Tınç, AI can discriminate in two distinct ways: arithmetic discrimination, where individuals in the same situation are treated differently (for example, two people delivering the same performance in the same job being paid different salaries), and geometric discrimination, where individuals in different situations are treated the same (for instance, giving a candidate with disabilities the same amount of time as everyone else to complete an exam).
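The two patterns described above can be made concrete with a small, purely illustrative sketch. All names, records, and fields below are invented; a real audit would of course need far richer context than equality checks on a handful of attributes:

```python
# Hypothetical illustration of the two discrimination patterns.
# All data and field names are invented for this sketch.

def arithmetic_discrimination(records):
    """Same situation, different treatment: equal performance, unequal salary."""
    flags = []
    for a in records:
        for b in records:
            if (a["name"] < b["name"]
                    and a["performance"] == b["performance"]
                    and a["salary"] != b["salary"]):
                flags.append((a["name"], b["name"]))
    return flags

def geometric_discrimination(records):
    """Different situation, same treatment: e.g. a candidate entitled to extra
    exam time who receives only the standard time."""
    return [r["name"] for r in records
            if r["needs_extra_time"] and r["exam_minutes"] == r["standard_minutes"]]

employees = [
    {"name": "A", "performance": 9, "salary": 50_000},
    {"name": "B", "performance": 9, "salary": 42_000},  # same performance, lower pay
]
candidates = [
    {"name": "C", "needs_extra_time": True,  "exam_minutes": 60, "standard_minutes": 60},
    {"name": "D", "needs_extra_time": False, "exam_minutes": 60, "standard_minutes": 60},
]

print(arithmetic_discrimination(employees))  # flags the A/B pay gap: [('A', 'B')]
print(geometric_discrimination(candidates))  # flags candidate C: ['C']
```

The point of the sketch is that the two patterns are mirror images: one compares like cases for unlike outcomes, the other looks for unlike cases forced into a like outcome.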

"AI May Become Obsolete in the Face of Law and Society"

"Algorithms cannot possess morality like humans," Prof. Dr. Tınç remarked, adding:
"They cannot deviate from the ethical rules imposed on them or develop them independently. Therefore, AI must remain open to external intervention, human revision, and updates. Without the ability to intervene and revise, a self-improving AI could lead to unintended consequences. Over time, it may regress rather than progress or lag behind our moral and cognitive frameworks. To address this, new regulations must be introduced into our legal system, similar to those in Europe and Japan, to account for errors and offenses caused by AI."

Who Is Responsible?

The lack of legal personhood for AI complicates the determination of responsibility. Prof. Dr. Tınç emphasized that companies and individuals developing and using AI can be held accountable for any harm caused by this technology.
To prevent AI-driven discrimination, he suggested the following measures: 

  • Algorithmic Measures: Regular auditing of algorithms to eliminate biases and enforcement of corrective formulas against systematic tendencies in software.
  • Institutional Measures: Periodic audits of institutions utilizing AI.
  • Human Oversight: Keeping AI systems under human supervision, for example by establishing an AI complaint unit or a "Court of AI."
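One common practical form of the "regular auditing" measure is a disparate-impact check, such as the four-fifths ratio heuristic used in US employment-selection guidance. A minimal sketch follows; the group labels, decision data, and 0.8 threshold are assumptions for illustration, not a complete audit methodology:

```python
# Minimal disparate-impact audit sketch; all decision data is invented.
# Each group's positive-outcome rate is compared to the most favored group's.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> {group: selection rate}."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the highest
    group's rate (the 'four-fifths rule' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

decisions = ([("group_a", True)] * 8 + [("group_a", False)] * 2
             + [("group_b", True)] * 4 + [("group_b", False)] * 6)

print(disparate_impact(decisions))  # {'group_b': 0.5} -> group_b is flagged
```

Here group_a is selected at a rate of 0.8 and group_b at 0.4, so group_b's ratio of 0.5 falls below the four-fifths threshold and the audit flags it for human review, the kind of oversight the measures above call for.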

Prof. Dr. Tınç also noted that existing institutions specializing in cybercrime could support victims of AI-related harms by setting up judicial bodies dedicated to AI cases.

The EU's Approach

Recalling that the European Union's AI legislation will fully take effect on August 2, 2026, Prof. Dr. Tınç highlighted that the EU's measures in this area are still relatively new.
He concluded:
"While AI offers immense opportunities, it also poses significant risks, such as discrimination. Minimizing this risk requires legal, technical, and societal measures. Developing and utilizing AI ethically and fairly is essential for it to truly serve humanity."

Press:  Sözcü | T24 | Gazete Duvar | Haberler | Son Dakika | DHA | AA