
Prof. Dr. Mehmet Rıfat Tınç: “The Scales of Justice Should Be Held by Humans, Not Machines”


Artificial intelligence promises efficiency for the justice system, from contract drafting to writing petitions and even comparative case law analysis. Yet according to experts, it also carries significant risks. Prof. Dr. Mehmet Rıfat Tınç, a faculty member at the Yeditepe University Faculty of Law, states, “Artificial intelligence can offer great benefits to the field of law. But we must never forget: the scales of justice should be held by humans, not machines.”

To address these opportunities and concerns, Prof. Dr. Tınç outlined ten key legal issues and possible solutions related to the use of artificial intelligence in the judicial system:

  1. Without Reasoning, Justice Is Wounded: When an algorithm reaches a conclusion, we often cannot see the reasoning behind it. In law, justification is fundamental. Without reasoning, both justice and the sense of justice are damaged.

Solution: The most essential measure is to use only transparent and auditable algorithms. Oversight committees composed of both judicial professionals and computer scientists should be established within the Ministry of Justice, the Council of Judges and Prosecutors, or the High Courts. These bodies should continuously review algorithmic transparency, verify updates, and identify problematic operations for removal from the “Artificial Intelligence Judiciary” (AIJ). Although the Presidency’s Digital Transformation Office, and the Department of Big Data and Artificial Intelligence Applications under it, have been conducting important national-level work, Türkiye currently lacks a specialized body to supervise the AIJ. A good example can be found in the Courts and Artificial Intelligence Advisory Committee established in 2024 under the Supreme Court of the State of New York.

  2. Algorithms Reproduce and Amplify Bias: If AI training datasets contain gender or ethnic biases, algorithms can reproduce and even amplify them, undermining the principle of equality.

Solution: Independent Ethics Committees should be established to regularly audit algorithms, and their recommendations should be integrated into AIJ training datasets. Unlike the legal oversight boards mentioned above, these ethics committees should also include representatives from various segments of society—experts in children’s rights, gender equality, and patient rights—who hold recognized authority in their respective fields. Such a Judicial Artificial Intelligence Ethics Committee would be responsible for monitoring, critiquing, and making recommendations to prevent biased outcomes.

  3. Violation of Personal Data: When case files and client information are processed by artificial intelligence, the risk of privacy breaches increases. Protecting client data is a cornerstone of legal security.

Solution: In Türkiye, the Personal Data Protection Law (KVKK) and, in the European Union, the General Data Protection Regulation (GDPR) have already established relevant norms and standards. The EU Artificial Intelligence Act, partially enacted in 2024–2025 and to be fully in force by 2026, further reinforces these standards by mandating roles, procedures, and technical requirements specifically addressing the processing of personal data by judicial AI systems. However, in Türkiye, additional laws or regulations are needed to ensure the KVKK is applied specifically to artificial intelligence and the AIJ. As metadata systems, deep web and dark web channels, and spyware applications can pose security risks, any AI systems that do not comply with KVKK standards must be strictly prohibited for use by judicial institutions, personnel, lawyers, and even litigants.

  4. Unclear Responsibility: When an error occurs, who bears responsibility? The programmer, the lawyer, or the judge? There should be no uncertainty in law.

Solution: Currently, there is no clear legal framework for assigning responsibility in cases involving self-coding or autonomous AI systems. In cases of chain liability, all parties should be held responsible to the extent of their contribution to the damage. The EU Artificial Intelligence Act also foresees this type of shared responsibility model, which can broadly be adapted to Turkish law as well.

  5. Concern Over Job Loss Among Lawyers: AI can draft contracts quickly. However, defining legal strategy, communicating with clients, and maintaining a sense of justice are inherently human tasks. Artificial intelligence should be viewed as an assistant, not a rival.

Solution: Artificial intelligence can never fully replace lawyers. Law is not linear mathematics or arithmetic—it is creative reasoning. It is the art of producing balanced and fair solutions to complex social situations that combine logic, conscience, intuition, and natural intelligence. Nevertheless, legal professionals, especially lawyers, must learn to use AI effectively.

  6. Trust Issues in Legal Consultancy: Clients may hesitate to rely solely on AI-generated advice. Legal consultancy is built not only on information but also on trust.

Solution: AI-generated outputs should always be subject to the review and approval of a qualified legal professional. In complex cases, even formulating the right questions for AI requires at least undergraduate-level legal knowledge.

  7. Uncertainty About the Evidential Value of AI Outputs: Can AI-generated analyses be accepted as legal evidence in court? This remains a major debate. Evidence law should permit only verifiable and auditable data.

Solution: Only reliable and auditable outputs should be accepted as evidence. While electronic evidence is now widely recognized, AI-generated materials should still undergo specific verification procedures before being accepted.

  8. Conflicts in International Law: Each country regulates AI differently. An application permitted in one jurisdiction may be prohibited in another, creating legal confusion in cross-border disputes.

Solution: Specific international standards and cooperation mechanisms must be developed for judicial AI. The EU Artificial Intelligence Act again provides a relevant model in this respect.

  9. Ethical Dilemmas: The scales of justice are delicate. If they are handed over to machines, decisions may become efficient but devoid of conscience.

Solution: Justice is inherently linked to the human conscience. Therefore, entrusting justice entirely to artificial courts or machines contradicts the essence of justice itself. Law can only remain trustworthy and strong when guided by human conscience. AI systems developed for legal purposes should be monitored by ethics committees and remain under human supervision at all times. A robot cannot possess a moral conscience.

  10. The Risk of Misinformation: AI may provide inaccurate advice based on outdated data.

Solution: AI systems used in legal work must draw on regularly updated legislation and case law, and their outputs must always be verified by a qualified legal professional before being relied upon.

Press: Son Dakika | Haberler | Yeni Asya | DHA