Artificial intelligence promises efficiency for the justice system, from contract drafting and petition writing to comparative case law analysis. Yet, according to experts, it also carries significant risks. Prof. Dr. Mehmet Rıfat Tınç, a faculty member at the Yeditepe University Faculty of Law, states: “Artificial intelligence can offer great benefits to the field of law. But we must never forget: the scales of justice should be held by humans, not machines.”
To address these opportunities and concerns, Prof. Dr. Tınç outlined ten key legal issues and possible solutions related to the use of artificial intelligence in the judicial system:
Solution: The most essential measure is to use only transparent and auditable algorithms. Oversight committees composed of both judicial professionals and computer scientists should be established within the Ministry of Justice, the Council of Judges and Prosecutors, or the High Courts. These bodies should continuously review algorithmic transparency, verify updates, and identify problematic operations for removal from the “Artificial Intelligence Judiciary” (AIJ). Although the Presidency’s Digital Transformation Office, and the Department of Big Data and Artificial Intelligence Applications under it, have been conducting important national-level work, Türkiye currently lacks a specialized body to supervise the AIJ. A good example can be found in the Courts and Artificial Intelligence Advisory Committee established in 2024 under the Supreme Court of the State of New York.
Solution: Independent Ethics Committees should be established to regularly audit algorithms, and their recommendations should be integrated into AIJ training datasets. Unlike the legal oversight boards mentioned above, these ethics committees should also include representatives from various segments of society — experts in children’s rights, gender equality, and patient rights — who hold recognized authority in their respective fields. Such a Judicial Artificial Intelligence Ethics Committee would be responsible for monitoring, critiquing, and making recommendations to prevent biased outcomes.
Solution: In Türkiye, the Personal Data Protection Law (KVKK) and, in the European Union, the General Data Protection Regulation (GDPR) have already established relevant norms and standards. The EU Artificial Intelligence Act, partially enacted in 2024–2025 and to be fully in force by 2026, further reinforces these standards by mandating roles, procedures, and technical requirements specifically addressing the processing of personal data by judicial AI systems. However, Türkiye still needs additional laws or regulations to ensure the KVKK is applied specifically to artificial intelligence and the AIJ. Because metadata systems, deep web and dark web channels, and spyware applications can pose security risks, any AI systems that do not comply with KVKK standards must be strictly prohibited for use by judicial institutions, personnel, lawyers, and even litigants.
Solution: Currently, there is no clear legal framework for assigning responsibility in cases involving self-coding or autonomous AI systems. In cases of chain liability, all parties should be held responsible to the extent of their contribution to the damage. The EU Artificial Intelligence Act also foresees this type of shared responsibility model, which can broadly be adapted to Turkish law as well.
Solution: Artificial intelligence can never fully replace lawyers. Law is not linear mathematics or arithmetic—it is creative reasoning. It is the art of producing balanced and fair solutions to complex social situations that combine logic, conscience, intuition, and natural intelligence. Nevertheless, legal professionals, especially lawyers, must learn to use AI effectively.
Solution: AI-generated outputs should always be subject to the review and approval of a qualified legal professional. In complex cases, even formulating the right questions for AI requires at least undergraduate-level legal knowledge.
Solution: Only reliable and auditable outputs should be accepted as evidence. While electronic evidence is now widely recognized, AI-generated materials should still undergo specific verification procedures before being accepted.
Solution: Specific international standards and cooperation mechanisms must be developed for judicial AI. The EU Artificial Intelligence Act again provides a relevant model in this respect.
Solution: Justice is inherently linked to the human conscience. Therefore, entrusting justice entirely to artificial courts or machines contradicts the essence of justice itself. Law can only remain trustworthy and strong when guided by human conscience. AI systems developed for legal purposes should be monitored by ethics committees and remain under human supervision at all times. A robot cannot possess a moral conscience.
10. The Risk of Misinformation: AI may provide inaccurate advice based on outdated data.
Press: Son Dakika | Haberler | Yeni Asya | DHA