Introduction

Artificial Intelligence (AI) is rapidly transforming the legal profession, including legal research, drafting, document review, and even predictive analytics in litigation. While AI tools offer efficiency and cost advantages, their increasing use in litigation has raised serious concerns regarding professional ethics, evidentiary admissibility, authenticity of documents, and the administration of justice. Courts in India and abroad have begun confronting the risks associated with AI-generated legal content, especially fabricated case citations, deepfakes, and AI-generated evidence. The issue is no longer theoretical; it is now a practical litigation risk.

This article examines the legal and evidentiary risks arising from the use of AI in litigation, particularly in the Indian legal system.

  1. AI in Litigation: Emerging Use Cases

AI is currently being used in litigation for:

  1. Legal research
  2. Drafting pleadings
  3. Contract analysis
  4. Evidence review
  5. Forensic and expert evidence generation

The Supreme Court of India has itself constituted an Artificial Intelligence Committee to explore the use of AI in the judicial system, indicating institutional acceptance of technology in legal processes. However, the use of AI in litigation raises a fundamental question: Can AI-generated content be relied upon as evidence or legal submissions in court?

  2. The Problem of AI Hallucinations and Fake Citations

One of the most serious risks associated with generative AI tools is “hallucination,” where AI generates false information that appears authentic. This has already resulted in fabricated case citations being submitted in courts.

The Supreme Court of India recently warned that citing AI-generated fake judgments amounts to professional misconduct, not merely an error.

In one instance, a trial court order relied on AI-generated fictitious judgments, prompting the Supreme Court to take serious note of the issue. In another, a United States District Court fined lawyers representing a patent-holding company a combined $12,000 for filing documents containing non-existent quotations and case citations generated by artificial intelligence, the latest instance of lawyers facing sanctions for submitting “hallucinated” material in court.

Internationally, courts have imposed monetary sanctions on lawyers for submitting AI-generated fake citations, emphasizing that lawyers remain responsible for verifying all content filed before the court. This creates serious litigation risks, including:

  1. Misleading the court
  2. Professional misconduct
  3. Contempt of court
  4. Rejection of pleadings
  5. Loss of client trust

The legal principle is clear: AI is a tool, but responsibility remains with the advocate.

  3. Evidentiary Risks of AI-Generated Evidence

  3.1. Authenticity and Manipulation

AI can generate hyper-realistic deepfakes, making it difficult to distinguish real evidence from fabricated content. Courts must therefore verify: (i) the source of the data, (ii) the reliability of the algorithm, (iii) metadata and hash values, and (iv) the chain of custody.

The Bharatiya Sakshya Adhiniyam, 2023 requires authentication certificates and source verification for electronic records, requirements that become crucial when AI-generated material is tendered as evidence.
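To illustrate what “hash verification” means in practice, the following is a minimal Python sketch that recomputes a file’s SHA-256 digest and compares it with the value recorded at the time the record was collected. The file name and the recorded hash below are hypothetical placeholders; an actual authentication exercise under the Bharatiya Sakshya Adhiniyam would involve a formal certificate and a far more rigorous forensic process.

```python
# Minimal sketch of hash verification for an electronic record.
# The exhibit file name and the recorded hash are hypothetical placeholders.
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash value recorded when the record was originally collected (hypothetical).
recorded_hash = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

current_hash = sha256_of_file("exhibit_a.pdf")  # hypothetical exhibit file
if current_hash == recorded_hash:
    print("Hashes match: the record is identical to the one collected.")
else:
    print("Hash mismatch: the record may have been altered since collection.")
```

A matching digest shows only that the file has not changed since the recorded hash was taken; it says nothing about whether the underlying content was genuine or AI-generated in the first place, which is why source verification and chain of custody remain separate inquiries.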

  3.2. Reliability and Explainability

AI systems often operate as “black boxes,” meaning their decision-making process is not easily explainable. Courts may hesitate to rely on evidence that cannot be cross-examined effectively. Research shows that the lack of standardized validation protocols and reproducibility creates reliability concerns for AI-generated forensic evidence.

  3.3. Bias and Due Process Concerns

AI systems are trained on datasets that may contain bias. If AI-generated evidence reflects such bias, it may lead to rejection of pleadings, wrongful convictions, or incorrect judicial findings.

  3.4. Chain of Custody Issues

If AI-generated evidence is admitted by a court, the party relying on it must be able to show:

  1. How the output was generated
  2. Who had access to the system and the output
  3. Where and how the record was stored
  4. Whether the record was altered at any stage

Without this, the evidentiary value becomes weak.

  4. Expert Opinion – Can AI Be an Expert?

Under Indian evidence law, expert opinion is admissible in technical matters. However, courts rely on human experts whose credentials can be tested. If an AI system generates an expert report, who will be cross-examined? This remains a major unresolved legal issue.

  5. Ethical Duties of Lawyers Using AI

The recent judicial trend makes it clear that lawyers have a duty to:

  1. Verify AI-generated research
  2. Confirm authenticity of citations
  3. Disclose use of AI where necessary
  4. Maintain confidentiality of client data

Courts have emphasized that using AI without verification may amount to professional negligence. Thus, AI should be treated like a junior assistant, not like a final authority. Legal scholars suggest that AI use in courts must be governed by transparency, accountability, and human oversight to protect due process.

  6. Conclusion

Artificial Intelligence is undoubtedly going to become an integral part of litigation practice. It can improve efficiency, reduce costs, and assist in complex data analysis. However, AI also poses serious risks to the integrity of judicial proceedings, especially in relation to fake citations, deepfake evidence, algorithmic bias, and evidentiary admissibility.

The legal system is built on authenticity, accountability, and verifiability. AI-generated material challenges all three. Courts have therefore made it clear that while technology may assist lawyers, it cannot replace professional responsibility.

The future of AI in litigation will depend on how well the legal system balances innovation with evidentiary reliability and ethical responsibility. Until specific laws are enacted, lawyers must exercise extreme caution while using AI in litigation, particularly when submitting pleadings, evidence, and other legal submissions before the court.
