AI In Digital Forensics: Are Indian Evidence Laws Equipped To Handle Machine-Generated Proof?

Vagisha Mandloi

27 April 2026 10:00 AM IST


    As algorithms begin to testify in our courtrooms, the scales of justice are facing a digital recalibration that challenges centuries of traditional jurisprudence. There is a need to move beyond the Section 65B straitjacket and frame AI not merely as a digital record, but as a distinct form of machine-generated expert opinion. The historic transition from the Indian Evidence Act to the Bharatiya Sakshya Adhiniyam offers the occasion for a concrete statutory roadmap: a purpose-built provision that bridges the analytical gap between opaque algorithmic logic and the fundamental requirements of judicial truth. The integration of these tools offers a glimpse into a high-efficiency future, but it also forces us to confront the “black box” at the heart of the courtroom.

    THE TECHNICAL LANDSCAPE: AI-DRIVEN PRECISION IN FORENSIC INVESTIGATION AND EVIDENCE EXTRACTION

    AI has evolved from simple database searches into a sophisticated engine for pattern recognition across forensic domains that once relied solely on the steady hand of human examiners. In modern investigation, AI tools like TrueAllele use statistical machine learning to untangle complex DNA mixtures that baffle traditional analysis. Beyond the laboratory, AI identifies microscopic striations on shell casings in ballistics examination with hyper-precision and utilizes deep learning for automated handwriting comparison to establish probabilistic matches, effectively reducing the subjectivity of manual script analysis.

    In the digital sphere, these tools recover deleted data, identify malware, and trace anonymous blockchain transactions in cryptocurrency cases, while AI-powered electronic discovery platforms can review millions of documents for relevance far more accurately than human teams prone to fatigue. Can an AI's expert opinion be considered under the BSA? AI has no independent reasoning of its own; its analysis reflects whatever subjectivity and prejudice humans feed into it through its training. Yet, within those limits, its output is consistent and replicable, and in that sense arguably more objective than the opinion of an individual human examiner.

    JUDICIAL SCRUTINY AND THE “BLACK BOX”: NAVIGATING THE INDIAN LEGAL FRAMEWORK FOR MACHINE-GENERATED PROOF

    In India, the admissibility of AI-generated evidence would fall under Sections 60 and 63 of the BSA, which govern electronic records. Landmark cases like Anvar P.V. and Arjun Panditrao Khotkar have established that compliance with the Section 65B(4) certificate is mandatory to ensure authenticity, yet this framework remains a straitjacket for dynamic AI outputs. The primary hurdle is the “black-box problem”: the internal logic of an algorithm is opaque even to its developers, creating an “analytical gap” between raw data and conclusion that may lead a court to exclude the evidence. Consequently, AI-generated proof in India is currently viewed as being at a rudimentary stage, requiring either a judge or a separate human expert to re-verify the methodology and facts before it can be legally relied upon.

    A COMPARATIVE GLOBAL PERSPECTIVE: DIVERGENT JURISDICTIONAL APPROACHES TO ALGORITHMIC EVIDENCE

    Jurisdictions worldwide have adopted varied approaches to filter AI evidence before it reaches the trier of fact. In the US, judges serve as gatekeepers under the Daubert standard, evaluating AI on testability, peer review, and known error rates, though cases like State v. Loomis highlight the tension between proprietary trade secrets and a defendant's due process rights. The European Union has taken a regulatory path with the 2024 AI Act, classifying forensic and judicial systems as high-risk and demanding strict human oversight and transparency. In contrast, China has integrated AI directly into its bench via the “206 System”, which assists judges by identifying evidentiary issues and suggesting precedents, while Colombia launched UNESCO-backed guidelines in December 2024 to ensure AI implementations uphold judicial integrity and fundamental rights.

    STATUTORY EVOLUTION: EVALUATING AI INCLUSIVITY UNDER THE INDIAN EVIDENCE ACT VERSUS THE BHARATIYA SAKSHYA ADHINIYAM

    The transition to the BSA presents a critical opportunity for statutory reform to move past the limitations of the IEA and explicitly define machine-generated proof. Legal experts suggest introducing a dedicated provision for AI oversight, which would mandate disclosure obligations regarding a system's architecture, training data and known biases. Furthermore, an amendment to Section 45 of the IEA (Section 39 of the BSA) has been proposed to explicitly recognise validated algorithmic systems as a legitimate source of expertise, provided they meet prescribed standards of reliability and validation. This evolution would shift the judicial focus from merely authenticating the storage device to interrogating the scientific validity of the algorithm itself.

    FORGING A RELIABLE DIGITAL SOCIETY: INTEGRATING GLOBAL BEST PRACTICES FOR ALGORITHMIC ACCOUNTABILITY

    To foster a truly reliable digital society, India must bridge the technical literacy gap among judges and lawyers so they can interrogate technical evidence without being intimidated by its complexity. A vital necessity is the implementation of parity of arms, ensuring that the defence has access to technical experts, possibly through legal aid, to challenge the sophisticated tools often used by the state. Courts must also remain vigilant against automation bias, the dangerous tendency to over-rely on machine outputs as objective truth, by maintaining meaningful human oversight as a mandatory safeguard. Ultimately, India can learn from models like the UK's Forensic Science Regulator and establish an independent oversight body or national registry that audits and validates forensic tools, ensuring that technology serves the cause of justice without sacrificing constitutional protections.

    With India transitioning to the Bharatiya Sakshya Adhiniyam (BSA), it is imperative for the country to go beyond mere certification of AI systems as black-box technology and adopt a glass-box approach. Legal scholars recommend drafting a separate provision mandating disclosure of system design, training data, and error rates, enabling judges to become scientific guardians of the law rather than mere spectators of computer code. Drawing inspiration from foreign frameworks such as the 2024 EU AI Act and Colombia's judicial guidelines, India can close the knowledge deficit and ensure that forensic AI passes stringent tests of reliability and relevance.

    Views are personal.
