Analysis

The Master and the machine

Legal professionals voice their views as AI enters the listing game.

The Registry of the Supreme Court of India has long been described, only half in jest, as a black hole. It is where cases go to wait their turn. As of March 2026, pendency in the top court has crossed 93,000 matters, and for most lawyers, listing still feels unpredictable. 

It is against this backdrop that the Court is now exploring the use of artificial intelligence to manage the cause list. Automated systems are expected to bring order, consistency and transparency to one of the most tightly held functions within the Court, exercised by the Chief Justice as Master of the Roster. However, listing is not merely a matter of logistics. It raises questions that go beyond efficiency, to more complex matters of privacy, accountability and the need for regulation. Even in an administrative capacity, can AI really share the driver’s seat with the CJI?

An administrative tool, not a constitutional shift

Advocate-on-Record Mahfooz A. Nazki resists the idea that AI use marks a structural change. “The process of listing is already automated. AI would only enhance the existing system,” he said, explaining that the CJI’s authority will remain untouched as AI assists with administrative sequencing, “an entirely different exercise.”

The introduction of AI in listing forms part of a broader technological push within the judiciary. The government’s February 2026 report, From Digitisation to Intelligence, outlines a suite of AI tools: ASR-SHRUTI and PANINI for transcription and linguistic structuring, SUVAS to assist with translation, and SUPACE to support legal research. These tools are designed to convert unstructured court proceedings into structured, machine-readable records, organising transcripts into searchable formats, tagging key issues, identifying case metadata, and standardising information across filings. This allows large volumes of judicial data to be sorted, retrieved and analysed more efficiently.

Where administration requires discretion

The difficulty becomes sharper when the focus shifts from sorting to prioritisation. Advocate-on-Record Malak Bhatt explained that while automated systems can classify cases using objective markers such as limitation periods or subject categories, they are not equipped to deal with factors that depend on context and legal judgment. He stressed that the role of the Master of the Roster is grounded in “control, discretion, and accountability,” which must be preserved in any listing system. Bhatt insisted that the listing process must remain open to challenge through a “transparent and accessible review mechanism.” Litigants should be able to seek urgent listing before a designated authority and receive recorded reasons for the decision, he said.

Flagging the risk of uneven impact, Bhatt pointed out that systems built around structured, technical inputs—such as limitation periods, subject classification, or completeness of filings—may not process all cases equally. “Matters involving vulnerable litigants could be deprioritised,” he noted, particularly where cases lack detailed documentation at the initial stage or fall outside predefined categories.

When the machine gets it wrong

Justice S. Muralidhar, former Chief Justice of the Orissa High Court, draws a firm line on how far AI can go. “AI should not be used as a thinking tool,” he said, cautioning against extending AI use to the evaluation of evidence or judicial adjudication, which depend on multiple factors and case-specific considerations. Highlighting that AI tends to follow a formula, Justice Muralidhar gave the example of Siri: “if you ask Siri for an opinion, it will marshal 20 opinions from the internet and give you a summary. We cannot do the same for cases.” Even where AI is used for routine tasks, he reiterated the need for constant verification, observing that we do the same with every tool: re-reading, for instance, what we have typed and printed before signing it.

This concern is no longer theoretical. In Gummadi Usha Rani v. Sure Mallikarjuna Rao (February 2026), the Supreme Court had to intervene after a trial court relied on fabricated precedents generated through AI, including a non-existent judgement.

UNESCO’s 2025 Guidelines similarly emphasise the need for “human oversight” alongside auditability and information-security safeguards. In practice, however, that level of visibility and control is difficult to achieve, making it harder for litigants to understand why their matters were prioritised, delayed, or effectively sidelined.

At present, no formal framework governing AI-assisted listing has been publicly notified in India. However, the India AI Governance Guidelines signal the contours of such a framework. Emphasising that AI systems must be “understandable by design,” the Guidelines require disclosures and explanations that allow users and regulators to comprehend how outcomes are generated. They also foreground a “people-first” approach, where human oversight remains central to decision-making, and accountability is clearly assigned across the AI value chain. This reinforces the idea that such systems are assistive rather than determinative.

How courts across the world are drawing the line

India’s exploration of AI in court administration is part of a wider global shift, but the way courts are approaching it varies in both ambition and caution. 

In Brazil, for instance, the Supreme Federal Court’s Project Victor is designed to process large volumes of appeals and identify those that raise “general repercussions”, a concept in Brazilian law which ensures that only questions of true socio-economic and political relevance are considered. Rather than replacing judicial decision-making, the system is used to identify precedents that can be replicated across similar cases, reducing the burden on higher courts.

The United Kingdom has taken a more cautious route, and is unusually direct about the risks involved. In its Guidance for Judicial Office Holders (2025), it states clearly that judicial office holders remain personally responsible for material produced in their name, irrespective of AI use. It also notes that AI tools cannot “replace direct judicial engagement with evidence” and warns of incorrect and fictitious output. The guidance explicitly states that information entered into public AI systems “should be seen as being published to all the world.”

In Mertz & Mertz (No 3) [2025], the Federal Circuit and Family Court of Australia, dealing with inaccurate and fictitious case law in submissions, noted that AI use “does not absolve the author… from any of their professional or ethical obligations.” It relied on the Full Court’s decision in Helmold & Mariya (No 2) (2025) FLC 94-272, where the Court noted that “[r]eliance upon unverified research generated by AI has the capacity to confuse, to create unnecessary complexity, to result in wasted time and to mislead the Court and other parties”.

How systems adapt and what that means for privacy

Moving from error to adaptation, Nazki identified another emerging concern: as lawyers begin to track patterns in how AI categorises cases, they may start drafting petitions in ways that influence listing outcomes. Describing this as “algorithmic gaming,” he explained how the use of specific keywords or subtle adjustments to document structure can shape results rather than reflect them. This possibility makes transparency central to the design of any such system.

AI use also raises questions about privacy and control over judicial information. Nazki frames this as a structural dilemma—the choice between relying on large commercial AI systems for greater capacity and speed, and developing domestic alternatives to ensure confidentiality and local storage. Justice Muralidhar questioned whether the groundwork has been laid at all. Referring to the Digital Personal Data Protection Act, 2023, he asked, “What has the Court done to ensure compliance? If it hasn’t done that first step, then what does all this matter?” The concern, in his framing, is that the conversation around adopting AI risks moving ahead of the legal and regulatory safeguards that are meant to govern it.

Clerk, not colleague

Despite their different emphases, there is a clear commonality in how legal professionals understand the role of AI. It may assist the system, but it cannot replace the function of the Court. Nazki suggests that its use should remain limited to the initial stage of listing to preserve judicial control. Bhatt warns, “The single biggest risk is the erosion of accountable discretion,” particularly if decision-making becomes difficult to trace or explain.

What is at stake is not simply efficiency, but the ability to locate responsibility within the system. Listing is not a neutral administrative step, for it shapes access to the Court and, in many cases, the trajectory of a dispute. 
