AI · Corporate Boards · Competition Law Scrutiny · Legal Tech

AI on Corporate Boards: Liability Traps under the 2025 AI Bill, Companies Act, and CCI Scrutiny

Vidhi Sharma & Shresth Kukreja · April 22, 2026 · 7 min read

Indian companies are deploying AI for analytics and compliance, and the MCA21 V3 platform reflects its adoption in governance. Yet the AI Bill 2025, directors' duties under the Companies Act, CCI collusion risk, obligations under the DPDP Act, and labour and tax legislation create overlapping liability exposure. Boards should employ risk-based controls, human overrides, and third-party audits. Safe-harbour reforms could help directors minimise fines, fraud claims, and vicarious liability.


Recent developments in artificial intelligence have driven Indian businesses to test AI-based tools at board level, from predictive analytics for decision-making to automated compliance checks. Yet the overlap of the Artificial Intelligence (Ethics and Accountability) Bill, 2025, the Companies Act, 2013, and Competition Commission of India (CCI) regulation creates a liability minefield that boards neglect at their peril.

Introducing AI to Corporate Governance

To improve board performance amid demanding regulatory requirements, Indian corporations are increasingly turning to AI. AI-based dashboards run financial analyses, flag risks, and even simulate board discussions, working alongside the human directors, including the independent directors that Section 149 of the Companies Act, 2013 mandates for objectivity. In early 2026, the Ministry of Corporate Affairs' MCA21 V3 platform, which incorporates AI/ML to streamline filings, became the first government-approved use of AI in the governance process.

This integration, however, raises fundamental questions: can AI outputs bind boards? The 2025 AI Bill, still under discussion in Parliament as of December 2025, classifies AI that influences governance decisions as high-risk, mandating ethics audits, transparency logs, and human override. Absent these safeguards, boards face vicarious liability for AI-generated mistakes, much as Section 166 holds directors to fiduciary duties of care, diligence, and skill.



Liability Traps under the Companies Act

Section 149(6) tasks independent directors with providing unbiased checks on board processes, but AI augmentation blurs these boundaries. If an AI tool recommends mergers or investments based on faulty data, a director who relies on it may breach the business judgment standard implicit in Section 166 and face fraud allegations under Section 447. A 2026 IRCCL analysis notes instances of inadvertent approvals based on unverified AI analytics, resulting in oppression and mismanagement claims under Sections 241-242.

Section 179 also restricts delegation. Essential functions such as approvals cannot be outsourced to AI unless ratified, yet many companies run initial vetting through AI. The 2025 AI Bill requires high-risk systems to be explainable, meaning boards must audit AI black boxes or risk fines of up to ₹50 crore or 5% of turnover. This overlaps with Section 134(5), under which directors certify statements as true; where AI overstates projections, personal liability follows, as in hypothetical "Satyam 2.0" scenarios.

SEBI's LODR Regulations and corporate governance codes also require directors to meet "fit and proper" criteria. An AI adviser has no legal personhood, so over-reliance on it may dilute human responsibility, and any resulting audit failure invites NFRA scrutiny.



CCI Scrutiny

Algorithmic collusion is the chief threat. Competition law adds another layer: the CCI's 2026 guidelines target anti-competitive behaviour caused by AI. Pricing algorithms used by boards, as with e-commerce giants, can enable tacit collusion in breach of Section 3 of the Competition Act, 2002, even without human involvement. The 2025 AI Bill requires bias mitigation in such systems, yet CCI cases such as the 2025 action against Google Android show regulators treating algorithmic outputs as firm conduct.

Consider a board relying on AI to benchmark prices against competitors: if the tool carelessly mirrors rivals' pricing, the company faces fines of up to 10% of global turnover. Section 48 imposes personal liability on directors who endorse such tools without CCI-compliant safeguards such as regular human reviews. A 2026 MCA event highlighted CCI leniency credits for self-reporting AI risks; most boards lag behind, however, turning a governance instrument into a competition infringement, a classic liability trap.



Overlapping Obligations: DPDP Act Compliance

The Digital Personal Data Protection (DPDP) Act, 2023, being implemented in phases since 2025, requires consent and data minimisation for AI processing of board-related data such as executive performance metrics. High-risk AI under the 2025 Bill must also satisfy the DPDP Act's Significant Data Fiduciary (SDF) provisions, including Data Protection Impact Assessments (DPIAs) for governance AI.

Non-compliance can attract board-level fines of up to ₹250 crore, with directors held accountable for oversight failures. Section 43A of the IT Act, 2000 compounds this with penalties for data breaches, for instance where AI discloses sensitive board discussions. The crossover is clear: a board AI handling director conflict-of-interest data must anonymise it in line with DPDP requirements, or the board risks MeitY enforcement and MCA action.



Labour Law Angles: Decision-making Bias

Labour law intersects through AI-driven discrimination in HR-related board decisions. Automated layoffs or promotions suggested by AI are evaluated under the Industrial Disputes Act, 1947 and the Code on Social Security, 2020. If board AI flags "underperformers" along gender or caste lines, affected parties can bring claims under the POSH or SC/ST Acts, exposing directors to vicarious liability.

The equity provisions of the 2025 AI Bill mean that boards whose tools lack adversarial testing stand unprotected before Industrial Tribunals. A 2026 Chambers report notes rising litigation over AI-influenced terminations where the notice requirements of Section 25F were not complied with.



Taxation Twists: AI in Compliance and Audits

Taxation law holds less obvious traps. Where AI automates GST returns or transfer pricing under the Income Tax Act, it risks erroneous claims, attracting under-reporting penalties under Section 270A. The 2025 AI Bill mandates audit trails, and CBIC's faceless assessments now cross-check AI outputs; directors are liable where a default is deemed wilful.

In banking, the RBI's 2025 AI Guidelines for NBFCs require model validation; board AI used for credit risk may breach Priority Sector Lending norms if biased, which loops back to the CCI via market distortion.



Navigating the Traps: A Reform Roadmap

Boards should implement AI governance policies: categorise tools by risk, as the 2025 Bill does; introduce human vetoes; and commission annual third-party audits. SEBI could mandate AI disclosures under LODR, with CCI leniency for proactive remedies encouraging compliance.

Rather than mirroring the EU AI Act's restrictions, India should adopt a sandbox-first approach, such as MeitY's 2026 pilots. To clarify obligations, legislative amendments would be required: to Section 149 of the Companies Act, 2013 (AI adjuncts) and to the Competition Act (algorithmic safe harbours).



Board Action Plan

With the AI Bill likely to pass in the post-monsoon session, boards should audit their existing AI tools by April 2026. Legal departments should draft indemnity provisions and train directors on hybrid accountability. This regulatory convergence demands proactive change so that AI, designed to empower, does not become the bane of the boardroom.
