Artificial Intelligence And Competition Law: Can Algorithms Collude Without Human Intent?
Tarun Maheshwari
9 April 2026 8:00 PM IST

The rapid growth of digital markets and the use of artificial intelligence in business decision-making have fundamentally transformed how firms compete. Pricing, product recommendations, advertising and matching of buyers and sellers are increasingly driven by algorithms rather than direct human decision-making. Against this background, competition law faces a novel question: can algorithms used by competing firms collude or achieve cartel-like outcomes without any express human agreement or intention to do so?
Traditional competition law, including the Competition Act, 2002, is based on the idea of a “meeting of minds” between firms, usually proved through communications, conduct and surrounding circumstances. However, algorithmic systems can independently learn patterns of behaviour that maximise joint profits and sustain supra-competitive prices. This article examines how algorithmic collusion operates, whether such behaviour fits within the existing legal framework, and how expectations of compliance are evolving for corporates and start-ups in 2026.
Understanding Algorithmic Collusion
Algorithmic collusion may be described as a situation where algorithms used by competing firms lead to coordination of market outcomes—such as higher prices, reduced output or market sharing—resembling a cartel, even where no explicit human agreement exists. In digital markets, firms rely on algorithms to collect data, predict demand, monitor rivals' prices and automatically adjust their own commercial strategies. When several firms deploy similar tools, their behaviour may start to align in a way that harms competition.
Different forms of algorithmic coordination can be identified. First, monitoring algorithms track rivals' prices and immediately alert firms or automatically retaliate when a competitor attempts to undercut. This makes it easier to sustain traditional cartels because cheating is quickly detected and punished. Second, parallel or matching algorithms continuously adjust prices based on observable market conditions, including competitors' prices, and may end up mirroring each other's conduct without direct communication. Third, there are hub-and-spoke scenarios where multiple firms rely on a common software provider or platform, so that the algorithm effectively becomes a “hub” coordinating their parallel pricing strategies. Finally, self-learning algorithms using machine learning or reinforcement learning may, over repeated interactions, discover that maintaining higher prices rather than aggressively undercutting can maximise long-term profits for all players.
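The monitoring-and-matching mechanism described above can be made concrete with a minimal sketch. The following Python fragment is purely hypothetical: the reaction rule, the 1% upward drift and the price floor are illustrative assumptions, not any real firm's system. It shows why automated retaliation makes cheating pointless: an undercut is matched on the very next pricing cycle, so a price cut wins at most one cycle of extra sales.

```python
# Hypothetical sketch of a "monitoring and matching" pricing rule.
# The reaction rule, drift rate and price floor are illustrative assumptions.

def next_price(own_price: float, rival_price: float, floor: float = 80.0) -> float:
    """One pricing cycle: observe the rival's price and react automatically."""
    if rival_price < own_price:
        # Undercut detected: match it immediately (never below a hard floor),
        # so a price cut gains at most one cycle of extra sales.
        return max(rival_price, floor)
    # No undercut: creep back up toward the rival's (equal or higher) price.
    return min(own_price * 1.01, rival_price)

# Two firms running the same rule, reacting to each other in turn.
p1, p2 = 100.0, 95.0
for _ in range(20):
    p1 = next_price(p1, p2)
    p2 = next_price(p2, p1)
# Both prices settle at 95.0 and stay there: any later undercut is
# matched within a single cycle, which is what makes defection unprofitable.
```

No communication occurs between the two "firms" here; the stability of the common price emerges purely from each side's automated reaction rule, which is precisely the evidentiary difficulty discussed below.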
These mechanisms are especially powerful in oligopolistic markets where there are few players, high transparency, frequent interactions and strong incentives to avoid price wars. Digital platforms, app-based services and online marketplaces are fertile settings for such outcomes. The key concern is that even if no manager ever instructs the algorithm to collude, the system may independently converge on collusive equilibria as part of its optimisation process.
Legal Framework: Indian and Global Perspective
Under Indian law, Section 3 of the Competition Act, 2002 prohibits agreements which cause or are likely to cause an appreciable adverse effect on competition in India, including hard-core cartels such as price-fixing, bid-rigging and market allocation. Section 3 also covers “practice carried on, or decision taken” by any association of enterprises, leaving scope to include concerted practices that fall short of a formal agreement. Section 4 addresses abuse of dominant position, which may become relevant when a powerful platform uses algorithms to unfairly discriminate between users or exclude rivals.
The existing statutory language does not expressly mention algorithms or artificial intelligence, but its broad formulation allows competition authorities to treat algorithmic coordination as an anti-competitive agreement where sufficient evidence exists of a common understanding, facilitation or knowing adoption of collusive tools. Globally, similar debates are taking place. In the European Union, Article 101 TFEU prohibits concerted practices and has been interpreted to cover situations where firms knowingly rely on the same algorithmic infrastructure that predictably leads to price coordination. In the United States, enforcement agencies have stated that the use of algorithms to fix prices is treated no differently from traditional cartels: the medium of coordination may change, but the legal standards for collusion remain rooted in effect and intent inferred from conduct.
Commentators and policymakers have noted that self-learning algorithms challenge these traditional concepts. When a system autonomously discovers collusive strategies, there may be no direct evidence of human intention to coordinate. Nevertheless, firms choose the algorithm's objectives, training data, constraints and deployment environment. The emerging view is that competition law can, and should, attribute responsibility to enterprises that knowingly adopt high-risk algorithmic tools without adequate safeguards, even in the absence of explicit human conspiracies.
Can Algorithms Collude Without Human Intent?
The central question is whether algorithms can truly “collude” in a legal sense without human intent. From an economic perspective, the answer appears to be yes: if independently deployed algorithms repeatedly interact in a market, observe each other's reactions and adapt over time, they may reach strategies that sustain high prices and avoid aggressive competition. Such systems can learn that deviating from a high-price equilibrium triggers harmful price wars, whereas tacit mutual restraint yields better long-term profits. In this sense, algorithmic agents can reproduce the outcomes of a cartel.
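The economic logic in the paragraph above (deviation triggers a price war, restraint yields better long-term profit) is standard repeated-game arithmetic, and can be shown in a few lines. The stage-game profit numbers below are illustrative assumptions chosen only to make the comparison concrete.

```python
# Stylised repeated-game arithmetic behind "restraint beats undercutting".
# Stage-game profits are illustrative assumptions: both price high -> 10 each;
# undercutting a high-priced rival -> 15 once; permanent price war -> 5 each.

HIGH_BOTH, CHEAT_ONCE, PRICE_WAR = 10.0, 15.0, 5.0

def collude_forever(delta: float) -> float:
    """Discounted profit from always pricing high: 10 + 10*delta + ... = 10/(1-delta)."""
    return HIGH_BOTH / (1 - delta)

def cheat_then_war(delta: float) -> float:
    """Undercut once for 15, then earn 5 per period in the ensuing price war."""
    return CHEAT_ONCE + delta * PRICE_WAR / (1 - delta)

# With these numbers, restraint pays whenever the discount factor delta >= 0.5,
# i.e. whenever future profits matter enough; frequent algorithmic interaction
# effectively raises the weight placed on future periods.
```

An algorithm that repeatedly interacts with rivals and values future profit can discover this comparison by trial and error, with no human ever stating the collusive strategy.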
Legally, however, competition law does not normally recognise “intent” on the part of machines; it is the undertakings behind those machines that can be held liable. The difficulty is that there may be no evidence of an explicit “meeting of minds” between firms. Each enterprise can argue that it merely deployed a profit-maximising tool, and any parallel pricing is simply rational interdependence. The question then becomes whether consciously placing algorithms in a setting where collusion is a reasonably foreseeable outcome can itself be treated as a form of agreement or concerted practice.
Many scholars argue that the focus should shift from subjective intent to objective effects and risk-taking behaviour. If firms choose to use self-learning or common third-party algorithms in concentrated, transparent markets, and this predictably produces durable supra-competitive prices, they should not be allowed to escape responsibility simply by pointing to the autonomy of the system. Instead, knowledge of risks, design choices and failure to introduce compliance constraints could be used to infer an underlying concerted practice or at least to justify stricter regulatory duties.
Enforcement Challenges and Evidence Issues
Despite the theoretical possibility of algorithmic collusion, enforcement authorities face serious practical challenges. First, many advanced machine-learning systems operate as “black boxes”: they generate outputs such as price recommendations based on complex internal parameters and training data that even their designers may struggle to fully explain. Reconstructing the chain of reasoning by which different algorithms converged on collusive outcomes is technically demanding and resource-intensive.
Second, distinguishing between lawful parallel conduct and unlawful coordination becomes harder in algorithmic environments. In oligopolies, firms may rationally react to each other's price changes even without any collusion. Algorithms make such reactions faster and more granular, but the observable pattern—similar prices, quick alignment and stable high levels—may look similar to tacit collusion. Authorities must therefore rely on a combination of econometric analysis, market structure evidence and internal documents to prove that the outcome is not merely conscious parallelism but the result of some form of coordinated strategy.
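A first-pass screen of the kind authorities use can be sketched very simply. The "variance screen" below is a stylised illustration, not any authority's actual methodology: cartelised markets often show unusually low relative price dispersion, so a low coefficient of variation can flag a market for closer econometric review, though it proves nothing by itself.

```python
from statistics import mean, stdev

def coefficient_of_variation(prices: list[float]) -> float:
    """Relative price dispersion: stdev / mean. Low values alongside high
    price levels are one classic screening signal of coordinated rather
    than competitive pricing; a signal, never proof."""
    return stdev(prices) / mean(prices)

# Illustrative series (hypothetical data, not from any real market):
competitive_series = [92.0, 104.0, 87.0, 110.0, 95.0, 101.0]
aligned_series     = [100.0, 101.0, 100.0, 99.0, 100.0, 101.0]
# The aligned series shows far less relative dispersion, which would
# prompt, not conclude, an investigation.
```

Precisely because rational interdependence in an oligopoly can also produce aligned prices, such screens only identify candidates for the deeper structural and documentary analysis the paragraph above describes.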
Third, the question of who should bear liability complicates enforcement. In scenarios where multiple firms use the same third-party pricing software, the software provider may act as a “hub” whose algorithm effectively orchestrates collusion among “spoke” firms. Competition law must determine whether responsibility lies primarily with the users, the software developer, or both. This may require expanding the concepts of association of enterprises and facilitators within the existing legal framework.
Finally, authorities need new investigative tools and expertise. Traditional techniques such as dawn raids, document seizures and witness statements must be supplemented by technical audits of algorithms, access to source code and training data, and expert assessments of model design. There is also discussion around whether, in high-risk sectors, a partial shift in the burden of proof may be justified—requiring enterprises to demonstrate that their algorithms incorporate safeguards against collusive outcomes.
Regulatory Developments and Indian Context in 2026
By 2026, debates on artificial intelligence and competition have gained prominence in India as well. Market studies and policy reports have highlighted the risk that algorithmic tools may facilitate collusion, discriminatory pricing and exclusionary practices in digital markets such as e-commerce, app-based services and online advertising. The Indian framework is gradually moving from a purely ex-post enforcement model to incorporating ex-ante elements, especially for large digital platforms exercising gatekeeper functions.
Proposals for a specialised digital competition regime envisage obligations on systemically significant digital enterprises to ensure transparency, fairness and non-discriminatory operation of their algorithms. These may include duties to avoid self-preferencing in rankings, restrictions on combining user data across services without consent, and requirements to prevent anti-competitive coordination through platform tools. Though primarily aimed at abuse of dominance and unfair practices, such rules can also indirectly limit the risk of algorithmic collusion by constraining how platforms design their recommendation and pricing systems.
At the same time, there is recognition that over-regulation may chill innovation, especially for start-ups that rely on readily available algorithmic solutions to compete with larger incumbents. Policymakers thus face a delicate balance: safeguarding markets and consumers against AI-enabled collusion while preserving the benefits of data-driven efficiencies and personalisation. This tension is particularly acute in India's digital economy, where rapid growth, foreign investment and domestic entrepreneurship coexist with concerns about concentration and gatekeeper power.
Way Forward: Compliance Expectations for Corporates and Start-Ups
In this evolving landscape, competition compliance is no longer confined to training employees not to engage in overt cartels or information exchanges. For corporates, it now includes active oversight of algorithmic tools deployed in pricing, matching and recommendation functions. Firms are expected to understand, at least at a high level, how their algorithms operate, what data they use, and whether the design or deployment context creates foreseeable risks of collusive outcomes.
Enterprises can adopt “competition-by-design” principles, where algorithm developers are instructed to build in safeguards, such as constraints against using competitors' prices as direct optimisation targets or mechanisms that intentionally introduce some noise to avoid stable supra-competitive equilibria. Regular audits, simulations and stress-testing of algorithms interacting with plausible rival strategies can help detect and correct problematic behaviour.
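A "competition-by-design" safeguard of the kind described above can be sketched as follows. Everything here, from the cost-plus formula to the 3% noise band and the price ceiling, is a hypothetical illustration of the principle, not a recommended production design.

```python
import random

def safeguarded_price(unit_cost: float, demand_signal: float,
                      rival_price: float, rng: random.Random) -> float:
    """Price from own cost and demand only. The rival's price is received
    as an input but deliberately excluded from the optimisation target,
    one possible form of the design constraint discussed above."""
    del rival_price                                 # guardrail: never optimised against
    base = unit_cost * (1.0 + 0.2 * demand_signal)  # simple cost-plus rule
    jitter = 1.0 + rng.uniform(-0.03, 0.03)         # noise to destabilise tacit equilibria
    ceiling = unit_cost * 1.5                       # hard compliance ceiling
    return round(min(base * jitter, ceiling), 2)

price = safeguarded_price(unit_cost=10.0, demand_signal=0.5,
                          rival_price=14.0, rng=random.Random(42))
```

The point of the sketch is auditability: each safeguard (the excluded input, the noise, the ceiling) is an explicit, documented design choice that a compliance team or external auditor can verify, rather than an emergent property of an opaque model.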
Contractual arrangements with third-party software providers may include obligations to comply with competition law and to cooperate with investigations if needed.
For start-ups, the compliance challenge is more subtle but equally important. Many smaller firms depend on off-the-shelf pricing tools, recommendation engines or cloud-based AI services used by their rivals. This raises the risk of hub-and-spoke style coordination through a common algorithmic intermediary. Even if resources are limited, start-ups should be cautious about blindly delegating all strategic decisions to opaque systems, and should maintain some level of human oversight over key parameters, such as pricing ranges and responses to competitor behaviour.
Both corporates and start-ups must also integrate competition considerations into their broader AI governance frameworks. This includes documenting design choices, keeping records of objectives fed into algorithms, and ensuring that internal compliance teams are consulted before major changes in automated decision-making systems. In 2026, failure to consider these issues can expose firms not only to legal risk but also to reputational damage, as public awareness of AI-driven unfairness and collusion continues to grow.
Artificial intelligence and algorithmic decision-making present one of the most significant new challenges for competition law. While algorithms do not possess legal intent in the human sense, their deployment in concentrated and transparent markets can lead to outcomes that closely resemble collusion, even where managers never explicitly conspire. From the perspective of consumers and market efficiency, the harm caused by such AI-enabled coordination is indistinguishable from that of traditional cartels.
Existing legal frameworks, including the Competition Act, 2002, are flexible enough to address many of these issues by focusing on effects, structural conditions and the responsibility of undertakings that design and deploy high-risk algorithms. However, effective enforcement will require new investigative capacities, clearer guidance on corporate liability, and, in some sectors, ex-ante obligations on powerful digital platforms. In this context, competition compliance in 2026 is increasingly about embedding safeguards and ethical considerations into the architecture of algorithms themselves. Corporates and start-ups that treat algorithmic governance as a core part of their competition strategy will be better placed to innovate confidently while avoiding the serious consequences of AI-driven collusion.
Views are personal.
