Conference Debrief – Highlights from the Competition Law & Artificial Intelligence Summit – 2nd Annual

The event was hosted by Thought Leaders 4 Competition in London.

The Competition Law & Artificial Intelligence Summit – 2nd Annual, held on 2 December 2025 at Carpenters’ Hall in the heart of London, brought together competition authorities, legal practitioners, and industry experts to explore the critical intersection of AI and competition law. With artificial intelligence rapidly transforming digital markets and raising novel regulatory challenges, the summit provided timely insights into how competition authorities are adapting their approaches and building capabilities to address AI-related concerns. The event was chaired by Jenine Hulsmann, Partner at Weil, Gotshal & Manges. Anush Ganesh and Oles Andriychuk from the SCiDA team had the pleasure of attending.

By Anush Ganesh

Building Capability and Prioritising UK Interests

The summit opened with Dr. Karen Croxson, Chief Data, Technology and Insight (DTI) Officer at the Competition and Markets Authority (CMA), who outlined the CMA’s 2026-2029 strategy. Karen explained the agency’s goals: championing consumer interests, tailoring pro-competition interventions, instilling confidence, and leveraging UK design to prioritize UK interests.

Karen provided insights into how the DTI has developed hands-on understanding of firm strategies and business models in AI through its competence in technology. She detailed collaborative work being undertaken by the CMA using Market Investigations, the Competition Act 1998, the Digital Markets, Competition and Consumers Act (DMCCA), and all available UK tools to consider AI’s impact.

Online choice architecture was identified as a key consideration in AI markets. While Karen noted that heavy-handed regulation may not be the way forward, she emphasized that algorithmic collusion presents a significant challenge. Particularly with the emergence of agentic AI systems, there is a need to assess what benefits businesses versus what benefits consumers. The CMA’s engagement with academic literature and cooperation with other agencies through the International Competition Network (ICN) was highlighted as part of the authority’s approach to best practice sharing.

European Perspectives on AI and Competition

Brice Allibert, Head of Unit for DG Competition at the European Commission, provided updates on the EU’s approach to AI and competition. Brice began by emphasizing competition law’s important role in digital markets, noting the many monopolies in the tech sector that raise long-term concerns.

The EU has released and updated reports on foundation models between 2018 and 2023, reflecting the Commission’s serious engagement with these issues. The European Competition Network (ECN) provides a forum where EU authorities regularly engage with each other. The Commission has also scrutinized partnerships in AI markets, with Brice citing the example of Nvidia’s merger, where no potential anti-competitive effects were found.

National competition authorities have imposed obligations in some cases, such as Google News in France. Brice emphasized the importance of coordination for harmonious application of competition law within the EU, with the ultimate aim of making markets more contestable.

Insights from UK & EU Regulators

The first panel, moderated by Jindrich Kloub, Partner at Wilson Sonsini, featured representatives from various UK and EU regulatory bodies discussing jurisdictional priorities and the roles of different agencies in AI regulation.

San Sau Fung, Interim Economics Director at Ofcom, explained that Ofcom takes a holistic view of AI across its responsibilities for protecting consumers, media, and broadcasting. Ofcom adopts a technology-neutral approach and does not regulate technology itself, but rather looks at risks across the AI stack from infrastructure and cloud services to applications. Under the UK’s cyber security bill, Ofcom will have a resilience mandate.

San Sau highlighted both opportunities and risks. AI innovation can lead to better outcomes, including enhanced online safety and improved services such as helping telecom companies with cyber security and avoiding scams. However, risks include synthetic media creating illegal and harmful content, personalization through recommender systems creating echo chambers, and general cyber security concerns. San Sau mentioned the Digital Regulation Cooperation Forum (DRCF), comprising the CMA, Ofcom, ICO, and FCA, which facilitates knowledge sharing and best practice development.

Brice elaborated on the EC’s approach, explaining that competition law’s ex-post nature allows it to capture issues that arise at later stages, which is crucial for new markets where harm does not manifest ex-ante. Distribution of products to consumers is currently a main consideration for the EC. He acknowledged that the EC has not been using interim measures recently but suggested they should perhaps be employed more often. On harmonization challenges, Brice noted that while the ECN faces resource limitations, commitments under which companies agree to implement conduct throughout the EU can practically mitigate jurisdictional issues.

Mergers, Acquisitions and Partnerships in AI

The second panel explored the complex landscape of M&A and partnerships in the AI sector, reviewing recent regulatory decisions and theories of harm specific to AI markets.

Becket McGrath, Partner at Euclid Law, explained the diversity of investors in the AI space and the CMA’s flexible use of merger control rules. He provided examples including Amazon/Anthropic and Microsoft/OpenAI partnerships, noting that in the Microsoft/Inflection case, the CMA had jurisdiction but found no concerns. Other jurisdictions, including Germany, have also reviewed such partnerships. On the issue of acqui-hires, Becket noted it is fair for people to decide where to work.

David Dorrell, Director of Data Science at Frontier Economics, argued that the partnership model is ideal for development, allowing firms to hedge bets in different places. He noted that competition authorities want to avoid missing opportunities based on past digital market experiences, which explains why the CMA considers partnerships more carefully under its changed mandate. The current situation is very different from Web 2.0, with a distinctly different investment climate and market dynamics that may alter theories of harm as AI becomes more widely used.

Mikaël Hervé, Vice President at Charles River Associates, emphasized the importance of counterfactuals, arguing that without partnerships innovation may slow down, and that partnerships are mostly pro-competitive. While vertical integration is an alternative, he argued that partnerships are preferable. He identified ecosystem theories of harm and vertical foreclosure, including input restriction and self-preferencing, as the main issues. Regarding data as a key input, he noted that while large troves of data are used for AI foundation models, not all data is equally relevant.

Ashley Brickles, Senior Managing Director at FTI Consulting, discussed how organizations find it tricky to assess where regulators will focus, as assessments are based on precedents from past cases. She explained how generative AI can help assess documents clearly and decode investor documents to explain concrete issues of concern, potentially helping firms avoid problems later. AI chat logs are also now being requested by regulators as part of merger reviews.

Navigating Legislation: DMCCA, DMA and AI Act (Chatham House Rule)

The third panel operated under Chatham House rule, with speakers discussing approaches to navigating legislation in the UK and EU, recent regulatory investigations, and enforcement arising from new frameworks.

Speakers noted that the CMA was a forerunner in AI work. On Strategic Market Status (SMS) designations in the UK, it was highlighted that the UK regime offers considerably more flexibility than the EU’s Digital Markets Act, as evidenced by recent designations. The boundaries of search are expanding drastically, though Gemini remains outside the scope, as AI systems are not yet disruptors to search. For mobile, AI can play a major role, though competition law can only play a limited one.

Discussants addressed trade-offs where search engines favour AI features, noting that amendment of the DMA will be key to AI regulation. It was observed that the DMA’s current design, under which chatbots are not caught, is an unintentionally positive feature, as chatbot regulation may be premature. This could lead to antitrust enforcement in the EU, while the UK might see SMS designations. For the future, discussants expressed hope that the European economy becomes the main focus, to encourage investment.

Participants noted efforts to plug gaps through consultations on DMA–GDPR interactions, while the DMCCA can affect multiple layers of the digital stack. The CMA has focused on AI Overviews and Gemini’s inclusion and exclusion. It was emphasized that zero-price features do not exist in AI markets and that different dynamics therefore apply. The CMA’s work on AI foundation models was described as impressive, positioning the authority well for work on new technology. The growth of agentic AI will require understanding new contexts.

Speakers stressed the importance of a coordinated approach to address AI-related issues at a global level. Global companies need to anticipate developments well as technology moves fast. If regimes diverge, they may lose effectiveness. There have been significant developments in the last year, requiring a holistic approach to regulation that considers the broader context of AI markets. Copyright was identified as a major concern in the EU and acts as a deterrent to innovation. Robotics represents the next frontier with forthcoming use cases. Regarding competition law, it was expressed that the EU should apply laws effectively in the face of US competitive threats.

Data, IP and Copyright in AI Competition

The fourth panel examined the complex intersection of data protection, intellectual property, copyright, and competition in AI development.

Giulia Trojano, Senior Associate at Hausfeld, explained that the Stable Diffusion case addressed copyrightability issues, and the UK has held consultations on AI and copyright. Changes to UK copyright law are under consideration. In the EU, companies that do not train AI models still need to adhere to the AI Act. Giulia noted that differences between AI crawlers and search crawlers create issues for firms, with both ethical and legal concerns emerging.

Ashwin van Rooijen, Partner at Clifford Chance, distinguished between copyright in outputs and inputs. Content generated by AI models (the output) may be copyrightable, while the data needed to train AI models (the input) is also subject to broad copyright law, as raw data can be protected by copyright. Input foreclosure can be addressed using either competition law or copyright law. Ashwin referenced US cases involving Anthropic and Meta where fair use defences were raised regarding the use of copyrighted material. In the US, the nature of the use of a copyrightable work is considered, whereas Europe’s more stringent copyright regime may not be ideal for an AI age.

Alejandro Guerrero, Partner at Simmons & Simmons, observed that while substantial amounts of data exist, competition issues have not yet emerged significantly. However, he raised concerns about agentic AI being addressed under the DMA’s rigid framework. Self-preferencing and interoperability issues will be key considerations going forward.

Binit Agrawal, Head of Strategy at Lucio, noted that foundation models’ use of data has transformed from previous generations of models. GenAI practices have evolved significantly in the last year, with lack of personalization being a key difference. Some firms, such as ScaleAI, have moved to synthetic data, which can reduce algorithmic bias, though it lacks real-world grounding. GenAI can tap into newer markets such as audio-visual content. The AI Act’s reinforcement of credible data use helps support smaller firms. However, Binit highlighted that the licensing deals publishers strike with large firms may not be feasible for smaller ones, leading to foreclosure effects for new entrants. Agentic AI is very different from LLMs, as agentic systems must complete tasks end-to-end. This will change the landscape for smaller players: Perplexity’s agentic AI, for example, is competing with Google Chrome, and start-ups can carve out niches in this evolving landscape.

Algorithmic Collusion and Competition Impacts

The fifth panel addressed the emerging challenge of algorithmic collusion and its implications for competition law enforcement.

Tamara Todorovic, Director of Competition Enforcement at the CMA, explained that coordinated outcomes leading to higher prices are a concern, particularly in the autonomous collusion space involving self-learning algorithms. While algorithms can deliver great benefits, these must be balanced against collusion risks, through enforcement where necessary and by incentivizing compliance otherwise. The CMA has focused on raising awareness as one approach. Tamara noted that the authority takes a wide-ranging perspective, with market study tools available. Both inputs (whether confidential or public information) and outputs (the end products and their basis) need to be understood. Where collusive conduct may have taken place, Tamara encouraged reporting to the CMA and seeking leniency.

Marjolein De Backer, Partner at Eversheds Sutherland, discussed information exchange, noting algorithms are increasingly linked to information exchange issues. Key questions include what constitutes relevant data and what qualifies as sensitive data. Marjolein referenced the CJEU’s finding of infringement by object in a Portuguese banking information exchange case, highlighting the need to see where lines are drawn. Understanding the source of data is important in an iterative process. Future developments with agents getting access to information will raise interesting antitrust issues, and from her perspective, risk assessment would help clients navigate these challenges.

Gregor Langus, Vice President at Cornerstone Research, explained that algorithms set prices through learning to avoid price wars. Interestingly, even where algorithms collude, the net benefits to consumers may still be positive, and disabling them might actually leave consumers worse off. Gregor raised concerns about LLMs trained on business literature potentially posing issues for antitrust regulators, though he noted examples where price-setting algorithms actually led to lower prices.

Jamie Cooke, Partner at Norton Rose Fulbright, referenced the US hub and spoke algorithmic cartel case involving real estate prices, where algorithmic recommendations using public data were ultimately thrown out. Jamie emphasized the need to examine data crawling and subsequent use of data to sell products, focusing on the impact on consumers.

Safety, Ethics and Consumer Protection

The final panel highlighted safety, ethics, and consumer protection matters in the AI space, including AI washing, fake reviews, dark patterns, and the application of the Digital Services Act and Online Safety Act.

Rikki Haria, Partner at Freshfields, discussed how AI misuse can occur in various ways, particularly where AI systems make decisions without adequate oversight. Rikki noted the need for rules addressing safety and ethical issues, though the AI Act has faced delays. US regulation is quite different, given its federal versus state structure. The ongoing dynamic between regulator and regulated continues, with questions about whether concerns are fair and about proportionality in regulatory regimes. AI washing has become a focus for advertising standards authorities.

Leonidas Theodosiou, Partner at Morgan, Lewis & Bockius, explored the use of competition law to consider exploitative abuses. Ex-ante rules such as the AI Act are designed to prevent harm before it occurs. Online safety laws in the UK and EU require enforcers, claimants, and defendants to consider how they work with AI products. A comprehensive compliance test needs to be considered for proportionality, and compliance should not turn into a competitive disadvantage. He identified dark patterns that influence consumers as an enforcement priority for the CMA.

Jenine Hulsmann concluded the summit with a closing speech summarizing the key themes explored throughout the day, reflecting on the complex regulatory landscape emerging at the intersection of AI and competition law.

The Competition Law & Artificial Intelligence Summit provided a comprehensive overview of the evolving regulatory landscape for AI, highlighting the delicate balance authorities must strike between fostering innovation and protecting competition and consumers in rapidly developing markets.

Well done to Peter Miles, Xander Edgal and the entire Thought Leaders 4 Competition Team for putting together such an exciting Summit. Until next year!
