During the last days, we witnessed the second round of the most public aspect of DMA enforcement: the DMA Compliance Workshops. Now in their second year, these events are shaping up to be an annual tradition in which the Commission convenes gatekeepers, their business users, competitors, and other stakeholders. The contrast between last year’s buzz for newly announced compliance measures and this year’s fatigue over the gatekeepers’ modest updates could not have been more striking. Is the DMA already losing momentum? Are these subdued workshops a sign of shifting priorities – perhaps the Commission is indeed sacrificing its landmark regulation for a better trade deal with the US? Below, we unpack what led us to ask these questions.

Between 20 June and 3 July, Microsoft, Amazon, Apple, Alphabet, ByteDance and Meta once again presented their compliance updates in Brussels. Much felt familiar: the Commission moderated with its usual reserve, while the gatekeepers presented themselves as champions of DMA compliance, focused on benefits for users and business partners. This year, however, AI – the elephant in the room – took centre stage, with a dedicated session on the gatekeepers’ AI developments. While the intention was to demonstrate that the DMA also applies to emerging technologies, the effect was rather the opposite. The AI showcases often resembled old TV sales pitches, and DMA obligations were barely referenced. The underlying message was clear: without cloud services being designated, the DMA currently lacks real force when it comes to AI.
Opening each session, the Commission reiterated that compliance is not just about fulfilling the letter of the law, but also about embracing its spirit to restore contestability and fairness in digital markets. The gatekeepers introduced their measures with reference to vaguely defined KPIs, while audience questions seeking concrete data were routinely met with confidentiality disclaimers. Still, the Commission underlined that the DMA is grounded in dialogue – not just between itself and the gatekeepers, but also between business users and gatekeepers, and the Commission.
1. Microsoft – compliant since Day 1?

The Microsoft compliance workshop, which took place on 20 June 2025, focused on data combination and cross-use (Art. 5(2) DMA) as well as data access and portability (Art. 6(9) and 6(10) DMA) for Microsoft’s designated core platform service LinkedIn, and on change of defaults and uninstallation requirements (Art. 6(3) DMA), vertical interoperability (Art. 6(7) DMA) and the integration of new AI solutions for Windows. The Commission’s team introducing these topics was headed by Rita Wezenbeek, Luca Aguzzoni and Jean-Sébastien Robert. Before diving into the details, the Commission conveyed cautious optimism about its regulatory dialogue with Microsoft, mentioning constructive exchanges – meetings, information requests, and third-party contributions – that have yielded tangible compliance enhancements. Still, as the Commission underscored, the dialogue continues, with ongoing third-party engagement remaining indispensable.
LinkedIn’s compliance updates revolved around a new consent screen for personalised ads. This allows users to customise consent for cross-use of data across LinkedIn Jobs, Marketing Solutions, and Learning. What triggered the compliance shift, however, wasn’t the DMA—but rather a hefty €310 million GDPR fine imposed by the Irish Data Protection Commission in October 2024.

And even though Microsoft considers the DMA and GDPR consent requirements to largely overlap, it maintains two separate consent screens, one for each law. Its justification – simplifying internal compliance – was met with scepticism. Could two separate screens trigger consent fatigue? Asked by BEUC whether the screen had been tested for neutrality, Microsoft acknowledged it had only conducted comprehension testing.
On data access and data portability under Art. 6(9) and 6(10) DMA, Microsoft cited hundreds of API access requests to demonstrate uptake. Yet, during the Q&A, stakeholders pointed to delays and inadequate support responses. AI features on LinkedIn were presented as innocuous enhancements falling outside the scope of consent obligations: Microsoft stressed that it doesn’t train its AI with personal data from European users (did you hear that, Meta?).
Windows
Turning to Windows, Microsoft began with its three claims for DMA compliance: 1. Proactiveness (“compliance since day 1!”), 2. Collaboration (with the Commission) and 3. Transparency (“our 200+ pages compliance report speaks for itself”). But the fact that Microsoft enabled the uninstallation of the Microsoft Store on Windows, as required by Art. 6(3) DMA, only a year after the DMA compliance date of March 2024 undermined the compliance-since-day-one claim. The audience also raised other concerns about persistent dark patterns, such as resetting defaults to Microsoft Edge disguised as recommended security settings.

Microsoft’s presentation of AI features was notably soporific – which is an achievement given that AI remains the hot topic elsewhere. Microsoft’s representative explained how AI applications run on Windows just like any other app. For developers, Microsoft has launched platform APIs built into Windows to help them create apps with AI functionality. Lastly, Microsoft presented the AI features used in the Windows OS itself, for example the possibility to search for a local file by describing it. Not really groundbreaking. But after all, it was Microsoft’s CEO Nadella who said that AI is basically generating no value, somewhat dimming the AI hype.
The session ended early. Rita Wezenbeek closed with a reminder that compliance can also be achieved through regulatory dialogue, not just enforcement. The dialogue, it seems, continues.
2. Amazon – the calm before the storm?
On 23 June, Amazon presented its compliance progress. The Commission’s team—Alberto Bacchiega, Denis Sparas and Sophie Ahlswede—introduced the main themes: data portability and data access (Art. 6(9) and 6(10) DMA), advertising transparency (Art. 5(9), 5(10) and 6(8) DMA), ranking in the Amazon Store (Art. 6(5) DMA) and Amazon’s pricing policies (Art. 5(3) DMA). Also, Amazon introduced its new AI pet: Rufus – the Amazon Store personal shopping assistant.

The Commission offered insights into the regulatory dialogue with Amazon. Discussions on data access and ad transparency revealed ongoing challenges over the scope and granularity of access. On ranking and potential self-preferencing in the Amazon Store under Art. 6(5) DMA, the Commission highlighted that the functioning of the complex algorithms steering features such as “Amazon Choice” and other widgets and product bundles remains under scrutiny. The assessment seems to be ongoing: “we have sent more questions to Amazon over the last year, and we are continuing the regulatory dialogue on this basis to understand what Amazon’s compliance entails”, as Sophie Ahlswede put it. As for pricing, the Commission flagged its interest in any measures that replicate the effects of price parity clauses, prohibited by Art. 5(3) DMA. Two years into Amazon’s mandatory compliance with the DMA, an ongoing dialogue to “understand” Amazon’s compliance and algorithms seems somewhat disappointing. But after all, this might have just been a cry for help, as the post of the Commission’s new Chief Technologist seems to be still vacant.
Amazon complaints

Amazon’s lead for legal and competition litigation, Chris Meyers, seized the first opportunity to vent about the DMA and fragmented enforcement among EU Member States. Not only does compliance with the DMA – from implementation to engagement and monitoring – absorb significant resources; these annual compliance costs, which exceed the estimated €10 million for all gatekeepers, would also be a thorn in Draghi’s side, who called for a balance of innovation and regulation in his report. Every dollar spent on DMA compliance is a dollar less for innovation, Amazon mourns. Admittedly, Rufus, the AI-powered Amazon Store shopping assistant, does not look like a groundbreaking invention. But it is not only the DMA itself that seems to infuriate Amazon. How dare the Bundeskartellamt go on a solo tour and issue a statement of objections against the online retail giant regarding its price control mechanisms and the exclusion of sellers’ offers from the Amazon Buy Box? This, he argued, contradicts the DMA’s vision of a single European rulebook. The DMA, however, explicitly leaves room for the application of national competition rules that impose further obligations on gatekeepers – such as Germany’s Sec. 19a GWB. Considering the limited enforcement capacity of the Commission, some observers may welcome that certain national authorities have their own rules to counter anticompetitive conduct by gatekeepers.
Price control and self-preferencing at the center
Participants expressed doubts about Amazon’s price control mechanisms. As part of that practice, Amazon removes offers when they are not competitively priced – meaning when they don’t match the lowest price Amazon can find on or outside the platform. To comply with Art. 5(3) DMA, Amazon claimed that offers from the same seller outside the platform are not factored into the comparison. Is that sufficient to prevent price parity clause effects? Not everyone was convinced. One stakeholder noted that many sellers now avoid selling outside Amazon altogether to retain the Buy Box. Concerns about Rufus also surfaced. Amazon insisted that Rufus is not trained on personal data and does not require user consent. As for liability for misleading product information? That’s on the sellers, Amazon said – Rufus just relays their data.

With sandwiches waiting and the agenda wrapped up early, the compliance workshop concluded ahead of schedule. But does that mean Amazon is in the clear? Not quite. The Commission has made it clear that scrutiny continues—particularly over Amazon’s algorithms and their alignment with Article 6(5) DMA. Business users have echoed these concerns, reporting ongoing self-preferencing on the marketplace (“in reality, we see something different”). On top of that, Amazon’s price control mechanisms have drawn sharp criticism from the business user community, suggesting that the Bundeskartellamt’s current investigation into the practice may not be the last.
So far, Amazon has largely sailed below the DMA enforcement radar—but are shifting winds now indicators of a rising storm for Amazon?
3. Apple – don’t expect a free ride

On 30 June, Apple presented its compliance efforts – on a notably warm day, with a room packed full of stakeholders. The Commission’s team started off with a surprise: besides the usual intro, the Commission also welcomed discussion of its own decisions and the ongoing cases concerning Apple, signalling a more open and reflective approach to stakeholder feedback, at least in the context of Apple’s compliance. Apple’s representatives started their presentation with the usual corporate self-portrait as the guardian angel of its users’ privacy and security.

Apple mobile devices interoperability: Art 6(7) DMA
The Commission opened the session by explaining that the basis of discussion for Apple’s mobile device interoperability were its own specification decisions, Apple Connected Devices (DMA.100203) and Apple Interoperability Process (DMA.100204).
Apple’s representatives then grounded their presentation in the company’s core values, particularly its alleged longstanding emphasis on privacy and security. These principles were framed not only as central to Apple’s brand identity, but also as potentially compromised by the interoperability obligations under the DMA. According to Apple, aligning with these requirements may come at the expense of the user experience and security its ecosystem is built upon.

Following this value-driven introduction, Apple presented twelve new interoperability measures, including infrastructure for browser portability and the creation of a dedicated interoperability team. A key component of its compliance effort is the mobile devices Interoperability Initiative and the related Interoperability Request Review and Development process – a system Apple itself described, half-jokingly, as “bureaucratic”. Apple further noted that nearly one-third of the interoperability requests to date have come from other gatekeepers. So it seems that the new invasive species occupying Apple’s walled garden are the other Big Techs, rather than the smaller, disruptive (European) innovators the Commission had hoped for.
Apple’s positioning appeared to strike a chord with many participants. Rather than questioning Apple directly, the discussion turned towards broader concerns: the potential trade-offs between enhanced interoperability and the safeguarding of user security. Apple navigated these questions with relative ease, using the opportunity to reiterate its commitment to user protection. Still, some stakeholders raised concerns about the duration and complexity of the interoperability request process. Apple acknowledged these criticisms and expressed willingness to streamline the process, while once again stressing the scale of the engineering efforts involved.
Apple’s App Store (Anti-) Steering
The Commission gave a brief overview of its non-compliance decision of 23 April 2025 concerning Apple’s App Store steering under Art. 5(4) DMA, followed by an overview of Apple’s latest compliance attempt, dated 26 June 2025. With that formality out of the way, Apple took the floor to make its position unmistakably clear: the Commission’s demands far exceed the actual wording of Art. 5(4) DMA, effectively requiring Apple to provide core technological infrastructure free of charge – even to cash-rich developers like Spotify. Apple was quick to assure the room that it appreciated the widespread consumer enthusiasm for free stuff, but expressed principled outrage at the notion that it should be forced to subsidize billion-dollar enterprises. This, Apple argued, is a troubling departure from the market economy.
With this framing in place, Apple presented its fourth iteration of DMA compliance (because practice makes perfect?). Version no. 4 includes: (1) allowing users to be informed about offers across all channels – including in-app views and integration of outside payment; (2) enabling developers to freely design and execute promotions; (3) supporting the use of multiple third-party URLs; and (4) permitting links with parameters, redirects, and intermediates – presumably after a brief layover in Cupertino.

But the real star of the show was Apple’s newly minted fee model. While advertised as a breakthrough, most stakeholders found it to be a masterclass in obfuscation. Apple debuted fresh terminology – such as the Core Technology Commission (CTC) – adding yet another acronym to the growing list of things developers need to decode in Apple’s already intricate pricing system. Despite the complexity, Apple insisted that most EU developers would see their fees slashed by 20% or 10%. This optimistic projection, however, triggered a wave of follow-up questions, mostly centred on one issue: how exactly is that supposed to work?
In short: There is some steering in Apple’s App Store. But who is at the wheel?
Miscellaneous: browser choice screen, data interoperability APIs and the art of jurisdictional sovereignty
Apple used the session to showcase its latest updates on browser choice screens and default settings under Art. 6(3) DMA. According to Apple, compliance with this provision has – ironically – further entrenched Google Chrome’s dominance, rather than giving smaller browser developers a fighting chance.
Throughout the session, Apple repeatedly emphasized that the Commission required it to implement these obligations – requirements that, according to Apple, do not apply to its competitors. This discrepancy was described as not just unfortunate, but fundamentally unfair – beating Apple with the DMA’s own weapons. Stakeholders raised questions about Apple’s two-tiered service architecture: one version for the EU, another for the rest of the world. Apple responded with a civics lesson on jurisdictional sovereignty, explaining that countries outside the EU are not bound by the DMA and therefore get to keep the “full Apple experience”.

On the topic of data portability, Apple reminded the room again: iOS has long been trusted by consumers as one of the most user-friendly and efficient systems for moving data within Apple’s own ecosystem. Now, in a surprising twist, Apple announced its collaboration with Google to improve data transfer between iOS and Android. Stakeholders welcomed this development with guarded optimism, cautiously interpreting it as a potential crack in the walled gardens of the two mobile giants. Perhaps the DMA is finally starting to nudge open the doors for user control over data flows, loosening the tight grip of Big Tech.
Finally, Apple offered a brief tour of the types of data it collects – and, more importantly, the data it claims not to collect. Sensitive information, such as bank account details stored in banking apps, is apparently off-limits. Whether this statement is a strategic pretext for limiting future data access requests or a genuine reflection of Apple’s privacy-first philosophy remains to be seen. Time will tell.
4. Alphabet (Google) – Structural Change or Strategic Compliance?

On 1 July, in a sweltering Brussels, Google laid out its latest compliance narrative during the workshop. The day was divided into four sessions covering Google’s responses to key DMA obligations, including Google Android’s defaults (Art. 6(3) DMA), interoperability requirements (Art. 6(7) DMA), alternative app distribution (Art. 6(4) DMA), FRAND access to Google Search (Art. 6(12) DMA), data-related obligations (Art. 5(2), 6(9), 6(10), and 6(11) DMA), and – again – AI.
The Commission team – Alberto Bacchiega, Thomas Kramler, Lovisa Mouzaoui, and Anna Taimr – kicked things off with a clarification of expectations. Notably absent from the workshop were Google’s App Store practices and self-preferencing concerns, which remain under separate scrutiny. The focus, instead, was a guided tour through Google’s evolving compliance framework.
Google Android and Chrome: Choice with Nudges?

To live up to the DMA’s core idea of restoring user choice, Google explained how it has rolled out new choice screens on Android and Chrome, aimed at simplifying the process of changing defaults and uninstallation. Users can now select default browsers and search engines during device setup – an evolution from earlier remedies following antitrust enforcement. Google also updated uninstallation options, claiming that users can now fully remove preinstalled services like Chrome or Gmail. However, critics pointed out that certain Google services remain deeply embedded in the system, and that attempts to sideload apps still trigger dramatic warning prompts – so-called “scare screens” – which, according to one stakeholder, led in one case to a 60% drop in downloads. Google defended the alerts as vital for user safety. As for the choice screens themselves, questions from the room lingered about their neutrality, given their nudging potential and the lack of transparency around bias testing.
Google Search: FRAND-ish access
Under Art. 6(12) DMA, gatekeepers are required to provide fair, reasonable, and non-discriminatory (FRAND) access to certain core platform services (CPSs). Google claimed to have operationalised FRAND access for vertical search providers – particularly those offering travel, local, or job search services – allowing rivals to compete for visibility. Yet stakeholders remained unconvinced, noting that Google-owned services like Flights and Maps still dominate results. No specific discrimination cases were aired – but the suspicion hung in the air.
Data: Consent, Control, and Complexity
On data-related obligations – notably Art. 5(2), 6(9), 6(10), and 6(11) DMA – little new ground was covered beyond Google’s March 2025 compliance report, although several rounds of discussion highlighted stakeholders’ true areas of focus.
In relation to Article 5(2) DMA, which prohibits the cross-use of personal data across services without user consent, Google explained four key principles: (1) users are treated as non-consented by default; (2) each CPS is treated as a separate data entity; (3) consent must be given before any data is combined; and (4) users can revoke consent anytime via settings. To operationalize this, Google has introduced backend technical controls tagging data by origin service and blocking cross-service flows without valid permission. According to Google, over 438 million users have seen Google’s DMA consent screens, with 97.9% having made a choice – though Google chose not to share how many actually opted in—or what happens if they don’t.
On Art. 6(9), 6(10), and 6(11) DMA, Google promoted its Data Portability API and Search Data Licensing Program. Providing a real-world example of its Data Portability API in action, Google presented the collaboration between Google Health and UK Research and Innovation where users could authorize the sharing of health-related data with approved researchers. In relation to Art. 6(11) DMA, Google also presented how – under its Search Data Licensing Program – competitors can now license datasets under FRAND terms—though the details (pricing, anonymisation and actual utility) remain fuzzy. Stakeholders raised concerns about the “black box” nature of Google’s technical details, particularly in relation to anonymisation techniques, encryption standards, and the opacity of what data is truly accessible. Questions about revenue-sharing with publishers and who’s actually using these datasets remained unanswered.
Google’s AI: Innovation or self-advantaging?
Google also showcased its AI capabilities – from Search query handling to YouTube recommendations – while largely skirting questions on how AI intersects with DMA obligations. The gatekeeper underscored that AI technologies have long been applied across Google products, for example in query analysis in Search, the YouTube recommendation system, Play Store safety and security mechanisms, Google Meet speech translation and Gmail AI features. Notably, participants were not swayed by Google’s evasiveness. Concerns were explicitly raised about Google’s compliance with the DMA’s prohibition on data combination and cross-use, particularly how data flows between its CPSs and AI services. When asked about self-preferencing in AI-generated results, Google pivoted to the “transformative” nature of AI, avoiding the question of whether these tools might further entrench its dominance and the structural advantage of its own services in Search rankings.
After six hours, the room was left wondering: Is Google redesigning its services to empower users—or simply to comply on paper? As enforcement ramps up, that distinction may no longer be academic.
5. ByteDance – open dialogue behind closed doors?
Compared to other gatekeepers, the discussion in the ByteDance workshop was noticeably shorter. According to the Commission, the obligations applicable to ByteDance include Art. 5(2), 5(6), 5(7), 5(8), 6(2), 6(9), 6(10), 6(12), and 6(13) DMA. For the purposes of this workshop, however, the conversation focused on just four: Art. 5(2), 6(9), 6(10), and 6(12) DMA.

Under Art. 5(2) DMA, which prohibits the combination of personal data across services without user consent, ByteDance stressed that user choice is central to TikTok’s data practices. Compliance, it explained, is ensured through in-app consent screens where users can decide whether or not their data is shared across services.
Regarding Art. 6(9), ByteDance claimed to offer full Data Portability APIs for TikTok users. Yet workshop participants weren’t exactly ready to hit “like.” Business users and third parties questioned the usability and clarity of the data provided, describing it as vague, limited, or practically inaccessible. ByteDance did not push back on these concerns. Instead, it leaned into a more diplomatic tone, reiterating openness to further dialogue and framing compliance as a “journey” still in progress—a phrase that has become something of a catch-all in the compliance lexicon.
As for Art. 6(12) DMA, which requires FRAND access to certain CPSs, ByteDance pointed to the TikTok Business Terms of Service as the governing framework. Whether these terms meet the FRAND standard—or simply reflect ByteDance’s standard—remains unclear.
Overall, ByteDance’s approach was notably less combative than that of some of its peers; its emphasis on constructive engagement and willingness to “keep talking” seemed to resonate with stakeholders. The session featured fewer critical questions, which may reflect a combination of the platform’s relative simplicity and its more conciliatory tone. Whether this signals genuine alignment with the DMA or just good PR remains to be seen – but for now, TikTok appears to be dancing to the right regulatory tune.
6. Meta – bold moves in the face of the Commission

Meta closed the series of workshops on 3 July, and expectations were high. As one of the DMA’s most closely scrutinized gatekeepers, Meta’s compliance journey is shaping up to be the definitive stress test for applying regulation to complex, data-driven ecosystems. The workshop was divided into four thematic sessions covering data combination (Art. 5(2) DMA), data portability and access (Art. 6(9) and 6(10) DMA), advertising transparency (Art. 5(9), 5(10) and 6(8) DMA), and messaging interoperability (Art. 7 DMA).
Meta’s core business model: Data and Ads
Promising compliance with Art. 5(2) DMA, Meta confirmed that users are presented with consent screens and can manage their data preferences – though workshop participants questioned just how “optional” these choices really are. Concerns were raised that the interface design nudges, or even pressures, users into consenting by degrading the service for those who refuse.
On data portability (Art. 6(9) and 6(10) DMA), Meta showcased its compliance toolkit: the Download-Your-Information (DYI), Transfer-Your-Information (TYI), and Export-Your-Information (EYI) tools – an alphabet soup backed by developer guides and transparency materials. While technically impressive, stakeholders questioned whether these tools are actually usable by business users and developers seeking straightforward access. As for Art. 6(10) DMA on data access for business users, Meta also described integrating Download Your Information and Transfer Your Information into a single tool, but participants asked whether advertisers and developers can access meaningful, granular data – or just metadata in disguise.

Meta’s team reflected on a year of DMA compliance, citing sustained dialogue with the Commission. The tone quickly shifted when Meta presented its less personalized ads option and defended its now-infamous “consent or pay” model. In a bold move, Meta directly challenged the Commission’s non-compliance decision on this model, invoking the Court of Justice’s ruling in Meta Platforms v. Bundeskartellamt (Case C-251/21). Meta argued that (1) the case law permits charging a fee for services, and (2) the Commission disregarded Meta’s business model, effectively pressuring the gatekeeper to provide services free of charge.
To bolster its position, Meta stressed two main themes. First, it underscored continued efforts to consolidate the user interface and experience by rolling out a streamlined interface that now supports a less personalized experience – reducing reliance on personal data for ads by 90%, though, as Meta warned, at the cost of relevance and effectiveness. Second, Meta claimed enhanced user control over data, allowing individuals to easily access and transfer their data, reinforcing data mobility and empowering users to determine how their information is used across services.
However, stakeholders remained sceptical of Meta’s narrative. Some worried that Meta might subtly but effectively use the so-called “option to pay” to coerce users into consenting. “Choice” in Meta’s ecosystem, it seems, may still come with conditions.
Meta’s AI: The blurry edge of compliance
Meta acknowledged the increasing role of AI in shaping its services and emphasized its commitment to open-sourcing and ecosystem openness regarding its AI models. Yet when pressed on whether its AI integrations fall under the DMA’s obligations – especially regarding data use and self-preferencing – Meta opted for abstraction. There was no clear answer on whether its AI-enhanced services might leverage Meta’s own content or products, just industry-speak about AI deployment.
Meta’s messaging interoperability – secure but ineffective by design?

Meta’s messaging services – WhatsApp and Messenger – fall under Art. 7 DMA’s interoperability mandate with competing messaging services. Meta stressed its commitment to security, citing its use of the Signal protocol, and pitched its cautious approach as a balancing act between openness and user protection. Critics, however, viewed it as a barrier by design that conveniently maintains Meta’s market position by complicating effective integration with rival services – potentially diluting the effectiveness of Art. 7 DMA.
Meta’s workshop brought into focus the widening gap between the Commission’s expectations and Meta’s interpretations of the DMA. In particular, the “consent or pay” model remains a regulatory flashpoint—still contested, still unresolved. With the EU Court’s opinion on the horizon, all eyes remain on whether Meta’s boldness will be read as regulatory defiance—or litigated vindication.
7. SCiDA’s concluding remarks
To conclude, the six DMA compliance workshops organised by the Commission between 20 June and 3 July offered a valuable – if occasionally uneven – glimpse into how gatekeepers are navigating their DMA obligations. Each gatekeeper came prepared with polished slides, technical glossaries, and confident declarations of compliance. Yet beneath the surface of dashboards and developer portals, familiar tensions remained.
While all gatekeepers showcased technical tools and policy tweaks to demonstrate formal compliance with the DMA, the workshops consistently highlighted deeper, unresolved issues – particularly around transparency, interpretability, and meaningful user control. Data combination and portability still operate in murky waters; interoperability appears to be engineered with more caution than conviction; and AI, unsurprisingly, remains the ultimate black box.
As the Commission now shifts from polite dialogue to formal enforcement, these workshops will likely be remembered less for their PowerPoint aesthetics than for what they exposed: a regulatory regime still finding its footing, and gatekeepers still calibrating how far they actually need to change.
The second round of workshops still seems to mark not a conclusion, but another chapter of a much longer – and likely more contentious – journey.
Stay tuned for a deep dive into the most controversial issues that surfaced during the workshops in the next weeks – just what you needed to get you through the summer lull!
If you do not want to miss any of our future blog posts, you can subscribe to the SCiDA-Newsletter here.