Operationalising Fairness – Making a Principle Work in the Digital Fairness Act

As algorithmic decision-making and platform power increasingly shape citizens’ daily lives, the EU’s Digital Fairness Act represents a critical opportunity to prioritize fairness alongside innovation. In this guest post for SCiDA, Behrang examines a fundamental challenge threatening the Act’s effectiveness: the absence of a clear, operational definition of “fairness” itself.

by Behrang Kianzad, Senior Researcher, Institute for Global Political Studies, Malmö University; Founder, European Researcher Network on Fairness in Digital Markets and AI (FIDMA)

In an era where algorithmic decision-making and platform power increasingly shape citizens’ daily lives, it is highly commendable to see EU lawmakers prioritize fairness alongside innovation, with the latest initiative bearing the name of Digital Fairness Act (DFA). This focus aligns with broader EU digital policy trends – complementing measures like the AI Act’s emphasis on transparency and non-discrimination in AI systems and the Digital Markets Act’s drive for fair platform accountability – thereby ensuring that emerging technologies and dominant online platforms operate in a manner that is just and accountable.

The DFA’s goals of tackling dark patterns, manipulative design, unfair personalization and other online consumer harms demonstrate a forward-looking commitment to a “fair digital environment for all Europeans”, with clear rules enforced to hold powerful actors accountable and to ensure that consumers enjoy the same level of fairness online as they do offline.

The Digital Fairness Act represents a pivotal next step in Europe’s digital strategy, building on recent landmark regulations. The Act explicitly puts fairness at the forefront, signalling Europe’s intent to not only spur innovation and matters related to economic efficiency but also to embed ethical values in digital markets. By addressing issues such as AI transparency and platform accountability under the banner of fairness, the EU is reinforcing key principles: for example, requiring AI-driven services to be transparent, non-discriminatory and fair in how they treat users, and ensuring that large platforms (like app stores, social media, and e-commerce gatekeepers) treat consumers and business users equitably. This integrated vision of fairness – spanning from algorithms to online marketplaces – will help rebuild consumer trust in digital services.

It also aligns with fundamental rights and EU values by holding digital businesses to standards of honesty, transparency, and responsibility. Fairness is not a mere abstract ideal, but a practical necessity for consumer protection and market integrity in the digital age. The DFA’s broad scope (covering manipulative UX design, influencer marketing, addictive features, etc.) shows admirable ambition in closing gaps left by prior legislation and ensuring no aspect of the consumer’s online journey falls outside the umbrella of fairness.

The Need for a Clear Definition of “Fairness” in the DFA

Notwithstanding the lawmakers’ ambitions outlined above, a critical issue must be addressed: the proposed Act currently lacks a clear, operational, administrable, practical and consistent definition of fairness. While “fairness” is a powerful guiding principle, its meaning in legal terms remains undefined in the consultation materials, which could undermine the Act’s effectiveness. Paraphrasing the Danish legal philosopher Alf Ross, “Fairness, like a harlot, is at the disposal of anyone” – in other words, an undefined fairness concept becomes intuitively subjective and potentially void of meaning. This is a matter that has spurred many doctrinal battles between fairness and efficiency in the competition law and economics discourses.

If fairness is left to individual interpretation, regulators and courts may struggle to apply the law consistently, and businesses will face uncertainty about their compliance obligations. The Digital Markets Act (DMA) offers a cautionary example: the DMA invokes “unfair” practices 43 times and “fairness” 18 times, yet nowhere does it precisely define these terms beyond describing the desired outcome of “fair markets”, a matter previously explored in this space.

Legal and Philosophical Foundations of Fairness in Law

Crafting a robust definition of fairness in the DFA benefits from examining the legal-philosophical underpinnings of fairness. The concept of fairness has deep roots in European legal tradition and moral philosophy, which can guide its modern regulatory usage. Seen against the backdrop of previous research analysing how fairness can be reconciled with law and economics, one can point to a classical tension between fairness and efficiency.

In the past few decades, especially under the influence of neoclassical economics, legal policy (notably also in EU competition law) often privileged efficiency and welfare maximization over ideals like fairness. The Chicago School view, epitomized by scholars like Bork and Posner, but also Kaplow and Shavell, treated fairness or justice as subjective values external to law, whereas efficiency was seen as an “objective” metric.

As Holler & Leroch put it, “everybody knows what is just, nobody knows what is efficient” – though economists might retort that they know efficiency but “have problems getting hold of justice”. The main argument against the Chicagoan approach is that an exclusive focus on “wealth maximization” and efficiency is often at odds with core legal values like fairness and justice.

Law, especially European law, has long seen protecting parties from undue harm or inequity as part of its mission – for example, EU competition law explicitly prohibits “unfair pricing” and other unfair trading conditions by dominant firms (Article 102 TFEU) as illegal in themselves, not only if they reduce total welfare.

This reflects a more deontological stance, which re-connects with the Kantian and Scholastic roots of EU law, where certain conduct (price gouging, exploitative contract terms) is seen as wrongful per se since it violates sustained fairness or justice norms and preferences, even if some efficiency argument could be made to tolerate it. European jurisprudence, influenced by Kantian and Aristotelian thought, has thus maintained that fairness has intrinsic legal weight – e.g., the concept of “just price” and equitable dealing can be traced through centuries of civil and canon law. In the digital realm, this perspective is manifestly resurging.

Notably, it is not only the EU competition law prohibition against “unfair pricing” that is influenced by Kantian legal philosophy and just price theories; other streams of law, such as intellectual property law, also display certain Kantian elements at times, using them to move from the “abstract” to the “real”: even if the abstract cannot be defined “in itself”, the enabling, objective and ex ante conditions for the “abstract” can still be defined using rational experience and reason.

After the 2008 financial crisis and the digital platform upheavals, there is a “return” to fairness-based analysis, asking whether markets are delivering socially just outcomes and not merely efficient ones. This does not mean abandoning economic analysis, but broadening it. Whether one adopts a Kantian (duty-based) or utilitarian normative stance invariably affects legal-economic analysis and its outcomes.

The DFA, by foregrounding fairness, represents this shift: it acknowledges that beyond pure consumer welfare or output metrics, we care about the moral quality of digital transactions (are they free of coercion, manipulation, discrimination? are they conducted on equal footing?). In sum, fairness is being reclaimed as a legitimate regulatory objective alongside efficiency – a trend the DFA rightly embraces.

A Kantian (deontological) perspective provides intellectual support for making fairness a legal mandate rather than a vague aspiration. Kantian ethics would argue that individuals must be treated as ends in themselves, not merely as means – translated to the digital context, this implies digital services should respect user autonomy and dignity rather than exploiting cognitive weaknesses or asymmetrical information for profit.

It can be pointed out that treating fairness concerns as mere “externalities” to be ignored by mainstream economics conflicts with the core rationale (ratio legis) of laws explicitly banning unfair practices. For example, the existence of Article 102’s prohibition on “unfair trading conditions” or “excessive pricing” signals a legislative judgment that such unfair conduct is malum in se (wrong in itself) and must be prevented, even if a purely economic analysis might tolerate it in the absence of consumer harm in the short term.

A Kantian legal philosophy thus reinforces that fairness is fundamental: laws are not only tools for efficient resource allocation, but expressions of society’s moral commitments (to fairness, justice, equality). In practical terms, a Kantian view in the DFA context would demand that digital businesses observe certain duties toward consumers – e.g. a duty not to mislead or manipulate, a duty to obtain genuine consent, a duty to deal openly and even-handedly. These duties exist regardless of whether unfair practices could increase overall consumer spending or innovation; they are grounded in respect for the individual’s rights and the integrity of the market process itself.

Indeed, fairness in exchanges can be understood, borrowing from Kant and Aristotle, as requiring “equitable exchange” – each party gives and receives in proportion, without one side secretly undermining the other’s freedom of choice or siphoning an unjust share of benefits. It is notable that the fairness norms in the DMA/Data Act implicitly follow this equitable exchange idea.

To ensure consistency, the DFA should similarly conceptualize fairness as avoiding extreme imbalances and ensuring reciprocity in digital transactions. In short, deontological ethics support embedding fairness criteria into law: some practices (e.g. dark patterns that trick vulnerable users) can be deemed inherently unfair and prohibited on principle, which aligns with the EU’s approach in the DFA.

Behavioral Economics and the Human Sense of Fairness

Far from being an ineffable moral instinct alien to economics, fairness is grounded in human behavior and psychology. Behavioral economics provides empirical evidence that people care deeply about fairness and will even sacrifice personal gain to enforce it.

Classic experiments like the ultimatum game illustrate this vividly: when one party is given power to split a sum of money, proposers who offer a very unequal split often have their offers rejected by responders, even though rejection means both parties get nothing. In other words, people are willing to incur a cost to punish unfairness. This and numerous other studies confirm that humans have an innate bias toward fair treatment (or at least an aversion to perceived unfairness). Similarly, in the marketplace, consumers tend to perceive certain practices – say, dramatically raising prices in a shortage, or sneaking in hidden fees – as unfair, and this perception affects their behavior and well-being.
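The ultimatum-game result can be made concrete with a short sketch. The snippet below uses the Fehr–Schmidt inequity-aversion model, a standard formalization in behavioral economics; the parameter values and the ten-unit pie are illustrative assumptions, not figures from the studies discussed here.

```python
def responder_utility(own, other, alpha=2.0, beta=0.25):
    # Fehr-Schmidt utility: material payoff minus disutility from
    # disadvantageous inequality (alpha) and advantageous inequality (beta)
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def accepts(offer, total=10, alpha=2.0, beta=0.25):
    # Rejection leaves both players with nothing (utility 0), so the
    # responder accepts only if accepting is at least that good
    return responder_utility(offer, total - offer, alpha, beta) >= 0

# With alpha = 2.0, offers below 4 out of 10 are rejected, even though
# rejecting means the responder forgoes a positive material payoff
for offer in range(0, 6):
    print(offer, accepts(offer))
```

With these illustrative parameters the responder turns down any offer below 40% of the pie, reproducing the qualitative finding that people will pay a cost to punish perceived unfairness.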

As a widely cited study by Kahneman, Knetsch & Thaler (1986) demonstrated, there is a shared societal intuition about “fair pricing”; for instance, most consumers find it unfair if a store exploits a surge in demand by steeply hiking prices (even though standard economics might label that as supply-and-demand at work).

Behavioral law-and-economics scholars (like Sunstein, Thaler, Jolls) have long argued that regulators must account for these fairness preferences. Fairness concerns directly impact overall welfare – people derive utility not just from material outcomes but from the fairness of processes and distributions.

As economist Matthew Rabin noted, public policy should consider “not solely how concerns for fairness support or interfere with material efficiency, but also how these concerns affect people’s overall welfare.” In practice, this means that ensuring fair digital practices is part of maximizing social welfare, broadly understood.

A platform that avoids manipulative dark patterns may ostensibly have slightly lower conversion rates (less efficiency in the narrow sense), but users gain in trust, satisfaction, and autonomy, which are genuine welfare gains. Behavioral insights thus bolster the DFA’s rationale: unfair digital practices cause real harm (frustration, loss of trust, distorted choices), so banning them not only upholds an ethical principle but improves consumer welfare in a very tangible way.

Moreover, recognizing that users can be nudged or coerced subconsciously (through interface tricks, psychological pricing, etc.) affirms the need for regulatory intervention – market forces alone won’t weed out unfair practices if they are profitable. The law must step in to set fairness boundaries, reflecting both our ethical convictions and what behavioral science tells us about human vulnerability.

In summary, drawing from law, philosophy, and economics: fairness is not a fuzzy ideal to be handled intuitively, but a principle with deep normative and empirical support. EU law is increasingly moving toward acknowledging this, by writing fairness requirements into hard law (as the DFA seeks to do). The challenge, however, lies in giving “fairness” concrete legal meaning so that it can be operationalized effectively, rather than becoming an empty buzzword.

Recommendations for Clarifying and Harmonizing Fairness in the DFA

To ensure the Digital Fairness Act fulfills its promise, the following suggestions are offered to improve the conceptual clarity and legal consistency of the term “fairness” across EU regulatory frameworks:

1. Provide an Explicit Definition or Test for Fairness: The DFA should include, in its recitals or an article, a clear definition of what constitutes unfair digital practices. This could draw on analogies from existing EU law. For example, the Unfair Commercial Practices Directive (UCPD) provides a useful template: it defines an unfair commercial practice as one that “contravenes the requirements of professional diligence” and “materially distorts or is likely to distort the economic behavior of the average consumer”.

A similar approach in the DFA could define an unfair digital practice as one that materially distorts consumer choice or behavior through deception, manipulation, or exploitation of vulnerabilities, contrary to good faith and honest market practice. Such a definition would encapsulate dark patterns, misleading interfaces, etc., under a general clause. Additionally, objective criteria or examples should accompany this definition.

For instance, the Act (or delegated acts) could list specific practices always considered unfair (much like the Data Act’s list of per se unfair contract terms, or Annex I of the UCPD’s blacklist of unfair practices). By explicitly enumerating tactics like bait-and-switch designs, hidden subscription traps, non-transparent personalization based on sensitive data, etc., the law will give concrete guidance to industry. Overall, the goal is to translate “fairness” from a value into an operational rule – eliminating the guesswork. Empirical research and user testing can inform this; if evidence shows a practice consistently misleads consumers, it belongs in the unfair category. In short, define fairness in practical, enforceable terms.
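The two-tier structure proposed here – a per se blacklist plus a general clause – can be sketched schematically as follows. Everything below (the practice names, the boolean criteria) is a hypothetical illustration of the drafting logic, not proposed legal text.

```python
# Hypothetical sketch of the two-tier test: an enumerated blacklist
# (in the spirit of UCPD Annex I) backed by a general fairness clause.
BLACKLISTED_PRACTICES = {
    "bait_and_switch",
    "hidden_subscription_trap",
    "sensitive_data_personalisation",
}

def is_unfair(practice, distorts_choice=False, exploits_vulnerability=False):
    # Tier 1: enumerated practices are unfair per se, no further analysis
    if practice in BLACKLISTED_PRACTICES:
        return True
    # Tier 2: general clause - material distortion of consumer behaviour
    # through exploitation of a vulnerability, contrary to good faith
    return distorts_choice and exploits_vulnerability

print(is_unfair("bait_and_switch"))                 # True (per se)
print(is_unfair("countdown_timer", True, True))     # True (general clause)
print(is_unfair("persuasive_design"))               # False
```

The design point is the same one the text makes: the blacklist gives industry bright-line guidance, while the general clause catches novel tactics that satisfy the distortion criteria.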

2. Align the DFA’s Fairness Standard with Existing Frameworks: Consistency across EU laws is key. The DFA should explicitly state that it is complementary to and coherent with regulations like the DMA, AI Act, and Data Act (as well as horizontal laws like competition and consumer protection law). Concretely, we recommend including a recital akin to DMA Recital 5 and 10, clarifying that the DFA builds on gaps not covered by other laws and is without prejudice to them. Moreover, any fairness metrics used should resonate with existing norms.

For example, if the DMA uses the notion of “disproportionate advantage” as the hallmark of unfairness, the DFA could incorporate that language when dealing with platform-consumer relationships (e.g. unfair contract terms or unfair default settings that disproportionately favor the service provider).

If the AI Act views fairness as avoiding “unfair bias”, the DFA’s provisions on personalization could explicitly include discriminatory or bias-amplifying personalization as unfair. Importantly, we also suggest leveraging competition law principles to inform fairness. EU competition law has a developed (albeit sparingly used) concept of “unfair pricing” and “unfair trading conditions” under Article 102 TFEU and related case-law.

For instance, an exploitative price is one having “no reasonable relation to the economic value” of a product. The DFA, while a consumer protection instrument, could borrow such notions: if, say, an online service uses personalized pricing, a price that vastly exceeds a benchmark of value or is targeted to exploit a personal vulnerability might be deemed unfair.
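As a purely illustrative sketch, the two-limb logic behind the “no reasonable relation to the economic value” standard could be rendered as a screening heuristic: first a profitability check, then a comparison against a benchmark. The thresholds below are hypothetical placeholders, not legal standards.

```python
def excessive_price_screen(price, cost, benchmark_price,
                           margin_threshold=0.5, benchmark_multiple=2.0):
    # Limb 1: is the price-cost margin excessive? (profitability analysis)
    margin = (price - cost) / price
    limb1 = margin > margin_threshold
    # Limb 2: is the price unfair relative to a competitive benchmark,
    # e.g. the price charged to non-targeted consumers?
    limb2 = price > benchmark_multiple * benchmark_price
    # Both limbs must be satisfied before the price is flagged
    return limb1 and limb2

# A personalised price of 30 against a cost of 5 and a benchmark of 10
# satisfies both limbs and is flagged for closer scrutiny
print(excessive_price_screen(30, 5, 10))   # True
print(excessive_price_screen(12, 10, 10))  # False
```

A screen like this would only flag candidates for scrutiny; the legal assessment of “economic value” remains a qualitative exercise.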

By aligning with competition law’s concept of fairness as preventing “excessive or exploitative” conduct, the DFA ensures a coherent legal narrative. As suggested, since the new digital regulations were inspired by competition law, it would be “advisable to analogously interpret the fairness dimension in these acts” in light of competition law’s established doctrines. Regulators enforcing the DFA – and indeed the DMA – should consult competition-law experts and precedents when evaluating fairness (especially in borderline cases like personalized pricing or platform terms that may also raise competition concerns). This cross-pollination will promote legal consistency and prevent bad actors from exploiting definitional gaps between regimes.

3. Develop Clear Guidelines and Examples (Soft Law): In addition to statutory definition, the Commission should commit to issuing guidance or codes of conduct that elucidate fairness in various contexts. These could be analogous to the Guidelines on Unfair Commercial Practices that give practical examples of how to apply the UCPD. For the DFA, one can imagine sector-specific guidance: e.g., how fairness applies to UI/UX design (with illustrations of dark patterns versus acceptable practices), how fairness applies to algorithmic recommendations or targeted advertising (what constitutes unfair targeting of vulnerable consumers), etc.

This guidance should be developed in consultation with consumer groups, industry, and interdisciplinary experts (including ethicists and behavioral economists) to ensure it is realistic and robust. By providing detailed examples of compliant vs. non-compliant behavior, the Commission will enable smoother enforcement and compliance.

We also recommend incorporating behavioral research into these guidelines – for instance, setting standards for choice architecture that take into account known cognitive biases. A practice could be deemed unfair if it intentionally exploits a bias (say, a countdown timer creating false urgency), whereas merely persuasive design that does not cross certain thresholds might be considered fair. These nuances can be fleshed out in guidance. The presence of such explanatory materials will also signal to national authorities the intended uniform interpretation, enhancing harmonization EU-wide.

4. Foster Coherence with Fundamental Rights and Ethical Principles: Fairness in digital markets does not exist in a vacuum; it intersects with fundamental rights (data protection, non-discrimination) and ethical principles (human dignity, autonomy). The DFA’s fairness concept should be explicitly tied to these higher-order norms to strengthen its legitimacy and consistency.

For example, the Act might reference the EU Charter of Fundamental Rights or general principles of EU law, noting that fairness in digital practices encompasses respect for the consumer’s free will, privacy, and equality. This could help align the DFA’s application with GDPR (ensuring fairness in personal data use for personalization), as well as with equality laws (ensuring, for instance, that algorithmic targeting doesn’t result in prohibited discrimination under the guise of personalization).

A joined-up approach can be achieved by collaboration between the Data Protection Authorities, Consumer Protection Authorities, and Competition Authorities through something like a joint fairness task-force. While this goes beyond textual changes, it is a practical recommendation: regulators should coordinate on developing a unified fairness framework so that, for instance, a dark pattern that violates the DFA might also be tackled as a breach of GDPR’s fairness principle (if personal data is misused) or as a competition issue (if a dominant firm is using manipulative design to entrench its position). The citizen does not care which law is invoked – only that the outcome is fair. So, regulators must ensure their interpretations of “fairness” are interoperable.

Conclusion

In conclusion, while the EU’s initiative to enhance digital fairness deserves strong normative support, and its holistic approach – targeting everything from user interface design to algorithmic practices – is to be appreciated, the primary critique above concerns the absence of a clear definition of “fairness”, a shortcoming that is both addressable and crucial to the Act’s success.

Without a clear definition, fairness can become an all-encompassing but amorphous term. Different stakeholders might invoke fairness to justify conflicting positions, weakening its normative force. Regulators could be accused of arbitrariness if enforcement is based on an implicit, subjective sense of what is unfair. Conversely, a well-defined fairness standard provides legal certainty and focus.

It guides businesses in compliance (by delineating acceptable vs. unfair practices) and equips enforcers with a concrete yardstick. In short, clarifying “fairness” is essential to translate the DFA’s laudable principles into predictable, enforceable obligations. The EU Commission should move to more narrowly define or contextualize fairness within the Act – whether in the recitals or operative provisions – drawing on existing legal definitions and established doctrines, as discussed above.

It would be problematic if “fairness” in the DFA (focused on consumer-facing practices like UI design and marketing) is interpreted in isolation from “fairness” in, say, the DMA’s platform conduct rules or the Data Act’s contract terms rules. The Commission should clarify, perhaps in a recital, that fairness under the DFA is intended to align with the overarching fairness principles in other digital regulations – i.e. promoting equitable, non-exploitative and transparent relationships online, consistent with EU competition, consumer protection, and data laws. Such clarity will prevent legal fragmentation and ensure that businesses and enforcers see fairness as one coherent principle across the digital regulatory spectrum.

If you are curious about the topic and wish to further explore questions surrounding fairness in digital markets and beyond, you are most welcome to send your abstract or register for the upcoming FIDMA 2nd Annual Conference on Fairness in Digital Markets and Artificial Intelligence – Synthetic Imaginaries of Digital Fairness, to be held on 18th March 2026 at the Faculty of Law, Lund University, Sweden.

More information about the conference and call for paper can be found here: https://fidma.org/second-annual-conference-on-fairness-in-digital-markets-and-artificial-intelligence-synthetic-imaginaries-of-digital-fairness-2/

The deadline for abstract submissions in response to the call for papers is set to 15th December 2026, and abstracts should be sent to behrang.kianzad@mau.se
