Engineered Fragility: Confronting Digital Exploitation for a Safer Internet
Safer Internet Day is a moment for policymakers, organisations, and citizens to reflect on building a more secure digital world. Yet this day also prompts a vital question: in an environment that increasingly defines and frames our identities and relationships, how safe is the landscape we have built, particularly for our mental health? And does this digital space offer the same protection and accountability we demand in the physical world, especially for those in the most vulnerable situations?
Recent news suggests that these concerns are no longer theoretical. In February 2026, the European Commission’s preliminary findings under the Digital Services Act concluded that TikTok’s design may breach EU rules due to features such as infinite scroll, autoplay, and highly personalised recommendations. The investigation found that these mechanisms can encourage compulsive use and place users in “autopilot mode”, and that the platform had not adequately assessed or mitigated the risks to the physical and mental wellbeing of users, particularly minors and vulnerable adults.
In today’s digital environment, our attention is not just a bargaining chip; it is the product. Every swipe, scroll, like, pause, and click is captured, analysed, shaped, and monetised. Tech platforms compete relentlessly for human attention, optimising their designs not to empower us but to keep us engaged, triggered, and returning for more. This business model has profound implications for mental health and for democratic resilience. Yet regulation still struggles to catch up.
The EU’s proposed Digital Fairness Act (DFA) represents an opportunity to address this imbalance. Too often, digital regulation has framed consumer protection in mainly economic terms: preventing financial loss or unfair pricing. But as mental health organisations and researchers have repeatedly shown, digital unfairness goes far beyond the wallet and questions of market competitiveness. It affects stress levels, autonomy, self-esteem, wellbeing, and our fundamental rights.
To truly make the internet safer, especially for young people and those in vulnerable situations, we must recognise the psychological reality of the digital economy: users’ attention and emotions are systematically steered for profit.
The hidden cost of the attention economy
Mental Health Europe’s recent work on digitalisation highlights both opportunities and risks in a rapidly evolving technological landscape. Social media and digital platforms can foster connection, peer support, and access to information and mental health tools, as well as provide an affordable entry point to services. Yet research also confirms harmful effects from hyper-connected environments, persuasive design, algorithmic amplification of distressing content, constant profiling, and targeted advertising.
Digital environments are deliberately engineered to exploit behavioural psychology: variable rewards, auto-play, infinite scrolling, frictionless nudges, and dopamine-driven feedback loops. The result: addictive use patterns, fatigue, anxiety, decision paralysis, disrupted sleep, poor body image, and increased loneliness.
Many groups in vulnerable situations, including young people, individuals experiencing mental health distress, and socio-economically disadvantaged communities, carry the greatest burden. Those least able to opt out are the most exposed to manipulative design and targeted advertising practices that feed on insecurity and distress. In a physical environment, such risks would raise immediate concerns about safety standards. Online, they remain embedded in the system itself. This is not simply a design flaw; it is the core business model.
From consumer protection to digital mental health protection
The DFA, alongside the Digital Services Act, the Digital Markets Act, the AI Act, and the European Health Data Space, shows that Europe recognises the need to protect autonomy and dignity in digital spaces. But if Safer Internet Day is to be more than a symbolic commitment, the DFA must go further and address the psychosocial harms of digital manipulation.
We need a shift similar to the evolution in public health thinking on tobacco or gambling. In those fields, the focus moved away from individual responsibility alone and towards regulating the products, environments, and business practices that drive harmful behaviours, through restrictions on marketing, design, availability, and industry incentives. The question is no longer whether these systems affect mental health; the evidence is clear. The question is how we regulate platforms whose profitability depends on behaviours that undermine wellbeing.
In its contribution to the public consultation on the Digital Fairness Act, Mental Health Europe outlined a vision of what a safer and fairer digital space could look like from a mental health perspective. This vision recognises that digital fairness should encompass digital wellbeing, acknowledging that mental health harms can be as detrimental as financial and economic harms. A safer digital ecosystem would limit manipulative and addictive design practices, while ensuring transparency and accountability of the algorithmic recommender systems that shape what people see and experience online. It would also prioritise strong privacy and data minimisation to reduce profiling-based exploitation, and introduce meaningful user choice and protective friction that supports autonomy rather than undermines it.

At the same time, a healthier digital environment requires investment in accessible digital literacy and wellbeing strategies, alongside the active involvement of people with lived experience of mental health challenges in the design, evaluation, and governance of digital policies and services. Together, these elements reflect a broader ambition: a digital space designed not only to be efficient and innovative, but to actively support human dignity and mental wellbeing.
Beyond technology: towards a truly safer digital future
To protect mental health in the digital age, we must place technology within a human-rights and psychosocial framework, where wellbeing, dignity, participation, and autonomy guide design and governance. Technology must serve users, not the other way around.
The EU has shown global leadership in data protection and online safety. Recent enforcement actions under the DSA, including the European Commission’s investigation into addictive design risks on TikTok, show a growing recognition of the mental health implications of platform design. With the Digital Fairness Act, the EU now has a chance to lead again, protecting not only consumers’ wallets but their wellbeing and dignity.
Digitalisation is not neutral. It can deepen inequalities or reduce them. It can promote autonomy or erode it. It can empower individuals and communities or concentrate power in the hands of a few.
If Europe wants to build a digital future that promotes thriving lives, democratic resilience, and fair participation, Safer Internet Day should remind us of what is at stake: a digital environment that is not only innovative, but genuinely safe, for everyone.