How Oversight Bodies Access Information in the Digital Economy—and Where It Falls Short
For a while now, I have been thinking about (and working on) transparency in digital systems from the users' perspective. My earlier work looked at transparency in the Internet of Things, where I found that users have few meaningful technical or legal means to understand where their data goes, who receives it, and what happens to it (see our paper “Rights Out of Sight”). In another study, I worked with users to co-design concrete mechanisms for transparency and control over data in women’s health apps, described in my paper on data transparency and control.
What became clear to me over time, though, is that it is unrealistic to expect individuals to monitor and micromanage their own information all the time. We use too many applications. The underlying infrastructures are too interconnected. The data flows are too opaque. And the convenience of digital services outweighs the effort needed to manage them. It would simply be a burden for users.
So I turned to the institutions that are supposed to protect us: oversight bodies. But that raised a rather basic question for me: do oversight bodies themselves actually have meaningful visibility into what organisations are doing in the digital economy?
New laws and governance frameworks keep arriving, covering data, AI, platforms, and connected devices. But, as I write in my paper:
“Oversight depends on access to relevant information. While more information does not automatically lead to more effective enforcement, meaningful oversight is difficult to achieve without access to the information needed.
Indeed, when oversight bodies lack information, their ability to detect harm, monitor compliance, and ultimately enforce rules is necessarily constrained. In such cases, regulatory frameworks may risk losing practical effectiveness, as there are limited means to assess whether regulated actors are in fact meeting their obligations and respecting rights.”
That is the starting point for my new paper, which has just been accepted to ACM FAccT 2026. I am still preparing the final version, but I can already share some insights.
The study is based on interviews with 21 senior professionals from 19 oversight bodies across the EU, EU member states, and the UK, including regulators, consumer organisations, digital rights NGOs, certification and audit bodies, and policy or standards actors. The paper deliberately takes a broader view of oversight than regulators alone, and it is explicitly cross-technology rather than narrowly AI-focused. In practice, AI constitutes a network of data, cloud infrastructure, platforms, and connected devices, so real-world oversight problems rarely emerge from AI alone (whatever that may mean).
What mechanisms do oversight bodies actually use to access information?
Based on the interviews, I identified and clustered the various mechanisms oversight bodies use for accessing and collecting information. Oversight bodies first need to detect signals, then decide which of those signals matter, then investigate, and finally sometimes feed information back out as guidance. See image below.
Phase 1: market monitoring
Signals come from three places. Some are generated by oversight bodies themselves, through horizon scanning, audits, surveys, scraping, device testing, and other technical monitoring. Those practices exist, but they are still relatively uncommon. Much more often, oversight begins because somebody else noticed something first: journalists, academic researchers, NGOs, complainants, or other authorities. The media, in particular, seems to play a surprisingly strong agenda-setting role. Oversight bodies also receive signals from companies themselves, especially through breach notifications, incident reporting, or when firms proactively seek guidance.
Phase 2: prioritisation
Not every signal becomes a case. Oversight bodies usually decide whether to investigate based on risk, which they assess in terms of scale and likely impact. They look for systemic patterns rather than one-off anomalies, and they often prioritise issues affecting many people, vulnerable groups, or sensitive data. This sounds reasonable, but it also means a large number of cases may never receive scrutiny at all. One participant noted that they engage with only about 30% of reported breaches.
Phase 3: case investigation
Here the findings are both unsurprising and a bit depressing. Investigations are still heavily document-led. Oversight bodies often begin with policies, self-assessments, contracts, impact assessments, certification files, and internal reports provided by the organisation under scrutiny. Interviews, on-site inspections, system forensics, and lab-based technical testing do happen, but they are resource-intensive and far less common. In other words, oversight bodies largely depend on what companies say they are doing, and investigations to validate whether this is true remain uncommon.
Guidance
Information flows both ways: from the market to oversight bodies, and from oversight bodies back to the market. Oversight bodies produce advice, warnings, educational materials, and policy feedback. Sometimes this is a dialogue with firms. Sometimes it is public-facing rights education. Sometimes it is oversight bodies informing one another, or informing policymakers that something harmful is happening even if current law does not yet cleanly address it.
What challenges get in the way?
This is where things start to become concerning. The paper identifies six recurring challenges. Together, they suggest that oversight bodies often lack the visibility needed for robust oversight in the digital economy.
Paper versus practice. Oversight relies heavily on documentation produced by the very organisations being overseen. That creates a structural asymmetry. Documents may be formally complete while still being strategically framed, selective, or detached from what actually happens in practice.
Fragmented oversight in a cross-cutting landscape. Real incidents cut across data protection, AI governance, platform regulation, consumer law, product safety, and cybersecurity, but institutions are still organised in narrower silos. This makes it easy for real-world harm to fall between mandates.
Skills, culture, and capacity constraints. Technology moves quickly; oversight institutions generally do not. Limited staff, limited time, limited technical expertise, and procedural drag all narrow what can be investigated.
Poor visibility into supply chains and cross-border flows. Data, models, software components, and infrastructure move across organisations and jurisdictions in ways that are difficult to trace. Oversight bodies often struggle to see how systems are assembled across supply chains, or how data flows between actors across borders.
Information overload and low interpretability. Even when organisations disclose information, they may disclose too much, too opaquely, or without the context needed to interpret it. Oversight bodies described receiving hundreds of pages of documentation that are technically compliant but difficult to analyse or act upon.
Insufficient engagement with affected communities. Citizens are often expected to file complaints, but are rarely meaningfully integrated into oversight. Many harms are invisible to the people experiencing them, especially in data-driven or discriminatory systems. If people do not know they are being affected, they cannot report the problem, meaning oversight bodies may never receive the signal in the first place.
I found it useful to think of these not as isolated problems but as one recurring condition: persistent information asymmetry.
Company tactics in response to oversight
This was one of the most fascinating parts of the study. Several interviewees described ways in which organisations appear to manage oversight strategically rather than simply comply with it.
Sometimes this is about framing. One regulator said firms tend to present “the good side” of their practices and risk telling “their story rather than... the facts.” Another participant noted that organisations may provide only “70% of the truth.” This can result in semantic discussions. A consumer organisation described disputes where firms effectively say: “you call that ‘selling’ data, but we call it something else”.
Sometimes it is about doing the bare minimum. In a discussion of self-assessment regimes, one participant pointed to commercial pressure to move quickly (i.e. put their AI products on the market as soon as possible) and said companies may choose to “do the minimum in the self-assessment instead of doing the maximum.”
Sometimes it is temporal. One interviewee said that “the second they get the whiff of what we’re doing, they do subtle changes that change the facts on the ground.” Another described procedural hurdles used to “slow things down,” while another noted that big companies had understood early that authorities have limited resources and can simply drag cases out.
It all links to power asymmetries. As one participant put it, “Those companies have huge muscles.” That matters because when oversight depends on what firms choose to reveal, and when firms are better resourced than the institutions scrutinising them, delay and partial visibility become strategic assets.
I do not think that every firm is acting in bad faith all the time. But the current arrangement gives organisations, including large technology firms, considerable room to manoeuvre, while oversight bodies often have limited visibility and control.
What should we do instead?
Before getting into possible paths forward, I should say that I came away from these interviews impressed by the people doing this work. I spoke with very senior, top-level professionals across different oversight bodies. Despite high workloads, limited resources, and the constraints described above, they showed remarkable creativity: many are actively experimenting with new approaches and continuously looking for ways to improve how oversight works in practice. Across the interviews, a number of recurring themes emerged as potential ways forward.
Stronger verification beyond paperwork
If oversight bodies rely mainly on documents written by the companies they oversee, they remain dependent on curated accounts. The study points toward stronger verification methods, such as scraping, technical audits, system testing, lab evaluations, and data-driven surveys. These approaches help check whether what is written on paper actually matches what happens in practice. Some concrete practices that could support this shift include:
developing shared monitoring infrastructures (e.g. scraping tools, device testing labs, measurement platforms)
expanding technical audit powers and access to systems
These approaches aim to reduce dependence on company-provided documentation and enable oversight bodies to observe how systems actually operate.
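To make the "paper versus practice" gap concrete, here is a minimal sketch of one verification idea mentioned above: comparing what an organisation declares on paper against what technical monitoring observes. All names and inputs are hypothetical; a real pipeline would parse policies and network-traffic captures automatically rather than use hand-written sets.

```python
# Hypothetical sketch: flag data recipients observed in captured network
# traffic that do not appear in the organisation's own documentation.
# The domains below are invented examples, not real findings.

def undeclared_recipients(declared: set[str], observed: set[str]) -> set[str]:
    """Return observed endpoints that were not declared in documentation."""
    return observed - declared

declared = {"analytics.example.com", "cdn.example.com"}
observed = {"analytics.example.com", "adtracker.example.net", "cdn.example.com"}

flagged = undeclared_recipients(declared, observed)
print(sorted(flagged))  # → ['adtracker.example.net']
```

Even a simple discrepancy check like this shifts the starting point of an investigation from "what the company wrote" to "what the system was seen doing", which is the core of document-independent verification.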
Better coordination across the oversight ecosystem
Since digital harms are cross-technology and cross-jurisdictional, oversight cannot remain neatly partitioned either. Regulators, auditors, certification bodies, NGOs, consumer organisations, journalists, and researchers all see different parts of the ecosystem. The challenge is to build ways of sharing signals, expertise, and priorities without creating excessive coordination overhead. Participants described several practices already emerging in this direction:
cross-regulator cooperation and joint task forces across regulatory mandates
structured information-sharing networks between oversight actors
collaboration with journalists, researchers, and civil society organisations who often surface early signals of harm
Rather than trying to centralise oversight into a single authority, the aim is to strengthen the networked ecosystem of oversight so that signals about emerging harms can circulate more effectively.
Stronger participatory and intermediary-based oversight
Individuals cannot realistically monitor digital infrastructures on their own, but that does not mean they should disappear from the picture. The more plausible route is through intermediaries: consumer groups, NGOs, human-rights organisations, collective complaints, and community-based monitoring. These can act as connective tissue between lived experience and formal oversight.
Interviewees highlighted several practices that could strengthen this intermediary role:
enabling collective complaint mechanisms and representative actions
creating structured channels for community reporting
developing participatory oversight processes that integrate user perspectives into investigations
These approaches recognise that citizens may not be able to monitor systems individually, but collective and intermediary forms of oversight can still surface harms that institutions would otherwise miss.
Final note
The issue is not merely that users lack transparency. It is that even the bodies tasked with protecting users often lack the visibility, interpretive capacity, and institutional position needed to see clearly into the systems they are meant to oversee. If that is true, then the question is not just whether organisations comply with the law. It is whether our oversight arrangements are currently capable of knowing when they do not.
That seems like a useful question for the next phase of digital governance.
Source paper: to be published at ACM FAccT 2026