A human rights-centered approach to digital public infrastructure
PUBLISHED: 24 OCTOBER 2024 | LAST UPDATED: 24 OCTOBER 2024
The concept of digital public infrastructure (DPI) has increasingly taken center stage in debates around digitalization, digital
transformation, and development. In the United Nations and other multilateral spaces, as well as within policymaker and private-sector circles in countries around the world, proponents of DPI are advocating for its swift implementation, hailing these systems as a means to accelerate development, drive inclusion, promote innovation, and advance progress on the Sustainable Development Goals.
The scope and scale of these systems will impact nearly every aspect of people’s lives – which means that where design and implementation are handled poorly, the potential to undermine human rights is severe. These risks include data breaches, crimes like identity theft, corporate or government abuse of personal data, the amplification of surveillance, and systemic rights violations. Poor implementation can also make DPI systems a tool for exclusion, not inclusion. This can have devastating consequences for those who are most vulnerable in our societies.
As an organization committed to defending and extending the digital rights of individuals and communities at risk
around the world, we are prepared to help lawmakers make informed decisions about digital public infrastructure.
In the following discussion paper, we provide guidance on approaches to DPI that center people and communities, outline the harms caused by models founded on overreaching digital identification systems, and offer draft recommendations for policymakers on ways to reduce security risks and protect human rights.
If you would like assistance or have questions or suggestions about our recommendations, please contact us at info@accessnow.org.
Understanding how different stakeholders define DPI
While DPI has emerged in certain policy circles as a flashy new concept in recent years, activists and technologists alike have been
highlighting the importance of reliable, accessible, equitable, and sustainable publicly managed digital infrastructure for decades.
Originally, when the digital rights sector and some tech policy academics talked about DPI, they envisioned community-driven,
decentralized infrastructure aimed at promoting civic engagement in digital spaces. This early vision saw DPI as akin to physical
public infrastructure like roads or water supply systems, which are essential to public life.
Today, however, influential actors leading discussions at the international level have shifted the collective conversation around DPI toward systems primarily focused on government service delivery and financial access, rather than fostering civic life and participation, and on building pathways for private-sector growth, rather than supporting community-led innovation. In defining DPI,
players like the United Nations Development Programme (UNDP) and the Gates Foundation name economic advancement as a
primary objective. Most commonly, the DPI models up for debate assume the need for a foundational layer of digital legal identity,
along with mechanisms for digital payments and streamlined data transfer.
On one hand, by leaving behind the aspects of public infrastructure that allow for the protection of private spaces where civic life
can thrive, like public squares and libraries, and focusing only on the digitization of public infrastructure that can be built with
digital legal identity as its core, the current vision of what DPI is and can be is overly narrow and does not account for all the
mechanisms needed to ensure everyone has the opportunity to fully enjoy their human rights both inside and independent of
digital spaces.
On the other hand, by bundling these systems under the umbrella of DPI, this approach discounts important conversations
around the unique human rights issues raised by each of these components, including digital payments, education, health, and a myriad of other topics. In particular, civil society has for years called out the underlying approach to digital identity systems now deployed under the banner of DPI as fundamentally incompatible with human rights, documenting the harms these systems have caused among vulnerable communities, and raising the standards for human rights protections among implementers and funders
alike. In many ways, the current push for “digital public infrastructure” is a rebranding and expansion of digital identity systems
many actors have already acknowledged as problematic. To protect human rights going forward, it is essential that the foundation of principles for rights-respecting digital ID systems is not cast aside in policy conversations around DPI.
How the current predominant approach to DPI threatens human rights
Under the current prevailing view of DPI, many actors assume that tools and technologies developed in one context can be effectively – and profitably – exported elsewhere. The GovStack initiative, for example, promotes a standardized platform for e-government services, which, while efficient in theory, risks ignoring the specific needs and contexts of individual communities and sidelining human rights concerns. Rather than facilitating collaborative design processes
through which impacted communities can define their needs, and systems can be built to directly address them, both communities and policymakers are limited to working with what is already “on the menu.” This ultimately encourages implementation of
technologies that at best do not meet people’s needs and at worst introduce further harm. It also discourages realistic conversations about which systems the local underlying infrastructure, legal frameworks, public financial and personnel resources, and
other essential factors are able to support and sustain over time. Each community’s needs and the appropriate solutions to meet
them are unique and context-specific, which means that a system will very rarely work “out of the box” in the way many DPI proponents assert.
By conceptualizing DPI as an expansion of existing digital ID systems – many of which are centralized, mandatory, dependent on
biometrics, and lacking in effective safeguards – proponents of these models are more deeply entrenching the many
harms vulnerable communities are already experiencing as a result of digital ID system implementations.
These harms include, but are not limited to:
Exclusion: Failures both in the technologies themselves and in the administration of digital ID and DPI systems have resulted in
many vulnerable individuals being denied access to essential services and benefits, from pension payments to welfare benefits,
food rations, and healthcare subsidies.
Discrimination: System design choices carry a significant risk of amplifying existing forms of discrimination or introducing new
ones, ranging from targeted surveillance, detention, and deportation to denial of services, forced misgendering, and more.
Cybersecurity risks: Centralized databases of sensitive personal data are an attractive target for malicious hackers – whether
criminal enterprises or state-affiliated actors – and it is extremely difficult to provide meaningful remedy to people when their personally identifying information has been compromised. Data breaches and other cybersecurity concerns have been prevalent
even in cases that many point to as examples of “good” DPI implementation, such as Estonia, whose X-Road open-source data exchange platform is now deployed in over 20 countries. In India, recurring cyber attacks have both compromised millions of people’s privacy and hindered people’s ability to access essential services such as healthcare.
Surveillance: Whether by design or as a result of mission creep after implementation, centralized data systems controlled by
entities with little to no accountability for the use of data, or statutorily permitted sharing of data between government agencies
and/or private actors, have proven time and again to be a recipe for abuse. In Venezuela, where every transaction and interaction
with the state leaves a digital footprint, the national digital ID system has been harnessed as a tool for a regime looking to surveil,
exert control, and oppress its critics.
Coercion: When participation in these systems becomes de facto mandatory, individuals lose the ability to opt out, undermining
the principle of consent and autonomy. This is particularly damaging in cases where digital identity systems are integrated into
broader DPI ecosystems, creating comprehensive surveillance networks that compromise individual privacy. Coerced participation can happen as the result of both policy and pressure in the broader ecosystem. In India, many banks won’t allow people to
open or access accounts without an Aadhaar ID, despite the Supreme Court’s unambiguous verdict that mandatory linking of the
Aadhaar system and banking services is unconstitutional. In Jordan, refugees receiving assistance from the World Food Programme (WFP) and the UN Refugee Agency (UNHCR) are unable to access funds or buy groceries without first undergoing iris
scans to confirm their identity. Though participation in the biometric identification system is not explicitly mandatory, when opting out means giving up the resources you and your family need to survive, there is no possibility of meaningful consent.
Moreover, DPI systems often involve a significant role for private companies, blurring the lines between public and private service delivery. Leaving decisions about the design and deployment of DPI in the hands of private companies often means less accountability to impacted communities, and less transparency around choices for implementation, in particular
means less accountability to impacted communities, and less transparency around choices for implementation, in particular
where “technical” decisions have impacts on how people experience the system and who the system leaves behind. Giving companies that stand to benefit from the broadest possible use of their tools an outsized role in framing what is needed and what is
possible can incentivize an extractive and maximalist relationship to people and their data, and generate coercive pressure on individuals where private actors benefit from enrollments and increasing transactions in the system.
With all of these issues as a backdrop, the current model of DPI risks amplifying each of these harms even further by making “interoperability” a central objective. Proponents of these systems posit that the inherent value of DPI is that all components of the system work together seamlessly – which is most often understood to mean that they transfer data across different systems and entities with as little friction as possible. While interoperability, done correctly, can support more efficient service delivery, it also presents significant risks if not properly managed. Without a robust data protection framework in place, a system that significantly increases the flow of personal data among public and private actors increases the risk of surveillance and
other forms of abuse, decreases an individual’s agency over when and how their data is used, and amplifies the impact of failures
in one system across other systems that rely on shared data (as was the case, for example, in Kenya when the government relied
on UNHCR data on the status of refugees to determine eligibility for enrollment in the national ID system).
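As a rough illustration of what a rights-respecting interoperability layer would have to enforce, the sketch below gates every cross-system transfer on a pre-registered purpose and the data subject’s consent. It is a minimal sketch, not a reference implementation: the SharingRequest type, the PURPOSE_ALLOWLIST registry, and the system and attribute names are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical registry: each consumer system may receive only the
# attributes pre-registered for a narrowly defined purpose.
PURPOSE_ALLOWLIST: dict[tuple[str, str], set[str]] = {
    ("health_portal", "appointment_booking"): {"patient_id", "contact_email"},
    ("welfare_agency", "benefit_eligibility"): {"resident_status"},
}

@dataclass
class SharingRequest:
    consumer: str            # system requesting the data
    purpose: str             # declared, pre-registered purpose
    attributes: set[str]     # attributes it asks for
    subject_consented: bool  # data subject's recorded consent for this purpose

def authorize_transfer(req: SharingRequest) -> set[str]:
    """Release only attributes permitted for this consumer/purpose pair,
    and only with the data subject's consent; otherwise release nothing."""
    if not req.subject_consented:
        return set()
    allowed = PURPOSE_ALLOWLIST.get((req.consumer, req.purpose), set())
    # The intersection enforces minimization: never more than both the
    # request and the registered purpose cover.
    return req.attributes & allowed

# A consumer over-asking gets only what its registered purpose allows.
req = SharingRequest("welfare_agency", "benefit_eligibility",
                     {"resident_status", "biometric_template"}, True)
print(authorize_transfer(req))  # {'resident_status'}
```

The point of the intersection on the final line is that “seamless” transfer is exactly what a data protection framework should interrupt: nothing moves unless both the registered purpose and the person’s consent cover it.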
How to center human rights in the design and deployment of DPI
DPI might hold significant potential to advance development and inclusion, but only if it is designed and implemented with a
strong commitment to human rights. Current approaches to DPI risk prioritizing efficiency and economic growth over the protection of individual rights and freedoms.
To create truly public digital infrastructure, it is essential to prioritize the needs and rights of individuals, particularly those most
vulnerable to exclusion and coercion. This requires a shift from viewing DPI as a purely technical solution to understanding it as a
social process that must be governed by principles of equity, inclusion, transparency, and accountability. Digital transformation should never be an end in itself but a means to achieve broader social goals. By placing human rights at the
center of DPI, we can ensure that digital technologies empower individuals and communities rather than entrenching existing
power structures and inequalities.
To put those principles into practice, policymakers should take the following steps when conceptualizing, designing, implementing, maintaining, or reforming digital public infrastructure:
1. Engage directly with impacted communities, civil society,
and other experts to design DPI systems that center the needs
of people most at risk, are narrowly fit for purpose, and ensure respect for and advancement of human rights.
Limit the scope of any DPI applications by applying the principles of necessity and proportionality,
and ask whether the problems policymakers aim to solve using DPI can be addressed through less
intrusive, more human rights-friendly means. Impact assessments and consultations with civil society are critical to identifying potential risks and ensuring that DPI systems do not exacerbate existing
inequities or create new forms of exclusion.
Ensure these impact assessments and community consultations include explicit provisions for evaluating environmental impact.
Refrain from relying solely on pre-packaged systems or choosing technologies without first understanding whether they have a necessary use case. Instead, conduct thorough assessments of any
technological approaches to developing DPI, both to test their responsiveness to the needs of impacted communities and to identify potential human rights impacts before moving forward with implementation.
In particular, ensure that discussions, policy developments, and possible investments in DPI do not
become a blank check for investments in centralized digital identity systems that raise human rights
and cybersecurity concerns. Governments and international institutions must ask #WhyID in any conversation centered on digital public infrastructure. DPI should not become a rebrand
of problematic digital ID proposals.
2. Ensure that DPI is “public,” in that authorities — including
any private partners — are accountable to the people, and
that the design and operation of systems is transparent.
The government must not have a monopoly over the development and implementation of DPI systems. Any
initiative should involve the public sector, civil society, and the private sector, and include a regular
review and consultation process.
Any private entities involved in a DPI application, particularly entities collecting and storing people’s
personal data, or providing technical services that automate decision-making, must also be made
subject to public scrutiny, accountability, and liability, particularly when carrying out activities as an
agent of the state.
Procurement processes and technical specifications should give preference to more open applications and services over closed or proprietary systems to promote sustainability, avoid vendor
lock-in, and encourage more robust security and adherence to trusted standards.
3. Integrate robust human rights safeguards from the design
stage as a prerequisite to deploying DPI at scale. This includes
adopting and adhering to strict standards for privacy, data
protection, and security, and offering effective mechanisms
for accountability and redress.
The framework for deployment of any DPI platform must be in line with the principles of necessity and proportionality, and must include, in the primary legal basis establishing the platform, actionable remedies for breaches of people’s rights and of service providers’ duties.
A separate data protection policy must protect people’s personal data, prohibit its misuse, and require accountability from all entities collecting and using personal data.
Data sharing and access to databases are often sought for purposes of national security or the prevention and investigation of offenses. The grounds for such access must be narrow; access for law enforcement agencies must be limited to specific items rather than general access to the entire database; and independent oversight must be a prerequisite for every request for access (see the sketch following this list).
Decision makers in development agencies and other funding institutions determining whether to
provide financial support to develop and deploy DPI should require these human rights safeguards to
be in place in all cases, and pay careful attention to the ways in which systems involving large-scale
collection and storage of personal data can enable enhanced surveillance. For example, in Myanmar as of October 2024, financial and technical support for DPI would bolster the military’s surveillance infrastructure; in 2023, the United Nations Population Fund refused to fund the national census initiative because of such human rights concerns.
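To make the narrow-access requirement referenced in the data-sharing point above concrete, here is a minimal sketch of a lookup interface that refuses bulk and wildcard queries, requires per-request independent approval, and logs every access. The oversight_approved stand-in, the field names, and the in-memory stores are illustrative assumptions, not any real system’s API.

```python
import datetime

# Illustrative in-memory stand-ins for a real records system.
DATABASE = {"resident-123": {"name": "…", "address": "…", "biometrics": "…"}}
AUDIT_LOG: list[dict] = []

def oversight_approved(request_id: str) -> bool:
    """Stand-in for case-by-case sign-off by an independent oversight body."""
    return request_id.startswith("APPROVED-")

def law_enforcement_lookup(record_id: str, fields: list[str], request_id: str) -> dict:
    """Return only the named fields of one named record, and only for an
    approved request; bulk or wildcard access is rejected outright."""
    if not oversight_approved(request_id):
        raise PermissionError("no independent approval for this request")
    if "*" in fields:
        raise PermissionError("wildcard access to records is not permitted")
    if record_id not in DATABASE:
        raise KeyError("requests must name a specific record")
    # Every access leaves an auditable trace.
    AUDIT_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "request": request_id,
        "record": record_id,
        "fields": fields,
    })
    record = DATABASE[record_id]
    return {f: record[f] for f in fields if f in record}

# One approved, narrowly scoped query; the audit log records it.
print(law_enforcement_lookup("resident-123", ["address"], "APPROVED-0042"))
```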
4. Refrain from making DPI systems legally or practically
mandatory, to ensure that individuals have the freedom to
choose whether or not to participate. This includes providing
real and tangible analog alternatives to digital systems.
In both design and in practice, any DPI system and its individual components (like digital IDs, payment systems, etc.) must be founded on respect for people’s agency and their expression of meaningful consent. For consent to be meaningful, individuals must have real choices and alternatives that
don’t imply the need to sacrifice something or to make a significant additional effort in order to
choose the alternative. However, when digital systems become the default, with no viable analog alternatives, consent is rendered meaningless. In such scenarios, individuals are effectively coerced
into participation, stripping them of their autonomy and control over their personal data.
There must be no indirect attempts to make participation mandatory. For example, governments should not make digital IDs effectively mandatory by gradually increasing the number of welfare programs or other government or private services for which they are required; by mandating that if a digital ID is available, a person must use that ID for several services (so the only choice is no digital ID at all, or digital ID for all services); or by making digital ID verification necessary to access government services online where the alternative is not equally accessible.
When designing a system that requests identification, governments must consider which form of identification is the minimum viable option (whether full legal identification is truly required, or only one or some specific identifying attributes, such as nationality or age). In every case, the minimum viable option for identification should be required, following all the principles of data minimization, as sketched below.
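As a minimal sketch of that data-minimization principle, the example below has the identity provider answer only the question the relying service actually needs – here, an age check – instead of handing over the full record. The ID_RECORD layout and the attest_over_18 function are illustrative assumptions; real systems would use attested credentials with selective disclosure rather than a plain function call.

```python
import datetime

# Hypothetical record held by the identity provider; the relying
# service below never sees it directly.
ID_RECORD = {
    "legal_name": "…",
    "national_id_number": "…",
    "date_of_birth": datetime.date(1990, 5, 1),
    "nationality": "KE",
}

def attest_over_18(record: dict, today: datetime.date) -> bool:
    """Answer only the yes/no question the service needs, rather than
    disclosing the name, ID number, or date of birth behind it."""
    dob = record["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18

# The relying service receives a single boolean attribute.
print(attest_over_18(ID_RECORD, datetime.date.today()))  # True
```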
LEARN MORE
#WhyID: The digital identity toolkit
#WhyID: Access Now responds to World Bank Principles on Identification for Sustainable Development
Busting Big ID’s myths
Venezuela: Digital ID as a tool of oppression
India Stack: public-private roads to data sovereignty
Why we need tailored identity systems for our digital world
Private tech, humanitarian problems: how to ensure digital transformation does no harm
Unpacking Digital Public Infrastructure: navigating conceptual ambiguities
National digital identity programmes: what’s next?
Digital Public Infrastructure and public value: What is ‘public’ about DPI?