
Algorithms in public administration: How do we ensure they serve the common good, not the abuse of power?

The use of algorithmic decision-making is growing faster than you might think

[Image: people crossing the street, conceptual representation of facial recognition. Photo: DedMityay / Shutterstock]

Matthew Jenkins

Research Manager – Knowledge Services, Transparency International

In a victory for digital rights campaigners, members of the European Parliament sounded the alarm this week over the use of artificial intelligence (AI) in criminal matters. According to critics, the proposed use of algorithmic technology to predict future criminality, the roll-out of facial recognition software in public spaces and the authorisation of biometric mass surveillance threaten the enjoyment of myriad fundamental rights and freedoms.

Proposals like these might seem like a step towards the dystopian future portrayed in Steven Spielberg’s 2002 smash-hit Minority Report, where police pre-emptively intercept and detain offenders before they commit crimes, based on the visions of clairvoyants.

But algorithmic systems that assist or even replace humans in decision-making processes are no longer a matter of science fiction. As a growing number of policymakers and administrators come to rely on AI, it is increasingly shaping the way we are governed. We are in fact in the middle of a profound change in how public institutions work and how public goods and services are administered.

The promise and peril of algorithms in public administration

Advocates contend that the use of algorithms in public life can lead to more consistent outcomes by replacing fallible and irrational humans. There are already promising cases of machine-learning algorithms being used to detect suspicious tenders in public procurement processes, including by Transparency International Ukraine. Algorithmic systems are also already regularly used to analyse massive datasets of financial transactions to spot potential money-laundering.
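Tools of this kind typically train machine-learning models on rich procurement and transaction data. The rule-based toy below, with entirely invented tender data and thresholds, only sketches the kind of red flags such screens surface, for instance a single-bidder contract or an award suspiciously close to the published estimate:

```python
# Hypothetical tender data -- real systems work on much richer datasets.
tenders = [
    {"id": "T1", "estimate": 100_000, "award": 99_900, "bidders": 1},
    {"id": "T2", "estimate": 100_000, "award": 87_500, "bidders": 5},
    {"id": "T3", "estimate": 250_000, "award": 249_800, "bidders": 2},
]

def red_flags(tender):
    """Return a list of warning signs for a single tender."""
    flags = []
    if tender["bidders"] == 1:
        flags.append("single bidder")                # no real competition
    if abs(tender["award"] - tender["estimate"]) / tender["estimate"] < 0.01:
        flags.append("award within 1% of estimate")  # possibly leaked estimate
    return flags

flagged = {t["id"]: red_flags(t) for t in tenders}
```

A flagged tender is not proof of corruption; in practice such outputs are handed to human investigators for follow-up.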

Despite these initiatives, there is mounting evidence that algorithmic systems are far from being neutral pieces of technology. Instead, they reflect the (un)conscious preferences, priorities and prejudices of the people that build them. Even when software developers take great care to minimise any influence by their own prejudices, the data used to train an algorithm can be another significant source of bias.

For instance, predictive algorithms used in law enforcement attempt to calculate the likely crime rate in different neighbourhoods on the basis of past arrest data, which may serve to embed historical patterns of racist policing.
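The feedback loop can be made concrete with a deliberately simplified sketch: two hypothetical neighbourhoods have identical underlying crime rates, but one was historically patrolled twice as heavily, so its recorded arrests are inflated. A naive model that allocates future policing in proportion to past arrests then rates it as twice as risky:

```python
# Deterministic toy: recorded arrests scale with how hard you look,
# not with how much crime actually occurs.
TRUE_CRIME_RATE = 0.10                   # identical in both neighbourhoods
patrol_intensity = {"A": 2.0, "B": 1.0}  # historical policing effort

# Arrests observed over, say, 1000 patrol-hours per unit of intensity.
arrests = {n: TRUE_CRIME_RATE * p * 1000 for n, p in patrol_intensity.items()}

# A naive "predictive" model scores risk in proportion to past arrests,
# embedding the historical skew in its future recommendations.
total = sum(arrests.values())
predicted_risk = {n: a / total for n, a in arrests.items()}
# Neighbourhood A now appears twice as risky as B despite equal crime.
```

Sending more patrols to the "riskier" neighbourhood generates yet more arrests there, amplifying the skew on each retraining cycle.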

In anti-corruption work, when trained to sniff out corruption based on historical datasets, an algorithm can be accurate only if authorities impartially sanctioned corrupt practices in the past. In settings with a weak rule of law – where anti-corruption campaigns are primarily used by incumbents to target political opponents – algorithms may simply help governments to crack down on critics more efficiently.

Furthermore, if deliberately designed to favour specific outcomes, algorithmic systems could also assist corrupt causes or actors, while giving the impression of neutral decision-making.

Transparency as antidote?

Being able to ‘see’ how a public institution arrives at a decision is often assumed to allow for greater oversight, as well as to ensure fair and non-discriminatory outcomes. Efforts to ensure transparency in public bodies’ growing reliance on algorithms to take policy decisions, however, run into three causes of opacity.

First, the analytical processes that algorithms rely on to produce their outputs are often too complex and opaque for humans to comprehend. This can make it extremely difficult to detect erroneous outputs, particularly in machine-learning systems that discover patterns in the underlying data without explicit human guidance. With these so-called “black boxes”, it can be nearly impossible to trace how the system produced a given output.

Second, economic factors such as commercial secrecy can inhibit algorithmic transparency. Most commercial software providers refuse to disclose the exact workings of their algorithms or the input data used, arguing that these are protected by intellectual property rights.

This can be very difficult to circumvent, as exemplified by the case of a Californian defendant who was sentenced to life without parole based on the output of DNA analysis software. When an expert witness sought to review the programme’s source code, the software vendor claimed it was a trade secret, an argument the court upheld.

Finally, regulatory and legal challenges such as data privacy legislation may complicate efforts to disclose information, particularly with regard to the training data used, if it is based on sensitive personal data.

Integrity standards

The process of building and deploying algorithmic systems is complex, and the many decisions taken along the way – from the design of a model, to the collection and use of training data, to how it is used and by whom – can undermine a system’s transparency and, by extension, its accountability.

Despite the challenges, it is especially important to open up algorithmic systems to meaningful public scrutiny in order to help avoid unfair outcomes, bias and corruption. Fortunately, a growing number of frameworks now set out core integrity principles for the use of AI in public administration:

  1. People must be alerted to the fact they are interacting with, or subject to, an AI system every time they encounter one.
  2. Those affected by the decision of an AI system should be helped to understand the outcome.
  3. These people should also be able to challenge this outcome.
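The three principles above could be operationalised as a machine-readable record attached to every automated decision. The following is a minimal sketch only, and the system name, field names and contact address are all hypothetical:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One record per algorithmic decision, covering notice,
    explanation and a route to challenge the outcome."""
    system_name: str       # principle 1: the person knows an AI system was used
    model_version: str
    outcome: str
    reasons: list          # principle 2: human-readable explanation
    appeal_contact: str    # principle 3: where to challenge the decision
    issued_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    system_name="BenefitEligibilityScreen",   # hypothetical system
    model_version="2024-05",
    outcome="referred for manual review",
    reasons=["income above threshold", "incomplete documentation"],
    appeal_contact="appeals@agency.example",
)
```

Keeping such records in a structured form is one way an agency could meet a proactive transparency obligation without disclosing model internals.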

A recent report by the Centre for Data Ethics and Innovation in the United Kingdom, for example, concluded that a mandatory, proactive transparency obligation should be put on all public sector organisations using algorithms. Greater transparency could in fact lead to a win-win situation, in which users find and correct erroneous data that affects them and public agencies benefit from more accurate training data.

Another promising approach is to make algorithmic impact assessments mandatory before a system can be adopted. Modelled on environmental impact assessments, legislation being drafted in jurisdictions including Canada, the European Union and the United States would require developers of algorithmic systems to conduct risk assessments that public institutions can consult before deciding whether to acquire and operate a system. Exactly how this will be carried out, however, remains unclear.

Technocratic solutions to political problems?

Ensuring that algorithms serve the common good is a complex matter. Public institutions need to adopt and follow specific protocols during the process of planning, designing, testing, documenting, deploying, auditing and evaluating algorithms.

However, not every problem or administrative process lends itself to automation. Many government processes entail the weighing of political or normative factors which are ill-suited to algorithmic decision-making. To avoid undesirable, opaque and unfair outcomes resulting from the use of algorithms in public administration, careful evaluations are necessary in each case.

Ultimately, public institutions that use algorithms in decision-making processes need to be held to the same standards as institutions in which humans make these decisions.
