This publication was prepared by Kilian Vieth and Joanna Bronowicka from the Centre for Internet and Human Rights at European University Viadrina. It draws on the publication “The Ethics of Algorithms: from radical content to self-driving cars”, with contributions from Zeynep Tufekci, Jillian C. York, Ben Wagner and Frederike Kaltheuner, and on an event on the Ethics of Algorithms held on March 9-10, 2015 in Berlin. The research was supported by the Dutch Ministry of Foreign Affairs. A printable version is available in PDF format.
Ethics of Algorithms – why should we care?
Algorithms shape our world(s)!
Our everyday life is shaped by computers and our computers are shaped by algorithms. Digital computation is constantly changing how we communicate, work, move, and learn. In short, digitally connected computers are changing how we live our lives. This revolution is unlikely to stop any time soon.
Digitalization produces ever-growing datasets known as ‘big data’. So far, research has focused on how ‘big data’ is produced and stored. Now we are beginning to scrutinize how algorithms make sense of this growing amount of data.
Algorithms are the brains of our computers, mobile phones, and the Internet of Things. They are increasingly used to make decisions for us, about us, or with us – often without us realizing it. This raises many questions about the ethical dimension of algorithms.
What is an algorithm?
The term ‘algorithm’ refers to any computer code that carries out a set of instructions. Algorithms are essential to the way computers process data. Theoretically speaking, they are encoded procedures that transform input data into output based on specified calculations. They consist of a series of steps undertaken to solve a particular problem, much like a recipe: an algorithm takes inputs (the ingredients), breaks the task into its constituent parts, carries out those parts one by one, and then produces an output (e.g. a cake). A simple example of an algorithm is “find the largest number in this series of numbers”.
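To make this concrete, here is that last example written out as a minimal Python sketch (the function name and the sample numbers are invented for illustration):

```python
def find_largest(numbers):
    """Return the largest number in a non-empty list."""
    largest = numbers[0]       # start with the first number
    for n in numbers[1:]:      # step through the remaining numbers one by one
        if n > largest:        # compare each number to the current best
            largest = n        # keep the larger of the two
    return largest

print(find_largest([3, 17, 8, 42, 5]))  # prints 42
```

The recipe structure is visible: a defined input, a fixed series of steps, and a single output.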
Why do algorithms raise ethical concerns?
First, let’s have a closer look at some of the critical features of algorithms. What typical functions do they perform? What are their negative impacts on human rights? Here are some examples that probably affect you too.
They keep information away from us
Increasingly, algorithms decide what gets attention and what is ignored, even what gets published at all and what is censored. This is true for all kinds of rankings, from search results to the way your social media newsfeed is composed. In other words, algorithms perform a gate-keeping function.
Example: Hiring algorithms decide if you are invited for an interview
- Algorithms, rather than managers, are increasingly taking part in the hiring (and firing) of employees. Deciding who gets a job and who does not is among the most powerful gate-keeping functions in society.
- Research shows that human managers display many different biases in hiring decisions, for example based on social class, race and gender. Clearly, human hiring systems are far from perfect.
- Nevertheless, we should not simply assume that algorithmic hiring can easily overcome human biases. Algorithms might work more accurately in some areas, but they can also create new, sometimes unintended, problems depending on how they are programmed and what input data is used, as the sketch below illustrates.
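A purely hypothetical sketch of how this can happen: a model trained on past hiring decisions learns whatever bias those decisions contained. All data below is invented for illustration.

```python
# Invented historical record in which managers favoured elite-school
# graduates regardless of actual skill: (attended_elite_school, hired)
past_decisions = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

def hire_score(elite_school):
    """Score an applicant by how often similar past applicants were hired."""
    outcomes = [hired for school, hired in past_decisions if school == elite_school]
    return sum(outcomes) / len(outcomes)

print(hire_score(True))   # ~0.67 -- the model favours elite-school applicants
print(hire_score(False))  # ~0.33 -- and penalizes everyone else
```

The code contains no explicit prejudice; the bias enters entirely through the training data.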
Ethical implications: Algorithms work as gatekeepers that influence how we perceive the world, often without us realizing it. They channel our attention, which implies tremendous power.
They make subjective decisions
Some algorithms deal with questions that do not have a clear ‘yes or no’ answer. They move away from checkbox questions such as “Is this right or wrong?” to more complex judgements: “What is important? Who is the right person for the job? Who is a threat to public safety? Whom should I date?” Quietly, these kinds of subjective decisions, previously made by humans, are being turned over to algorithms.
Example: Predictive policing based on statistical forecasting
- In early 2014, the Chicago Police Department made national headlines in the US for visiting residents who were considered most likely to be involved in violent crime. The selection of these individuals, who were not necessarily under investigation, was guided by a computer-generated “heat list” – an algorithm that seeks to predict future involvement in violent crime.
- A key concern about predictive policing is that such automated systems may create an echo chamber or a self-fulfilling prophecy: heavy policing of a specific area increases the likelihood that crime will be detected there. Since more police means more opportunities to observe residents’ activities, the algorithm might simply confirm its own prediction, as the sketch after this list illustrates.
- Right now, police departments around the globe are testing and implementing predictive policing algorithms, yet safeguards against discriminatory biases are lacking.
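A toy simulation of this feedback loop (all numbers are invented): two areas have exactly the same crime rate, but the area that starts out with more patrols yields more detected crime, which in turn attracts more patrols.

```python
import random
random.seed(1)

TRUE_CRIME_RATE = 0.1                    # identical in both areas
patrols = {"Area A": 10, "Area B": 1}    # Area A starts out heavily policed
detected = {"Area A": 0, "Area B": 0}

for day in range(365):
    # each patrol has the same chance of observing a crime, in either area
    for area, n in patrols.items():
        detected[area] += sum(random.random() < TRUE_CRIME_RATE for _ in range(n))
    # the "predictive" step: allocate tomorrow's patrols where crime was detected
    total = sum(detected.values()) or 1
    patrols = {a: max(1, round(11 * d / total)) for a, d in detected.items()}

print(detected)  # Area A ends up with roughly ten times the detected crime,
                 # although the underlying crime rates are identical
```

The prediction looks accurate on paper, but it is measuring its own effect rather than the world.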
Ethical implications: Predictions made by algorithms come with no guarantee of being right. And officials acting on incorrect predictions may launch unjustified or biased investigations.
We don’t always know how they work
Complexity is huge: many present-day algorithms are very complicated and can be hard for humans to understand, even if their source code is shared with competent observers. What adds to the problem is opacity, that is, the lack of transparency of the code. Algorithms perform complex calculations that follow many potential steps along the way and can process thousands, or even millions, of individual data points. Sometimes not even the programmers can predict how an algorithm will decide a certain case.
Example: Facebook’s newsfeed algorithm is complex and opaque
- Many users are not aware that when they open Facebook, an algorithm puts together the postings, pictures and ads they see. Indeed, it is an algorithm that decides what to show us and what to hold back. Facebook’s newsfeed algorithm filters the content you see, but do you know the principles it uses to hold back information from you?
- In fact, a team of researchers tweaks this algorithm every week, taking thousands upon thousands of metrics into consideration. This is why the effects of the newsfeed algorithm are hard to predict – even for Facebook engineers!
- If we asked Facebook how the algorithm works, they would not tell us. The principles behind the way the newsfeed works (the source code) are in fact a business secret. Without knowing the exact code, nobody outside the company can evaluate how your newsfeed is composed.
Ethical implications: Complex algorithms are often practically incomprehensible to outsiders, but they inevitably have values, biases, and potential discrimination built in.
Core issues – should an algorithm decide your future?
Without the help of algorithms, many present-day applications would be unusable. We need them to cope with the enormous amounts of data we produce every day. Algorithms make our lives easier and more productive, and we certainly don’t want to lose those advantages. But we need to be aware of what they do and how they decide.
They can discriminate against you, just like humans
Computers are often regarded as objective and rational machines. However, algorithms are made by humans and can be just as biased. We need to be critical of the assumption that algorithms can make “better” decisions than human beings. There are racist algorithms and sexist ones. Algorithms are not neutral; rather, they perpetuate the prejudices of their creators. And their creators, such as businesses or governments, may have different goals in mind than their users would.
They must be known to the user
Since algorithms make increasingly important decisions about our lives, users need to be informed about them. Knowledge about automated decision-making in everyday services is still very limited among consumers. Raising awareness should be at the heart of the debate about the ethics of algorithms.
Public policy approaches to regulate algorithms
How do you regulate a black box? We need to open a discussion on how policy-makers are trying to deal with ethical concerns around algorithms. There have been some attempts to provide algorithmic accountability, but we need better data and more in-depth studies.
If algorithms are written and used by corporations, it is government institutions, such as antitrust or consumer protection agencies, that should provide appropriate regulation and oversight. But who regulates the use of algorithms by the government itself? For cases like predictive policing, ethical standards and legal safeguards are needed.
Recently, regulatory approaches to algorithms have centred on transparency, notification, and direct regulation. Yet experience shows that policy-makers face particular dilemmas when it comes to regulating algorithms.
Transparency – make opaque biases visible
When faced with a complex and obscure algorithm, one common reaction is to demand more transparency about what it does and how it works. The concern about black-box algorithms is that they make inherently subjective decisions, which might contain implicit or explicit biases. At the same time, making complex algorithms fully transparent can be extremely challenging:
- It is not enough to merely publish the source code of an algorithm, because machine-learning systems will inevitably make decisions that have not been programmed directly. Complete transparency would require that we are able to explain why any particular outcome was produced (a toy illustration follows after this list).
- Some investigations have reverse-engineered algorithms in order to create greater public awareness about them. That is one way the public can perform a watchdog function.
- Often, there might be good reasons why complex algorithms operate opaquely: public access would make them much more vulnerable to manipulation. If every company knew how Google ranks its search results, each could optimize its behavior and render the ranking algorithm useless.
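A minimal sketch of why published code alone is not enough (all data invented): the decision rule below is fully transparent, yet it produces opposite outcomes depending on the training data behind it, which usually stays private.

```python
def nearest_label(point, training_data):
    """A fully transparent decision rule: copy the label of the
    closest example in the training data (1-nearest-neighbour)."""
    closest = min(training_data, key=lambda example: abs(example[0] - point))
    return closest[1]

# Two hypothetical training sets the public never gets to see:
data_a = [(1, "approve"), (9, "reject")]
data_b = [(1, "reject"), (9, "approve")]

print(nearest_label(2, data_a))  # "approve"
print(nearest_label(2, data_b))  # "reject" -- same code, opposite decision
```

Explaining why a particular outcome was produced therefore requires access to the data and the trained model, not just the code.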
Notification – giving users the right to know
A different form of transparency is to give consumers control over the personal information that feeds into algorithms. Notification includes the right to correct that personal information and to demand that it be excluded from the databases of data vendors. Regaining control over your personal information ensures accountability to users.
Direct regulation – when algorithms become critical infrastructure
In some cases, public regulators have sought ways to intervene in algorithms directly. This is especially relevant for core infrastructure.
- Debate about algorithmic regulation is most advanced in the area of finance. Automated high-speed trading has potentially destabilizing effects on financial markets, so regulators have begun to demand the ability to modify these algorithms.
- The ongoing antitrust investigations into Google’s ‘search neutrality’ revolve around the same question: may regulators require access to, and modification of, the search algorithm in the public interest? This approach rests on the contested assumption that it is possible to predict objectively how a certain algorithm will respond. Yet there is simply no ‘right’ answer to how Google should rank its results. Antitrust agencies in the US and the EU have not yet found a regulatory response to this issue.
In some cases, direct regulation or complete and public transparency might be necessary. However, there is no one-size-fits-all regulatory response. Enabling more scrutiny of algorithms requires new practices from industry and technologists, and more consumer protection and direct regulation should be introduced where appropriate.