Matthias Spielkamp is the Managing Director and one of the founders of AlgorithmWatch. He is a member of the committee of the Internet Governance Forum Germany (IGF-D) in the “Civil Society” group.

You study “algorithmic decision-making” and the impact that it has. What should come to mind when we think about this?

An example that anyone in Germany would immediately understand is the credit rating check, also known as “credit scoring”. Companies check whether someone is creditworthy, and on this basis decisions are made as to whether that person can be given a loan or a mobile phone contract. This is done using systems that analyse large amounts of data and output a score at the end. That score is then used to make decisions that have a profound impact on the lives of almost everyone.
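
To make concrete what such a system does mechanically, here is a minimal sketch in Python. It is purely illustrative: the feature names, weights and decision threshold are all invented assumptions, not how SCHUFA or any real provider actually scores people.

```python
# Purely illustrative sketch of an automated scoring system: a weighted
# sum of applicant features is clamped to a 0-100 score, and a threshold
# turns the score into a decision. All features, weights and thresholds
# here are invented for illustration.

def credit_score(applicant: dict) -> float:
    """Combine applicant features into a single score between 0 and 100."""
    weights = {
        "years_at_address": 2.0,    # hypothetical stability proxy
        "open_credit_lines": -3.5,  # hypothetical existing obligations
        "missed_payments": -15.0,   # hypothetical past payment behaviour
        "income_thousands": 0.8,    # hypothetical repayment capacity
    }
    raw = 50.0 + sum(w * applicant.get(k, 0) for k, w in weights.items())
    return max(0.0, min(100.0, raw))  # clamp to the 0..100 range

applicant = {"years_at_address": 6, "open_credit_lines": 2,
             "missed_payments": 1, "income_thousands": 45}
score = credit_score(applicant)
# A downstream business rule turns the opaque score into a decision:
decision = "approve" if score >= 60 else "refer to manual review"
print(score, decision)  # 76.0 approve
```

The point of the sketch is how little of this pipeline is visible to the person being scored: only the final decision is, which is why the debate centres on transparency.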

We believe that it is important to debate how this data is used and how these systems should be regulated. Another example is the question of which content should be visible on social media platforms and which should not.

Why are such automated systems a matter of debate for the Internet Governance Forum (IGF)?

They are a typical topic for the IGF because many problems arise from the international nature of the internet. At the EU level and in other political forums concerned with regulating the internet, for example, there is a growing demand for criminally punishable content to be reviewed with great haste and deleted where necessary. Because this means reviewing millions of posts on social networks very quickly, it can only be achieved using automated methods.

This in turn is controversial, because automated systems play a role in determining freedom of speech and opinion. For instance, they would need to be able to differentiate between defamation and parody. This is a debate that has become headline material recently due to the EU’s copyright reform and the massive protests against upload filters. Such questions are currently the subject of the most heated debates related to internet governance.
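
As a toy illustration of why this is so hard, the following sketch implements the kind of exact-match filtering (hash matching against a blocklist of known content) that automated upload filters commonly build on; the blocklist and posts are invented. The context that separates defamation from parody never enters the computation: an exact match is flagged regardless of intent, while any rewording slips through.

```python
# Toy sketch of exact-match content filtering: posts are compared against
# a blocklist of known texts via hashes. Blocklist and posts are invented.
import hashlib

def fingerprint(text: str) -> str:
    """Normalise and hash a text so that identical content matches."""
    return hashlib.sha256(text.lower().strip().encode()).hexdigest()

# One known defamatory statement on the blocklist (invented example).
blocked = {fingerprint("X is a criminal")}

posts = [
    "X is a criminal",                       # verbatim defamation: flagged
    "Imagine believing 'X is a criminal'!",  # quotation/parody: not matched
]

for post in posts:
    flagged = fingerprint(post) in blocked
    print(f"flagged={flagged} | {post}")
# Exact matching misses any rewording, and a fuzzier matcher loose enough
# to catch rewordings would also catch quotation and parody.
```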

The AlgorithmWatch organisation, of which you are one of the founders, aims to take a closer look at such algorithms and understand what role they play. How do you do this?

There are different ways. One of them is your garden variety journalistic investigation. For example, in the Netherlands there is a computer system that aims to find out whether someone is receiving benefits such as welfare payments despite not actually being eligible for them. We have made enquiries about this with the competent authorities and have published the results.

Another way is to conduct research, for which citizens send us certain data that we could otherwise not obtain. We analyse this data to draw conclusions on how a certain system works – for example, how search results are weighted in Google or how SCHUFA’s scoring model works. All this is referred to as “algorithmic accountability reporting”.
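
A minimal sketch of what such an analysis can look like, using invented data: each volunteer donates the list of results they were shown for the same query, and the pairwise overlap between those lists indicates how uniformly the system ranks for everyone. The data format and similarity measure here are assumptions for illustration, not AlgorithmWatch’s actual methodology.

```python
# Sketch of analysing donated data: volunteers submit the result lists
# they saw for the same query; a low average overlap between lists would
# suggest strong personalisation. Data and measure are illustrative only.
from itertools import combinations

def overlap(a: list, b: list) -> float:
    """Jaccard overlap: shared results divided by all distinct results."""
    return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)

# Each entry: the top results one volunteer saw for the same query.
donations = [
    ["site-a.example", "site-b.example", "site-c.example"],
    ["site-a.example", "site-c.example", "site-d.example"],
    ["site-a.example", "site-b.example", "site-c.example"],
]

pairwise = [overlap(x, y) for x, y in combinations(donations, 2)]
print(f"average overlap: {sum(pairwise) / len(pairwise):.2f}")  # 0.67
```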

This insight into algorithms may also run contrary to trade secrets or other interests. How much transparency should there be?

A balance needs to be found between businesses’ right to confidentiality and the public’s right to information. Where this balance lies depends on the specific case. For example, if we were to publish precisely how the SCHUFA scoring method works, this would probably result in a legal dispute. SCHUFA would say that it’s their trade secret. And there are good reasons for secrecy, among them a business’s right to protect its investments.

Complete transparency may also cause a system to stop working – something that becomes apparent in the case of search engine results, for example. If it were known exactly how the Google algorithm sorts its results, it would be easier for people to manipulate them so that their own results appear at the top. But we do believe that many algorithm-based systems currently leave the public in the dark too often.

There are different ways that such automated systems could be subject to oversight – for example, establishing an authority that works much like the official roadworthiness inspections that cars undergo. What do you think of this?

We don’t have an easy answer to a question like this. A comparison of such processes with car safety inspections is both fitting and problematic. An authorised body for car safety inspections is mandated by law and has the ability to enforce certain standards. For example, road safety laws dictate which standards a car must meet to be deemed roadworthy. If you don’t get your car inspected, you’re not licensed to drive it on the roads. If you drive without being licensed, you have no insurance and you can be fined or otherwise penalised. Garages authorised to conduct official car safety checks have the necessary expertise for this, and in this regard it’s a fitting comparison.

However, the analogy falls apart when you consider that it’s hard to imagine a single institution being able to review algorithms in entirely different fields – for example, high-frequency stock market trading, medical diagnosis, automated border controls and self-driving vehicles.

The more likely solution would therefore be a variety of regulatory mechanisms in different industries. This is already the case – for example, Germany’s Federal Institute for Drugs and Medical Devices is responsible for certain medical apps. The financial market, on the other hand, is regulated by another federal institution, the Federal Financial Supervisory Authority.

What do you expect from the IGF – should it be tasked with finding answers to these questions?

At an international level, the more pressing concern is to understand what exactly we need. It’s important to be realistic – it’s practically inconceivable that, let’s say, the UN might found a new organisation to oversee algorithms on the internet, or that international treaties might be created to address this issue. And we need to consider whether it’s what we really want.

The IGF has already been addressing the issue of algorithms over the past two years. It will almost certainly continue to play a role in 2019, for example in connection with artificial intelligence. However, debates to date have remained at a very abstract level. The fact that a debate is taking place at all on the ethical standards governing the use of automated systems is to be welcomed, but tangible results will only come with debate at a more specific level. For example, how do we agree which automated technologies can be used in social networks? Should this decision be left to the companies, or should there be legislative pressure?

It’s good that such discussions are held at the IGF – specifically because there are different opinions on this. As a UN conference, the IGF also has a long-standing and positive tradition of involving people from countries in the Global South who otherwise rarely get to have their voices heard. They are heavily impacted by such developments, for example by credit scoring and identity management. The IGF gives them a rare opportunity to have a say and to represent their interests in the face of the businesses and Global North governments driving these developments. That’s worth a lot.