
Algorithmic transparency: what happens when the computer says “no”? Vanessa Blackwood
29 November 2017


At Nethui, I was delighted to hear the Minister for Government Digital Services, Hon Clare Curran, raise algorithmic transparency as a concept that needs further exploration. Her statement prompted the following speaker, Jillian C. York of the Electronic Frontier Foundation, to express happy surprise that a government minister would publicly acknowledge the importance of algorithmic transparency.

But what does algorithmic transparency mean, and why is it so important?

Use of algorithms is growing

Algorithmic transparency goes hand in hand with the rise of big data. In living our lives, we create vast amounts of personal information about ourselves. As more and more information becomes available, decisions that were previously made by humans can now be made by an algorithm – a process or set of mathematical computations for processing data and automated decision making.
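To make the definition concrete, here is a deliberately simple toy sketch (not any real agency's or company's system): an "algorithm" in this sense is just a fixed procedure that turns input data about a person into a decision, with no human in the loop. The lending scenario and the 0.4 threshold are invented for illustration.

```python
def loan_decision(income: float, existing_debt: float) -> str:
    """Toy automated decision: approve or decline based on a
    simple debt-to-income rule. The rule and threshold are
    hypothetical, chosen only to illustrate the concept."""
    ratio = existing_debt / income if income > 0 else float("inf")
    return "approve" if ratio < 0.4 else "decline"

print(loan_decision(income=60000, existing_debt=12000))  # approve (ratio 0.2)
print(loan_decision(income=60000, existing_debt=30000))  # decline (ratio 0.5)
```

Even in this trivial case, the decision is only explainable because the rule is visible; real systems may combine hundreds of such rules or learned weights.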

Both the public and private sectors are creating algorithms to process big data to support their work. Algorithms are increasingly used online behind the scenes. For example, when you search a term on Google, an algorithm controls how your results are found and displayed. When you browse Facebook, an algorithm decides which ads are displayed to you.

So far, so simple – the use of big data helps organisations, including government agencies, make informed decisions in targeting a response to the individual, the client, the customer or consumer. It might be used when you access healthcare or other government services, or when you enrol your children in the local school.

But what if an algorithm determines that you’re a higher-risk customer in immigration checks because you accidentally took a banana in your carry-on bag last year, and you got an instant fine for breaching biosecurity rules? Should the Ministry of Education be able to allocate school funding based on algorithmic decisions about which children have family who are beneficiaries, what ethnic or socio-economic bracket they occupy, or the education level of their parents (or all of these factors combined)?

What right do you have to know about how these decisions are made, and what right do you have to know what information is being used to make those decisions?

Algorithmic transparency

Algorithmic transparency or accountability is the principle that we, as the subjects of automated decision making, should be able to understand the basis on which the software makes those decisions about us.

Algorithms aren’t made in an objective world. We might think of computers as being rational and impersonal, but they are written or programmed by people. And people have implicit biases and blind spots. As a result, they may have conscious or unconscious inclinations to shape these algorithms in a particular way.

Serious decisions can be made with little insight into how they were arrived at. For example, courts in the United States have been asked to decide whether sentencing judges should be able to take into account the results of algorithms designed to assess risk and predict the likelihood of someone committing another crime. But because the tool in question is a proprietary commercial system, the details of how the algorithm works, and what information goes into it, are kept secret from both the sentencing judge and the person being sentenced.

The problem with secret algorithms is that studies show they are not only unreliable – only a little more accurate than a coin toss – but also contain significant racial disparities. Without knowing how the algorithm was made, it's impossible to understand where these disparities come from. If government agencies are collecting large data sets in order to create algorithms to help shape their decision making and delivery of services, it's imperative that the information going in is the right information – otherwise what comes out has little to no value and might even be harmful to the individuals affected.
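A hypothetical sketch shows how such disparities can hide inside a secret score. Many risk tools reduce to a weighted sum of input features; if one input (here an invented "postcode_score") correlates with ethnicity or income, bias is baked in even though no protected attribute appears explicitly. The feature names and weights below are made up for illustration, not drawn from any real tool.

```python
# Hypothetical weights - in a proprietary system these would be secret,
# so an affected person could never see why their score is high.
WEIGHTS = {"prior_offences": 2.0, "age_under_25": 1.5, "postcode_score": 3.0}

def risk_score(person: dict) -> float:
    """Toy risk score: a weighted sum of input features."""
    return sum(WEIGHTS[k] * person.get(k, 0) for k in WEIGHTS)

# Two people with identical criminal histories but different postcodes:
person_a = {"prior_offences": 1, "age_under_25": 0, "postcode_score": 0}
person_b = {"prior_offences": 1, "age_under_25": 0, "postcode_score": 1}
print(risk_score(person_a), risk_score(person_b))  # 2.0 5.0
```

Without access to the weights, person_b has no way to discover that their higher score is driven by where they live rather than by anything they did.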

Privacy legislation and algorithmic transparency

In the European Union, the General Data Protection Regulation (GDPR) covers algorithmic accountability to some extent. Article 21 gives people the right to object to processing of their personal data, including profiling. Article 22 gives people the right not to be subject to a decision based solely on automated processing, including profiling. But returning to the US sentencing case, this wouldn't fall foul of the GDPR: the sentencing judge used the risk profile as just one factor, so the decision wasn't based solely on automated processing or profiling.

The New Zealand Privacy Act doesn't cover algorithmic transparency specifically. Instead, the Act's information privacy principles govern how agencies can collect, store, use and dispose of your personal information. They mean you have the right to know what agencies are collecting about you and why; not to have your personal information collected by means that are unlawful, unfair or unreasonably intrusive; to have access to the information held; and to correct it if it's wrong. The agency must take reasonable steps to check the accuracy of that information before using it, and must appropriately store, protect and dispose of your information once it's no longer needed. The Official Information Act also gives you rights of access to the reasons for decisions affecting you, if those decisions are made by government departments or their representatives.

But how can you find out what information an agency holds about you, if the algorithm is kept secret to protect its commercial value? How can you correct it if it’s wrong, or challenge a decision made using a series of algorithms to determine the outcome? These questions show why algorithmic transparency is so important, and why this year’s Nethui attendees were so excited to hear the Minister refer to it.

Meanwhile, the Privacy Commissioner is maintaining a watching brief on this space, and he has discussed algorithmic transparency in various presentations. As organisations depend more and more on big data to make decisions, and as these decisions have the potential to drastically impact our lives, transparency and accountability become increasingly necessary to make sure that when the computer says no, it's saying no for the right reasons.

Image credit: 'Computer says no' mouse mat - Nicholas Pivic via Flickr.


Comments

  • Great blog and a great conversation starter. Closer to home in Australia #notmydebt #robodebt :)

Posted by Phil Green, 06/12/2017 2:01pm
