Breaking open AI’s black box
By Iskander Smit, Innovation Director at INFO and visiting professor at TU Delft
TNW is always high on my conference wishlist, both to get a sense of what digital entrepreneurs are working on and thinking about, and because it is a great place to catch up with the Dutch digital community. This must have been my 13th time attending, and I intend to keep the streak going in the years to come.
Artificial Intelligence
Just like last year, AI was a hot topic at the conference. Based on what I saw at SXSW earlier this year, I was curious what the main angle at this year’s TNW would be. I was particularly interested in talks about AI, robotics, and machine learning, and the societal impact of these developments. I had to miss James Bridle’s talk on Thursday (you can watch it online), but on Friday there was a special program on Art and Tech that dealt with two important topics within the societal-impact theme: responsibility and transparency.
Who is responsible?
But before that, at the start of the day, I attended a talk by Cassie Kozyrkov, Chief Decision Scientist at Google Cloud, who claimed that AI is just another tool, like so many tools before it, and that the responsibility lies with the user, not with the maker. I don’t agree. I believe that the makers of the tooling carry responsibility as well, especially with something like AI, which – let’s be honest – is a black box to most of its users. Consumers don’t know how an algorithm works exactly; they just know that Spotify always seems to know what they like to hear.
Caroline Sinders, Principal Designer and Founder at Convocation, goes even further, arguing that we need to build technology around human rights: moving from technology-driven design to people-driven design. She also has an example involving Spotify’s algorithm: it gives you personalized playlists and suggestions, but it doesn’t give you a button to change what the algorithm shows you. Essentially, you’re stuck in an auditory prison that Spotify created for you and from which there is no escape.
I believe that we should make algorithms accessible and transparent, so that end users can adjust and personalize them and – more importantly – gain more autonomy and independence from them.
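To make that concrete, here is a minimal sketch of what a user-adjustable recommender could look like. Everything in it is hypothetical – the field names, the weights, the whole interface – and Spotify’s real system exposes nothing of the kind; the point is only that the knobs could, in principle, live on the listener’s side.

```python
# A minimal, hypothetical sketch of a recommender whose weights the user sets.
from dataclasses import dataclass

@dataclass
class UserPreferences:
    """Knobs the listener sets, instead of the service deciding for them."""
    familiarity_weight: float = 0.7   # how much to favor music like past listens
    novelty_weight: float = 0.3       # how much to favor unfamiliar music
    exclude_genres: tuple = ()        # hard constraints the algorithm must respect

def score_track(track: dict, prefs: UserPreferences) -> float:
    """Combine model signals using the *user's* weights, not the platform's."""
    if track["genre"] in prefs.exclude_genres:
        return float("-inf")          # the user's veto always wins
    return (prefs.familiarity_weight * track["similarity_to_history"]
            + prefs.novelty_weight * track["novelty"])

tracks = [
    {"title": "A", "genre": "pop",  "similarity_to_history": 0.9, "novelty": 0.1},
    {"title": "B", "genre": "jazz", "similarity_to_history": 0.2, "novelty": 0.8},
]
prefs = UserPreferences(familiarity_weight=0.2, novelty_weight=0.8)
playlist = sorted(tracks, key=lambda t: score_track(t, prefs), reverse=True)
print([t["title"] for t in playlist])  # with these weights: ['B', 'A']
```

The specifics don’t matter; what matters is that the weights sit on the user’s side of the interface instead of being buried inside the service.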
We have to create transparency
The above makes AI sound quite scary and more than a little high-tech. But we mustn’t forget that AI is basically “just” automated decision-making, as Francesca Bria framed it nicely in the panel on the future of smart cities. Framing it that way demystifies the term: instead of a vague cloud of opportunities, it becomes clear what AI actually does, and what it means when it takes over decision-making.
“AI is basically ‘just’ automated decision making” – Francesca Bria, Chief Technology and Digital Innovation Officer, Barcelona
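To see how demystifying that framing is, consider a deliberately tiny, made-up example: strip away the mystique and an “AI” decision is a function that turns inputs into an outcome. The weights and threshold below are invented purely for illustration.

```python
def approve_discount(purchases_per_month: float, returns_per_month: float) -> bool:
    """An 'AI' loyalty decision: a learned score plus a cutoff, nothing more."""
    score = 0.8 * purchases_per_month - 1.5 * returns_per_month  # invented weights
    return score > 2.0                                           # invented threshold

print(approve_discount(purchases_per_month=4, returns_per_month=0))  # True
print(approve_discount(purchases_per_month=3, returns_per_month=1))  # False
```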
Yes, AI will probably get bigger and bigger in the coming decade, and yes, its implications for our day-to-day lives will grow. But it also has benefits for the end user, creating more personalized experiences. However, this type of automation also has its disadvantages, as @GirlFromBlupo’s well-known tweet about Amazon’s recommendation algorithm shows.
It illustrates that we need to deal with how machines interpret our intentions. The problem is that a machine decides what we like based on our behavior, but we can’t tell the machine how to value our decisions – especially now that computing is becoming part of our everyday lives.
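A hedged sketch of that missing feedback channel: letting users tell the system how much weight an action should carry. Everything here is hypothetical – no real retailer exposes an interface like this today – but it shows how a declared intent could outrank inferred behavior.

```python
from collections import defaultdict

# Weight per user-declared intent: a one-off purchase should barely move
# future recommendations; an ongoing interest should move them a lot.
INTENT_WEIGHTS = {"one_off_need": 0.05, "ongoing_interest": 1.0}

interest_scores = defaultdict(float)  # category -> accumulated interest

def record_purchase(category: str, intent: str) -> None:
    """Update the profile using the intent the *user* declared, not a guess."""
    interest_scores[category] += INTENT_WEIGHTS[intent]

record_purchase("vacuum cleaners", intent="one_off_need")      # necessity, not desire
record_purchase("science fiction", intent="ongoing_interest")  # more of this, please

# Recommendations now rank by stated intent rather than raw behavior:
print(sorted(interest_scores.items(), key=lambda kv: kv[1], reverse=True))
# [('science fiction', 1.0), ('vacuum cleaners', 0.05)]
```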
Living in simulation
This black-box principle – knowing what goes in and what comes out, but not what happens inside – needs to be higher up on the AI agenda, and makers need to think about how we can give (at least some) control back to the user. Otherwise, people will try to manipulate the machine by making choices they wouldn’t normally make just to influence the algorithm, or may even lose faith in AI and robotics altogether.
In the research I’m currently doing at TU Delft, I am looking closely at the tension that appears when we simulate the future behavior of machines to cope with the increasingly complex services we use, and at how that simulation in turn influences our own agency in making decisions. We need to team up with the machines, but with the right balance. Madeline Gannon’s talk in the Art & Tech program was a great example of how we might literally team up with machines. She is researching new forms of communication with industrial robots that (try to) understand our gestures, setting up a dialogue with them. For ABB, she created a couple of installations that display possible futures.
I believe in the power of AI and the benefits that it could have for our lives, but I think we should be aware that if we go too far with automated decision-making and prediction systems we will be working for the machines pretty soon, instead of the other way around. That is why I second the pleas made for an open, accessible system in which people can still enjoy all the benefits of automated decision-making, but maintain autonomy.