Do you trust your computer?
Back in 1991 I watched Terminator 2: Judgment Day in a cinema in France. Films are dubbed there, and Arnold Schwarzenegger sounds strange in French – he walks into the biker bar and says “j’ai besoin de tes vêtements, de tes bottes et de ta moto”, which doesn’t quite have the same impact as the robotic Austrian-accented voice saying “I need your clothes, your boots and your motorcycle”. The film opens with a disturbing vision of the future in which machines have taken over Earth and killed most of the human race (“3 billion human lives ended on August 29th, 1997”). Since Alan Turing first raised the question of machine intelligence some 70 years ago, many writers, film makers and journalists have painted a grim, dystopian future in which robots and machines control the world. I always felt it was good entertainment and didn’t really take most of it to heart. Yesterday that changed when I watched a one-hour documentary about artificial intelligence that sent shivers down my spine – it is called “Do You Trust This Computer?”.
The documentary covers everything from robotic surgery and self-driving cars to lethal autonomous weapons and humanoid robots. AI is already used by companies like Google, Microsoft, Facebook, IBM and many more to process, sort and find patterns in amounts of data far too large for humans to make sense of – this is computing power and AI put to good use. The darker side of AI is also addressed: how Cambridge Analytica used AI to manipulate the 2016 US election; Tay, Microsoft’s chatbot that the internet turned into a racist within 24 hours of its release; and the rise of autonomous weapons. There are already 10,000 lethal drones in operation in the US military and the number is growing – many other countries are pursuing the same defence strategy. Unfortunately the film offers no solutions for how to regulate AI, and since most organisations exist to maximise profits, regulation would be a worthy focus for the next documentary.
Here are some final thoughts on this interesting topic from Hope Reese of TechRepublic:
“Ethical concerns abound in the machine learning world as well; one example is a self-driving vehicle adaptation of the trolley problem thought experiment. In short, when a self-driving vehicle is presented with a choice between killing its occupants or a pedestrian, which is the right choice to make? There’s no clear answer with philosophical problems like this one—no matter how the machine is programmed, it has to make a moral judgement about the value of human lives. Along with whether giving learning machines the ability to make moral decisions is correct, there are issues of the other major human cost likely to come with machine learning: Job loss. If the AI revolution is truly the next major shift in the world, there are a lot of jobs that will cease to exist, and it isn’t necessarily the ones you’d think. While many low-skilled jobs are definitely at risk of being eliminated, so are jobs that require a high degree of training but are based on simple concepts like pattern recognition. Radiologists, pathologists, oncologists, and other similar professions are all based on finding and diagnosing irregularities, something that machine learning is particularly suited to do. There’s also the ethical concern of barrier to entry—while machine learning software itself isn’t expensive, only the largest enterprises in the world have the vast stores of data necessary to properly train learning machines to provide reliable results. As time goes on, some experts predict that it’s going to become more difficult for smaller firms to make an impact, making machine learning primarily a game for the largest, wealthiest companies.”