The future isn’t coming; it’s already here. That’s particularly true of AI, where “superintelligent” machines may still be decades away, but lower-level systems are already here and transforming the world we live in.
Today, although AI might not be quite as intelligent as human beings, it is everywhere, playing a growing role in everything from our politics to our economy. For many organisations leading the AI charge, this new technology is the biggest and most important development in the world. After all, as the environment we live in becomes more digital, AI is here to help us understand and use the onslaught of data that we’re collecting.
The question is: if AI is so powerful, why aren’t we looking more closely at how these systems are being used, and whether the processes around them are legal, ethical, and positive for the growth of society?
AI and Asking the Difficult Questions
The good news is that scholars in the technology space have begun to ask the difficult questions about AI for us. For instance, researchers at New York University’s AI Now Institute have been looking at the social implications of AI. Their report, published in 2017, highlights some of the concerns of welcoming artificial intelligence into our world.
Additionally, a new critique of the technology has been published by a group of 26 experts drawn from six major universities, alongside several think tanks. The title of this new report, “The Malicious Use of Artificial Intelligence”, indicates how important it is for human beings to start thinking about how to prevent people from using intelligent systems for dangerous or nefarious purposes.
While the media is full of stories about the amazing things that AI can do, it’s also worth paying some attention to what misguided or simply “bad” people could do with it.
The Possibilities of Evil AI
The “Malicious Use of Artificial Intelligence” report looks at three areas where we’re most likely to experience problems. For instance, in the world of digital security, AI could be used to automate the tasks involved in cyber-attacks, making hacking easier. There’s also an expectation that AI will be able to exploit human vulnerabilities (for example, through speech synthesis used for impersonation) as well as software vulnerabilities.
Another possible threat exists in the world of physical security, where autonomous weapon systems and drones make the physical world harder to protect. We might even expect new kinds of attacks that subvert physical systems, causing autonomous vehicles to crash, or that deploy systems controlled from a distance.
Finally, there’s the issue of political security and the idea that AI might be used to automate tasks associated with surveillance and deception. As machine learning algorithms become more effective at understanding human moods and behaviours, it’s likely that new kinds of attacks will emerge.
While some of the reports on AI today certainly exist to take advantage of the hype, and even to fuel a rise in scare-mongering, others deserve to be taken seriously, as they look at the reality of what people could do with AI if their intentions aren’t good. Perhaps the best thing we can do to protect ourselves from the dangers of AI is to make sure that the right people are using it.