Before Stephen Hawking left this world, he left us a warning about artificial intelligence. In a 2014 interview, four years before his recent passing, the British physicist sounded a serious alarm bell about AI.
“The development of full artificial intelligence could spell the end of the human race,” he told the BBC via his assistive language device. “It would take off on its own and re-design itself at an ever increasing rate … Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Hawking’s note of caution still resonates today. There were even echoes of it during an AI panel at the DX3 conference in Toronto, held just one week before Hawking’s death.
“There’s a lot of fear,” said panelist Katy Yam, director of marketing and communications at Element AI, a Montreal startup that’s raised more than $135 million in VC funding.
AI apprehension is definitely a real thing. New data from a Gallup poll of nearly 3,000 Americans shows more than one in four expect AI to destroy more jobs than it creates.
While Yam acknowledged those types of concerns at DX3, she also urged audience members to step back from the edge of hysteria and take a calmer look at the situation.
“AI is a tool that can be leveraged. AI is like electricity. If I channel electricity into an electric chair, it can kill me. If I channel it into a stove, it can cook me dinner.” Taking Yam’s advice, let’s step back and wade through some of the hype around AI.
In just the past month, headlines have blared about AI’s ability to help doctors read mammograms, detect heart disease, diagnose blindness-inducing diabetic retinopathy, predict complications in heart failure patients and prevent the global spread of infectious diseases like dengue fever and tuberculosis.
Moving beyond healthcare and into business, 55 per cent of the 3,000 accounting firms surveyed by Sage in eight countries say they plan to use some form of AI to run their companies. According to an Infosys study of 1,000 insurance firms in seven countries, 45 per cent are already using AI.
Lest we forget the legal profession, an AI engine handily beat human lawyers in a test to see if they could spot issues requiring third-party disclosure within legal contracts. The machine attained 94 per cent accuracy and took just 26 seconds to pore over the contracts, besting its human counterparts who were accurate 85 per cent of the time and needed an average of 92 minutes to get the job done.
On the IT front, Cisco’s 2018 Annual Cyber Security Report finds that 74 per cent of enterprise organizations now rely on AI-based technology to secure their network assets.
One of the biggest business applications of AI, thus far, is in marketing. DX3 panelist Anita Chauhan, head of marketing at Toronto-based Zoom.ai, said one of the next frontiers in artificial intelligence is extremely personalized, automated, outbound marketing.
Fellow panelist Yam added that many marketers currently use data analytics to create data-based “rules” that trigger certain types of actions by marketers once the data values reach a target threshold. With AI, “we’re moving away from these rule-based decisions to dynamic decision making” where the intelligent machine will proactively “start to propose decisions to you,” she said.
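To make the distinction Yam draws a little more concrete, here is a minimal, purely illustrative sketch (the function and threshold are hypothetical, not anything described by the panel) of the rule-based approach: a human picks a fixed threshold, and the same action fires every time the data crosses it.

```python
# Hypothetical sketch of a rule-based marketing trigger: a human-chosen
# threshold fires a fixed action once an engagement metric crosses it.

def rule_based_action(engagement_score: float, threshold: float = 0.8) -> str:
    """Static rule: recommend a campaign email once the score crosses the preset threshold."""
    return "send_campaign_email" if engagement_score >= threshold else "wait"

print(rule_based_action(0.9))  # crosses the threshold, so the rule fires
print(rule_based_action(0.4))  # below the threshold, so nothing happens
```

A dynamic, AI-driven system in Yam’s sense would not rely on a hand-set threshold at all; it would learn patterns from the data and proactively propose actions on its own.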
If those are examples of AI being leveraged to cook us a nice, hot dinner, there are also some potentially worrisome aspects of the technology.
Rekindling Hawking’s warning about human extinction, a consortium of AI experts has released a new report. The title alone — “The Malicious Use of Artificial Intelligence” — certainly isn’t going to soothe any panic about AI run amok. The 100-page study argues for industry best practices and government laws to prevent the use of AI for nefarious purposes like terrorism, IT hacking, ‘smart’ warfare and mass-scale political manipulation.
Even in the business world, there have been early indicators that AI alone — without any human oversight or intervention — may not be the brightest idea out there. Chauhan pointed to the fiasco when Microsoft unleashed a Twitter chatbot called Tay with no human mediation or controls. Left unsupervised, the bot ‘learned’ to spout racist, sexist and homophobic tweets after interacting with mischief-making trolls on the social network.
“So ‘garbage in, garbage out’ is still a danger” with unchecked AI, said Chauhan.
Yam acknowledged historical bias is also problematic. If you used purely historical data to predict startup success, some algorithms “would spit out a predictive suggestion that the highest probable startup success [case] would be white, male and upper class,” said Yam. “So don’t just wash your hands and say ‘oh yeah, the AI says that.’ If it doesn’t feel right, look closer at it.”
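Yam’s point about historical bias can be illustrated with a toy example (entirely hypothetical, not code from Element AI or the panel): a naive “model” that simply predicts the most common profile in its skewed historical records will dutifully reproduce whatever bias those records contain.

```python
# Illustrative toy example of "garbage in, garbage out": a naive predictor
# trained on deliberately skewed historical data reproduces the skew.

from collections import Counter

# Hypothetical historical records of funded founders, skewed on purpose
historical_founders = ["white male upper class"] * 8 + ["other profile"] * 2

def naive_predictor(training_data: list[str]) -> str:
    """Predict the 'highest probable success' profile as the majority class in the data."""
    return Counter(training_data).most_common(1)[0][0]

print(naive_predictor(historical_founders))  # echoes the bias baked into the data
```

The point is not that real predictive models are this crude, but that any model fit purely to biased history will echo that history unless a human looks closer, exactly as Yam advises.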
Fortunately, some people are starting to take a closer look at issues of legality, morality, privacy, safety and security involving AI applications. A task force recently created by the European Commission will examine the ethical, economic, social and workplace implications of AI and consider potential guidelines and legislation.
If it makes you feel any better, there’s still time to get ahead of this thing. In an exhaustive report last year, McKinsey researchers revealed AI is still in its infancy.
“AI adoption outside of the tech sector is at an early, often experimental stage. Few firms have deployed it at scale. In our survey of 3,000 AI-aware C-level executives across 10 countries and 14 sectors, only 20 per cent said they currently use any AI-related technology at scale or in a core part of their businesses,” said the McKinsey report.
So relax. The AI apocalypse isn’t happening tomorrow. And we’ve realized the machine is far from perfect. How do we know that?
Because a man who was told at 21 that he had just two years to live, who lost the ability to walk or talk, became an accomplished scientist and bestselling author, inspired an Oscar-winning movie, and changed the way the world thought about space and time and disability.
And no AI program could ever have predicted that.