Some early numbers are in, and it looks like many enterprises are uttering “uh oh” over AI.
“Only one in three AI projects are successful and it takes more than six months to go from concept to production, with a significant portion of them never making it to production,” according to a new CIO/IDG survey of more than 200 IT execs at companies with more than 1,000 employees.
(Projects were labelled “successful” when they produced business benefits such as improved security, better customer experience and new competitive advantages.)
This might be a good time to take a deep breath and remember that artificial intelligence is still taking its first baby steps in the enterprise. Or, more likely, learning to say its first words via chatbots. Once we’ve exhaled, let’s look at the top AI adoption challenges cited in the survey.
Top AI adoption challenges
Data barriers: 96 per cent of respondents reported suffering from data-related pain points, and more than half of the time devoted to AI projects was spent prepping data before piloting could even begin.
Silos: The second biggest challenge was difficulty collaborating 1) between those preparing the data and those building the training models; and 2) between engineers and data scientists with different skill sets and levels of experience.
Too many tools: Surveyed organizations are using an average of seven different AI-related tools and frameworks, “creating a highly complex environment that can slow efficiencies,” in the words of the report’s authors.
Like any toddler, AI is taking a few tumbles before it can walk and eventually run. In a recent webinar, Gartner’s Magnus Revang shared some tips for enterprises struggling to get the three most popular AI deployments off the ground: chatbots, virtual assistants and conversational platforms.
His suggestions won’t make the three aforementioned challenges magically vanish, but they could reduce some friction points in these early days of AI adoption.
10 ways to reduce AI adoption challenges
Start small: “Don’t go all in and create a big, complex thing as the first thing you do. You need to learn, you need to collect data,” said Revang, research director for UX, UI and web and mobile development at Gartner. “You have to test, so start small and do multiple proofs of concept.”
Be specific: Pick a specialized use case for testing, so the machine can learn from the audience of users who will actually interact with it later. If your test use case is too broad, the program won’t learn to provide assistance or answers that are specific or relevant enough to your eventual users, Revang warned.
DIY: Revang recommends doing some of the piloting in-house versus outsourcing all of it. “You don’t want to just outsource it in a black box fashion so (contractors) learn everything — and you don’t learn anything,” he said.
Pick a lane: Choose a test system that is either “informational or transactional, not both,” said Revang. Each stream requires very different data and training, and transactional systems must connect to “all kinds of back end services,” he explained.
Design a personality: Give your chatbot or virtual assistant a personality, not just a voice and a vocabulary. Revang pointed to studies showing users “are more forgiving of something that has a personality than we are of something that doesn’t.”
Take a short-term vendor view: With 800 to 1,000 vendors worldwide in this space right now, some will shift specializations within five years; others will consolidate or disappear. In this morphing landscape, Revang said it’s safer to choose a vendor with only a two-year horizon in mind.
Expect to be involved: Revang disputed the idea that chatbots or virtual assistants can simply learn on their own: “In most cases, it’s a manual process and it’s supervised learning, not learning by itself.”
Don’t scrimp on language support: “With each language, there may be huge quality differences between vendors,” Revang said. That includes slang and regional dialects. For example, several chatbot and virtual assistant vendors support French, but does that include Québécois or just Parisian French?
Implementation is everything: “Implementation is more important than technology,” said Revang. “Keep in mind that technology cannot rescue you. I can build a very good chatbot on simple technology. I can also build a very bad chatbot on great technology.” A big part of the implementation piece means you have to …
… manage the human factor: “I have examples of clients who have done everything right, they’ve deployed this and still failed, because they didn’t think about the human aspect,” Revang said. That includes considering the impact of AI on your human workers, not just your customers/users.
For example, Revang described an increasingly typical scenario: If AI is so effective that it can successfully handle the simplest customer calls, then human call agents will end up getting only the most difficult (complicated, angry or frustrated) calls routed to them.
“You’re going to have quite a few disgruntled (contact centre) employees,” Revang said. “So you have to look at the human impact of the deployment or else you might fail — even though the technology project succeeded.”