In a maturing market, buyers are becoming increasingly aware that the umbrella term “intelligent automation” comprises a broad spectrum of functionality, ranging from teachable bots that execute repeatable, scripted tasks to more advanced cognitive tools that apply pattern recognition and language processing to analyse data, make decisions and learn from experience. Proven results to date, coupled with the potential for dramatic breakthroughs in the near future, are generating excitement around the benefits that enterprises can achieve with the technology.
Amid this excitement, however, buyer maturity regarding the risks of smart computers lags behind the curve. Consider robotic process automation (RPA): low cost, ease of implementation and rapid ROI have fuelled dramatic growth, as businesses across a wide range of industries seek to leverage the technology to reduce costs, increase productivity and address critical business challenges. But as I discussed recently here in Outsource, many enterprises implementing RPA are failing to monitor and document the impact of the tools on underlying IT systems and platforms; in the process, they’re exposing themselves to potential disruption (the bad kind) as they move to modernise their environments and drive digital transformation.
Today, businesses are increasingly turning their attention to more sophisticated cognitive applications that include elements of true artificial intelligence (AI) and machine learning. Unlike RPA tools, which are affordable and yield tangible results quickly, cognitive initiatives can be long-term undertakings that require significant funding, as well as patience and faith that the eventual payoff will justify the initial investment.
Hype surrounding cognitive applications and platforms has encouraged some to make that leap of faith and invest in, essentially, a black box and big promises. Amidst recent headlines announcing the failure of long-term AI projects, early adopters now confront a potential scenario in which, in two or three years’ time, they find themselves locked into an inferior solution with little to nothing to show for their investment.
At the same time, backlash against AI hype has sparked an emphasis on “flexibility” and “interoperability,” with providers of newer cognitive platforms bending over backwards to assure customers that their solution won’t lock them into a long-term commitment. The problem is that such an overly cautious approach can lead to a reluctance to commit to specific outcomes and results – and here again, the buyer is left stranded.
An effective cognitive strategy is characterised by an aggressively pragmatic, short-term perspective that cuts through the hype and focuses on limiting the chances of failure while defining specific outcomes to be achieved. For example, start small by using intelligent computers to diagnose and solve network and desktop problems. Focus on micro-automation of discrete processes and functions, where AI provides building blocks that connect to the bigger picture. Rather than expecting a computer to make the big decision, use computers to help people make the big decision in a more informed manner. And leverage agile tools to maintain flexibility and avoid lock-in with a single solution.
The AI market is evolving rapidly, creating exciting possibilities for breakthrough applications. In this environment, “visionary” means defining specific outcomes over the short term and delivering incremental improvements tied to a broader strategy. Committing too soon and investing too heavily in any one solution today risks going down a path that will be difficult to reverse. At the same time, a wait-and-see approach isn’t viable, as it leaves you on the sidelines while the competition implements smart tools and reaps critical benefits.