Artificially limited: liability for machine intelligence

Posted: 02/03/2016 - 22:53

Anyone with even a passing interest in technology will be well used to the claim that a particular IT system – some combination of hardware and software – is “clever technology”. Similarly, synonyms for intelligence have been appended as a prefix or suffix by every IT vendor’s marketing department to add an extra sparkle to their latest technology. As consumers we are living in a world of smartphones, smartwatches, smart TVs and smart kitchen appliances, and the enterprise customer is no less able to buy any number of “intelligent”, “smart” or “cognitive” systems. In the vast majority of cases, there is nothing that truly distinguishes these systems as artificially intelligent. Their behaviours, however complex and however cunningly planned out, are fixed and pre-determined by their programming.

However, true artificial intelligence – systems that can learn and change over time – is now being used in a broad array of business and consumer contexts. Such systems are being given actual or de facto decision-making power, and are running more and more aspects of our lives. IBM’s Watson technology has gone from beating the best human champions on the US game show ‘Jeopardy’, to providing intelligent medical diagnoses, and is now being used to underpin decisions about fraud and creditworthiness in the financial services sector, amongst myriad other uses.

In the consumer sphere, machine learning systems at retailers like Amazon, or at media providers like Netflix, are delivering deeper analysis of customer behaviour and more personalised recommendations to increase sales or engagement. The same systems are now being used in business-to-business contexts to intelligently analyse requests for proposal and identify opportunities for upselling, spotting hidden patterns that even the canniest human salesperson couldn’t.

The systems delivering these astonishing results tend to be based on some form of ‘deep learning’: multiple layers of trainable, self-adjusting pattern-matching software, where feedback from previous ‘correct’ decisions informs confidence in future decisions. In effect, these systems become more precise as their experience increases, and in many fields they already exceed the capabilities of humans.
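By way of illustration only – this is a toy sketch of the feedback principle just described, not a real deep learning system or any vendor’s product – the core idea is that confidence in a learned decision rule rises with each confirmed ‘correct’ outcome and falls with each correction:

```python
class DecisionRule:
    """Tracks confidence in one learned pattern via simple feedback counts."""

    def __init__(self, name: str):
        self.name = name
        self.correct = 0
        self.total = 0

    def record_feedback(self, was_correct: bool) -> None:
        self.total += 1
        if was_correct:
            self.correct += 1

    @property
    def confidence(self) -> float:
        # Laplace smoothing: a brand-new rule starts at 0.5, not 0 or 1
        return (self.correct + 1) / (self.total + 2)


rule = DecisionRule("approve_low_value_claim")  # hypothetical task name
for outcome in [True, True, True, False, True]:
    rule.record_feedback(outcome)
print(f"{rule.name}: confidence {rule.confidence:.2f}")  # prints 0.71
```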

Nothing personal – it’s business

Many modern business processes are managed by human workers sat at computer terminals, operating specialist computer software that facilitates the process – finance, accounts payable, claims handling, customer orders and so on. This is equally true whether the task is done in-house or outsourced to a business process outsourcing (BPO) vendor. Consider the potential of a deep-learning-based AI system to take over these typically human-at-a-computer business processes…

Rather than building new ‘middleware’ software modules to link the AI directly into the specialist computer software, small hardware boxes already exist that act as a pass-through for the human operator’s monitor screen feeds, and their mouse and keyboard inputs. These boxes allow the AI to see what is on the operator’s screen, and what they click on and type when doing their job. During an initial training phase, the human operators carry on doing their job as normal, with the AI simply watching the task and trying to find patterns in what it is seeing.
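In rough terms – and this is an assumed shape for the data, not a description of any particular product – what such a pass-through box collects during the training phase might look something like this:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One moment of an operator's work, as seen by the pass-through box."""
    timestamp: float     # when the frame was captured
    screen_png: bytes    # snapshot of the monitor feed
    input_event: str     # e.g. "click(412, 118)" or "type('approve')"

@dataclass
class TrainingLog:
    """Accumulated while the operators simply carry on working as normal."""
    observations: list[Observation] = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        self.observations.append(obs)
```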

After a short time, the machine will have spotted patterns and been trained to a point where it is reporting a high probability score for completing the simplest, most oft-repeated tasks. For those tasks, the machine will then match or exceed the accuracy of the human workers. If the job is one where those tasks can be diverted to the machine, it can carry on doing them itself, leaving a smaller number of human operators for the more complex tasks.
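A minimal sketch of that diversion logic – the threshold value and task names here are illustrative assumptions, not taken from any real deployment – might be:

```python
# Divert a task type to the machine only when the AI's self-reported
# confidence clears a threshold; everything else stays with the humans.
CONFIDENCE_THRESHOLD = 0.95  # an assumed, service-specific tuning choice

def route_task(task_type: str, ai_confidence: dict[str, float]) -> str:
    score = ai_confidence.get(task_type, 0.0)  # unseen task types score zero
    return "machine" if score >= CONFIDENCE_THRESHOLD else "human"

ai_confidence = {
    "address_change": 0.99,   # simple, oft-repeated: handled by the AI
    "simple_claim": 0.97,
    "disputed_claim": 0.60,   # complex: stays with the human operators
}
for task in ["address_change", "simple_claim", "disputed_claim", "novel_request"]:
    print(f"{task} -> {route_task(task, ai_confidence)}")
```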

Since the AI is still watching the actions of the human operators, it will become trained in even the more complex tasks over time, allowing more and more types of task to be turned over to the machine – inevitably reducing the number of people needed, until a small core of the most skilled trouble-shooters, handling the edge cases, is all that is required.

To anyone not closely following developments in this area, the above might sound like science fiction – but these types of business process ‘robots’ are already a product offering from a number of the larger BPO service providers.

Limiting liability

So far, contracts for major business processing engagements tend to assume that the work will continue to be done in essentially the same way as before. If the work involves humans sat at computer terminals, then it will continue to do so even once outsourced to a BPO vendor. Both the customer and the service provider are used to the idea that the service provider will be responsible for the actions or mistakes of its employees, and that if anything does go dramatically wrong with the way a particular task is performed, investigation is likely to find a human at fault.

Arguments about liability clauses accept this as their starting reality – and the financial caps and the lists of exclusions are negotiated on this basis. In this article we will consider a ‘typical’ three-pronged limitation of liability clause structure:

  • First, a list of liabilities which will be uncapped, and to which the exclusions will not apply. Typically these might include liability for matters such as death and personal injury caused by negligence, for fraud, and for claims under intellectual property indemnities and breaches of confidentiality.
  • Second, a financial cap that applies to all other claims, which tends to be expressed as some multiple of the annual charges – somewhere between 125% and 200% of annual charges being common market practice (a short worked example follows this list).
  • Finally, a list of exclusions, setting out losses that cannot be claimed at all. The list will almost always specify that indirect or so-called ‘consequential’ losses may not be claimed, but service providers might want to add exclusions for claims relating to matters such as loss of profits, loss of revenue, loss of savings, loss of data, etc.
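To make the second limb concrete – the annual charges figure below is purely hypothetical – the arithmetic of the cap is straightforward:

```python
# Purely illustrative arithmetic for the financial cap under the second
# limb, using the common 125%-200% range noted above.
annual_charges = 10_000_000  # assumed annual charges, in GBP

for multiple in (1.25, 1.50, 2.00):
    cap = annual_charges * multiple
    print(f"{multiple:.0%} of annual charges -> cap of £{cap:,.0f}")
```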

At present, the heads of uncapped loss are negotiated assuming failure modes we have seen in other contracts where the work is done by humans. However, if a substantial portion of the work is to be undertaken by an artificial intelligence, the most likely failure modes will be different, and the traditional liability positions take on a new significance.

Where an AI is undertaking an increasing share of the work, with humans checking only a small portion of its output, errors might accumulate more rapidly and be caught less frequently. Similarly, whilst a machine might generally be faster than a human workforce, and work 24 hours out of 24 instead of 8-hour shifts, the resilience of the machine needs to be considered. If it goes down, that is the equivalent of all your human workers not turning up: no work gets done. This makes low-level failure – the type that a service level and service credit regime in a contract might be designed to avoid – less likely, and catastrophic failure a bigger issue. The only service level likely to matter is system availability, as accuracy and performance are almost a given.

In addition, much depends on the nature of the system and its ability to back up its ‘experience’ in the form of the stored patterns it uses to process the work. If those patterns are lost after the human workforce that was previously doing the work has moved on, the customer’s ability to undertake the work – or even meaningfully recreate an AI system to undertake it – is badly compromised. The literal loss of corporate memory would be acute.

The net result is that failures are likely to be a rarer species, but almost always more severe. The potential for lower-value claims from the customer against the service provider is reduced, but the customer will remain very nervous about a major outage and even more concerned about the loss of those precious experience patterns that represent the AI itself.

As a result, from the customer’s perspective it would make sense for the list of uncapped losses to include any loss of the core AI, as the business impact of that would be massive. Similarly, customers will probably see the lower end of the current market-standard financial caps as insufficient if a truly catastrophic failure occurs. Last, but by no means least, any suggestion that ‘loss of data’ or similar phrasing should be listed in the excluded heads of loss is obviously unacceptable – when the super-worker undertaking the services is entirely composed of data, its loss is exactly what would precipitate the largest claims of all.

From a service provider’s perspective, this isn’t necessarily all give and no take. Lower-risk service provision, ever-cheaper IT hardware and the ability to locate AI systems in politically stable territories with highly resilient network connections will mean that BPO provision becomes lower-risk overall – so those higher caps can be covered by cheaper insurance. Not only that, but the lower exposure to people costs reduces the risk of data breaches, confidentiality leaks and the loss of key personnel from the account. The service provider could also argue that the service levels need not be so extensively specified, and that service credits should be given a lesser weighting, to reflect the lower day-to-day operational risk associated with AI-based service provision.

AI as professional: reliance upon ‘advice’

One other interesting area to consider is the interplay between AI systems as ‘expert’ providers and typical contractual liability positions. Especially where forward-looking advice is being given – in relation to matters such as the financial markets, insurance products and the like – service providers generally seek to exclude liability for the consequences of following the advice.

The introduction of AI into these fields allows intelligent agents to monitor and actively manage portfolios at all times. The situation then becomes one not simply of recommending or advising on a course of action, but of taking it on the customer’s behalf.

Similarly, if a diagnostic system recommends a particular treatment as likely to provide the best outcomes for that patient, or a litigation strategy system suggests that a settlement offer should be rejected, will the doctor (or hospital) or lawyer (or law firm) running those systems be as willing to stand over those decisions as they would if another human had made the recommendation to them?

Since deep learning systems that have been properly trained become, by their nature, ‘expert’ to a degree that exceeds human capacity in their specific domain, if anything those operating these systems should be more comfortable backing those decisions – but no doubt that question will ultimately be decided by how easy it is to secure insurance that covers AI as well as human employees.

Cloudy forecast

Whilst (for the time being) AI for enterprise services is encountered in the context of outsourcing service contracts as augmenting human labour, in time the services may be so dominated by AI as to involve no human input at all. At that point the contracts may morph from the service models we use today to a more service-provider-driven commodity contract, following the cloud model. Liability issues might then shift from being purely about the limitations in the contract (which in a commoditised cloud contract are likely to be far less generous and more protective of the vendor) to the nature of the liabilities themselves. Is the provider vicariously liable for the actions of an AI as if it were an employee, or is a claim for incorrect output more a product liability issue? It will be interesting to monitor these developments as AI systems become more prevalent and enter the consumer sphere.

About The Author

Gareth Stokes is a Partner at DLA Piper, focussing on information technology, outsourcing and intellectual property-driven contracts. Gareth’s experience spans numerous complex major procurement and sourcing projects within heavily regulated industries. Most such engagements involve elements of service delivery on-shore, with other elements delivered further afield, whether nearshore within the newer EU member states or offshore in India, China and emerging economies. These projects tend to involve innovative contractual structures, including joint venture arrangements, multi-level framework and call off structures, trust arrangements and other multi-contract arrangements.