Travel to Utopia?

Posted: 03/16/2016 - 11:53

Earlier today, I had the honour of delivering the final presentation at the Sourcing Industry Group (SIG)’s latest London Regional Roundtable – this time round a joint effort with the wonderful folks at the Association of Corporate Travel Executives (ACTE), which also comprised the ACTE London Corporate Travel Procurement Forum. It was a great day full of valuable discussion (and, of course, very useful networking) which really highlighted the many synergies between the two organisations, and the many lessons which procurement and business travel professionals can learn from each other. On a personal level I’d like to congratulate the SIG and ACTE teams who worked so hard to make this event a resounding success – sentiments which I’m sure would be shared by every one of the delegates participating today. (You can read here my interview with ACTE’s European Regional Director, Caroline Allen, carried out in the run-up to the event.)

While much of the discussion was, of course, carried out on an off-the-record basis, I’m looking forward to publishing on these pages some content by some of the conference’s excellent speakers in the near future – and, of course, I have no need to subject myself to the same restrictions, so thought I’d share with you (below) a somewhat edited version of my presentation, entitled ‘The role of Artificial Intelligence (AI) in the workplace tomorrow’. In the transition from audio to the written word you’ll miss, of course, my dulcet tones – fortunately my stammering, stuttering and general gibbering didn’t make the final text – and I’m also not publishing the slides which accompanied the presentation and have made a few edits to adjust for that omission. In the main, however, the copy below is very similar to what I actually delivered this afternoon, and I hope provides some food for thought on what is after all a topic of vast and growing significance to all of us.

I would be very interested in hearing your thoughts on this presentation; when I’ve discussed similar themes here in the past such discussion has always prompted a lot of excellent responses from you, dear readers, and I hope that this trend continues: as always you can send your thoughts to me at jliddell@sig.org – and please make clear whether you’re happy for your comments to be published in any future article on this topic.


The role of Artificial Intelligence (AI) in the workplace tomorrow

Let’s start by imagining the journey someone might take to get to an event like this in a few years’ time. We’ll call her Alice; I don’t know what she does, specifically, but we’ll suppose it’s a job that hasn’t yet been automated away…

Alice gets up extremely early one morning, gets herself ready for the day, and goes outside where a car is waiting for her that she’s booked via an app the previous night. It’s a fully automated, driverless car – built of course using techniques relying overwhelmingly on robots – and the booking and payment for the journey have all been processed completely electronically. During the drive to the airport – a drive managed by extremely complex software at least some of which itself has been written by other software – Alice logs onto her email and responds to at least a few messages which have been automatically generated by bots both within and outside her own organisation, communicating through an infrastructure monitored and at least partially designed by more software.

At the airport Alice checks herself in via a self-service terminal and proceeds through the security zone which is comparatively heavily staffed thanks to the power of the security lobby – though even here most of the employees are really in place to ensure travellers pass through the various bits of kit which do the actual security work. She settles in for a quick coffee prepared entirely by robots – again, the resulting transaction is a purely electronic one – and reads, on her tablet, some news articles: two of the articles have been written from start to finish by software, and Alice simply doesn’t notice any differences between those and the articles written by human journalists.

When the time comes to board her plane, she scans her boarding documents under the supervision of a very bored flesh-and-blood employee and takes her seat, for a journey operated from start to finish by an autopilot, though human beings are still present in the cockpit for the peace of mind of the travellers. During her flight – booked and paid for, of course, entirely electronically – Alice enjoys a bit of downtime by logging onto Netflix and catching up on the latest season of American Crime Story: Who Killed Kim Kardashian?; when the plane pulls up to the terminal she disembarks, goes through immigration where the sole person-to-person interaction is with the woman who stamps her passport, and leaves the airport straight into the waiting embrace of another driverless car which takes her directly, at optimal speed, to the conference centre – where at last she starts interacting in a meaningful way with members of her own species.

OK, so this isn’t a particularly detailed little narrative – but one thing to point out is that much of what occurs is already relatively commonplace, and those things that aren’t probably aren’t that far off (depending, mostly, on when we think driverless cars will start replacing human-driven taxis: with Uber paying its drivers a mere 20% of the fare that may well be some time away at this rate).

Let’s take a look at the jobs in this tale which are still performed by humans. The majority of them, in a heavily regulated institution like an airport, will be there because the law mandates that human beings are still part of the process. It’s a staple of society that laws change at a much slower rate than technology advances, and it’s safe to say that this will continue to be so barring some miraculous turn of events – perhaps the Trump Revolution will bring a new broom to the American legislative system, for example… – and in the example I’ve given we also have to take into consideration the fact that powerful vested interests gain financially from employing a great many people in the security process, which is something else unlikely to change. So for at least the foreseeable future, travellers such as Alice will encounter a number of human beings who are there not because systems which could replace them don’t exist, but because the implementation of those systems is blocked for specific reasons outside the normal framework within which automation is implemented – that is, the business-based desire for efficiency and cost savings.

That efficiency and savings drive, however, has gone unfettered by legislation throughout most of the rest of what comprises Alice’s journey, and the most obvious consequence – visible and invisible – is the removal of people, wherever possible, from the activity chain. The same drivers are at play in the utilisation of driverless taxis as in the automation of Alice’s booking process: the removal from the process of the costs of employing people – both one-off such as hiring and regular such as wages – and the removal of human error. This month saw details emerge of what is apparently the first traffic accident which can actually be blamed on one of Google’s driverless cars (as opposed to accidents involving them but where the responsibility lies with other, human, drivers); considering the huge number of miles now driven by these vehicles that’s a pretty astonishing safety record, and the same principles apply to the software taking and processing Alice’s travel bookings: their error rate will be vastly below the equivalent processing carried out by people.

The story of automation thus far can be described pretty simply: where there is a compelling business case, and unobstructed by any other considerations (legislative, social, union-related etc) organisations will replace humans with machines. This constant predates the Industrial Revolution, and despite the various forms of Luddism which have arisen in opposition to it, remains the case today. For most of that time, of course, it has been a concept of most relevance to manufacturing, in that the automation that has been possible has been a very physical thing: first steam-powered machines and later very complex computer-guided robots have replaced human workers in the factory, and enabled the creation of objects with an accuracy simply unachievable by human beings – and at a much greater rate of efficiency.

Today, a new automation revolution is at hand – and it’s taking the form of zeroes and ones, not steam and iron. Staggering advances in information technology have led to the creation of software which is increasingly able to replicate, and subsequently improve upon, the work of its human predecessors – and it is this new breed of software (and, in some sectors, highly specialised hardware developed to carry out its instructions) which is set to turn the world of business – indeed, our world as we know it – upside down.

That sounds dramatic, but I assure you it’s taking place as we speak. Last year, at the most recent SIG London roundtable, I spoke on a related theme and as part of my presentation I played a video which had gone viral on YouTube entitled ‘Humans Need Not Apply’, in which its creator outlines why he believes humans today are in a similar situation to that of horses immediately prior to the development of the internal combustion engine: that is, on the verge of becoming economically redundant. When I agreed to speak on this topic today I promised that I wouldn’t put the audience on quite so much of a downer this time round, so we’ll leave that video on YouTube… While I do recommend it to you all, you don’t need to watch it to be familiar with either its basic premise or the generally portentous tone it exudes, since hardly a week goes by now without one renowned consultancy or another issuing a gloomy forecast of the scale of the revolution ahead of us, with variously millions, tens of millions or hundreds of millions of jobs set to be handed over to the all-conquering machines within a generation. Again, not to dwell on this apocalyptic scenario for now, but it’s worth pointing out that the changes making the headlines are already well underway: much of the back office activity taking place behind the scenes of Alice’s story is already overwhelmingly digitised and automated compared with how it would have been carried out within living memory – indeed within the timespan of the careers of most of us in this room – and eradicating the remaining human touchpoints is an ongoing and lucrative mission for software and service providers the world over.

In my narrative, when Alice has made it through the comparatively densely populated security zone, and is waiting to board her flight, she does something which would currently involve interacting with a human employee but which in my hypothetical near-future does not: she orders and enjoys a coffee. This represents a type of activity not impacted by legislation of the sort which keeps the security area staffed but which nevertheless isn’t universally an open and shut business case for automation: the kind of service in which human staff can increasingly be seen as a “nice to have” or even a “value-add”. Alice chooses to go to a fully automated refreshment bar and pays for her coffee via her smartphone; the coffee is prepared – according to instructions generated through her frequent use of this particular chain – and served to her by an actual physical robot (it’s important to bear in mind that many of the uses of the word “robot” in the tech arena today don’t refer to this kind of physical machine, but to software: this is the case for example with the term “robotic process automation” which I’ll be coming onto in a minute). Elsewhere in the airport are establishments where the baristas, bar staff and waiters remain of the Homo sapiens persuasion, for people who don’t mind paying a little extra for the human touch; Alice isn’t feeling particularly sociable today and, besides, she knows when she goes to this particular automated coffee bar she’s going to get her drink prepared exactly the way she likes it. The point is that the staff employed in the other bars and restaurants are not there because their presence is necessary, but because it is preferred by some of the patrons: their employers have decided that those businesses benefit from retaining the legacy hardware (or “meatware” to use a particularly delightful term increasingly at play).

Therefore when we come to look at the kind of jobs which will still be carried out by humans in the future – the relatively near future – we must bear in mind this kind of discretionary or niche employment, especially in service work (rather than business services): of course, the advent of the motor car did not mean that the professions of cartwright or blacksmith disappeared completely from the face of the earth; it just meant that their numbers dropped to nearly nothing, and those who still ply their trade do so because there remains an attraction amongst certain people with a certain degree of disposable wealth to such traditional craftsmanship. They are no longer necessary, but for some people a nice-to-have – and so it will be with many roles in the service sector. I imagine it will be a long, long time before top-end restaurants do away with human waiters, sommeliers and the like – even though it will be perfectly feasible to do so. Human nature, in other words, will sustain many people in roles which could technically be done away with completely, because the business case for retaining them will remain intact. However, in many – perhaps most – cases, certainly at the lower end of the value chain, that business case won’t be there: we can see that development occurring already in the restaurant sector with McDonald’s introducing ordering via tablet, and indeed it’s been commonplace for many decades on a very basic level in the form of the vending machines we’ve all grown up with.

Let’s leave Alice for a while in the wonderland of her conference and step back a little, and get acquainted with some of the technology which is driving all this change, before I return later to look a bit deeper into what all this implies, as per the title of this presentation, for our workplaces tomorrow (though hopefully I’ve already given sufficient indication that much of this transformation is already taking place today).

The phrase most commonly invoked when describing the kind of software I’m talking about here is “Artificial Intelligence”, or AI, long a staple of science fiction but increasingly genuine science fact – up to a point, anyway. It’s defined as “the study of man-made computational devices and systems which can be made to act in an ‘intelligent’ way.” There are many degrees of AI, some of which we have already attained (see the software which can beat chess, Go, or even Jeopardy! champions) and some which remain a long way off; you’ll encounter great claims to AI capability on the market and I advise all of you if you haven’t already to read up on this topic in depth and formulate your own ideas about what AI actually means, since it’s going to be of supreme importance to humanity before too long…

As I hope I’ve outlined already, digital technologies are indeed changing the way we work – and that graphic [a modified version of the famous image ‘The Ascent of Man’] is actually for me a very insightful one since my personal belief is that artificially intelligent technology represents the next evolutionary phase for humanity and/or life on Earth, but that’s definitely a topic beyond the remit of this presentation. At any rate we can imagine how that progression is mirrored by the changes in the manufacturing sector over the last couple of hundred years, except in that case the last image would now more appropriately be a computer-guided robotic welding arm rather than a human being.

AI is indeed taking some of the work off our plates – and, of course, one of the major issues we as a species have to face is how people are to put food on their plates once the work’s been taken off them. After I played the video I mentioned at the last SIG London event, much of the discussion which followed looked at how society will cope with the changes which are coming, and I’ll digress slightly here to say that that discussion and pretty much every one I’ve had on this topic saw people split into two camps: the optimists, who believe that history shows that when radical change of this nature happens new work emerges as a result, and therefore the spectre of mass unemployment is mere doom-mongery since people will generally be redeployed rather than moved out of the workforce entirely; and the pessimists, who think that’s a load of bollocks. While as you may have guessed I tend to side with the pessimists, the fact is we simply don’t know what’s going to happen – and that’s somewhat alarming in itself.

Digression over: as [the quote on the slide] says, “the premise of AI is its ability to learn from the data it collects”: this ability to learn is fundamental to what makes us human, and fundamental to genuine AI. A piece of software which processes invoices to an incredible degree of accuracy according to very fine instructions given to it by its developer may be a wonderful piece of kit, but it isn’t AI; it becomes AI, to a degree, when it learns to do its job better through the process of doing that job.
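To make that distinction concrete, here’s a toy sketch of the difference between following fixed instructions and learning from the job. Everything here is hypothetical – the class name and categories are mine, not any real product’s – but the shape of the idea holds: the categoriser below starts out knowing nothing and improves purely by being corrected as it works.

```python
# Toy illustration: a categoriser with no built-in rules that adjusts
# keyword weights from feedback -- the "learning by doing the job"
# that separates (a sliver of) AI from plain automation.
from collections import defaultdict

class LearningCategoriser:
    def __init__(self):
        # weights[category][keyword] -> count, learned entirely from feedback
        self.weights = defaultdict(lambda: defaultdict(int))

    def predict(self, text):
        words = text.lower().split()
        scores = {cat: sum(kw.get(w, 0) for w in words)
                  for cat, kw in self.weights.items()}
        return max(scores, key=scores.get) if scores else "unknown"

    def learn(self, text, correct_category):
        # reinforce every keyword seen alongside the correct category
        for w in text.lower().split():
            self.weights[correct_category][w] += 1

c = LearningCategoriser()
print(c.predict("stationery invoice 42"))        # "unknown": nothing learned yet
c.learn("stationery invoice from Acme", "office supplies")
c.learn("travel invoice for flights", "travel")
print(c.predict("another stationery invoice"))   # now "office supplies"
```

The first prediction fails because nothing has been learned; after two pieces of feedback the same kind of input is routed correctly – a crude but tangible version of software getting better at its job by doing it.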

As I said, AI is a widely used term with many different definitions so it’s worth looking at some of its subsets which you may encounter in the press and in the literature covering the field. Artificial narrow intelligence is a type of artificial intelligence that specialises in one area. An example of this would be teaching a computer to play chess better than a human world champion. Artificial general intelligence refers to computers that are as smart as humans in terms of performing any intellectual task that a human being can. Artificial super intelligence refers to computers that are much smarter than any human in practically every field.

The term ‘artificial intelligence’ has been used for a very long time; the technology itself has been in development for longer than we might realise. The joke goes that “AI has been the future for more than half a century”; we’ve gone from Alan Turing in 1950 pondering the question of what would make a computer “intelligent” to today’s plethora of dazzling software which, nevertheless, does not get us to the point of having created artificial general intelligence: that is still, half a century on, the future – but if you listen to the likes of futurist Ray Kurzweil and keep an eye on Moore’s law, you may well conclude that it is no longer the stuff of the distant future.

We can also see [on the slide shown] that “big data is a primary driver for advancements in AI technology”. Personally, I hate the term “big data” but that’s irrelevant: there’s no doubt that the availability of and our ability to analyse and depict simply tremendous amounts of data is fundamental to the ongoing evolution of AI.

What is big data? In summary, big data collates information from various internal and external sources – enterprise database content, social media, sensors in products, mobile devices and more. It takes that data in real time or near-real time, massages it and leverages it to make decisions about products and services that better meet the needs of customers. Companies can also use big data to find new revenue sources, to make operational decisions, and to uncover insight and new sources of profit that were previously out of reach. When big data meets software which can learn, we get a revolution.

I mentioned robotic process automation earlier: RPA refers to software platforms that use virtual robots to manipulate existing application software in the same way that a person processes a transaction or completes a request. It uses the existing desktop interface to access other applications and handles complex business processes with near-zero error rate.

In essence, RPA is software that drives software. The virtual robot – itself just software – sits on servers and connects via a virtual private network (VPN) with a company’s user application. It is driven by procedural or rules-based processes which can be quite complex but of low to moderate risk. RPA applies technology that configures software (aka a “robot”) to capture and interpret existing applications for tasks such as processing a transaction, manipulating data, triggering responses and communicating with other systems. RPA allows companies to quickly, easily and cost-effectively automate business processes that can be quite complex in nature, without requiring expensive platforms to do so. In most cases, robots deliver with near-100% accuracy.
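As a sketch of what “software that drives software” means in practice – with the caveat that real RPA platforms operate the actual screens and interfaces of existing applications, whereas `LegacyInvoiceApp` below is just a stand-in I’ve invented for illustration:

```python
# Minimal sketch of the RPA idea: a software "robot" that operates an
# existing application, applying the same rules-based, low-risk process
# a human operator would otherwise follow by hand.
class LegacyInvoiceApp:
    """Stand-in for the existing application the robot drives."""
    def __init__(self):
        self.approved, self.escalated = [], []

    def approve(self, inv):
        self.approved.append(inv["id"])

    def escalate(self, inv):
        self.escalated.append(inv["id"])

def rpa_robot(app, invoices):
    # Procedural rules: small, PO-matched invoices are approved;
    # everything else is escalated for human review.
    for inv in invoices:
        if inv["amount"] <= 1000 and inv["po_matched"]:
            app.approve(inv)
        else:
            app.escalate(inv)

app = LegacyInvoiceApp()
rpa_robot(app, [
    {"id": "INV-1", "amount": 250,  "po_matched": True},
    {"id": "INV-2", "amount": 9000, "po_matched": True},
    {"id": "INV-3", "amount": 400,  "po_matched": False},
])
print(app.approved)   # ['INV-1']
print(app.escalated)  # ['INV-2', 'INV-3']
```

The robot never tires and never mistypes – which is exactly why its error rate on this kind of transactional work is near zero, and exactly why it is such an attractive replacement for human processing.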

Companies that use large-scale labour for high-volume, highly transactional general knowledge process work, such as business process outsourcing providers, have an opportunity to save time and money with RPA, which brings improved accuracy, cycle time and productivity. It allows companies to automate areas that were not previously automated for whatever reason, consequently removing people from repetitive tasks. Much of the debate about the future impact of this technology on society, as I mentioned, centres on whether the people being removed from these tasks will be redeployed on more interesting, higher-value, strategic work. One of the pressures which I believe militates against this is that organisations can’t help but see this as a long-term cost-saver – and the cost being saved is that of the employees, and is therefore not saved at all if they’re retained and redeployed.

It isn’t hard to see why the potential cost savings are so attractive: RPA benefits being touted often include 40% to 70% labour cost reductions and near-zero error rates, so it’s no surprise that companies are using RPA to automate, digitise and standardise the bulk of their repetitive back-office work. Retaining the replaced employees removes those savings at a stroke and indeed adds to the cost burden because RPA solutions are far from free – and there is of course no guarantee that an employee good at performing transactional, repetitive work would be good at the higher-value strategic work which the optimist camp says will be waiting round the corner for him or her. As long as organisations view RPA as a cost saving measure first and foremost, it’s my belief that the overwhelming majority of those employees are going to be joining the dole queue very soon – if, that is, their societies provide unemployment benefit…
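The arithmetic behind that business case is brutally simple. Using the 40%–70% reduction range quoted above and some purely illustrative figures of my own:

```python
# Back-of-envelope RPA business case. All figures are invented for
# illustration; only the 40%-70% reduction range comes from the text.
def rpa_net_benefit(annual_labour_cost, reduction, rpa_annual_cost):
    """Net annual benefit = labour saving minus the cost of the RPA solution."""
    return annual_labour_cost * reduction - rpa_annual_cost

labour = 20 * 30_000          # 20 FTEs at 30,000 a year
rpa_cost = 120_000            # licences, implementation, support

print(rpa_net_benefit(labour, 0.40, rpa_cost))  # 120000.0  (low end)
print(rpa_net_benefit(labour, 0.70, rpa_cost))  # 300000.0  (high end)
```

On those (invented) numbers the robots pay for themselves comfortably – and the entire saving evaporates the moment the displaced staff are retained on the payroll, which is precisely the pressure working against redeployment.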

Many of you will be aware of the term “cognitive computing”, which refers to the development of computer programs that can teach themselves to grow and change when exposed to new data. Another term you might hear in place of cognitive computing is “machine learning”. Cognitive computing uses computer models inspired, loosely, by the behaviour of biological neurons to extract rules and patterns from sets of data. It is iterative, which is important because as the models are exposed to new data they are able to adapt independently: they learn from each data input, make predictions and produce reliable, repeatable decisions and results. Retargeting is an example of cognitive computing – this is when you see ads pop up on your computer advertising things you recently searched for online. Netflix is another example, when it makes recommendations based on movies you’ve recently watched. Spam filters use cognitive computing to decide which emails to deliver and which to reject. During Alice’s journey she encountered all of these varieties of cognitive computing, already widely at play today.
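The Netflix example can be sketched in a few lines: here is a toy item-to-item recommender – my own drastic simplification, nothing like Netflix’s actual system – that learns its patterns entirely from the data, scoring titles by how often they co-occur with what a viewer has already watched.

```python
# Toy co-occurrence recommender: no hand-written rules about taste --
# the recommendations emerge from patterns in other viewers' histories.
from collections import Counter

def recommend(histories, user_history, n=1):
    # Score each candidate title by how often it appears in histories
    # that overlap with what this user has already watched.
    watched = set(user_history)
    scores = Counter()
    for h in histories:
        overlap = len(set(h) & watched)
        for title in h:
            if title not in watched:
                scores[title] += overlap
    return [t for t, _ in scores.most_common(n)]

histories = [
    ["Crime Story", "Docu A", "Thriller B"],
    ["Crime Story", "Thriller B"],
    ["Docu A", "Nature C"],
]
print(recommend(histories, ["Crime Story"]))  # ['Thriller B']
```

Every new viewing history fed in shifts the scores – which is the iterative, adapt-with-new-data property that puts this family of techniques under the cognitive computing umbrella.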

I said that we get a revolution when big data meets AI. Here’s one reason why: cognitive computing allows companies to analyse massive data sets and deliver high-value predictions that can guide better decisions in real time without human intervention. Cognitive computing can create thousands of models a week, versus the one or two good models a human analyst might produce in the same period. This results in trend and forecasting information that companies can use to make better business decisions. It automates tasks normally completed by humans, addressing them significantly faster and therefore reducing costs. Because the information is based on massive amounts of historical data, it provides confidence in the actions that it takes. By being able to synthesise data from different sources and learn from those inputs, cognitive computing allows companies to customise products and services in very personalised ways.

So, a quick summary of all this. Artificial intelligence studies man-made computational devices that can be made to act in an “intelligent” or “human” way. It requires data to do so. Data comes in many forms: unstructured data can come from social media, videos, GPS information, tracking sensors and so on, while structured data comes from relational databases. Big data takes all of that information and makes sense out of it. Robotic process automation uses software robots to drive existing applications, automating the rules-based processes that people would otherwise perform by hand. Cognitive computing goes even further: it learns from all the previous inputs, makes predictions and changes things based on those learnings.

Hopefully I’ve already convinced you that we’re already experiencing a revolution, but as added evidence here are a few examples of AI of varying degrees and types that are already prevalent today:

  • Retail – self-checkout lines utilise smart technologies to ring up food purchases.
  • Automated call centres – for years these have been in place for customer service and they continue to grow in use as computers learn with each call how to make adjustments.
  • Manufacturing – assembly lines are one of the most ubiquitous uses of robotics with machines eliminating the need for humans to perform repetitive tasks.
  • Consumer products – Siri is a classic example of a computer that has been programmed to respond to your requests and learn from you in the process. When you pronounce a name differently from the way Siri thinks it is pronounced, she asks you to teach her how to say it properly: that is a very tangible way to see how computers can learn.

And then, a quick look at how this technology is already impacting upon the sourcing profession… As digital age services become more and more prevalent, companies need to be aware that traditional approaches to supplier selection do not fit these services. More specifically, yesterday’s contract templates will not meet the needs for today’s digital age providers. A few examples of this:

  • Traditionally it is assumed that suppliers will provide services that meet customer needs. The digital age assumption is that customers will buy what the supplier is delivering.
  • A traditional assumption in an outsourcing contract is that you will be hiring human labour – that services will be provided by people. The digital age assumption is that the labour you are contracting for comes by way of computers with RPA and cognitive computing.
  • The pricing under contracting was previously metered primarily on inputs or specific tasks being performed. The digital assumption is that pricing is based on access to a fixed cost infrastructure.
  • Another traditional assumption is that the value is primarily in performing needed activities. The digital age assumption is that value is primarily in data generated by services.
  • Finally a traditional assumption in an outsourcing contract is that there will be long-term commitments for services. The digital age assumption is that contract terms will be shorter.

And finally, a bit of optimism at the end of the deck: as I described earlier, some jobs aren’t on the chopping block any time soon, and not just those in the security area at the airport. C-level jobs, for example, will never be replaced by automation (partly, it has to be said, because turkeys don’t vote for Christmas). Many low-level jobs are also not subject to automation any time soon: we’ll need to wait for the emergence of physical robots with all the dexterity and intellect of a human craftsman, at an equivalent price point, before jobs like basic apprentice-level plumbing – or even anything involving climbing a step-ladder – are replaced. Mid-level jobs are most “at risk”, yet any time creative thinking or problem-solving is required, a human is almost always going to be a superior choice to a machine – until, that is, we reach that artificial super intelligence phase.

OK, so: what does the workforce of the future look like? I’m going to caveat this by admitting, frankly, that I don’t know for sure – though in my defence if you can find anyone who genuinely can predict the future I’d suggest that’s probably a more remarkable development than even the technology I’m talking about. So let’s agree that we’re entering the realm of conjecture – but that it is at least partially informed conjecture.

One thing which I’m pretty confident about is that across the board it’s going to be a smaller workforce – and that applies to areas where you might think our robot replacements are still at the science fiction stage. Going back to Alice’s journey: drinking her robot-served coffee she reads the news, and encounters without knowing it a couple of articles written entirely by software. Science fiction? No: fact, today. Software is already writing thousands of articles every day – some of them of a very low quality, admittedly, but some of them actually published in respected papers around the world. Currently these tend to be very data-heavy, basic-prose articles like weather forecasts, but the tech is evolving incredibly quickly and I recently read a review of a football match which was entirely generated by software and which could easily have made the pages of something like a school newsletter or the Daily Mail. My profession, journalism, is already experiencing the AI revolution.

What about yours? Well, I won’t begin to pretend I’m an expert on the business travel sector, but both that and the sourcing profession generally fit into the very large category of work which requires a significant degree of problem-solving and decision-making – which tends to make it relatively resistant to the kind of AI currently available – but where the more transactional activity is ripe for plucking by the machine conquistadors and in many cases has already been plucked. As the tech gets more advanced, the more aspects of the job it is able to carry out – and, as I said earlier, the iron rule is that when an organisation sees a business case for automation, barring external constraints, it will implement it.

When we reach the more distant possibilities of AI – the general and super-intelligence levels – all bets are off as, unfortunately for us, no matter how great we may be at our jobs the fact is that we won’t be as good as a machine of hugely superior intellect. That point remains a way off – but getting there will be a gradual progression and during that process more and more elements of what makes up a job even at a senior level will be opened up to software – and following the aforementioned iron law of automation, software will be deployed. Therefore, with less work to be carried out by people, it stands to reason that even in our professions fewer people will be retained to carry out the work. The specific nature of that work will of course vary by profession and organisation but it will be heavily focussed towards problem-solving – and increasingly involving a good degree of integration and interaction with the software that is carrying out the rest of the work. Liaising with artificial intelligences will become an ever-increasing part of one’s workload, and of course new skills will have to be developed in order to accomplish that.

So, the workforce at the upper ends of most businesses will be smaller, more strategically focussed and requiring more technology-related skills – and it’s possible that people skills, long the supposed bedrock of any self-respecting manager’s edifice, will become necessarily less important since there’ll be fewer people with whom to engage and whom to manage.

Because, let’s make no mistake here, at the lower end of many organisations the human toll will be catastrophic. Again, the iron law: if technology can replace people and save a business money, the business will do it wherever it can. Pick any sector you like – right now, incredibly smart and innovative companies are working round the clock on technology that will take those employees out of the workforce, or is already doing so. Logistics: driverless trucks and drone delivery. Financial services: next-generation modelling and “robo-advisors”. Healthcare: robotic surgeons and remote diagnostics. Every sector in business has such opportunities, and where they’re not limited by law or pragmatism they’ll be snapped up.

This has huge ramifications for parts of the upper echelons of an organisation too: think of the impact on the HR function, for example, of a wide-scale reduction in headcount (alongside, of course, the advent of similarly revolutionary technology within HR itself), or, indeed, on a procurement function now having to source items for only a fraction of the former workforce. I can’t see into the future, but I would bet my bits that this revolution will profoundly impact every function as well as every sector.

And then, of course, there are the socio-economic ramifications. Again, I can’t see into the future, and I did promise not to be too depressing, but I believe that unless we radically change our societies’ approach to wealth allocation, welfare and the working week, we run the risk of creating a vast, permanently unemployed underclass with limited or no access to capital and no way of participating in the global economy beyond mere survival level. And if that’s the case, then we need to think again about the workforce, because the question for most organisations will be: where are our customers? If business continues its headlong – and, let me be clear, entirely understandable, and non- rather than un-ethical – rush into efficiency and automation-derived cost savings, it runs the risk of throttling itself by laying off the very workforce which keeps the economy going.

You can see this in Alice’s journey: bar those people employed because the law mandates it, she doesn’t interact with anybody. The people who would formerly have played a part in her journey represent jobs which can be taken out of the economic system forever – people whose consumer spending, investments and tax payments no longer drive the economy. Yet those people themselves haven’t disappeared – merely their jobs. They still require food, and shelter, and clothing, and everything else that keeps us alive and sane; they are simply no longer earning anything with which to meet those needs. So Alice’s journey, wondrous though it is from many angles, is also a bit of a warning: unless we reassess the entire foundation of our society – dramatic though that sounds – we run the risk of automating ourselves into catastrophe. To return for a moment to the title of this presentation, I can’t tell you exactly what the workforce of the future looks like, other than to say that without a lot of very careful, radical thinking, and some very tough and politically explosive decisions, we risk not having one at all.

However, in light of my promise to avoid depressing everyone, especially this close to drinks, let’s don some rose-tinted spectacles to conclude. Firstly, along with a remarkable capacity for self-harm, humanity has an incredible aptitude for problem-solving and overcoming challenges, and it’s perfectly plausible that we’ll demonstrate that aptitude once again here: that we will be able to reorganise society in a way that makes the most of the incredible opportunities arising, increasing leisure time for all and actually redeploying to more interesting work at least a good proportion of those employees whose jobs are being automated away. This technology has the potential to free minds as well as boost bottom lines, and if we can deal with the sociological side-effects coherently we could find ourselves enjoying something akin to Utopia.

And, finally, we can enter some truly far-out territory, with questions about the nature of life, the purpose of the human race and our destiny as a thinking species. Many people, myself and Stephen Hawking included – sometimes one has to make one’s own associations – fear the rise of uncontrolled AI, and the impact of automation upon society; yet that is to separate the intelligences we create from ourselves in a way which might well be missing the profoundest point of all. What if we’re not creating our replacements, our successors, but new facets of our future selves – what if the human race of the future is a species bound with and enhanced by technology at every level from molecular to metaphysical? Then the challenges arising today will come to be seen as mere growing pains in the evolution of our next incarnation, and will be forgotten just as quickly. What travel, or procurement, or business generally, will look like then is beyond anyone’s guess – but I suspect we will not, at least, need to concern ourselves with who gets the extra leg room seats. At least, by then, I am sure I won’t.

The above was delivered to the SIG London Regional Roundtable/ACTE London Corporate Travel Procurement Forum on March 15, 2016. For more information on the Sourcing Industry Group see www.sig.org; to find out more about ACTE see www.acte.org/Home.htm.
