The big debate on which jobs will be lost to automation and mechanisation is firmly on the business radar. We’re mildly appeased that robots and algorithms can’t replace human empathy and intuition. Yet. But what happens when you are able to replicate and outsource your expertise as a form of artificial intelligence? That day is dawning.

The future of work is about to become even more complicated.

A few months ago, some owners of Amazon’s IPA (intelligent personal assistant), who had been getting accustomed to chatting to the Amazon persona, Alexa, were understandably unnerved when Alexa emitted unprompted, witch-like laughter. Amazon said it fixed the problem by disabling the phrase, “Alexa, laugh,” and changing the command to “Alexa, can you laugh?”, explaining that the longer phrase is “less likely to have false positives”; in other words, the software is less likely to mistake common words and phrases that sound similar to the command for an instruction to start laughing.
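To see why the longer phrase helps, here is a rough Python sketch, using plain text similarity as a crude stand-in for acoustic matching. Everything in it, the overheard phrases, the commands and the scoring, is invented for illustration and is not Amazon’s actual wake-phrase logic; the point is simply that near-miss snippets of everyday speech overlap far less with a longer, more distinctive command.

    # Illustrative sketch only: text similarity standing in for acoustic phrase matching.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Crude similarity score between two phrases, in the range 0 to 1."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    overheard = ["alexa, large", "alexa, last song", "alexa, launch the app"]
    commands = ["alexa, laugh", "alexa, can you laugh"]

    for heard in overheard:
        scores = {cmd: round(similarity(heard, cmd), 2) for cmd in commands}
        print(heard, scores)  # the longer command scores consistently lower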

It’s debatable whether that appeased the customers on the receiving end of the disembodied, entirely unsolicited cackle.

Welcome to the new frontier of artificial intelligence. Like the Star Trek mission statement, we’re on the threshold of the unknown, about to “explore strange new worlds…and to boldly go where no one has gone before”.

The late Stephen Hawking, along with scientists and tech pioneers like Elon Musk, has for some time been warning about the dangers of delving too quickly into the realm of AI. But humanity is not known for heeding warnings, so we’re pushing ahead nevertheless.

Cape Town drivers!

In December last year, I was privy to an insightful glimpse of exactly where we are on the journey to autonomous, or driverless, cars. For the Mercedes-Benz Intelligent World Drive, the carmaker sent its autonomous prototype on a global tour of five cities, namely Munich, Shanghai, Sydney, Cape Town and Los Angeles, to collect data about road conditions and driver behaviour in different parts of the world.

The challenges and limitations of AI – the ability of a machine to think like a human being – were brought to the fore. The Intelligent World Drive revealed just how varied these challenges are in different parts of the world, and just how much the car “still has to learn”.

In Cape Town, for example, the car faced some uniquely South African curveballs. Unlike the strictly monitored autobahns in Germany, we have pedestrians ambling along the sides of our motorways.

We also drive “on the wrong side of the road”, have illegally parked cars obscuring road markings, unconventional traffic signs warning you of wildlife, and that’s before factoring in the Wild West behaviour of our minibus taxi drivers. All these anomalies will be added to the data for the car’s AI to absorb, process and add to its self-learning.

It’s this self-learning aspect of AI that worries the scientists and tech pioneers.

HAL's progeny

Last year Facebook abandoned an experiment after two AI programmes started interacting with each other in a coded language that only they understood. The experiment involved two chatbots, which Facebook had challenged to negotiate with each other over trade-offs involving hats, balls and books, each of which was given a value. But the bots soon began excluding the humans, inventing a shorthand of their own, and the experiment was stopped. A preventative measure, and a wise move.

When, six months later, Alexa started cackling at its owners, everyone was reminded of a pivotal scene in the movie 2001: A Space Odyssey, in which HAL 9000 (the spaceship’s computer) first makes his evil intentions known, responding to an instruction to open a crucial airlock with, “I’m sorry, Dave. I’m afraid I can’t do that.”

This was back in 1968 and is possibly the first (of many) science fiction references to a dystopian future where robots begin ruling their human masters.

We now seem to have reached that crossroads.

When the topic of the Fourth Industrial Revolution comes up, the inevitable question arises: “What happens when the robots and algorithms take our jobs?” That question is no longer one for scenario planners to mull over but one for businesses to factor into their short- and medium-term strategies.

Complex algorithms are already being used across many business sectors, from retail heat mapping, to ensuring your Uber Eats meal is delivered piping hot to your door, to more controversial applications, such as targeting messaging at specific demographics, as was the case with Cambridge Analytica’s use of Facebook users’ data.

The Cambridge Analytica scandal has brought the issue of bias into sharp focus: not only how AI can be used to influence the biases of large demographics, but also what happens when a harmful bias is programmed into its core learning. Humans, after all, are the ones who do the initial coding. Machines take their initial learnings from their human masters, then evolve them.
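A deliberately simplified sketch of that last point, with all data, labels and names invented purely for illustration: if the examples a system learns from carry a human prejudice, the system reproduces that prejudice automatically and at scale.

    from collections import Counter

    # Invented training data: (group, label assigned by a human reviewer)
    training_data = [
        ("suburb_a", "approve"), ("suburb_a", "approve"), ("suburb_a", "approve"),
        ("suburb_b", "reject"), ("suburb_b", "reject"), ("suburb_b", "approve"),
    ]

    # "Learning" here is just counting which label humans gave most often per group.
    model = {}
    for group, label in training_data:
        model.setdefault(group, Counter())[label] += 1

    def predict(group: str) -> str:
        """Return the majority label the human labellers assigned to this group."""
        return model[group].most_common(1)[0][0]

    print(predict("suburb_a"))  # approve
    print(predict("suburb_b"))  # reject: the bias in the labels, now automated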

These nuanced issues are becoming more complex, and the need to solve them more urgent. We have, until now, been comforted by the belief that robots and data cannot replicate empathy or intuition, and that certain jobs will therefore be immune to automation and mechanisation.

Emotive algos

Rob Gruppetta, head of financial crime at the UK’s Financial Conduct Authority, is one such believer. In a speech at the FinTech Innovation in AML and Digital ID event in London, he said: “Machines can direct the humans to the cases of most interest. But the software will deal in probabilities, not absolutes, and a person will need to make the final decision about whether intelligence is passed to the authorities.”
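In practice, the workflow Gruppetta describes looks something like the sketch below. It is a minimal, hypothetical illustration only: the case IDs, scores and review threshold are invented, and no real screening system is this simple. The software ranks cases by probability; a person makes the final call.

    from typing import NamedTuple

    class Case(NamedTuple):
        case_id: str
        suspicion_score: float  # model output between 0 and 1: a probability, not an absolute

    REVIEW_THRESHOLD = 0.8  # assumed cut-off for routing a case to a human reviewer

    cases = [Case("C-101", 0.35), Case("C-102", 0.91), Case("C-103", 0.84)]

    def triage(all_cases):
        """Direct the humans to the cases of most interest, highest score first."""
        flagged = [c for c in all_cases if c.suspicion_score >= REVIEW_THRESHOLD]
        return sorted(flagged, key=lambda c: c.suspicion_score, reverse=True)

    for case in triage(cases):
        # The human reviewer, not the model, decides whether intelligence is passed on.
        print(f"Needs human review: {case.case_id} (score {case.suspicion_score})")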

I suggest he chats to South African company Merlynn, which is in the pioneering business of applying another layer to AI: one that could fill the gap of empathy and intuition missing from big data. In essence, the possibility of downloading and outsourcing your expertise.

Say what?

The company has developed a technology they call TOM (Tacit Object Modeller), which they say, “enables the creation of virtual experts”. TOM has the ability to “replicate the judgment, intuition and years of experience of experts in making time-critical, high-consequence subjective decisions”. 

To do this they interview a person whose job relies on intuition, coupled with years of experience – for example, a bank’s risk assessor whose job it is to flag potentially problematic clients or scenarios. These kinds of jobs require hard data analytics (what machines can do) but rely more on personal intuition (what machines can’t do).

The TOM interviewing process delves deeper into how these decisions are arrived at, asking questions that progressively narrow down the probabilities in order to replicate the expert’s specific decision-making behaviour, which is then converted into software.
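Purely as a thought experiment, and emphatically not Merlynn’s actual method, the end product of such an interview might look like a set of captured rules that can be replayed without the expert present. Every factor, threshold and outcome below is invented.

    # Hypothetical rules recorded from an imagined bank risk assessor during an interview.
    expert_rules = [
        (lambda c: c["documents_inconsistent"], "flag"),
        (lambda c: c["transaction_size"] > 500_000 and c["client_age_months"] < 6, "flag"),
        (lambda c: c["country_risk"] == "high" and not c["long_term_relationship"], "refer"),
    ]

    def virtual_expert(case: dict) -> str:
        """Replay the captured judgment: the first matching rule wins, otherwise approve."""
        for rule, outcome in expert_rules:
            if rule(case):
                return outcome
        return "approve"

    print(virtual_expert({
        "documents_inconsistent": False,
        "transaction_size": 750_000,
        "client_age_months": 2,
        "country_risk": "medium",
        "long_term_relationship": False,
    }))  # prints "flag": the second captured rule fires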

This technology is currently being used in high-volume, high-consequence environments, such as financial services, to demonstrate the value of scaling human expertise: in essence, downloading your expertise, then outsourcing it.

A soul in the machine?

Understandably, this type of technology and its proposed benefits spark heated debate that runs along many trajectories, including moral and ethical ones. But then again, parallel debates are also raging around robots (especially sexbots), the Industrial Internet of Things, nanotechnology and turning humans into cyborgs by implanting microchips into their bodies. It’s just a consequence of where we are in history…and of the fast-approaching era of transhumanism, when humanity and technology eventually merge.

The TOM technology and the concept of downloading and outsourcing your expertise might be at a nascent stage, but they exist and are possible, even if the current format is limited. And we all know about the exponential evolution of technology. Futurist Ray Kurzweil maintains that if you imagine a graph that curves from horizontal to vertical, humanity will reach the “knee of the curve” (the point where it turns vertical) by mid-century, and if we think things are changing fast today, that vertical trajectory is going to be beyond our current comprehension.
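The arithmetic behind that curve is simple to sketch; the units and doubling rate below are arbitrary, chosen only to show how steady doubling looks flat for a long stretch and then seems to turn vertical.

    # Arbitrary-unit illustration of exponential growth and the "knee of the curve".
    for period in range(1, 11):
        capability = 2 ** period  # capability doubles every period
        print(f"period {period:2d}: {capability:5d}")
    # Early periods barely move (2, 4, 8...); the last few explode (512, 1024): the "knee".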

Just a year ago, robotics seemed to have hit a difficult hurdle. Engineers were of the opinion that it would be a very long time before a fully functional humanoid robot could be built, because it is not easy to replicate the sense of balance, and the constant movements that maintain balance, of which a human body is capable. That hurdle appeared to fall in November last year, when footage of Atlas, a humanoid robot developed by Boston Dynamics, showed it not only walking and moving like a human but also performing backflips with almost the poise and balance of a gymnast.

Downloading your expertise and converting it into software that you could sell on, or outsource, might seem a utopian (or dystopian) dream but, like the warning on a rear-view mirror, things are closer than they appear. If TOM proves to be an invaluable tool in high-volume, high-consequence environments in financial services, how long will it be before it is applied to other industries and sectors?

Best you factor this into your HR strategy. Or should that be your Co-Bot strategy?

Dion Chang is the founder of Flux Trends. For more trends as business strategy, visit: www.fluxtrends.com
