AI may favour incumbents, not disruptors
Artificial Intelligence (AI) and Robotic Process Automation (RPA) may not be using reason, but they are undeniably effective in cutting costs and improving customer service, and are gradually encroaching on ever more areas of life and work. They also raise profound questions about law, audit, regulation, ethics, the future of employment and especially the standard of living. This is a summary of the first of many Future of Finance discussions about the commercial, economic and social impact of AI.
AI is not a single phenomenon. Though a proper categorisation of AI is more granular, it is useful to distinguish between data-crunching algorithms, machines that can reason with purpose (Cognitive AI) and machines that can identify patterns in data (Machine Learning, or ML).
Robotic process automation (RPA) is not technically AI and refers to automation of rules-based, repetitive manual tasks, though RPA is evolving into “intelligent automation” and is often used by companies alongside AI proper.
Companies are not always seeking large scale change and, even where they are, may invest in short-term, temporary automation projects while simultaneously pursuing an overall transformation. AI is becoming a routine feature of technology upgrades, with the support of internal DevOps.
However, all AI investments have one or both of two principal goals: to improve the customer experience and/or cut operational costs.
AI is yielding time and cost savings of more than 90 per cent, even in offshore locations where labour costs are lower. These gains are not yet visible in national productivity statistics, either because they are not being measured properly or because best practices are not yet sufficiently diffused throughout economies.
Though accuracy levels are high, they are not 100 per cent; indeed, a tolerance for error is necessary to give machines the flexibility to learn from mistakes.
Augmentation of machines by humans is still necessary. The long-term impact of AI on employment is not likely to diverge from the historical pattern of changing, rather than reducing, work. Indeed, the fact that investment in AI is economic reflects a shortage, and not a surplus, of workers.
Though it is often said that a broken process cannot be automated, automation can deliver benefits without re-designing a process. Automation can also improve understanding of a process and aid its reconfiguration.
Some tasks are impossible without AI, so AI expands the range of work that can be done. Its incremental encroachment on the world of work means that AI will gradually become embedded in all human activity, at home as well as at work.
This means that AI will develop in a networked fashion, with multiple versions interacting across different spheres of activity. It may evolve into artificial general intelligence (AGI) sooner than anticipated.
AI can assist committed disruptors, but tends on the whole to entrench incumbents, which have the resources to invest in the technology and data science expertise, though automated machine learning is eroding this advantage.
The need to retain the tacit knowledge of workers, explain computer-based decisions and prove algorithms do not discriminate or breach competition law is creating new disciplines of AI ethics, law, regulation and audit.
Artificial intelligence (AI) is a broad term, with multiple meanings. Indeed, the term is somewhat debased by companies whose management believe an association with AI will boost the share price. In reality, a company that claims to use AI is like one that claims to use computers.
AI comes in multiple varieties
It is more helpful to distinguish between Artificial Intelligence (AI, or data-crunching algorithms), Cognitive AI (machines that can reason with purpose) and Machine Learning (ML, or identifying patterns in data) – though even these categories are too broad, and require further sub-division into, say, documentation management and general-purpose applications of AI.
Robotic Process Automation (RPA), which focuses on the rules-based automation of routine but repetitive manual tasks of the “If this, then that” type, is not seen as an aspect of AI at all. However, it is merging with AI in the form of “intelligent automation,” by which machines scour unstructured data sets for key data points and learn from their experience (ML, in other words).
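As an illustration, the kind of “If this, then that” rule that RPA automates can be sketched in a few lines (the invoice fields and thresholds here are hypothetical, not drawn from any specific product):

```python
# A minimal rules-based router of the "If this, then that" type.
# Field names and the approval threshold are illustrative assumptions.
def route_invoice(invoice):
    # Small, matched invoices can be approved without human review.
    if invoice["po_matched"] and invoice["amount"] < 1000:
        return "auto-approve"
    # Unmatched purchase orders are escalated to a human verifier.
    if not invoice["po_matched"]:
        return "escalate-to-human"
    # Everything else queues for a manager's sign-off.
    return "manager-review"

print(route_invoice({"amount": 250, "po_matched": True}))    # auto-approve
print(route_invoice({"amount": 5000, "po_matched": False}))  # escalate-to-human
```

“Intelligent automation” layers ML on top of rules like these, extracting the amount and the matched flag from unstructured documents rather than expecting structured input.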
AI augments humans but also displaces them
In practice, organisations are using both RPA and AI to automate different activities, including different aspects of the same process. They are also using machines to augment rather than replace humans, though this does lead to job reductions as well as changes in job specifications.
For example, by applying intelligent automation to unstructured documents one bank has reduced document processing time from 48 hours to three. This has enabled the bank to exchange a large data input workforce for a smaller data verification workforce.
The fact a verification workforce is still required is a reminder that AI and ML cannot deliver 100 per cent accuracy, especially when processing unstructured or ambiguous data. As Garry Kasparov noted, a journeyman human chess player augmented by a computer is a formidable opponent, because the combination adds processing power to human intuition.
That said, progress in chess-playing computers shows that AI can take on an increasing share of human labour. These machines began by augmenting the human flair for strategy with a capacity to anticipate thousands of moves ahead, but now routinely beat even the best humans unaided.
Errors are necessary for flexibility and learning but require human overrides
However, as confidence in machines increases, so does the risk of overconfidence. There is a danger that users invest excessive faith in the outputs, or operate with inadequate margins of error or fault tolerance, in the same way that financial market participants misused quantitative modelling.
On the other hand, tolerance of a certain rate of error is essential if an AI machine is to be flexible enough to absorb fresh inputs. Errors can be vivid and rapid (as in the notorious case of the Tay chatbot) but are almost always straightforward to identify and correct.
Humans must adapt to AI as well as AI to humans
Human interactions with machines are nevertheless complicated, because the machines consume historical data that is always backward-looking. Decision-makers must use proxies for components where vectoral data is not available and, even where it is, cannot always disentangle every variable.
As Luciano Floridi, the Oxford Internet Institute’s Professor of Philosophy and Ethics of Information, has pointed out, technology does not always adapt to humanity; humanity sometimes must adapt to technology – and AI is an instance of this.
Robot vacuum cleaners require their owners to rearrange their furniture. Go players contesting with machines have found they can win by mimicking the outrageous moves the machines themselves make.
The belief that existing processes must be re-designed is over-stated
An implication is that companies need to adapt to AI by changing how they work. Indeed, it has become an industry trope to argue that AI cannot be applied to a broken process, and that the technology demands an inefficient process be re-designed first. In practice, this proves not always to be the case. AI machines can take on relatively inefficient processes and deliver savings.
For example, if five people in accounts payable who cost the company US$500,000 a year can be replaced with robots costing US$30,000 a year, the savings are large enough for the company to be indifferent as to whether the process is optimal or not.
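The arithmetic of the accounts-payable example above is simple enough to sketch (the figures are as quoted; the break-even framing is an assumption):

```python
# Annual cost comparison from the example above.
staff_cost = 500_000  # five accounts-payable employees, US$ per year
bot_cost = 30_000     # replacement robots, US$ per year

savings = staff_cost - bot_cost
savings_rate = savings / staff_cost

print(f"US${savings:,} saved per year ({savings_rate:.0%})")
```

A 94 per cent saving on this line item is consistent with the “more than 90 per cent” figure reported earlier, which is why process optimality becomes a secondary concern.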
Automation can aid the reconfiguration of a process
Automating a process can actually improve corporate understanding of it. That is because it is possible to review the work of a bot systematically in a way that cannot be done by scanning the brains of human employees shuffling paper or re-keying data.
Indeed, so-called process mining technologies are now available. These monitor and record who accesses which pieces of data in large data sets, and when, and draw maps which highlight opportunities to automate business processes.
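A minimal form of this technique – building a “directly-follows” map from an access or event log – can be sketched as follows (the event log here is invented for illustration):

```python
from collections import Counter

# Hypothetical event log: (case id, activity) pairs, ordered by timestamp.
event_log = [
    ("c1", "receive invoice"), ("c1", "match PO"),
    ("c1", "approve"), ("c1", "pay"),
    ("c2", "receive invoice"), ("c2", "approve"), ("c2", "pay"),
]

# Count which activity directly follows which, within each case.
transitions = Counter()
last_activity = {}
for case, activity in event_log:
    if case in last_activity:
        transitions[(last_activity[case], activity)] += 1
    last_activity[case] = activity

# Frequent, repetitive transitions are candidates for automation;
# cases that skip steps (c2 skips "match PO") stand out in the map.
for (a, b), n in transitions.most_common():
    print(f"{a} -> {b}: {n}")
```

Commercial process-mining tools do this at scale, against millions of log entries, and render the result as a process map rather than a frequency table.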
That said, there are cases where investing in AI cannot rescue a fundamentally broken process, and it must be re-engineered before it can be automated.
Some tasks can be performed only by an AI
More importantly, there are use-cases – such as searching 2 million client records for evidence of money laundering – which only an AI can fulfil. In this way, AI expands the range and types of work that can be done.
Asset managers, for example, are using AI to gather information to feed the research that drives portfolio management decisions. It frees research analysts to focus on obtaining insights and making judgments rather than sourcing and mining data.
Companies are not always looking for large scale changes. A bank looking to make mortgage originations or loan approvals more efficient, for example, may prefer short term tactical fixes to a complete re-design of entire processes, even if those processes could be fully automated from the outset.
AI is being integrated into technology investment in familiar ways
Sometimes, tactical fixes are combined with large scale strategic transformation. RPA, for example, can provide rapid and low cost but temporary relief from process inefficiencies while the entire system is re-designed and replaced. In the meantime, the company collects the benefits.
Many companies have adopted “agile” methodologies for software development, and their developers are simply incorporating AI technologies into that work – by building virtual agents, for example, into the code they write. AI is being added, incrementally, to standard software products and to software as a service (SaaS).
Likewise, data scientists understand the need to collaborate with the leaders of in-house technology departments, because they alone understand legacy systems.
This can lead to tensions, because internal software development and IT operations (DevOps) teams are often understandably nervous of RPA and AI initiatives that threaten to do more work with fewer software engineers.
However, resistance by DevOps has diminished as RPA in particular has become more commonplace, and chief technology officers (CTOs) have come to see it as another tool they can use.
Strategic transformation requires more than piecemeal adoption of AI
The strategic adopters of AI review their existing workflows, from sales and marketing, through customer acquisition and customer on-boarding, to product delivery and customer service. They then ask how they can apply existing and potential future states of AI to each discrete package of work. This can be described as adopting an AI operating model.
But the truly revolutionary users of AI are the companies which imbue everything they do with AI, including new product development and acquisitions, and make it accessible to employees and customers. Uber, for example, looks to use AI and ML in everything it does.
That focused corporate personality is what empowered Uber to successfully disrupt the taxi industry, though the industry it disrupted made little use of technology (indeed, the London cab driver “knowledge” or memory test is the reductio ad absurdum of analogue technology).
AI tends to favour incumbents rather than disruptors
However, because AI-based data extraction and decision-making, and even natural language processing, are now commoditised – tools to do this work are readily available from vendors, and can be consumed via the Cloud or APIs – AI technology tends on the whole to strengthen the position of incumbents. They have resources to invest and to purchase any AI-based challenger that threatens.
Yet – and paradoxically, given its commoditisation – AI is also difficult to implement because an AI project requires skilled and experienced data scientists. They are in short supply, and therefore expensive. This further entrenches the position of the incumbents and weakens that of the challengers.
AutoML promises to change these economics, precisely because it reduces reliance on expensive data scientists. The goal of AutoML is not to dispense with data scientists but to minimise their use while making AI usable by so-called “citizen developers.”
The impact of AI on the labour market is likely to follow the historical pattern
If technology that requires no special skill to use – if using AI becomes, as it were, a skill on a par with driving a motor car – sounds reassuring for employment, AI is not always viewed in such a benign light.
While previous technological advances have dispossessed low status workers such as farmhands and unskilled factory workers, even professional roles such as medicine, accountancy and the law – hitherto protected by the high barriers to entry set by examination and other professional qualifications – are sometimes seen to be at risk.
Some see this as liberation from the Biblical condemnation of humanity to work (“By the sweat of your brow you will eat your food until you return to the ground, since from it you were taken; for dust you are and to dust you will return”), followed by a life of leisure funded by a universal basic income earned by high productivity machines. As Aristotle famously observed, if looms were to weave by themselves masters would not need slaves.
The history of technology, however, suggests that neither Dystopians nor Utopians are correct. New technologies can create bumpy transitions, and the nature of work changes, but the lump of labour fallacy remains fallacious: there is always plenty of work to do, for humans as well as machines.
A likelier outcome of AI is liberation from low value, low status, work such as reconciliations, and an increase in employment in higher value and customer-facing work and in entirely new products and services.
Artificial general intelligence (AGI) could happen sooner than anticipated
On this view, the fact that investment in AI machinery is commercially viable reflects a shortage of workers, not a surplus of them. And it is a more probable outcome, at least in the foreseeable future, because artificial general intelligence (AGI) is not yet extant.
However, current versions of AI do represent a way-station on the journey to AGI. Project Debater, the AI system built by IBM that is able to digest knowledge papers on complex topics and then debate them contextually with human beings, dates back to 2012. Deep Blue is 25 years old, and the success Watson enjoyed on Jeopardy! celebrates its tenth anniversary in 2021.
Clearly, as Ray Kurzweil never tires of pointing out, computing power grows exponentially rather than linearly, so an AGI may become available more quickly than anticipated, with (as Nick Bostrom has warned in Superintelligence) a sudden transition from one state to another.
The shorter-term impact of AI will be cumulative and networked
Nevertheless, the conservative stance is that the effects of AI on business will be cumulative, with machines taking on more and more activities – and decisions – previously monopolised by humans.
On this view, by the time AGI is available, AI will already be embedded in everything humans experience at work and at home.
It follows that AI, as it develops, will be networked rather than monolithic, with multiple versions interacting in complex ways. AGI requires sensors and actuators that are not yet deployed as widely as required.
Companies recognise that employees possess valuable tacit knowledge
Yet companies are already grappling with the challenge of how to integrate even existing forms of AI with their human workforce.
There is a growing recognition in business, for example, that long-serving workers may be digitally challenged but nevertheless possess a valuable tacit knowledge – of, say, how to settle a securities trade or borrow a stock – that is difficult for machines to capture.
Cost-cutting is nevertheless a major goal of AI investments
Ultimately, however, cost-cutting tends to be one of only two goals behind all AI investments (the other is improving the customer experience). And in terms of cutting costs, RPA is certainly delivering on its promise.
A major American bank has used RPA to cut 20 per cent of the 200,000 staff it employs, even in offshore locations. Another firm cut customer service query turnaround times at its call centre by 99 per cent, while also reducing the volume of calls received.
In other words, RPA has out-arbitraged even the global labour market arbitrage of offshoring work. This is why third-party providers of offshore services in India, for example, are among the earliest adopters of RPA. When it comes to repetitive, rules-based tasks, bots are a lot cheaper than full-time employees, even in offshore locations.
AI is not yet visible in the productivity statistics
Curiously, however, the returns enjoyed by companies are not visible in the productivity statistics. One possible explanation is that the gains in productivity (such as reductions in hours worked) are simply not being picked up in the economic statistics.
Another possibility is that the companies enjoying the most marked success with AI are not sharing their success, mainly for fear of arming competitors, but partly because the power of the technology has become too quotidian to warrant being mentioned. This may be reducing the overall impact on the wider economy, by making imitation of best practices difficult.
More curiously still, despite being a general purpose technology, computing has not yet had an economically transformative effect of a kind comparable with electricity, the internal combustion engine or clean water. This transformation may await AGI, which could be available within ten years.
Ethics, law, regulation and audit are setting governance standards for AI
Equally, compliance demands are antithetical to a totally black-box approach to decision-making. Contrary to popular opinion, “the computer says No” is not an adequate answer from a bank refusing a loan application. The bank has to show that any decision by an AI was not driven by invisible racial, gender-based or other politically unacceptable biases.
This is why companies are developing “governance” protocols for the algorithms used by AI to review job, loan, insurance and university applications, and publishing them, so their decision-making criteria are open.
In the United Kingdom, the Competition and Markets Authority (CMA) is currently investigating the use of AI to reduce competition by personalising prices, excluding competitors and facilitating collusion. Consequently, companies are under pressure to monitor and audit their algorithms. These developments all require humans to make judgments.
It would be easy for humans to spot an RPA that consistently rejected mortgage applications from an ethnic minority, for example, because it is a rules-based machine. An AI or ML making value-based decisions represents a trickier issue, but even it must be rigorously tested for hidden biases and have a high degree of explainability, to enable the operator to explain its decisions to a regulator.
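One simple, widely used bias screen – a hypothetical sketch, not a complete audit – compares approval rates across groups using the “four-fifths” rule of thumb (the decisions and the 0.8 threshold are illustrative assumptions, not a legal standard in every jurisdiction):

```python
# Disparate-impact screen on hypothetical lending decisions:
# (group, approved?) pairs. A minority-to-majority approval-rate
# ratio below 0.8 is a common flag for further investigation.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Approval rate per group.
rates = {}
for group in sorted({g for g, _ in decisions}):
    outcomes = [approved for g, approved in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

ratio = min(rates.values()) / max(rates.values())
flagged = ratio < 0.8  # worth investigating, not proof of discrimination

print(rates, round(ratio, 2), flagged)
```

A screen of this kind is only a starting point: explainability work is still needed to show a regulator why any individual decision came out the way it did.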
Questions to be addressed at the next AI and RPA discussion:
1. Can you show us worked examples of how companies have used AI to transform costs and/or customer experience?
2. Is “digital transformation” more than a meme?
3. Can AI transform the quality of robo-advice in wealth management?
4. How should AI algorithms be audited and regulated?
5. Does AI favour incumbents over disruptors?
6. Are rogue algorithms a company-specific (or systemic) risk?
7. Are network effects a factor in AI adoption?
8. Is data privacy a constraint on the adoption of AI?
9. Does AI reduce or enhance cyber-security?
10. Can AI solve the shortage of data scientists?
11. Are AI and ML doing useful work or merely automating the needless complexity created by a surfeit of law and regulation?
If you would like to participate as a panellist please contact Wendy Gallagher at email@example.com
If you would like to participate in the audience please let us know below or contact Wendy Gallagher on the email above
If you would like to participate as a sponsor please contact Valerie Bassigny on firstname.lastname@example.org