We hear and read a lot about AI and machine learning (ML) these days, but outside a few core applications, such as autonomous vehicles or decision-support systems, how much use is it really seeing? How well is it doing in industry, for example? After all, the huge volumes of data generated within the modern factory, or by connected products and devices, would seem like a natural fit for ML’s data-sifting capabilities.
Yet in a recent study of AI plans and perceptions carried out by Freeform Dynamics, just over half of the IT professionals surveyed said their organisations were not currently prioritising AI investments (Figure 1). The reasons most commonly cited for this were excess hype and concerns over AI’s relevance and readiness.
The perceptual part of the problem is made worse by the tendency of some marketing people to indulge in ‘AI-washing’. They label almost anything automated as AI or AI-enabled, even when it simply has a few rules embedded in it – or when the ‘AI’ in question is plain ML, itself just one component of what might one day turn into artificial intelligence.
Indeed, when most people refer to AI they are actually referring to ML, or perhaps to deep learning (DL), an ML subset that’s especially fashionable at present. Sometimes they excuse the conflation by labelling the thinking machines of the future as “general AI” and ML as “narrow AI”, although in some ways that’s just a differently coloured coat of AI-wash.
This is gradually changing, with the term AI becoming more acceptable in certain usages. Examples include virtual assistants and intelligent agents, especially where the AI is also able to act autonomously – remedying an insecure network port or S3 bucket, say. The key factor seems to be that these assistants combine multiple AI subsets, such as ML plus natural language processing and automated planning.
Making ML measurable
In any case, there is more to AI resistance than just an allergy to hype. Our study also revealed concerns around how to measure return on investment, how to scope, specify and cost the AI platform, and of course the availability of the necessary skills. In addition, AI/ML is very much a multi-disciplinary matter, with dependencies that go way beyond IT. It’s essential therefore to also have business users, operational staff and other engineering disciplines involved right from the very start, from evaluation, budgeting and planning, through to implementation and ongoing operations.
Yet most of our survey respondents said it was a challenge to get all the relevant stakeholders and disciplines or departments working together. Just 11% of those with experience of AI initiatives said they had the full involvement of those required, while 37% said their stakeholders were “not at all” working together effectively.
The one area in our study where it flipped around to AI acceptance – to some degree, at least – was manufacturing, where a slim majority of respondents (53%) said they were already prioritising AI investment. If you include those who agreed that they ought to be prioritising AI investment, as well as those already doing so, the total rises from 60% across all sectors to 72% in manufacturing.
One of the biggest areas for AI/ML today is intelligent automation. This is often presented as a necessity: think of network security, or predictive maintenance in a factory, for instance. There is simply too much data now, and too many alerts coming in, for a human to process it all. In applications such as these, ML can not only act as a filter but can also take action, provided that action would have been automatic on the human’s part anyway. One thing our research into AI has shown, though, is that this is only part of the story. For a start, the technology we want (or need) to automate must actually be capable of automation. It is something of a self-reinforcing loop – applying technology to a process also makes it automatable, and as the process’s speed and complexity ramp up, that automation becomes all but essential (Figure 2).
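To make the “filter, then act” pattern concrete, here is a minimal sketch in Python. Everything in it – the function names, the toy scoring rule and the thresholds – is hypothetical, standing in for whatever trained model and remediation hooks a real system would use.

```python
# Illustrative sketch of ML as both filter and actor for a flood of alerts.
# All names, scores and thresholds are hypothetical, not from the study.

def score_alert(alert: dict) -> float:
    """Stand-in for a trained model's risk score (0 = benign, 1 = critical)."""
    # A real system would call an ML model here; we use a toy rule instead.
    return 0.9 if alert.get("port_open_to_internet") else 0.1

def triage(alerts: list, act_threshold: float = 0.8) -> dict:
    """Filter the alert stream; auto-remediate only the clear-cut cases."""
    actioned, escalated, dropped = [], [], []
    for alert in alerts:
        score = score_alert(alert)
        if score >= act_threshold:
            actioned.append(alert)   # clear-cut: e.g. close the port automatically
        elif score >= 0.5:
            escalated.append(alert)  # ambiguous: hand to a human analyst
        else:
            dropped.append(alert)    # noise: filtered out
    return {"actioned": actioned, "escalated": escalated, "dropped": dropped}

result = triage([
    {"id": 1, "port_open_to_internet": True},
    {"id": 2, "port_open_to_internet": False},
])
```

The key design point is the middle band: the automation only acts where a human’s decision would have been automatic too, and everything ambiguous is escalated rather than handled silently.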
AI as an agent of change
And then there is automation and ML not just as ways to keep up with an ever-faster world, but as creators of opportunity. By automating the routine, we should – in theory, at least – free up human time and mental resources for new tasks. Certainly, past industrial revolutions subsequently drove innovations and changes in culture and society that went far beyond their initial industrial impact.
It remains to be seen if intelligent automation will indeed amount to a new industrial revolution, as some believe it will. Even if it does, will it be as disruptive and all-embracing as the changes brought by steam power and mass production? For instance, the shopper of today is used to clothing and fashion being largely disposable. Yet in pre-industrial times, the effort required to grow fibres on a sheep or plant, then manually spin, weave, sew and dye them, meant that your clothing could be the most valuable thing you owned.
It is quite possible that we are in the position of our medieval ancestors, trying to imagine a world in which a garment, far from being something that you leave in your will, costs the same as a pint of beer – and can be discarded as readily. That means we need to plan for a future that we cannot see – not just for the things that we know we need more information on (the “known unknowns”), but for the “unknown unknowns”.
For example, we know that while previous industrial revolutions destroyed some jobs, they created others. But can we assume that will always be true, and even if it is, will the new jobs be of equal value, status and skill – will master weavers once again be replaced by mere machine-minders?
Taking advantage of AI today
Whatever changes may follow in the future, our survey respondents clearly see opportunities in using AI today, to help both staff and customers, and to improve operational efficiency (Figure 3).
Yes, there is the problem of simply knowing how and where to start, and then what to expect. And yes, there is the challenge of avoiding AI bias, which can creep in through very simple routes – for example, by training on data that reflects the current state of affairs when the status quo is itself biased.
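That last point is easy to see with a toy example. The sketch below uses entirely made-up data and a deliberately crude “model” that just memorises the majority outcome per group – yet it faithfully reproduces the historical skew without any explicit rule about the group.

```python
# Toy illustration (hypothetical data) of bias entering via training data.
# If past decisions skewed against one group, a model learning from those
# labels replays the skew, with no group-based rule anywhere in the code.
from collections import Counter

# Hypothetical historical records: (group, was_approved)
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 40 + [("B", False)] * 60)

def fit_majority_by_group(records):
    """'Train' by memorising the majority historical outcome per group."""
    outcomes = {}
    for group, approved in records:
        outcomes.setdefault(group, Counter())[approved] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in outcomes.items()}

model = fit_majority_by_group(history)
# The learned 'policy' simply mirrors the historical imbalance:
# group A is approved, group B is refused.
```

A real ML model is far more sophisticated than a per-group majority vote, but the failure mode is the same: the training labels are the ground truth it optimises towards, biased or not.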
AI tools and technologies are increasingly available in packaged forms, however, and it’s useful to remember that they are just tools and technologies. For any technology to win adoption it has to be trustworthy: people don’t need to understand how it works, but it needs to be explainable, fair and accessible. AI is no different.
The one caveat is to start thinking now about strategy and architecture – if you’re not already doing so. Something we’ve seen in our research time and again is that although the barrier to acceptance is high, once a new technology has proved itself in one application it tends to find others pretty fast. So unless you get reusable processes and platforms in place first, you could end up with a disjointed landscape of incompatible silos and stacks. We’ve already had VM-sprawl and cloud-sprawl – let’s not have ML-sprawl next!