Finnair, an airline that dominates domestic and international air traffic in Finland, thought it could use AI to manage airport congestion.
AI alone was not up to the job, so Finland’s largest airline instead implemented a hybrid system that uses AI to make predictions about air traffic and allows the humans-in-the-loop to make better decisions, explains Tero Ojanpera, CEO of Silo.ai, a Finnish AI lab that specializes in bringing cutting-edge AI talent to corporations around the world.
Getting the Finnair project to that point was not a question of plug and play. It required a complex multi-step modeling process to help the organization become more AI literate.
Finnair’s experience neatly illustrates the current state of play: AI is not fully ready to make the kinds of decisions corporates expect of it, and even if it were, corporate teams and networks are not fully ready to implement AI and reap its full benefits.
The state of AI-decision making was the focus of an October 13 roundtable discussion moderated by The Innovator in partnership with DataSeries, a global network of data leaders led by venture capital firm OpenOcean. The discussion centered on what is holding back business from using AI, how corporates should approach AI projects in order to better leverage the technology and the methods being tested to improve AI’s decision-making powers. Some of these methods – such as the merger of rules-based and machine learning (ML) techniques, knowledge graphs and multi-modal neural sequencing – promise not just to help automate existing functions but to aid companies to strengthen and even re-imagine their businesses.
There are many variables at play in taking a successful AI product from infancy to launch.
“Corporates need to make sure their whole infrastructure is ready before trying to build something more intelligent on top,” says roundtable participant Ekaterina Almasque, a general partner at OpenOcean. “Unfortunately, in many enterprises there is still the question of how to deal with the data.”
She cited the example of the automotive industry, which is searching for new sources of revenue, such as using AI to leverage data collected from connected cars. The automakers don’t even have data centers that can collect and process the data in a way that makes it usable, she noted, creating an opportunity for startups to help them develop ways to close that gap.
Other corporates have been collecting lots of data for years. But when they start on AI projects they sometimes find that a few columns of crucial information are missing. “It doesn’t matter how big the data is or how long it has been collected, it is not necessarily the perfect data,” says roundtable participant Reza Khorshidi, one of the founders of, and currently a research leader at, the Deep Medicine Program at the University of Oxford’s Martin School, and also the Chief Scientist for global insurance company AIG.
Corporates don’t just need to ensure that they have the right data – and enough of it – whether it comes from different parts of the organization or from a variety of outside sources. The data also needs to be structured properly, which is no easy feat.
“If there is one thing that we could do to save hundreds of billions of dollars every year it is to start with standardization of data schema,” says roundtable participant Vishal Chatrath, CEO of Secondmind, a U.K. startup that helps corporates build a decision-making framework that combines the best of AI with human domain knowledge.
A lack of data compatibility makes it impossible for businesses to fully leverage AI insights and for supply chains to operate efficiently. Chatrath used the example of an online shop in the UK that sells branded sports t-shirts. There is no way for the shop to alert Nike or Adidas that the green t-shirt is out of stock and proactively order green dye, buttons and thread, because there is no universally recognized way to call a green t-shirt a green t-shirt. This type of problem was avoided when the mobile Internet was created because a lot of effort was put into standardization, resulting in the Global System for Mobile Communications (GSM). “We need the equivalent of a GSM for AI,” says Chatrath. “Someone has to take this bull by the horns and say ‘dammit you have to standardize data schema’.”
To get the best out of AI, corporations should count on spending about 80% of their time putting into place the digital foundations, says roundtable participant Simon Greenman, co-founder and a partner at Best Practice AI, a U.K.-based management consultancy that specializes in helping companies create competitive advantage with AI. “AI is actually the easy bit,” says Greenman. “The hard bit is making sure the organization really has the platforms and the technology in place to be able to do AI. These things are slowing down the adoption curve.”
Clearly the best way to create systems that are more intelligent is to find the right data or “generate the data you need in a synthetic way to build your models and train them,” says roundtable participant Jose Luiz Florez, an AI expert and founder of a number of startups, including Dive.ai. “But if your models are not good enough then you need to put humans-in-the-loop.”
Getting Ready To Launch
That’s exactly what Finnair ended up doing. The expectation was that AI could actually decide what actions to take if congestion was increasing but AI is currently not able to handle such multi-dimensional optimization problems, says roundtable participant Ojanpera, who previously worked as Nokia’s Chief Technology Officer. Silo, his current company, helped Finnair develop a model to more accurately predict 36 hours in advance how many planes would be delayed based on various factors, such as bad weather. That was just a start because it is only one piece in a complicated problem, he says. When companies deploy such solutions they need to factor in a way to ensure that the human decision makers understand how the model works so that they actually believe what the model says. Then, and only then, can they start to look at the next problem: the possibility of automating some of the decisions that follow when AI starts to understand the context and the situation better. “It is important to break down the problem into pieces and start by selecting the one that will produce the best output in the short term,” Ojanpera says. “That’s how organizations become more AI literate. They start to understand what AI can and can’t do, and I think that’s a good starting point.”
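The prediction-plus-triage pattern Ojanpera describes can be sketched in a few lines. This is a minimal illustration of the human-in-the-loop idea, not Finnair’s actual system: the flight features, scores and review threshold are all invented for the example.

```python
# Minimal sketch: an AI model predicts delay risk 36 hours ahead; anything
# above a threshold is routed to human controllers instead of being acted
# on automatically. All features and weights are illustrative assumptions.

def predict_delay_probability(flight: dict) -> float:
    """Stand-in for a trained model that scores delay risk in advance."""
    score = 0.1  # baseline risk
    if flight.get("storm_forecast"):
        score += 0.5
    if flight.get("inbound_aircraft_late"):
        score += 0.3
    return min(score, 1.0)

def triage(flights: list[dict], review_threshold: float = 0.4) -> dict:
    """Split flights into auto-handled vs. human-review buckets."""
    queue = {"auto_ok": [], "human_review": []}
    for flight in flights:
        p = predict_delay_probability(flight)
        bucket = "human_review" if p >= review_threshold else "auto_ok"
        queue[bucket].append((flight["id"], round(p, 2)))
    return queue

flights = [
    {"id": "AY101", "storm_forecast": True, "inbound_aircraft_late": False},
    {"id": "AY205", "storm_forecast": False, "inbound_aircraft_late": False},
]
queues = triage(flights)
```

The point of the pattern is that the model narrows attention; the decision itself stays with a person until the organization trusts the model enough to automate pieces of it.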
In addition to ensuring the necessary data is ready and the underlying technology is up to the job, corporates need to invest in AI talent, says AIG’s Khorshidi. “Don’t think by some magic trick that your company is going to go from pre-AI to AI-first without it,” he says. Once the team is assembled, a system needs to be put in place to properly test the AI, and some form of domain expertise is needed. There are a number of options for adding this expertise, including relying on employees’ insights.
“Usually we talk about data in the traditional sense but it is important to remember that many industries and businesses have been run by experts,” says Best Practice AI’s Greenman. “These human experts are a form of data – they have cognitive data that no company has managed to capture yet with tools. So, if companies are starting to collect more data they definitely should start looking more at their employees and use that cognitive data to help machine learning models to get better and better over time.”
A number of the roundtable participants argue that – for the time being – hybrid systems are the best, if not the only, real option for obtaining better AI decision-making. “There is a need for humans in-the-loop and I don’t see that going away anytime soon,” says roundtable participant Chatrath, Secondmind’s CEO.
The Cambridge-based startup has developed what it calls the Secondmind Decision Engine, a machine learning-powered software-as-a-service platform designed to aid decision-making across industries, including, it says, “those in which visibility is low, data is sparse, and uncertainty is high.”
Currently in limited release, Secondmind’s Decision Engine is already being used by Kuehne+Nagel, a sea logistics provider which coordinates the movement of nearly 13,000 shipping containers per day, and Brambles, an Australian company that specializes in the pooling of unit-load equipment, pallets, crates and containers. The companies are using the startup’s technology to make demand forecasting, planning and asset allocation decisions within their global supply chain operations.
Secondmind says its technology can offer – on average – a 35% improvement in efficiency by using a combination of Gaussian Process-based probabilistic modelling and decision-making machine learning libraries. The technology suite is adept at quantifying uncertainty, identifying operational trade-offs and explaining outcomes using sparse and low volume data, capabilities that meet business decision-making demands where other machine learning techniques like Deep Learning struggle, says Chatrath. Still, he says, humans with industry knowledge are crucial to success.
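Gaussian Process regression, the family of techniques Chatrath mentions, pairs every prediction with an uncertainty estimate, which is what makes it useful on sparse data. Below is a from-scratch sketch of the idea, not Secondmind’s actual libraries; the kernel, toy data and noise level are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and standard deviation of a GP at test points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    Kinv = np.linalg.inv(K)
    mean = Ks.T @ Kinv @ y_train
    cov = Kss - Ks.T @ Kinv @ Ks
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Sparse, low-volume "demand" observations (illustrative).
x = np.array([0.0, 1.0, 3.0])
y = np.array([2.0, 2.5, 1.0])
mean, std = gp_posterior(x, y, np.array([1.0, 2.0, 5.0]))
# std is near zero at the observed point x=1.0 and grows far from the data.
```

The uncertainty band is the business-relevant output: a decision-maker can treat a confident forecast differently from a speculative one, which is the sense in which such models “quantify uncertainty” where deep learning often cannot.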
That’s a viewpoint shared by roundtable participant Jinsook Han, Accenture’s Managing Director and Global Lead of Growth and Strategy, Applied Intelligence. “Having humans in the loop is extremely important regardless of whether you are thinking about AI as a means of augmenting, accelerating or assisting, and I would add a fourth ‘a’ for avoid [looking at the technology as a means of reducing risk],” says Han. When chatbots were first being introduced, clients came to Accenture and asked whether they could get rid of all 1,000 employees in their call centers, she says. “We told them that is not the way you want to go. Let’s focus on what is important for the customer experience. What if the call center rep had enough information from the AI when they picked up the phone to be able to resolve the problem on the spot and give the customer a better experience? There are times when AI can handle an issue and other times when it is preferable to have a human-in-the-loop. My mantra is that we should let humans do what humans do best and let machines do what machines do best. Clients are beginning to understand that this is a journey.”
While corporates try to get their own houses in order, data scientists are working on a number of ways to improve AI decision-making. Rules-based approaches, which use structured data, have been used to make intelligent business decisions since the 1980s. When you are dealing with multi-modal, messy data and more complex problems, it is generally agreed that machine learning is the better approach. But it is far from perfect. If new regulations come into play or the rules of the past no longer apply, ML, which has been trained on historical data, has no clue what to do – or even that the context has changed. “There is a lot of work being done now on how to correct these problems without losing the value derived from ML,” says Harley Davis, head of IBM France’s R&D Lab.
Merging Rules-Based Systems And Machine Learning
One way to solve this is to merge rules-based systems with ML. IBM launched a product in 2020 called Automation Decision Services that does just that. Combining the two approaches leads to better decision making, says Davis.
IBM’s Automation Decision Services product and its predecessor, IBM Operational Decision Manager, are now being employed in a variety of sectors, including financial services and aviation. For example, all of PayPal’s transactions now use a combination of ML fraud detection and explicit business rules to identify some very specific security concerns, says Davis. Mastercard is using it to do something similar – combining ML analytics with business rules developed with 800 member banks to detect fraud. In the U.S., Fannie Mae and Freddie Mac (federally-backed home mortgage companies created by the U.S. Congress) are using IBM’s technology to process over two-thirds of US mortgage applications, using ML on top of a symbolic representation of rules to make better decisions, he says. And airlines, such as Delta and United, are using it to figure out the best way to create upgrade offers. They are also using logic-based programming known as mathematical optimization to deal with complex logistical issues such as rescheduling. This approach – which has been around for decades in operations research – is now also being combined with AI-based predictions, in an IBM product called IBM Decision Optimization, says Davis, though he concedes that there is still often a need for humans-in-the-loop.
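The rules-plus-ML combination Davis describes can be sketched generically. This is not IBM’s product API; the rules, threshold and stand-in scoring model below are invented for illustration of the pattern: explicit, explainable rules fire first, and the ML score handles the messier residual cases.

```python
# Illustrative hybrid decision sketch: hard business rules first,
# then an ML score. All rules, codes and thresholds are hypothetical.

def ml_fraud_score(txn: dict) -> float:
    """Stand-in for a trained fraud model; a real system would call one."""
    return 0.9 if txn["amount"] > 5000 and txn["new_device"] else 0.1

BLOCK_RULES = [
    # (rule name, predicate) -- "XX" is a hypothetical sanctioned code.
    ("sanctioned_country", lambda t: t["country"] in {"XX"}),
    ("card_reported_stolen", lambda t: t["card_stolen"]),
]

def decide(txn: dict, ml_threshold: float = 0.8):
    """Return (action, reason). Rules give exact reasons; ML gives a score."""
    for name, rule in BLOCK_RULES:
        if rule(txn):
            return ("block", f"rule:{name}")
    score = ml_fraud_score(txn)
    if score >= ml_threshold:
        return ("review", f"ml_score:{score}")
    return ("approve", f"ml_score:{score}")
```

Because each decision carries its reason, the rule-triggered outcomes remain fully auditable even when the ML component is a black box.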
Knowledge graphs provide another way to start emulating the implicit functions of the human mind, combining them with the computing power of machines to represent meaning by putting data into context, similar to the way humans connect pieces of information to reach a conclusion. They are being used in Alexa and Siri voice assistant devices and in Google searches, and they are starting to be applied in different industries, such as pharmaceuticals, chemicals R&D and oil and gas, using an approach IBM calls cognitive discovery.
IBM Research says it has developed a scalable pipeline of technologies that can be leveraged to extract information from highly unstructured sources such as documents or scanned images. This is typically the form in which companies have stored their knowledge and experience; when scientists publish their papers and patents, for example, the results can be hard to process digitally. Cognitive discovery aims to automate the extraction of knowledge from these ‘dormant’ sources, combine it with other structured and semi-structured information and make it readily available via a user-friendly interface on top of a knowledge graph. This rich body of knowledge preserves the corporate wisdom and experience and makes it readily available for research and development in specific industries, says Stefan Mueck, IBM Germany’s Chief Technology Officer responsible for digital transformation.
This approach constitutes a new, accelerated and better way of doing R&D, he says. Leveraging the data pipeline for both internal and external sources (e.g. patents), researchers can start building a hypothesis based on a much bigger and broader input than any human could possibly read or hold in mind. To help even further, AI can be used to infer knowledge from what is already in the graph.
IBM cites the following example, based on real-world experience with a number of chemicals companies as well as its own material science research: say a chemicals company wants to create a new and innovative material or substance with certain properties. Given all the internal knowledge plus all the external information that is free or licensed for use, the machine would extract what it finds about ingredients, formulations and processing, as well as the related properties of the end product. IBM Research has developed deep learning models that support the researcher with predictions: given a formulation and a process, what would be the properties of the new substance? Or, given the properties, what would be the formulation? Similarly, for base chemical reactions there are models to predict synthesis or retro-synthesis. IBM has open-sourced the latter, which helps companies build extended capabilities on top of it. The decision intelligence can be driven one step further by integrating predictions with lab automation, one of the latest innovations. This cognitive discovery capability has also been made available to the public to accelerate research around treatments or vaccines for Covid-19.

Other industries are also using knowledge graphs to improve AI decision-making. The use of IBM’s cognitive discovery is under development with “an innovative player in the oil & gas sector,” says Mueck. It will be revealed at an industry conference called EAGE Digital in November, he says.
IBM believes knowledge graphs can offer business another big advantage. In some, but not all, cases, adding knowledge graphs to a combination of ML and rules-based systems can help companies explain why an AI made a particular decision, helping resolve serious social, legal and ethical concerns. “Machine learning represents a big black box. We don’t know why it gets the results it does, which introduces multiple social, ethical and legal problems,” says Davis. “By looking at the knowledge graph and the rules you can give an explanation for a decision like why a loan was turned down: it was because your credit history was bad and your revenue was insufficient and so forth,” he says.
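The loan example Davis gives can be sketched with a toy graph of facts: because the decision walks explicit relationships and rules rather than opaque model weights, every outcome carries its reasons. The triples, applicant and policy values below are invented for illustration.

```python
# A tiny illustrative "knowledge graph" stored as subject-predicate -> object
# facts. A production system would use a real graph store and ontology.
graph = {
    ("alice", "credit_history"): "poor",
    ("alice", "annual_revenue"): 18000,
    ("loan_policy", "min_revenue"): 25000,
}

def explain_loan_decision(applicant: str):
    """Decide on a loan and collect the graph-derived reasons."""
    reasons = []
    if graph[(applicant, "credit_history")] == "poor":
        reasons.append("credit history is poor")
    if graph[(applicant, "annual_revenue")] < graph[("loan_policy", "min_revenue")]:
        reasons.append("revenue below policy minimum")
    decision = "declined" if reasons else "approved"
    return decision, reasons

decision, reasons = explain_loan_decision("alice")
```

The explanation comes for free: the same facts that drive the decision are readable statements that can be shown to the applicant or a regulator.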
That said, there are still a number of tough problems to solve before more automated decision-making systems can be widely and safely used by business, says Davis. He cites the well-known case of Amazon having to redesign an AI system it developed for recruiting purposes after finding that it was biased against women candidates. Amazon removed names and gender from the applications, but other indicators such as hobbies or schools still led the AI to give better ratings to male candidates, because previous human decisions favored males and those biases were correlated with other data in the resumes. There are tools that can find those biases, for instance IBM OpenScale, but you have to know what to look for and run analysis on the ML training data; it is not an easy problem to solve.
New types of approaches – including the social sciences – may need to be introduced into ML models, says Davis. That’s where decision intelligence comes in. The term – which made it into Gartner’s 2020 hype cycle – refers to an emerging engineering discipline that augments data science with theory from social science, decision theory and managerial science to provide a framework for best practices in organizational decision-making and a process for applying machine learning at scale. Gartner has developed a Decision Intelligence Model to help business executives identify and accommodate uncertainty factors and evaluate the contributing decision-modeling techniques.
Transformer-based Sequence Models
While all of these approaches may help businesses move closer to AI-led decision-making, roundtable participant Khorshidi believes Transformer-based neural sequence models, which have shown tremendous advances in natural language processing, have the best odds of success. If these models can be tweaked to accommodate the multimodal nature of data, he believes “it will have the ability to go beyond language, beyond health, beyond finance and pretty much cover every real-world data generating process,” helping to transform business as we know it.
Khorshidi and his team at Oxford’s Deep Medicine Program have had success testing the application of Transformer-based models to sequence the multi-modal biomedical data found in electronic health records.
Electronic health records are sequences of mixed-type data such as diagnoses, medications, measurements, interventions and more that happen in irregular intervals and are routinely collected by health systems.
If Khorshidi and his team’s initial positive results, which were published in Nature magazine last April, can be replicated at scale and across a range of data scenarios, “this could mean a breakthrough in medicine and be the difference between pre-AI/AI-inside medicine and AI-first medicine,” he says. The breakthrough is tied to the ability to build complete electronic health records that include what health systems have been routinely collecting, as well as the social, economic, environmental and lifestyle data that have been shown to be important (and predictive) for health outcomes. A sequence model’s ability to learn the key patterns, relationships and dependencies underlying such complex sequences will enable health systems to anticipate things before they happen and intervene when needed. “This can enable better, cheaper, faster processes, and ultimately pave the way towards redesigning and re-imagining the system,” Khorshidi says. The same approach could also be applied to other sectors, such as finance and retail, he says.
“Dealing with sequential data that is mixed-type multimodal and happens in irregular intervals gives machines the ability to deal with any sort of data,” says Khorshidi. “In the real world it could be a customer’s data on Amazon, it could be a patient’s data in NHS or it could be a company’s data for asset management.”
“If not just medicine but other industry sectors want to move to a world of high-dimensional data in which the data is inputted in a messy way, there are not many solutions out there,” says Khorshidi. “You need to settle on some sort of feature engineering or settle for models that can deal with data as they arrive sequentially. And that’s why I think transformer-based architectures have got higher odds of success.”
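The kind of mixed-type, irregularly-timed event stream Khorshidi describes can be sketched as a tokenization step, the stage before any transformer sees the data. The events, codes and vocabulary below are invented for illustration; real EHR pipelines are far richer, but the core move is the same: heterogeneous events become token ids, and the irregular time gaps become an explicit input feature rather than being thrown away.

```python
# Illustrative encoding of a mixed-type medical event stream into the
# (token ids, time deltas) pair a sequence model could consume.
# All event codes are hypothetical examples, not a real coding system.

events = [
    {"t": 0,   "kind": "diagnosis",   "code": "E11"},        # a diagnosis code
    {"t": 14,  "kind": "medication",  "code": "metformin"},  # 14 days later
    {"t": 200, "kind": "measurement", "code": "hba1c_high"}, # 186 days later
]

vocab = {"<pad>": 0}  # shared vocabulary across all event types

def token_id(kind: str, code: str) -> int:
    """Map any (type, code) pair into one growing vocabulary."""
    key = f"{kind}:{code}"
    if key not in vocab:
        vocab[key] = len(vocab)
    return vocab[key]

def encode(stream):
    """Return (token_ids, time_deltas): irregular gaps become features."""
    ids, deltas, prev_t = [], [], 0
    for ev in stream:
        ids.append(token_id(ev["kind"], ev["code"]))
        deltas.append(ev["t"] - prev_t)
        prev_t = ev["t"]
    return ids, deltas

ids, deltas = encode(events)
```

A transformer would then embed the token ids and the time deltas jointly, which is one way such models can be "tweaked to accommodate the multimodal nature of data."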
Regardless of which new method businesses use to improve AI decision making they should change the way they think about AI, he says.
“The true north for AI is transformation opportunities and reimagination opportunities,” Khorshidi noted at the end of the roundtable. “Automation is the lowest hanging fruit. We should use AI to reimagine the power of the existing base of employees and strengthen businesses by doing things differently.”
This article is being made available to readers of both The Innovator, an independent global publication about digital transformation, and DataSeries, a global network of data leaders led by venture capital firm OpenOcean, under a partnership agreement.