When any of the 1.2 million residents of north central London feel ill, they now have the option of consulting an artificial intelligence-powered chatbot. It performs the triage normally done by the humans staffing Britain’s National Health Service telephone hotline for urgent but non-life-threatening conditions.
Patients are able to describe their symptoms to the chatbot, a conversational
computer program that customers can interact with via a messaging interface
like SMS or Facebook Messenger. The chatbot might advise a patient to seek
a face-to-face consultation with a doctor or to go to an emergency room, but
the aim is the same as that of the telephone hotline — to avoid unnecessary doctors’ visits.
Ali Parsa, CEO and founder of London-based Babylon Health, the startup behind the chatbot, says that the “AI triage system reduces doctor consultations for Babylon patients by up to 50% because half of the cases do not need to see a doctor.” The company has raised $85 million from backers that include the founders of DeepMind Technologies, a British artificial intelligence company acquired by Google in 2014. AI-based services will play a growing role in automated diagnosis, support and advice in healthcare, says Parsa, a physicist and serial entrepreneur.
And in the not-too-distant future, once AI does the initial triage and an
appointment is booked, a human doctor will already have a very good idea
of what the diagnosis will be. The AI will write up the doctor’s notes, further
reducing the time spent with each patient. That’s important because two-thirds of
healthcare costs in developed countries go toward the salaries of human
doctors and nurses, says Parsa.
As AI advances and is implemented more widely, it will be used in the
preventative diagnosis of illness, further cutting costs by reducing dependency on expensive experts while allowing the handling of far more cases. The world spends over 10% of GDP on healthcare — over $6 trillion a year, according to World Bank and World Health Organization figures — much of it on medical diagnosis through expensive experts, whether they be general practitioners or more specialized doctors like radiologists. Artificial
intelligence’s partial automation of the health sector could not only save billions of dollars, say industry experts, but also make expert diagnostics
accessible to many people who currently cannot afford it.
Virtual Doctor Visits
So far the NHS is only testing Babylon Health’s triage service in one part
of London, but a quarter of a million British residents have signed up for
its broader remote monitoring service, either as individuals or through
their employers. After the AI system does the initial triage, patients can
make virtual appointments for video chats with human doctors 24 hours
a day, seven days a week. Subscribers pay £5 a month for unlimited access.
Those who don’t wish to subscribe pay a £25 pay-as-you-go fee.
The annual cost of primary care per patient for the United Kingdom’s
national health service is £140 — £40 of which goes toward a doctor’s
premises and pension and £100 of which goes to the doctor, says Parsa.
And patients can wait days, if not weeks, to secure an appointment.
For half of the cost, Babylon Health says it is able to offer private patients
the ability to make an appointment in seconds and see a doctor virtually
within minutes. “In the world we are short 5 million doctors and 50% of
the rest of the world has zero access to doctors,” says Parsa. “If technology
makes it possible to triage and treat patients at prices they can afford, what
doctor in the world would not want to do it?”
Babylon rolled out its service in Rwanda six months ago.
Some 450,000 people have registered for the service and some 100,000
have consulted doctors and nurses virtually, Parsa says. “We will do for healthcare what Google did for information — we will make it universally available,”
he says. Like Babylon Health, GYANT, a San Francisco-based startup
co-founded by the German serial entrepreneur Pascal Zuta, is using AI to check symptoms, diagnose a likely condition and determine whether people
should see a doctor. Most of GYANT’s users come from the developing world, where healthcare is not easy to access. Some 500,000 people used the service in the first few months of operation, says Zuta. Longer term, he says the idea is for GYANT not just to give patients in both developed and developing markets remote access to healthcare but to build tools for insurance companies to identify conditions early and prevent serious illnesses from developing.
A growing number of companies are working on AI imaging and diagnostics
with these goals. DeepMind Technologies is collaborating with the UK’s NHS, for example. London’s Moorfields Eye Hospital gave it access to one million images from historical eye scans, along with associated patient data, to help it develop AI for earlier diagnosis of sight loss. The experience DeepMind gains from analyzing these medical datasets is invaluable: the more data an AI system trains on, the better it performs. DeepMind’s work with other UK hospitals has caused controversy, however, because critics argue it is handling too much patient information, raising the question of how best to collect data that can advance healthcare while still protecting patients’ privacy.
AI imaging and diagnostics is attracting a growing number of startups, notes the research firm CB Insights, and 15 of the 24 startups it tracks in this category have raised funding since January 2015.
Israel’s Zebra Medical Vision is one of them. The serial entrepreneur Eyal
Gura, the company’s co-founder, explained the attraction in an interview. The technology makes it possible to “improve people’s lives and life expectancy and even more importantly to deliver healthcare to the next billion people who will join the middle class by 2020,” he says. “Philips and GE can produce more (medical imaging) devices but we can’t produce as many new doctors and radiologists at scale,” he adds. “It takes seven to ten years to train experts and we don’t have enough universities and teaching professors to service mankind as a whole with sufficient diagnostics.” For example, today only five in every 100 stroke patients are treated properly, leading to deaths and permanent disabilities. Viz, a San Francisco-based startup, is trying to solve the problem by using machine learning to rapidly identify key anomalies in brain scans that first responders or doctors might have a hard time recognizing, says co-founder Chris Mansi, a London-educated neurosurgeon.
Teaching AI to Diagnose Disease
Like Viz, Zebra Medical Vision, which has raised a total of $20 million in venture capital, uses AI to teach computers to diagnose conditions in medical imaging scans. It has already developed 11 specific algorithms — software that learns from particular datasets — and obtained a CE mark of approval in Europe in five medical imaging areas: fatty liver, bone density, emphysema, coronary artery disease and back disc compression. That said, Gura explains that it is not easy to collect all of the necessary data and produce the algorithms needed for AI to cover the conditions identified by radiologists. “It took us three years to get this data and only then could we begin to build the structure necessary to produce algorithms,” he says. Large hospital chains are starting to use the company’s technology, but due to regulatory constraints it could be years before hospitals or national health services can use it without review by a human medical expert, says Gura. Going forward, the health sector will have to figure out the ideal balance between human experts and AI. If it gets it right, say proponents, the sector’s bottom line will be healthier, and so will patients.