The Future of Digital Health with Dr Jack Kreindler

Jack Kreindler is an expert physician, physiologist and digital health entrepreneur. We co-hosted a fascinating health roundtable with him in London earlier this year, together with leading health entrepreneurs from across Europe and the US. This podcast covers many of the topics from the roundtable as well as his own personal thoughts on the opportunities and challenges in digital health, especially in machine learning and big data.

You can listen to the podcast in full below or read on to enjoy Jack’s commentary on the healthcare space and how he thinks the healthcare world will look in 10 years.

How Jack got started in tech and healthcare

My first job before qualifying as a doctor was working with Douglas Adams, author of The Hitchhiker’s Guide to the Galaxy. I designed the first graphical interface for the original “Wikipedia”. I built my first startup, Vlife, immediately after qualifying as a doctor, and we created a product that would help employers understand their employees’ productivity, absence due to sickness, and medical claims. It was simple health analytics. We ultimately sold the company to Cigna, a large U.S. health insurer.

Sentrian, the remote monitoring analytics startup co-founded by Jack, was inspired by Singularity University

Singularity’s mantra is that your ideas need to positively impact a billion lives or more over the next 10 years. This led us to think about wearable biosensor technology and the application of cheap cloud computing and machine learning to help doctors make better, earlier decisions. We talked to the healthcare payers to understand what caused them the biggest pain. We decided to found Sentrian to take biosensor data from very sick people at home who come into hospital very frequently. We used sophisticated machine learning methods to detect early which of these people were likely to come back into hospital within 5–7 days. If you can act early and cheaply at home, you can keep those people healthier and save a lot of money.
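To make the idea concrete, here is a minimal toy sketch of that kind of early-warning classification: a logistic score over home biosensor readings, flagging patients for intervention. Everything here is hypothetical for illustration; the feature names, weights and threshold are invented and are not Sentrian's actual model.

```python
import math

# Hypothetical daily home-biosensor features, expressed as deviations from the
# patient's own baseline. Weights are illustrative only, not Sentrian's model.
WEIGHTS = {
    "resting_heart_rate": 0.04,  # beats/min above baseline
    "weight_gain_kg": 0.9,       # rapid fluid retention is a classic red flag
    "spo2_drop_pct": 0.5,        # oxygen saturation points below baseline
}
BIAS = -3.0

def readmission_risk(reading: dict) -> float:
    """Logistic score: probability-like risk of readmission within 5-7 days."""
    z = BIAS + sum(WEIGHTS[k] * reading[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def needs_early_intervention(reading: dict, threshold: float = 0.5) -> bool:
    """Flag the patient for a cheap at-home intervention before they deteriorate."""
    return readmission_risk(reading) >= threshold

stable = {"resting_heart_rate": 5, "weight_gain_kg": 0.2, "spo2_drop_pct": 1}
deteriorating = {"resting_heart_rate": 25, "weight_gain_kg": 2.5, "spo2_drop_pct": 4}
```

In practice a real system would learn the weights from labelled readmission outcomes rather than hand-setting them; the point of the sketch is only the shape of the decision, from continuous home signals to an early, actionable flag.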

Why medical imaging is so fascinating

I can stick you inside a machine and see inside your body without cutting you open. This is incredibly cool. But as medical professionals we are pretty bad, when we see a ghastly lump in your lung, prostate or liver, at knowing whether to stick a dangerous, expensive, painful (and even deadly) needle into it to find out if it’s serious, or just to leave it alone. If you are a smoker and a doctor sees a nasty lump in your lung, the doctor will stick a needle in it 97% of the time, when he only needs to do that 25% of the time. We have fantastic newer methods of analytics like deep learning that can map many years of historical imaging data to real outcomes and histology results, making phenomenal improvements in clinical diagnosis possible.
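Taking the quoted rates at face value, the gap is easy to quantify for a cohort of 1,000 smokers with a suspicious lung nodule:

```python
patients = 1000
biopsied = int(patients * 0.97)  # current practice: 97% get a needle
needed = int(patients * 0.25)    # only 25% actually need one
unnecessary = biopsied - needed
print(unnecessary)  # 720 avoidable invasive procedures per 1,000 patients
```

That is 720 dangerous, expensive procedures per 1,000 patients that better image analytics could in principle help avoid.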

Large sets of training data available for machine learning

Huge longitudinal data sets are beginning to become available from national trials on tens of thousands of individuals. At UCL, there is a big prostate cancer data set with imaging, biopsies, genomics and blood markers. Some of the coolest data sets are visually tangible — I can even show them to my Mum to help her understand why we are making a difference. One of the most interesting companies to come along is Zebra, which is taking 9 million scans (MRIs, CTs, X-rays) in Israel. Many organisations are protecting their data, feeling that it is valuable in and of itself, but the reality is that it is only useful if it can be connected to outcomes in the real world. I am a believer in almost giving away the data, subject to privacy and confidentiality, to be used by many different groups to innovate around.

Google DeepMind’s partnership with the NHS is taking on a big challenge

Human physiology is dirtier than jet engines. Since we don’t collect everything about a human being, sometimes things that look stable are being stabilised by something we are not measuring, and other things fluctuate because they are supposed to, rather than because something is going wrong. There are more signal-to-noise issues around humans than in financial markets or mechanical engineering, so we need to be more cautious about un-contextualised data that is coming our way. The key is to add more context to the data we are collecting. We need more than just the lab data. We should try to capture the more nuanced information that the patient and caregivers know about what is going on; environmental factors such as weather, which might include benzene levels, changes in humidity and pollution; and what clinicians or GPs know instinctively would be affecting the results, ideally in a classifiable way (not natural language). We can then move from population-scale big data and analytics to a personalised understanding of a single individual.
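One way to picture "adding context" is a record that joins a raw lab value with environmental and clinician-supplied fields before any analytics run. The field names and the classifiable flag below are invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass, asdict

@dataclass
class LabReading:
    patient_id: str
    marker: str      # e.g. "peak_flow" for an asthma patient
    value: float

@dataclass
class Context:
    humidity_pct: float
    pollution_index: float  # e.g. local benzene / particulate levels
    clinician_flag: str     # classifiable code, not free-text notes

def contextualise(reading: LabReading, ctx: Context) -> dict:
    """Merge a raw lab value with its environmental and clinical context."""
    record = asdict(reading)
    record.update(asdict(ctx))
    return record

row = contextualise(
    LabReading("p001", "peak_flow", 410.0),
    Context(humidity_pct=78.0, pollution_index=61.0, clinician_flag="SEASONAL_ASTHMA"),
)
```

The point is that the downstream model sees one joined record, so a peak-flow dip can be interpreted against the humidity and pollution that day, and against what the clinician already suspects, rather than as an uncontextualised number.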

The best machine learning applications will provide clinical decision support tools rather than decisions

There are barriers to adoption. First is the professional behavioural change that we need to see. Millennials are more likely to accept that data-driven medicine, interpreted by machine intelligence, is probably better than most humans combined when it comes to complicated things. (That does not mean that machines are as good at communicating or being empathic.) We want clinicians to become part of the initial machine learning process. Just telling clinicians that they should use the output of a machine learning application is not going to work. They will want to know where the data came from, whether it has patient permission, and a number of other questions. At DeepMind, they are giving smartphones to clinicians to input into the system, and the output is an enhanced version of what clinicians themselves have put in. In one of my projects, the rules are not written in an unsupervised way — humans write the rules in natural language that the machine can interpret, and the recommendations are read back in the same way the rules are written, even with the black box in between.

This is also a way to overcome a second barrier, which is regulation. Having the machine use the same natural language as the clinicians is a way of bridging the gap between what regulators allow and don’t allow. Building black-box algorithms that don’t explain why certain rules have been recommended, or that just say “do this” without giving time to contemplate, are big red flags for regulators. Otherwise, you need to do a clinical trial, and every time you make a change or improvement, you need to do another trial, so your cycles take years. Regulators are friendly, and startups should reach out to them and collaborate with them.
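A minimal sketch of that "rules in, same language out" idea: clinicians author rules in a constrained natural language, the machine parses and applies them, and each recommendation is read back in the rule's own wording. The rule syntax, thresholds and field names here are entirely hypothetical, not the actual system from Jack's project:

```python
import re

# Clinician-authored rules in a constrained natural language (hypothetical syntax).
RULES = [
    "if spo2 below 92 then recommend urgent review",
    "if weight_gain above 2 then recommend diuretic check",
]

def parse(rule: str):
    """Split one constrained-language rule into (field, comparator, threshold, action)."""
    m = re.fullmatch(r"if (\w+) (below|above) ([\d.]+) then recommend (.+)", rule)
    field, op, num, action = m.groups()
    return field, op, float(num), action

def recommend(obs: dict) -> list:
    out = []
    for rule in RULES:
        field, op, num, action = parse(rule)
        fired = obs[field] < num if op == "below" else obs[field] > num
        if fired:
            # Read the recommendation back in the same words the rule was written in.
            out.append(f"recommend {action} (because {field} is {op} {num:g})")
    return out

advice = recommend({"spo2": 90, "weight_gain": 1.0})
```

Because every output sentence is traceable to a human-written rule in the same vocabulary, a clinician (or a regulator) can see exactly why a recommendation fired, which is the opposite of an unexplained "do this".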

On startups that are operating outside the regulatory sphere

Even if it’s not a device — if it’s a wellness app or symptoms tracker — you want it recommended by your GP. For nutrition apps, you would want recognised bodies to recommend your app. You are dealing with people’s wellness. All startups should aim to work to high standards even if they fall outside the regulator’s purview. Also, startups must focus on patients’ best interests, making customers comfortable that their data is going to be handled properly and that it will be used for specific purposes and in ways that patients understand. It should also be acceptable to GPs and other clinicians.

Thoughts on quantified self, patient monitoring and condition management

All of the hardware in this domain is being commoditised. There is constant innovation: it is getting cheaper and cheaper to get better data about your body. Individuals are good at knowing what is affecting their health. What gets me excited are applications that help join the dots between data that individuals are seeing now and what happened to them in an earlier period of time — in other words, delivering real context with the data. These applications include things like Headspace and Big Health, and apps that help with sleep, stress, weight, glucose and diabetes. On the medical end of the spectrum, companies like Kardio are able to detect heart issues early, and Clue can help people make significant changes around pregnancy without clinicians. There are major changes underway which help us steer behaviour towards better outcomes, but it’s not clear how much they are chewing into the immediate trillion-dollar problems in complex chronic disease.

The biggest economic value is in addressing long term conditions not helping people sleep better. What will it take to tackle those?

It will be a blended approach. Even super-healthy people who run triathlons will eventually end up with congestive heart failure, COPD, chronic asthma or diabetes, all of which come with natural ageing. The challenge is that payers are fundamentally more interested in how much a patient will cost them over the next 1–3 years. Yet younger people whose individual health metrics are deteriorating over time need help to address the downward slope. Payers are less interested in them, even though they will be very expensive to the system in 10–20 years.

What do you want the world to look like in 10 years?

One where people who have had far less training in medicine are empowered and authorised to make changes in their treatment or care. A world of networked apps fed with data by visible and invisible wearable devices, supported by a care network of family and friends, all integrated into the billing systems of payers. A throwback to village life in the 18th century, empowered by technology.