March '23: The 10x doctor

We take a look at how LLMs are transforming healthcare operations, and share our thoughts on SVB.


Hello and welcome to the March newsletter.

This month, we take a look at how large language models, and AI more broadly, are starting to transform healthcare operations and where we see the opportunities.

We also wanted to share our thoughts on the banking crisis – read them here. We ask if the collapse of SVB is just a black swan or the first domino of wider disruption, and discuss what it means for the tech ecosystem.

And, if you missed it earlier this month, we've released a detailed report of our research into the optimal venture portfolio construction strategy. We explain how we modelled and ran the simulations, go step by step through the various factors that affect portfolio performance, and analyse the results.

Mattias and the Moonfire team.

🌓🔥


What's Up At Moonfire?

LLMs in healthcare: The 10x doctor and value-based care at scale

Credit: National Cancer Institute

GPT-4 and Bard won’t replace your doctor any time soon, but large language models (LLMs) will transform how healthcare operates.

Healthcare operations are long overdue a technological upgrade. The last big innovation was electronic health records (EHRs), rolled out in the mid-2000s. But without the data infrastructure to connect them up to other systems, they have remained as siloed as their paper counterparts. And many doctors cite EHRs as a contributor to burnout.

How will LLMs, and AI more broadly, help?

By making data more computable. LLMs open up a world of opportunity, for both providers and patients. They can make healthcare workflows more efficient, enable a shift to more value-based healthcare, and put patients at the heart of the experience.

The rise of clinical LLMs

In the last couple of years, there has been an explosion of medical LLMs. There are already over 80 clinical foundation models, which have been trained on various healthcare data sets, like EHRs, doctors’ notes, and insurance claims.

In 2021, University of Florida Health collaborated with NVIDIA to develop GatorTron, the world’s largest clinical language model, built to process and interpret EHRs. Google is developing Med-PaLM, designed to answer medical questions from both medical professionals and consumers. And just last month, Doximity, a professional social network for clinicians, launched DocsGPT, which can generate clinical correspondence, procedure notes, referrals, patient education materials, and more – supported by a library of ready-to-use medical prompts.

As well as these Clinical Language Models (CLaMs), which are trained on clinical/biomedical text and output text in response to a user’s input, there are also Foundation models for Electronic Medical Records (FEMRs). These models are trained on a patient’s entire history, and rather than outputting text, they generate a vector representation of the patient that can then be used in downstream models for predicting things like readmission risk.
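The workflow described above – a pretrained model turning a patient’s history into a vector, which then feeds a lightweight downstream predictor – can be sketched in a few lines. Everything here is illustrative: `encode_patient` is a hypothetical stand-in for a real FEMR encoder, and the data and labels are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in for a FEMR encoder: in practice this would be a
# pretrained foundation model mapping a patient's full record to a vector.
def encode_patient(record_id: int, dim: int = 64) -> np.ndarray:
    return rng.normal(size=dim)

# Embed a cohort of patients and attach (synthetic) readmission labels.
X = np.stack([encode_patient(i) for i in range(200)])
y = rng.integers(0, 2, size=200)

# The embedding becomes the feature vector for a small downstream model --
# here, logistic regression predicting readmission risk.
clf = LogisticRegression(max_iter=1000).fit(X, y)
risk = clf.predict_proba(X[:1])[0, 1]  # predicted readmission probability
```

The appeal of this split is that the expensive part (pretraining the encoder) is done once, while many cheap task-specific heads can be trained on top of it.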

And some of these early models are starting to be deployed. Earlier this month, the University of Kansas Health System announced a partnership with Abridge to roll out generative AI to 1,500 doctors across 140 locations, to help summarise provider-patient conversations and generate clinical documentation in real time.

Of course, most of these models are currently inferior to clinicians. Take Med-PaLM. It pales in comparison to human clinicians, particularly when it comes to incorrect retrieval of information (16.9% for Med-PaLM vs 3.6% for human clinicians), evidence of incorrect reasoning (10.1% vs 2.1%) and inappropriate or incorrect content of responses (18.7% vs 1.4%). The latest iteration, Med-PaLM 2, surpasses its predecessor, consistently performing at an “expert” doctor level on medical exam questions. But it’s still a long way from frontline use.

These models, however, are just the start. And a lot of their promise lies not in replacing frontline care, but in augmenting clinicians’ work, and helping to automate healthcare workflows and optimise back-end operations.

Where do we see the opportunities?

The starting point for a lot of innovation and improvement is for LLMs and AI to make healthcare data more computable.

Even with EHRs in place, patient information is a mess of unstructured text (often with typos and abbreviations), images, demographic data, test results and a lot more besides. This is one of the reasons healthcare has yet to benefit from the modern data stack: the data is too hard to extract, transform and load. LLMs, however, could do a better job at handling this data, particularly as multi-modal models start to develop.
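To make that concrete, the “extract and transform” step might look like the sketch below: define a structured target schema, then ask a model to map free-text notes into it. The field names are illustrative, not from any real standard, and `llm_extract` is a placeholder – a real pipeline would prompt an LLM to emit JSON and validate it against the schema.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative target schema for one slice of an unstructured clinical note.
@dataclass
class Encounter:
    chief_complaint: str
    medications: list
    follow_up_days: int

def llm_extract(note: str) -> Encounter:
    """Placeholder for an LLM call mapping free text to the schema.
    A real pipeline would prompt a model to emit JSON and validate it;
    here we return a canned response to keep the sketch runnable."""
    canned = {"chief_complaint": "chest pain",
              "medications": ["aspirin 81mg"],
              "follow_up_days": 14}
    return Encounter(**canned)

note = "Pt c/o chest pain x2d. Started ASA 81. F/u 2 wks."
row = asdict(llm_extract(note))  # now loadable into a warehouse table
print(json.dumps(row))
```

Once notes land in rows like this, the rest of the modern data stack – warehousing, analytics, monitoring – can finally be applied to them.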

The 10x doctor

For doctors, AI will transform the provider tech stack, making clinicians more efficient and reducing burnout.

Think about it in the context of triage, which represents around 5% of healthcare delivery spend. By combining a chatbot and robotic process automation, you could at least partly automate triaging, making patient prioritisation and matching more efficient. Tools like Curai Health and Decoded Health are providing fast, virtual touchpoints for patients, speeding up consultation and diagnosis, and reducing the need for in-person visits.

And companies like our portfolio company Awell are digitising and automating care programmes, allowing providers to build customised workflows for managing complex care pathways. From automated patient check-ins to the collection of patient-reported outcomes post-consultation, these tools can enhance the quality of care while minimising the workload.

LLMs can also serve as co-pilots. Nabla, for instance, is working on a GPT-3-powered digital assistant for doctors that transcribes and repurposes information from video (and eventually in-person) conversations. Robin and Suki are building in the same space, using AI assistants to streamline the creation of clinical documents, freeing up doctors to spend more time talking to their patients, not taking notes.

We’ve had the 10x engineer – what about the 10x doctor?

There’s also the data piece. Companies like PatientIQ and Briya are working towards connecting and analysing fragmented medical data sources, giving providers real-time access to patient-level data. This empowers healthcare professionals to monitor patient progress more closely, predict outcomes more accurately and reduce time to insight and action. This data foundation will be vital for deploying value-based care at scale.

Value-based, patient-first care

For patients, it means distributed, accessible healthcare. With the development of more effective self-care applications, patients can benefit from constant monitoring of chronic health conditions (rather than checkups every few months), personalised mental health treatment, and tailored educational material. You get products that are both more effective from a healthcare perspective and that get the growth loops and retention of consumer digital products.

Buoy and Lucina are prime examples of this trend. Buoy acts like a sort of matchmaker, guiding people to the best care through AI-assisted self-assessment, while Lucina uses maternity-specific algorithms to identify women who are at risk of preterm birth and connects them up with the right care providers.

This enables a paradigm shift in the way we think about healthcare, moving towards a value-based model that prioritises clinical outcomes over inputs. With more computable and interoperable data, providers can work with patients to determine a treatment plan, then measure the relevant clinical results over the course of a patient’s treatment, being paid based on the patient's health outcomes.

What’s next?

Given healthcare’s difficult relationship with technology, adoption of LLMs will be slow.

For one, while we have a good idea of what these models can do, we have less of a sense of what these models can do that’s actually valuable to existing healthcare systems. We need better evaluation frameworks to better demonstrate the clinical value of these models, and help healthcare providers assess which models are worth investing in.

And cost is its own issue. Creating and maintaining these large models is expensive, in both regulatory compliance and compute spend. Though they may be more generally useful and have more downstream applications, the payback period may be significantly longer than for a smaller model developed specifically for a single high-value task.

The biggest risk, however, is the industry itself. The current lack of integration between LLMs and digital health infrastructure (like EHRs) will initially limit their scalable use in clinical practice. The problem is compounded by misaligned incentives – particularly in Europe, where neither providers nor payers are rewarded for adopting these new tools – and by doctors’ traditional apathy towards new technology.

But attitudes are changing. 74% of healthcare leaders now trust AI to support non-clinical workflows, according to Optum’s latest survey on AI in healthcare. And with the EU’s plan to digitise all medical records by 2025, making it easier for individuals to access and share their personal data across borders, perhaps healthcare is on the cusp of its open banking moment.

These are early days for LLMs, and their application in the healthcare industry presents unique, hard challenges. But they are poised to revolutionise healthcare ops, improving outcomes for patients and providers alike. It’s an exciting time to be involved.

– Akshat 🌓🔥


Podcast of the Month

Acquired: 'Nintendo'

You’re probably familiar with the Acquired podcast, where hosts Ben Gilbert and David Rosenthal tell the stories of companies and analyse their playbooks. But if you’re not, this episode on Nintendo is a great place to start.

It tells the early story of Nintendo as it goes from being founded by the former CEO of a cement company, to playing card manufacturer (fulfilling the needs of yakuza-run gambling houses), to family-friendly toy company, to global multi-billion-dollar gaming company with some of the most loved and recognisable IP in the world.

We learn how Mario was born after Nintendo failed to get the rights to Popeye, how the character Kirby got its name via a lawsuit from Universal Studios, and how the early Nintendo hand-held technology was built on calculator chips.

It’s a great story, and this episode is only the first half of its 130-year-long journey. Look out for the next one.


Good Read of the Month

'Scaling People: Tactics for Management and Company Building' by Claire Hughes Johnson

Hot off the Stripe Press, Stripe advisor (and former COO) Claire Hughes Johnson talks scaling operating structures and people systems in fast-growth startups.

Claire worked at both Google and Stripe in their early days, and has helped founders and company builders try to emulate their success. She knows a thing or two about scale.

Designed like a handbook to flick through when you need it, the book takes you through foundations and planning, hiring and team development, and feedback and performance mechanisms – accompanied by 100+ pages of worksheets, templates and exercises. It’s a practical guide for ambitious founders and leaders of any company size and industry.

And, as with the rest of Stripe Press’s output, it’s beautifully designed too.


That’s all for this month. Don't forget to read our thoughts on the fallout from SVB and our report on portfolio construction strategy.

Until next time, all the best,

Mattias and the Moonfire team

🌓🔥