What's New In Surgical AI: 4/23/23 Edition
Vol 22: Center stage: healthcare, and we tell you how to start building!
Welcome back! With this edition of ctrl-alt-operate, we’re officially at the six-month mark! 🍾 If you’re new to ctrl-alt-operate, we do the work of keeping up with AI, so you don’t have to. We also keep it grounded in a clinical-first context, so you can be prepared to bring A.I. into the clinic, hospital or O.R. when it’s ready.
As always, we bring you all that’s happened in A.I. this week and a deep dive into one topic. If you’ve been with us before, send this to one clinician who considers themselves “techie” - those are our people 😁
We launched a few months before our faithful friend ChatGPT took the world by storm. Since then, we’ve been flooded with questions from friends, colleagues, and new acquaintances all over the world asking how they can get involved with A.I. in surgery, or more broadly in medicine. Today, we try to answer some of those questions in our deep dive. Let’s get started.
Table of Contents
📰 News of the Week: Healthcare is front and center
🤿 Deep Dive: Our most common question: How do I get involved with AI?
🐦🏆 Tweets of the Week
News of the Week: Healthcare Front and Center!
This week was quite wild, so buckle up 👷♂️
We’ll start with the clearly-healthcare news first. Microsoft and our favorite EMR overlord, Epic, announced deeper integration of OpenAI’s GPT-4 into their systems to automatically draft messages and (unsurprisingly) assist in backend data-curation and analytics pipelines.
On one hand, this is encouraging! I love that these technologies are being used to target the bane of all clinicians’ existence - the in-basket message. On the other hand…
“Our exploration of OpenAI's GPT-4 … to identify operational improvements, including ways to reduce costs and to find answers to questions locally and in a broader context," said Seth Hain, senior vice president of research and development at Epic.
… water continues to be wet, and revenue optimization will likely continue to supersede clinician quality-of-life.
In other news, the venture capital group Andreessen Horowitz (a16z) recently published a fantastic piece on why tech should be building in healthcare. I think this should be mandatory reading not just for the tech sector, but for clinicians. Understanding the language of venture capital - how startups are ideated, funded, and governed - is key to helping bring these technologies into the clinic.
Notice the distinct lack of patient story, of clinic pain point, and of pathology. Also notice the distinct non-emphasis placed on the clinical perspective. I would encourage clinicians to avoid the holier-than-thou mentality we so often employ, and instead approach it with the following mindset: if the solution to our problems must come from these avenues, how do we bridge the gap so that VCs, technical builders, and the end-deliverers of care (clinicians) can work together?
Okay, some more news to get to:
Greg Brockman (OpenAI President) gave a TED Talk where he uploads a spreadsheet into ChatGPT and watches as it performs data analysis (making tables and figures) for him in real time. Let me not mince words:
The ability of ChatGPT to write code and interact with files you provide will fundamentally change how we do research.
Spoiler: if you’re thinking of hiring a data analyst, I’d try ChatGPT Plus at $20/mo first.
Meta has gone all in on computer vision, leaving the other tech giants to fight for the chatbot/large language model scraps. They released DINOv2, a better and more generalizable image segmentation model. This is on the heels of Segment Anything, which allows users to (as the name suggests) segment almost any image with surprisingly good accuracy, even hip radiographs (see here). This has clear implications for radiology, but can absolutely be extrapolated to a preoperative planning tool for the O.R., an educational tool for residents on call, and more.
We finish the news off with a demo by Humane (Twitter links still broken…), showing the future of wearables + A.I. models which can speak to you. Imagine a non-intrusive wearable which gets updates from your post-op patients, so you can monitor for complications before they arise. Or even simpler, what if after a long case, your wearable A.I. could just tell you everything you missed while your phone was down? We hope this is what’s coming.
Deep Dive: Our Most Asked Question: “How Do I Get Involved?”
One of the most common questions we get while writing this newsletter is: “How can I get involved?” So for this week’s deep dive, I’ll answer that question by describing strategies for both clinicians and technical/scientific practitioners across the spectrum of training and practice. We’ll specifically address the following audiences:
Clinical learners (medical students, residents, etc.)
Scientific/technical learners (CS/EE majors, masters/PhD students)
Medical practitioners and educators
Technical practitioners (academic/industry CS, engineering)
Early learners (high school, undergrad/postgrad)
Before we dive in, we want to open the floodgates for your questions and requests for collaboration. You can write to us at firstname.lastname@example.org, or tweet @ddonoho or @dhirajpangal.
For the clinical learner (medical student, resident, etc…)
First, clinical trainees: the prerequisites for getting involved are lower than you might think. You'll have more time available to you than it seems - believe me, life only gets more complicated once you're faculty. So the first step is a curiosity-driven, honest, self-aware approach to this novel area. You'll have to think about what's important to you, what you value, what your skills are, and what you enjoy doing, not just talking about doing.
That could take you many different places. For example, you might design a curriculum with ChatGPT to learn simple Python, or you might enroll in the fast.ai course. While taking a massive open online course, I would keep an LLM running in the background as an interlocutor: you can pause the lecture, ask questions, and even design your own practice questions to improve knowledge retention. You can also trial the LLM on the course's current exercises to see how well it writes code itself. Understanding how to partner with AI to accelerate your learning is an evergreen skill that no one will teach you.
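To make that concrete, here is the flavor of a starter exercise an LLM might generate for you early in a Python curriculum (the exercise, function names, and values are all hypothetical): compute a clinically familiar quantity, then ask the LLM to critique your solution.

```python
# Hypothetical starter exercise an LLM might generate while you work
# through an intro Python course: compute BMI and bucket it.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Standard WHO adult BMI cutoffs."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

print(round(bmi(70, 1.75), 1))      # -> 22.9
print(bmi_category(bmi(70, 1.75)))  # -> normal
```

The exercise itself is trivial; the evergreen part is the loop around it - write it yourself, then ask the LLM where your version would break (negative heights? strings instead of numbers?).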
Basic statistical principles never go out of style. It's critical that you understand how to conceptualize and design experiments, and how to evaluate the results of those experiments. That'll be true whether you're writing a clinical paper, designing a drug trial, evaluating any surgical technique, doing a meta analysis, or anything else.
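As a toy illustration of those principles, here is a minimal sketch (standard-library Python only, with made-up numbers) of evaluating a two-group comparison with a permutation test - one of the simplest honest ways to ask "could this difference be chance?":

```python
import random
import statistics

def permutation_test(group_a, group_b, n_iter=10_000, seed=42):
    """Two-sided permutation test on the difference in means.

    Shuffle the pooled data many times; the p-value is the fraction of
    shuffles whose mean difference is at least as extreme as observed.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Made-up lengths of stay (days) for two hypothetical care pathways.
control = [5.1, 6.0, 4.8, 5.5, 6.2, 5.9, 5.0, 6.1]
treated = [4.2, 4.9, 4.0, 4.6, 5.1, 4.4, 4.8, 4.3]
p = permutation_test(control, treated)
print(f"p = {p:.4f}")  # a small p means the difference is unlikely to be chance alone
```

The point is not this particular test; it is that knowing what a p-value actually measures (here, literally counted by shuffling) transfers to every paper you will ever write or review.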
You’ll do most of your best learning alone, but most of your best work will be done in teams. Medicine in 2023 is too big, too complex, and changing too rapidly for any individual to make a major impact. Stay motivated by engaging with groups at your institution who are working on similar problems. If they don’t exist, I would encourage you to create them. This is a new field, and we would be happy to mentor you from a distance.
For the scientific/technical learner (CS/EE major, masters/PhD student)
As a research scientific trainee, the answers are a little bit simpler. Achieving technical mastery in your chosen domain, along with broad foundational competence, will allow you to take on tasks that improve health.
As we suggested for the clinical trainees, gaining even modest experience in medical problem domains is extremely valuable. Practical hands-on experience solving one medical problem in one setting will dramatically increase your understanding of what it means to work in the healthcare space. There are significant challenges in healthcare, including working with regulated data, managing teammates with diverging incentives and demands, handling sensitive patient information that has real-world implications, and understanding model performance in those contexts.
Your expertise in setting up a computational environment for scientific rigor will be valuable to the remainder of your team. In fact, many of the medical research settings you might find may have deficiencies in this area (including ours), and you’ll be the relative expert in best practices of code versioning, unit tests and many other relevant considerations.
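If "unit tests" is an unfamiliar phrase to your clinical collaborators, a minimal sketch looks like this (the helper function and values are hypothetical): tiny, automatic checks on a data-cleaning function, run on every code change, e.g. with pytest.

```python
from typing import Optional

def clean_age(raw) -> Optional[int]:
    """Hypothetical helper: parse a free-text age field, rejecting junk."""
    try:
        age = int(str(raw).strip())
    except (TypeError, ValueError):
        return None
    return age if 0 <= age <= 120 else None

# Unit tests: each pins down one behavior the rest of the pipeline
# relies on, so a future edit can't silently break it.
def test_valid_age_parses():
    assert clean_age(" 47 ") == 47

def test_junk_is_rejected():
    assert clean_age("forty-seven") is None

def test_out_of_range_is_rejected():
    assert clean_age("999") is None
```

Even this much discipline is rare in academic medical codebases, which is exactly why your habits around versioning and testing will make you the local expert.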
One of the byproducts of a healthcare-focused team is that you may be operating in a significantly elevated role relative to your level of experience. This is not something that everyone wants, but for many, it's a unique and interesting opportunity.
There's also the side benefit of working on a project that can directly improve human health if it's implemented. But just like many ML and AI research projects, a large number of medical AI research projects are executed as research objects for a single paper publication, and then that's it: they never go anywhere.
So this is a place where the technical scientific collaborator can put on their industry hat and bring the prototypes into deployment. Technical collaborators who can draw from industry experience (theirs, or their colleagues) to implement products, be it in technology, life sciences, or otherwise, can vastly enhance the impact of innovations in the health domain.
For the medical practitioner (MD/DO/NP/PA/PT/OT/RN/Health PhD)
As a practicing clinician, you have access to a larger number of potential collaborators and a larger quantity of data, but also significant responsibilities and time constraints. You'll understand the clinical questions better, but achieving technical-domain competence is going to become increasingly difficult due to competing demands on your time, changes in your level of interest, and the pace of development in this field. That pace can frustrate clinicians who are used to learning a skill such as biostatistics once and then lathering, rinsing, and repeating those same methods for every paper. That pattern holds only partly in ML and AI research.
The key tasks for clinicians are:
identify high-priority problems (work on what matters)
define a clinical problem in solvable terms (even if you don’t know the solution)
organize and curate data for ML applications (you’ll know the health data best)
build and co-lead teams (many funding sources are in the health domain)
learn (formally or “on the job”) from your technical colleagues
validate clinical relevance of solutions
design solutions to be implemented and used, not to sit as research objects
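The data-curation task above often just means imposing a consistent, validated schema on messy clinical records before anything fancy happens. A minimal sketch (hypothetical fields and values) in Python:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CaseRecord:
    """Hypothetical schema for one de-identified surgical case."""
    case_id: str          # study ID, never an MRN
    age_years: int
    procedure_code: str   # e.g. a CPT code
    los_days: float       # length of stay
    complication: bool    # the label an ML model would learn to predict

    def validate(self) -> List[str]:
        """Return a list of problems; empty means the record is usable."""
        problems = []
        if not (0 <= self.age_years <= 120):
            problems.append("implausible age")
        if self.los_days < 0:
            problems.append("negative length of stay")
        if not self.procedure_code:
            problems.append("missing procedure code")
        return problems

record = CaseRecord("S-0042", 63, "61510", 4.0, False)
print(record.validate())  # -> [] (a clean record)
```

The clinician's edge here is knowing which fields matter and which values are implausible; the schema and validation are what make the dataset usable by a technical collaborator.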
A hallmark of ML/AI research is the influence of industry hegemons in creating, developing, and deploying the most advanced models. In practice, you may develop a working model on your data, and the next week its underlying architecture could be made obsolete by a previously unannounced development from industry, created by the legwork of a hundred PhDs and a multimillion-dollar training budget. Working in a space where the technical innovations are largely not occurring in academic research labs is a very different feeling for clinicians. We are used to doing cutting-edge work in our labs and then seeing that work brought to scale and to the bedside through productization; here, the products often come first, and the application to healthcare comes later. A world where clinicians focus on adopting and adapting scientific advances to the health domain has implications for funding, for publications, and for what it simply feels like to work in this field. “Competing” with industry is likely not a winning pattern; work in health ML/AI has to be more collaborative.
For technical practitioners (academic/industry CS, Eng)
For technical domain experts, on the other hand, the process is a little simpler. The ask from clinicians is often “Can you just do X on my data?” The potential downside is that this won't result in an ICML or CVPR paper, because those venues don't care as much about applications; they reward innovation and exploration in the technical domain (even when said “innovations” have zero clinical relevance and a demonstrated, functioning application would save lives today). Clinicians, conversely, care only about applications. They don't particularly care about the latest extremely-large-parameter model trained on an obscene amount of data, nor some clever architectural innovation that makes a model more performant; they care about whether it moves the needle for an individual patient. Managing those dual expectations from the technical side is critical to getting your graduate students into jobs, and key to securing funding for your lab.
The key tasks for technical practitioners are:
collaborate on problem identification and definition from the beginning
teach your clinical colleagues how to organize and curate data for ML applications (you’ll know what the models “need” and what matters)
build and co-lead teams (MDs have a tough time retaining CS talent)
learn (formally or “on the job”) from your medical colleagues
seek technical innovation amidst clinical use cases
teach MDs to design solutions to be implemented and used, not research objects
We believe that creating long-term technical-clinical partnerships is a path to success. In summary, there are four different sets of challenges, faced by clinicians and technical practitioners at both the trainee and senior level. I've outlined how to manage some of those tensions, with examples of what we feel are successful projects and strategies for getting involved in this field.
For the early learners (high school, undergrad/postgrad)
Lastly, I want to address a group of folks for whom I have great admiration: the undifferentiated students. I'm talking to high school students and undergraduates who haven't yet committed 10-plus years to a career path or training pipeline. Two years ago, I wrote that my economics training provided me with a phenomenal background for healthcare, and I would insist all the more that this is true today.
Explicitly acquiring a background in a computational method, whether that's a particular programming language or software design principles or hardware design principles or having structured interactions with engineers, scientists, and clinicians is going to become increasingly important whether you're a poet or a surgeon. In our increasingly digitized world, most of human activity over the next 10 years will be in reaction to that digitization in some way. Whether you're for or against it, inside or outside of it, you're going to have a relationship to digital disruption. So I think it's important that however you view your path, whether it's in health or science or the humanities more broadly construed, understanding what these digital technologies are about will pay outsize dividends.
For example, if you are a poet who can program, you occupy an extremely rare intersection held by a handful of people in the world, and your opinions may be quite interesting. Ditto for ethnography, romance languages, or Japanese art history. Folks with some flexibility across domains are going to increasingly find themselves in a rare space with rare opportunities. Where I went to college, we had other foundational requirements: languages, a distribution of courses, and even a swimming test. We might wish to consider whether some computational exposure (in software, hardware, engineering, or even the interaction of humanities and ethics with technology) should be a prerequisite too. And realizing this before your peers are required to realize it could be a real source of opportunity.
Best of Twitter
Twitter previews are still broken on Substack… we’ll pause these until a workaround or resolution exists. Anyone have one?
Feeling inspired? Drop us a line and let us know what you liked.
Like all surgeons, we are always looking to get better. Send us your M&M style roastings or favorable Press-Gainey ratings by email at email@example.com