Welcome back! If you’re new to ctrl-alt-operate, we do the work of keeping up with AI so you don’t have to. We’re grounded in a clinical-first context, so you can be a discerning consumer and developer. We’ll help you decide when you’re ready to bring AI into the clinic, hospital, or OR.
The past two weeks have been monstrous for artificial intelligence developments. Like most things AI, they come to software developers first, then to the general public, before making their way to healthcare. Consider this newsletter a heads-up: when you see these technologies making their medical rounds, you’ll have context to go on.
Table of Contents
📰 The News: New Models Galore
🤿 Deep Dive: Milestones and Where Surgery x AI is Today
📰 The News: New Models Galore
This has been one of the craziest weeks in AI. That said, I feel like I say that every other week at this point. But it continues to be true. Here’s what you need to know.
In OpenAI’s own words: ChatGPT can now see, hear and speak.
This is a huge update that allows ChatGPT to engage with the world using images. We’ve talked for months about computer vision, the ability of computers to analyze pictures and video. Combine that with the logic and reasoning that ChatGPT offers and you have a very powerful system. Here it is “debugging” how to adjust the seat on a bike.
The implications for radiology and image processing are fantastic. Imagine preparing for a case and chatting with the PACS to get lengths and widths of relevant anatomy, guided and directed by you, the surgeon. What if reviewing video were as easy as chatting with ChatGPT, asking it to “find a 30-second segment of video showing the anastomosis”? We aren’t there yet, but we could be soon.
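To make this concrete, here is a minimal sketch of what “chatting with an image” looks like under the hood today. It assembles the kind of request payload the OpenAI chat API accepts for vision inputs; the model name and the specific question are illustrative assumptions, not a working PACS integration.

```python
# Hedged sketch: pairing an image with a free-text question for a
# vision-capable chat model. The model name ("gpt-4o") and question are
# assumptions for illustration; this builds the payload only, no API call.
import base64


def build_vision_request(image_bytes: bytes, question: str) -> dict:
    """Build a chat-completions payload combining text and an inline image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # any vision-capable model
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        # Images can be passed inline as base64 data URLs.
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
    }
```

A real deployment would pass this payload to the API client (e.g. `client.chat.completions.create(**build_vision_request(scan_png, "Measure the vessel diameter."))`) and, for de-identified clinical images only, under institutional approval.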
Not wanting to be outdone by themselves, OpenAI announced DALL-E 3, which has some pretty amazing text-rendering capabilities. Here’s a very sad-looking Mario after breaking up with Princess Peach.
It’s not unreasonable to start imagining generated postoperative scans. For example: given the pre-op scans of a patient with severe spinal deformity, it’s not outside the realm of possibility to preview how different constructs might affect coronal or sagittal balance postoperatively. These images would be generative, not grounded in reality, but they would still be quite remarkable for patients to see.
Ray-Ban and Meta have partnered to create smart glasses.
The possibilities here are endless: walking into a patient’s room and having their scans, meds, or notes loaded onto your glasses in case they need to be referenced. The possibilities in the operating room, with analytics and navigation, require their own deep dive. This is cool technology, and we should keep an eye on it.
Bringing it back to the healthcare world, Ochsner Health is integrating generative AI into patient care, while Abridge partners with Emory and Epic. This gets back to the ultimate question: will enough people pay for generative AI tools for these companies to turn a profit? The jury is still out, but at least a few large systems are willing to put their money where their mouth is and try it out. For the sake of the AI ecosystem, we should hope they do well.
🤿 Deep Dive: Milestones and Where Surgery x AI is Today
We strongly recommend checking out our new meeting, Digital Neurosurgery (www.digitalnsgy.com), an in-person-only event running October 12-15, 2023. It’s a new forum for surgeons, technologists, venture capitalists, and academics to come together in one room and work on critical issues for surgery, neurosurgery, AI, and medicine. Get the last tickets today at www.digitalnsgy.com.
If you’ve been busy creating a gen AI startup for medicine, starting a (nonprofit) company, launching a new meeting around surgery and AI, running a research lab, being a surgeon, or enjoying your summer in other ways, you might have missed the innovations described in the section above. But few could miss the Lasker Award (“the American Nobel for biomedicine”) going to the creators of AlphaFold, perhaps the first major prize in biomedicine awarded for AI (Eric Topol wrote a terrific post on this). As the days shorten, I wanted to take a retrospective look at what we’ve learned about building in AI: the enduring lessons, the good and inspiring stuff. Then it’s on to the new news next week.
Three lessons we learned from building in the first wave of AI
The journey from idea to first prototype has never been shorter, but the devil is in the healthcare details. Repeat after me: healthcare is not a consumer technology product. The SaaS web-app tech stack has never been better; a single full-stack dev can create an impactful MVP in 10 days. The art (for subject-matter, product, and strategy types) is picking a real clinical and business problem, and a distribution channel, for the technological solution.

For example, many folks (including us) have identified medical note-writing as a multibillion-dollar TAM ripe for disruption, a real clinical and business problem in medicine. But a precise, accurate, and timely problem statement has never been more important. Are we solving for physician time? Staff coordination? Patient satisfaction? Accurate capture of billing codes? These are all real clinical issues that stakeholders want solved, but which stakeholders are willing to pay for them? How do we do business with hospital institutions? Private practices? Individual doctors around the world? Each of these questions requires recognition, planning, and execution at a level quite different from throwing up an AI avatar photobooth app (remember those?), and a level of commitment to the cause that isn’t commonly found in hackathon-grade products. Now, in fall 2023, we are starting to see the fruits of some impressive labor over the preceding year, but it takes a year or more to see these results.
AI is still the mind virus of 2023. The most common questions we get at speaking engagements are still defensive and predictive: “I’m scared of this new thing I don’t quite understand; how will it ruin my 2024?”, “What will the trends be for AI in healthcare in 2024?”, and so on. I don’t have a crystal ball, and predictions are hard, especially about the future, so I generally demur. Here’s what we do know as of fall 2023:
• AI is minimally implemented in medicine outside of a few small niches
• Hospitals can remain technologically incompetent longer than any individual company can remain solvent - we are still using pagers.
• Business and billing functions drive adoption of enterprise systems, clinicians drive adoption of innovative clinical products.
• AI is stuck in the middle: not clinically useful outside of a few small niches, and business functions are slow to change
• The adoption of AI across clinical domains has been uneven and siloed - like medical hardware devices, not like consumer software (when the cardiology department upgrades its ultrasound machines, the ICU doesn’t necessarily get new machines).
• Every major medical institution is scrambling to create partnerships, innovation centers, and take a “leadership role” in AI - but few want to suffer the consequences of being early adopters of failed technologies.
• Major questions about data and algorithm ownership, access rights, co-development, and so on remain unanswered. You could focus on solving these in your domain (as we are doing in surgical video).
• “Guerrilla adoption” of unsanctioned AI products in the hospital (like Windows Copilot) is inevitable and a potential distribution channel for small companies. For example, clinical decision support apps like MedCalc (but with AI), Glass Health, and others may never be officially sanctioned but may infiltrate medical schools and the junior resident ranks.
Most of these points are unchanged since fall 2022 (when we started writing), but there are some fundamental opportunities:
• Attitudes toward AI are turning positive, even envious. When consumer-grade AI significantly outpaces medical AI (a gap that is already obvious and will become even more glaring in 2024), we are poised for a disruptive break in medical practice.
• AI-human teaming is still not regarded as a fundamental area of research (you should work in this area). It isn’t impressive to say “ChatGPT can write a medical note,” but it is impressive to say “here is a system for teaming humans with AI to write notes, here are the experiments we tried, and here is how we will plug-and-play the next AI that comes along.”
The brightest days of healthcare x AI are still ahead of us, and you’re still early. We are getting to the point where visiting lecturers on ChatGPT pack auditoriums at our hospital grand rounds (whether they should or shouldn’t), and there are more AI x medicine “symposia” than you can shake a stick at (sorry). But no one has answered the questions we ask in this newsletter, which means you can still join in, get your hands dirty, and make an outsized impact.
Feeling inspired? Drop us a line and let us know what you liked.
Like all surgeons, we are always looking to get better. Send us your M&M-style roastings or favorable Press Ganey ratings by email at ctrl.alt.operate@gmail.com.