Welcome back! Here at Ctrl-Alt-Operate, we sift through the world of A.I. to retrieve this week's high-impact news that will change your clinic and operating room tomorrow.
We’ll keep an eye on current developments but remain focused beyond the horizon to spot the next wave of innovation, disruption, and enthusiasm.
Our love language is shares and subscribes. In fact, for every share our trusty AI-bot gets another e-cookie. Won’t you help us keep the AI happy?
Table of Contents
📰 News of the Week
🤿 Deep Dive: AI coaching for surgical education.
🐦 Tweets of the Week
📰 News of the Week
If you don’t live and breathe A.I. (😢), then this week’s headline story might have flown under your radar. OpenAI, the company behind ChatGPT and the recent hyperdrive acceleration of AI into the mainstream media, announced the ChatGPT API (Application Programming Interface).
Why even bring this up in a room full of surgeons? This is a big deal for one reason: the price. OpenAI effectively made the best, fastest A.I. language model available to other developers at one-tenth its previous price.
If a developer, a startup, or you (the entrepreneurial surgeon) wanted to process 1.5 million words in your own application, that would cost you… $2. That scale is almost hard to imagine, and it means the cost of developing high-quality applications with this technology is racing toward zero.
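For the curious, the arithmetic behind claims like this is simple enough to sketch. A back-of-the-envelope estimate in a few lines of Python (the per-token price and the words-per-token ratio below are assumptions based on gpt-3.5-turbo's launch pricing and typical English text; the exact figure depends on tokenization and OpenAI's current price list):

```python
# Back-of-the-envelope cost estimate for processing text through a
# pay-per-token language model API.
# Assumptions (verify against the provider's current pricing page):
#   - price: $0.002 per 1,000 tokens (gpt-3.5-turbo's launch price)
#   - ~0.75 words per token for typical English text

PRICE_PER_1K_TOKENS = 0.002  # USD; assumed, not guaranteed current
WORDS_PER_TOKEN = 0.75       # rough average for English prose

def api_cost_usd(n_words: int) -> float:
    """Estimate the USD cost of processing n_words through the API."""
    n_tokens = n_words / WORDS_PER_TOKEN
    return n_tokens / 1000 * PRICE_PER_1K_TOKENS

print(f"1.5M words ≈ ${api_cost_usd(1_500_000):.2f}")
```

However you slice the tokenization, the answer comes out to a few dollars for what used to cost tens.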
Check out the reaction from Twitter.
Imagine waking up and finding out your #1 driver of cost was now 1/10th the price 🤯.
The point is the following:
If playing with ChatGPT has left “what if AI could do…” ideas floating around in your head, the experiment to figure out whether the answer is yes or no is probably cheaper and easier than you think.
Along these lines, Dr. Zakka and colleagues looked at how linking external knowledge bases to large language models can improve clinical factuality. This is exactly the kind of work that accelerates thanks to dramatic cost reductions in language models!
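The pattern behind this kind of work, often called retrieval-augmented generation, is easy to sketch: retrieve the most relevant passages from a trusted knowledge base, then prepend them to the model's prompt so the answer is grounded in vetted text rather than the model's memory. A toy illustration (the knowledge-base snippets, the word-overlap scoring, and the prompt format are all hypothetical stand-ins; a real system would use embedding search and an actual LLM call):

```python
# Toy retrieval-augmented generation (RAG) sketch.
# Real systems use embedding-based search and an LLM API; simple
# word-overlap scoring is used here only to show the pattern.

KNOWLEDGE_BASE = [  # hypothetical vetted snippets
    "Postoperative fever within 48 hours is most often due to atelectasis.",
    "Structured debriefing improves skill retention in simulation training.",
    "Large language models can hallucinate clinical facts without grounding.",
]

def score(query: str, passage: str) -> int:
    """Count words shared between query and passage (toy relevance)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k passages most relevant to the query."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda p: score(query, p), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("Why do language models hallucinate clinical facts?"))
```

The payoff: the model answers from text you control and can audit, which is exactly the factuality lever this line of research pulls on.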
🤿 Deep Dive: AI coaching for surgical education.
Although our focus for the current series is video, it’s important to keep our periscope above the surface and look for foundational work that may be influential. One key question that troubles many surgeons is whether we can have effective “virtual coaching”. This was the focus of an NIH call for applications a few years ago. So, this week, we will look at one paper that proposes a methodology for implementing virtual coaching in brain tumor resection surgery.
We’re not ready to let robots loose in the operating room, so how can we use them to teach our learners? In this study, the Montreal neurosurgery team compared a Virtual Operative Assistant (VOA) with expert instruction for teaching a brain tumor resection task in a VR simulator. The study randomized medical students from four institutions in Canada to no feedback, virtual-assistant feedback, or human instructors. All groups received 75 minutes of simulation training, including five practice sessions, followed by a realistic virtual reality brain tumor resection. The VOA group received automated, audiovisual, metric-based feedback, while the instructor group received synchronous, scripted verbal debriefing and instruction from a remote expert. The controls just went about their merry way, as we always do: see one, do another, self-coached.
So how did it work? Participants trained by the VOA performed as well as or better than those trained by human instructors on both structured assessments (OSATS) and overall qualitative assessment. Given well-established guardrails, AI might be able to teach surgical learners who don’t have access to experts (at home, during off hours, etc.). Of course, the downside is that these robo-trained surgeons might be a bit lacking in bedside manner, but maybe that's a trade-off we'll have to make.
All kidding aside, we now have some empirical evidence for the effectiveness of AI-guided educational interventions in surgical training, and we bet there’s great potential for developing and implementing more of them. Perhaps one day we'll look back on this study as the first step toward a brave new world of AI-guided surgical education and practice.
🐦 Tweets of the Week
Okay, okay, okay. We are a long way from humanoids replacing surgeons. But what if a humanoid knew which instruments you were going to use next, based on your usage patterns, and could have them loaded and ready to hand to you…?
Stable Diffusion (yes, the same generative AI that made Pokémon versions of Monet) was able to decode fMRI data back into the images subjects were viewing… aka, A.I. was able to read minds? Not really. Nonetheless, this is a glimpse of how generative AI can relate back to the clinical landscape. We have lots of thoughts on this. Maybe a deep dive next week.
Finally, an underrated tweet highlighting Microsoft's new paper on a multimodal large language model. What does multimodal mean? Simply put, it accepts visual as well as textual inputs and can reason, articulate, and generate responses. One step closer to what humans might be able to do.
Feeling inspired? Drop us a line and let us know what you liked.
Like all surgeons, we are always looking to get better. Send us your M&M-style roastings or favorable Press Ganey ratings by email at ctrl.alt.operate@gmail.com