AI is Your Doctor Now
- Kevin Lankes
- Dec 4
- 8 min read

Watch on YouTube: https://youtu.be/QegpR8kiCM4
I was sitting in my doctor’s office recently. I’ve been seeing a number of doctors lately, because I was diagnosed with asthma at the age of forty. Which is really weird and confusing for me, and after having such an active lifestyle for, well, ever, it just really doesn’t make a lot of sense. I’d found a study recently, so I brought it up, thinking it might be applicable to some symptoms I was having, and to where this whole thing may have come from. And then my doctor proceeded to get really excited about trying to find this study on something called OpenEvidence. They turned to the person shadowing our appointment and were like, “do you know about this? Have you used OpenEvidence in school?”
Gang, I was startled. My feelers went up right away at the word “open” at the beginning of the name, because I figured it might have something to do with OpenAI, which is one of the leading companies in the large language model space right now. When I looked it up later, on my way to the elevator and off to the subway home, I found out it was not affiliated with OpenAI, but sure enough, it was an AI platform.
It turns out that not only are doctors using AI more and more in their medical evaluations now, but patients are too. And this shouldn’t come as a surprise to any of us, I guess, since for years we’ve been making fun of people for diagnosing themselves on WebMD, or for getting their medical degree from Google University.
Some AI medical platforms are available to the public right now. I hesitate to name them because I know someone watching this is going to go right over there immediately to find out what that weird thing on their left butt cheek is. It’s probably nothing, dude. Just watch it for a couple weeks and see if it goes away. One of these platforms, doctronic dot ai, does seem to just be a kind of referral tool that spouts very general non-advice at you before referring you to a real person through its paid service. So I guess that’s the least concerning kind of AI you could have, to say nothing of the shady business practices. Like, just go to a doctor at that point. Why would you pay money to someone you didn’t vet to tell you things they can’t help you with from a virtual space?
Another one, this CodyMD platform, seems a little more concerning. It looks basically like a responsive WebMD. You can get it to do a number of things, and one of the options it gives you says simply “treat me,” lol. In the About section it gives you a little graphic about how the process works, telling you what it can do for you. Interpret labs? Ooooo, I really don’t like that. I don’t think people should even google lab results unless they know what they’re looking at, because you can easily get the wrong idea just doing that, and AI can do even worse in these cases and just tell you complete nonsense.
“CodyMD continuously reviews your data and makes health recommendations. Sync your smartwatch or fitness tracker”?? Oh, for sure, just give them all your data. What could go wrong? If there’s anybody alive right now who for some reason doesn’t know this: collecting personal data on a user base, then turning around and selling it to whoever wants it, is one of the main revenue generators for any platform that has users.
There’s so much to hate about this. I’ve done a couple of videos about AI at this point, and you can run through those to get the bigger story, but let me just summarize the points here that make this all a really bad idea. One is the data. Not that everyone doesn’t already have every single data point about you, stored on fifty thousand servers across the world by every nation state and private company on the planet, but do you have to give every new-fangled tool and startup all your personal medical info too? In that case, sign up for my new medical AI; it’s called Let’s Do Some Effing Medicalz. (caption: give us your data) Please don’t forget that we have HIPAA laws in America that prevent people from gaining access to your medical data unless they’re legitimate medical providers who need it to help you. ChatGPT is not HIPAA compliant. Just FYI.
The biggest no-no for me is the fact that AI is still just a predictive text generator, tuned to feed back to the user exactly what it thinks they want to hear. So if you say you’re dying to know if that thing on your butt cheek is cancer, it might assume that what you’re looking for is confirmation, and so that’s what you’ll get. Or if you seem worried about something that could turn out to be a real concern, it might placate that worry so you don’t have to feel bad. It’s all about the input-output relationship with the user. That’s the whole point of these LLMs. And there are lots of problems with the way they operate. They can drive people to commit violent and delusional acts, and they can even make people fall in love with them. And people are okay with these things diagnosing medical conditions? This is going to backfire in significant and unpredictable ways that we can’t even make up right now. Oh hey, maybe CodyMD can come up with some things.
Which is a joke, kind of, because that’s another major reason why you should never trust an AI platform: a significant portion of what it tells you is simply fabricated. We have so much evidence for this, because again, LLMs are just text generators, predicting which token (a word, or a fragment of one) should come next, based on everything that came before. If it thinks you want to hear about a study backing up what you’re experiencing, then it’s going to give you one, regardless of whether or not that study exists. When RFK Jr. released his Make America Healthy Again report, he infamously cited multiple fake studies to back up the factually incorrect things he claimed in there. He and the Department of Health and Human Services very likely used AI to write the report, and of course, no one bothered to check whether the text generator had spit out real information. Par for the course for somebody who intentionally swims in raw sewage.
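To make that concrete, here’s a toy sketch in Python. The prompt, the probabilities, and the “journals” are all invented for illustration (a real model works with billions of learned weights, not a lookup table), but the core move is the same: sample a plausible continuation, with no step anywhere that checks whether it’s true.

```python
import random

# A toy "language model." All it knows is which continuations are
# statistically plausible after a given prompt; nothing here checks
# whether a continuation is true. Probabilities are made up.
next_token_probs = {
    "A 2019 study in": {
        "The Lancet": 0.40,
        "JAMA": 0.35,
        "the Journal of Made-Up Results": 0.25,
    },
}

def complete(prompt: str) -> str:
    """Sample a continuation weighted by plausibility alone."""
    options = next_token_probs[prompt]
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

# The "model" happily produces a citation whether or not the study exists.
print("A 2019 study in", complete("A 2019 study in"))
```

Run that a few times and you’ll get different journals, all delivered with identical confidence, and one of them doesn’t exist. Scale the lookup table up to billions of learned weights and you get the same failure mode wearing a lab coat.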
Okay, so back to my own experience with this. Because I am far from the only patient whose doctor was like, “hey, wait a minute, I’ll check,” and then ran off to an AI platform to look up medical information. OpenEvidence says that it is “the leading medical information platform.” I can’t prove this, but I suspect they’re treating the AI part as an afterthought these days. The search results on Google say two different things. The sponsored result, which is an ad they write the copy for and pay to show at the top of the results, tells you it’s an AI. But the first organic result does not include the AI language. So either the marketing team is deliberately hedging their bets, or the VP forgot to tell the demand gen team to change their ad copy and delete the stuff about it being an AI. I don’t know; I’ve been on these teams, and it could really go either way.
It’s worth noting that there is a tool called UpToDate, which has been around for about thirty years. It’s been inside your doctor’s office, in med schools, and in medical libraries across all that time. Basically, it’s a human-curated version of OpenEvidence, managed by an editorial process. It also offered a helpful diagnostic tool that graded results based on the available evidence. Which, to me, sounds a lot like what AI actually is today, because we used to have tons of things that used what we now call AI, but that’s a whole other can of lasagna. The real wizardry of AI is that it’s repackaged existing technology; it’s more of a business and marketing pivot than a practical advancement. The fact that we could already do the things AI is doing now is just icing on the cake for me in this whole weird bubble. And yes, it’s a bubble, and Hank Green did a wonderfully elaborate video on that recently, so feel free to get in the know there.
My biggest concern with medical professionals, whose job it is to diagnose and manage your healthcare, relying on these tools is that the tools have proven flaws that are not insignificant. That’s not to say that people are infallible; human doctors obviously screw up at not-insignificant rates too. A 2023 report from Johns Hopkins found that just under 800,000 people in the US each year are misdiagnosed badly enough to suffer serious harm or even death. So the issue is not that human doctors are never wrong and these tools are hampering an otherwise perfect process. The issue is that we’re taking a system already beholden to serious error and faulty processes and deciding to introduce another tool that we know for a fact is rife with problems across every part of its operational mandate.
Look, the next time you’re sick, or if you ever decide that you should finally go to the doctor about your butt, please, please, do not rely on AI medical tools. Do not gamble with your health. It’s the thing that keeps you on this earth. Men, specifically, have a weird thing about going to the doctor, and that’s dumb. It’s just so dumb. Go to the doctor. And if your doctor is using OpenEvidence or something similar, take that with a grain of salt, or even be vocal about your concerns. Because doctors are just people, and they aren’t automatically right, and oftentimes they need to be reminded about evidence-based processes. But don’t assume you’re right somehow, in the face of their years and years of medical training. It’s a balance. It’s all just a delicate equilibrium that we have to navigate in order to get healthcare. And honestly, AI in its current form could only ever be a disruptive force to that equilibrium. Tell your friends, tell your doctor, and deliberately go out of your way to avoid AI and AI tools. Because until this bubble pops, we aren’t going to have reasonable methods to navigate the technology that calls itself AI but is really something much less complicated, though no less disruptive and stupid.
OpenEvidence couldn’t find the study I brought up at my appointment, and I can’t find it now either. It was a study about how people who underwent interferon alfa-2b as adjuvant treatment for metastatic cancer may experience long-term immune system overreactions later in life. If you know about this study, do me a favor and let me know in the comments. As a cancer survivor who did interferon, I’m personally interested in the fact that I was just diagnosed with asthma at the age of forty. The AI can’t tell me, but maybe you can.
So take care of yourself. And take care of your butt. I’ll see you in the next one.