AI Marketing Scams Are Hijacking The GLP-1 Boom
- Vincent Grippi
In 2024, OpenAI CEO Sam Altman predicted that we would soon see one-person companies valued at a billion dollars, powered not by human employees but by AI agents…and a whole lot of Mountain Dew.
So, when the New York Times recently published a profile of a telehealth startup called Medvi, it seemed like that prophecy had finally come true.
The piece portrayed the company as a two-man operation that quietly used AI to build a $1.8 billion pharmaceutical juggernaut selling GLP-1 weight-loss drugs online. And they did it all with everyday AI tools like ChatGPT and Claude.
It didn’t take long for the piece to blow up on social media, drawing praise from AI enthusiasts and hustle gurus alike. But as exciting as it all sounds, Sam Altman will need to hold his breath a little longer, because the Medvi story isn’t just stranger than fiction. It basically is fiction.

The $1.8 Billion Myth
You see, it turns out that Medvi’s story doesn’t hold up to basic scrutiny, which is something a reputable news source like the Times is expected to, you know…exercise.
For starters, that $1.8 billion valuation presented by the Times is actually an annualized run rate, or ARR. In a nutshell, you take a company’s revenue from a single month, often its best one, and multiply it by 12. Passing that number off as a valuation is like getting an A on one test and calling it your GPA. It’s essentially a marketing figure, and it’s a common trick in the AI world right now because perception travels faster than profitability.
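In fact, the entire trick fits in a few lines of back-of-the-envelope Python. (The $150 million best month below is a hypothetical, chosen only because twelve of them adds up to the $1.8 billion figure the Times reported.)

```python
# The "annualized run rate" math, sketched out.
# Assumption: a hypothetical best-ever month of $150M, picked only
# because 12 of them equals the $1.8B figure reported by the Times.
best_month_revenue = 150_000_000

# Annualize it: take one cherry-picked month and multiply by 12.
annualized_run_rate = best_month_revenue * 12

print(f"'Valuation': ${annualized_run_rate:,}")  # 'Valuation': $1,800,000,000
```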
But the reality is that Medvi has no official valuation, no outside funding, and no investors of any kind…because it never really built anything.
Instead, Medvi operates as a patient-acquisition middleman that routes people to telehealth providers and compounding pharmacies, then takes a cut. It helps to think of Medvi as a marketing wrapper - which is fitting, because that’s exactly what most AI tools are anyway: a thin wrapper around ChatGPT or Claude.
As egregious as they are, Medvi’s generous math and questionable business model aren’t necessarily illegal. Where things get really ugly is in the marketing practices it used to lure in unsuspecting patients.
Medvi's Deceptive Marketing Practices
Medvi's website proudly featured before-and-after weight-loss photos, most of which were AI-generated images of people who don't exist, showing transformations that never happened. And that's not all. Other before-and-after photos on the site were stolen directly from real Reddit users documenting their weight-loss journeys - journeys that involved neither Medvi nor GLP-1s.
And just like panic-searching symptoms on WebMD, the story somehow gets worse…
The Times framed Medvi as a two-person operation that didn't need outside help. But their marketing actually relied on an army of affiliate marketers running thousands of paid Meta ads through completely fake Facebook pages for doctors that never existed. AI-generated headshots, fake biographies, the whole thing. And that's just the affiliates. Medvi's own website was separately lifting real doctors' photos for a 'Meet our partners' section without their knowledge.
This comes at a time when scores of companies are running Meta ads promoting AI tools that create fake user testimonial videos and call it “AI-UGC.” Let's be clear: if the user is fake, it's not user-generated content - it's just generated content.
In short, Medvi’s MO was B.S. Then again, I’d expect no less from a company whose CEO looks like Uncle Kracker. Not that there’s anything wrong with Uncle Kracker, but I don’t want him selling me drugs…or scamming me, for that matter.

Futurism first called Medvi out on its shady marketing practices a year ago, but the NYT brushed it off as old news, much as it ignored the FDA warning letter Medvi received for misbranding compounded drugs just six weeks before the profile ran.
On top of all that, a class action lawsuit filed thirteen days before the article dropped alleges the company blasts over 100,000 spam emails a year through deceptive affiliate marketers. And separately, a suit against one of Medvi's key partners, OpenLoop Health, alleges that the tablets they've been selling don't actually work. Selling pills that don’t do anything sounds like a terrible business model…who would even try something like that?
The Rise of AI Marketing Scams Targeting GLP-1s
While the New York Times has finally admitted fault and updated its article, one can’t help but be concerned by how easy it was to fool one of the most respected publishers on the planet.
But Medvi is a symptom of something much larger happening right now at the intersection of AI, healthcare marketing, and one of the hottest drug categories in modern medicine.
The global GLP-1 market was valued at $22 billion in 2025 and is projected to reach $33 billion by 2030. That kind of money creates a gold rush, and scammers are pouring in to stake their claims.

The Better Business Bureau has begun sounding the alarm on a surge of GLP-1 scams flooding social media. These scams include deepfake AI videos of celebrities like Oprah Winfrey pushing fraudulent weight-loss products, fake online pharmacies, and phishing messages claiming people are "eligible" for GLP-1 prescriptions, much the same way they were once eligible for a Nigerian prince’s inheritance.
The BBB received over 170 complaints about one scam alone, a fake "pink salt" weight-loss drink called LipoMax that used AI-generated Oprah endorsements to dupe buyers out of hundreds of dollars, with refund emails bouncing back as undeliverable.
FDA Commissioner Marty Makary said the agency has sent more warning letters to pharma and telehealth companies in the last six months than in the entire decade before that. While bad actors are to blame here, AI and shady marketing have taken things to a scary new level. And there’s no sign of it slowing down.

AI, Healthcare Marketing, and Patient Trust
This trend of GLP-1 scams is accelerating a trust problem that was already brewing in healthcare. A recent Coalition for Health AI study found that 93% of patients reported at least one concern about AI in healthcare, and 51% said it makes them trust healthcare less. Those patients probably wouldn't be thrilled to learn that 57% of healthcare professionals have encountered or used unauthorized AI tools or “shadow AI” in their workplaces.
There’s real promise for AI in both healthcare and marketing. But scams like these risk undermining patient and customer sentiment toward AI as a whole - and if sentiment around LLMs continues to sour, patients may start lumping every AI tool, legitimate or not, into the same category.
But there is a North Star - something that offers both hope and direction for how healthcare organizations and marketers can navigate fragile trust in AI. That same Coalition for Health AI study also reports that more than 80% of patients said their trust in healthcare AI would increase if clear accountability measures were in place. And marketing is where that accountability has to be made real, since it’s typically the frontline of communication with patients.
4 Things Healthcare Marketers Can Do to Promote AI Accountability
Here’s what’s frustrating: companies will push you to adopt their AI as fast as possible, but the moment something goes wrong, suddenly it’s a beta product. They simply slap a fine-print disclaimer somewhere in the UI that says “AI makes mistakes,” as if that absolves them of accountability. And I say this with the utmost professionalism - that’s B.S.
If you want patients and customers to trust your use of AI, it's time to get real about accountability. Here are four things healthcare marketing teams can do right now.
1. Be Transparent
Disclose how and when you're using AI in your services and in your marketing. Many health systems already proactively disclose AI use in ambient documentation and let patients opt in or out. The same standard should apply to content marketing: if your content features AI-generated elements, say so and let your audience decide if they’re cool with it. If you’re confident that its use benefits your community, you should have no problem being upfront about it.
2. Be Strategic
Using AI externally needs to benefit your patients and customers first, above all else. The Washington Post launched AI-generated podcasts that summarize stories for subscribers, but the summaries are often flat-out wrong. Their fix was a small disclaimer suggesting users double-check things, which means listeners have to read the original articles themselves to verify what they just heard. That defeats the entire purpose. You can’t pitch a solution to your customers and then hand them homework. If it’s not solving a problem, it is a problem - toss it out.
3. Be Clear About Risks and Mitigation
Tell patients and customers what the risks are and what you’re doing to mitigate them. If you're using ambient documentation in a clinical setting, make it explicit that a physician reviews and approves every note. It’s both reassuring and a demonstration of real accountability.
4. Clarify What Happens When Things Go Wrong
What's the actual recourse when something fails? If AI-assisted notes are signed off incorrectly, what does the patient get - a quick follow-up appointment? A direct call? If an automated email blast spreads misinformation, do customers get a correction?
This becomes even more urgent as AI agents start executing transactions and booking appointments on behalf of users. Who's accountable when the agent gets it wrong? If you're inviting agents into your stack, you need to decide who owns the outcome before something goes wrong.
It’s Time to Get Real About AI Accountability
Healthcare may not be uniquely exposed to AI-driven fraud, but the consequences are certainly greater. Lives are at stake, and trust is already on life support.
If you’re going to expect your patients and customers to get on board with your use of AI, you have to get real about accountability. Otherwise, you might find yourself dodging headlines of your own.