Actually using ChatGPT in life science work (and the compliance headaches)
I’ve been working with a few life science companies over the past few months, and ChatGPT keeps coming up in every conversation. But here’s the thing - everyone’s asking if they can use it, and the compliance teams are basically having panic attacks.
I get it. When you’re dealing with regulations and patient data, “move fast and break things” isn’t a workable approach.
What’s happening on the ground
Here’s what I’m seeing in real client work: The business teams are already using ChatGPT. Marketing people are drafting content, medical affairs teams are brainstorming educational materials, and project managers are using it to write better meeting summaries.
And compliance? They’re finding out after the fact and scrambling to figure out what the hell to do about it.
I was in a meeting last week where the head of regulatory said “We need to ban all AI tools” and the CMO laughed out loud. Because her team had been using ChatGPT and getting faster (and in some cases better) results than they’d ever gotten from their agency partners.
The real compliance issues (and the fake ones)
Let’s talk about what actually matters here:
The real stuff:
- Nobody’s putting patient data into ChatGPT (well, they’d better not be)
- Most of the content being generated is internal brainstorming or early-stage drafts
- The outputs still go through the same review processes they always have
The panic stuff that doesn’t actually matter:
- “What if ChatGPT makes up fake clinical data?” - Okay, but your medical writers aren’t using it to write clinical study reports. They’re using it to draft an outline for a conference presentation.
- “What if the FDA finds out we used AI?” - The FDA cares about your data integrity and your clinical evidence. They don’t care if you used spell-check or ChatGPT to help write your submission documents.
What I’m actually recommending
Instead of trying to ban these tools, here’s what seems to be working for the companies that are handling this well:
Set some basic guardrails - No patient data, no proprietary research data, no competitive intelligence. Pretty straightforward (there’s a rough sketch of what an automated pre-check could look like after this list).
Document your processes - If you’re using AI tools in your workflow, just write down what you’re doing. Your SOPs probably already cover “content review and approval” - this just adds one step to that process.
Train people on what not to do - Most compliance issues happen because people don’t know where the line is. Make the line clear.
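To make the guardrails concrete, here’s a minimal sketch of what an automated pre-check on prompts might look like before anything gets pasted into an external AI tool. To be clear, this is illustrative only: the patterns, the MRN and STUDY-1234 formats, and the choice of simple regex matching are my assumptions, not anything these companies actually run, and a real setup would lean on proper DLP tooling tuned to your own identifiers.

```python
import re

# Illustrative patterns only (assumed formats, not a real ruleset).
# A production setup would use DLP tooling and patterns matched to your
# own identifiers: MRNs, protocol numbers, compound code names, etc.
BLOCKED_PATTERNS = {
    "possible patient identifier (MRN-like)": re.compile(r"\bMRN[-\s]?\d{6,}\b", re.IGNORECASE),
    "possible US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible date of birth field": re.compile(r"\bDOB\s*[:=]", re.IGNORECASE),
    "internal study code (placeholder format)": re.compile(r"\bSTUDY-\d{4}\b", re.IGNORECASE),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons a prompt should be held back from an external AI tool."""
    return [reason for reason, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    draft = "Summarize the enrollment issues we discussed for STUDY-1234."
    flags = screen_prompt(draft)
    if flags:
        print("Hold on - this prompt tripped the guardrails:")
        for reason in flags:
            print(f"  - {reason}")
    else:
        print("No obvious red flags; normal content review still applies.")
```

The point isn’t the regexes. It’s that “no patient data, no proprietary data” can be enforced with a lightweight check at the point of use, which is a much easier sell than a blanket ban.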
The thing nobody wants to say out loud
These tools will become standard practice whether compliance teams like it or not. The companies that figure out how to use them safely and effectively are going to have a massive advantage.
And the ones that don’t? They’re going to be stuck paying agencies $50k to produce content that their competitors are creating in-house for basically nothing.
So maybe instead of asking “Can we use ChatGPT?” the better question is “How do we use these tools without screwing up our compliance obligations?”
Because that’s a solvable problem. The other approach - pretending these tools don’t exist - isn’t really an option anymore.