For the last installment of my LDT to IVD series, I wanted to tackle the topic of validations. This one has taken me a while for a couple of reasons. First, I couldn’t come up with any good Fallout references for validations, other than the obvious “Level Up” in the title. But more importantly, this is a challenging topic, because most CAP/CLIA labs are used to the term validation, but frankly don’t always understand why the FDA expects something different. So with that as a backdrop, let me give this a shot, based on some of my experiences with helping labs bring LDTs through the FDA process.
First, let’s discuss some basic terminology. What most CLIA labs would consider validation, the FDA and other IVD regulators would probably call verification. It’s a subtle distinction that has befuddled people in this industry for a long time. So let’s try an example I used when I taught IVD regulatory affairs, one that has nothing to do with test development.
Let’s say you are the executive chef in a restaurant. You want to offer a new dish on the menu. You try different combinations of protein, sauces, etc. until you hit on something that’s really good. That’s development. Now you’re ready to teach it to your staff. You write down the instructions, and if they can make it routinely to your satisfaction, that’s verification.
After offering this dish for a while in your restaurant, you decide to add it to your new cookbook. If other people that you haven’t trained, working in a completely different kitchen environment, can make your dish on a regular basis the same way you did, that’s validation. Your recipe/instructions for use are so solid that anyone with some skills and training can reproduce it.
With that model in mind, let’s turn back to the LDT scenario. There are a few basic questions you should ask yourself when you’re evaluating those in-house CLIA validations for FDA submissions:
How well do you understand the test’s performance? You’ve got to be honest on this one. Could you hand the test to someone outside of your lab and expect them to get the same results? Have you identified (and documented) the critical steps that can impact performance, or does your staff just know them by heart?
How much data did you collect to establish that performance? Some labs will go to market with minimal data, knowing that they can tweak and improve the test over time. You can’t do that with the FDA, so you might need to collect more data to convince the agency that your performance claims are real.
Have you looked at what other companies have done for clearance or approval? This is probably the easiest and most overlooked way to determine what the FDA requires. Look on the FDA’s website at the decision summaries or Summaries of Safety and Effectiveness Data (SSEDs) and see what others have done. That’s going to be your baseline for sample sizes and performance. If others ran 300 samples and you’re hoping to convince the FDA that 200 is sufficient, be prepared for disappointment.
Have you read the FDA and/or IMDRF guidance documents? This is another huge resource for device and IVD developers that many aren’t aware of. The International Medical Device Regulators Forum (IMDRF) has a wealth of resources for device manufacturers. The EU has relied heavily on these documents for its IVD regulations, and the FDA will often reference them in its own guidance. A quick Google search for IMDRF or FDA IVD guidance documents will yield a treasure trove of information (I’d add the URLs, but they have a tendency to move unexpectedly).
What is your comparator method? This is a big one that can trip up even the most experienced LDT developers, because your test’s performance is only as good as the method you validate against. Even something straightforward like clinical outcome can be tricky if the standard of care diagnosis is subjective. For example, comparing your test’s results against something like pathology reads or CT scans (which are subject to expert interpretation) can really impact your validation results.
But the best advice I can give is that, once you’ve done your homework, have someone who’s worked on FDA submissions (and who has no prior knowledge of your test) review your validation package. They should be able to give you a quick assessment that will help you decide if further work is needed. And if you do need additional studies, make sure you run the protocol past the FDA via the Q-Sub process, just to confirm it’s aligned with the agency’s thinking.