Deconstructing the Diligence Process: An Approach to Vetting New Product Theses
Source: lifescivc.com | Published: Wed, 24 Apr 2024 11:00:21 +0000
By Aimee Raleigh, Principal at Atlas Venture, as part of the From The Trenches feature of LifeSciVC
Ever wondered what goes into diligencing a new idea, program, company, or platform? While each diligence is unique and every investor will have …
The post Deconstructing the Diligence Process: An Approach to Vetting New Product Theses appeared first on LifeSciVC.
Tags: Biotech investment themes Drug discovery From The Trenches
Pharmacology: The Anchor for Nearly Every Diligence
Source: lifescivc.com | Published: Thu, 25 Apr 2024 11:00:33 +0000
By Haojing Rong and Aimee Raleigh, as part of the From The Trenches feature of LifeSciVC
This blog post is the second in a series on key diligence concepts and questions. If you missed the intro blog post yesterday, click …
Tags: Biotech investment themes Drug discovery From The Trenches Science & Medicine Translational research Efficacy IVIVc PD pharmacology PK
The Biotech Startup Contraction Continues… And That’s A Good Thing
Source: lifescivc.com | Published: Fri, 26 Apr 2024 11:00:38 +0000
Venture creation in biotech is witnessing a sustained contraction. After the pandemic bubble’s over-indulgence, the venture ecosystem appears to have reset its pace of launching new startups. According to the latest Pitchbook data, venture creation in biotech hit its slowest …
Tags: Biotech financing Biotech investment themes Capital markets Fundraising Biotech startups venture creation
Stars and Scars… Some Lessons Learned About Leadership
Source: lifescivc.com | Published: Thu, 02 May 2024 11:00:41 +0000
By Arthur O. Tzianabos, PhD, CEO of Lifordi Immunotherapeutics, as part of the From The Trenches feature of LifeSciVC
As the biotech industry continues to pick up steam, I have been getting a number of phone calls from folks in …
Tags: Corporate Culture From The Trenches Leadership
Mariana Oncology’s Radiopharm Platform Acquired By Novartis
Source: lifescivc.com | Published: Mon, 06 May 2024 10:42:46 +0000
Novartis recently announced the acquisition of Mariana Oncology, an emerging biotech focused on advancing a radioligand therapeutics platform, for up to $1.75 billion in upfronts and future milestones. The capstone of its three short years of operations, this acquisition represents …
Tags: Exits IPOs M&As Leadership Portfolio news Talent #RLT Mariana Oncology Novartis radioligand therapy Radiopharm
Has Spring Sprouted New Growth in Immuno-Oncology?
Source: lifescivc.com | Published: Thu, 09 May 2024 10:38:33 +0000
By Jonathan Montagu, CEO of HotSpot Therapeutics, as part of the From The Trenches feature of LifeSciVC
As Boston’s weather has started its turn from the frigid darkness that is a northeast winter to the longer days and lighter conditions …
Tags: From The Trenches Science & Medicine Cancer CBL-b I/O Immuno-Oncology
Boiling It Down: Conveying Complexity For Decision-makers
Source: lifescivc.com | Published: Wed, 22 May 2024 11:00:28 +0000
By Ankit Mahadevia, former CEO of Spero Therapeutics, as part of the From The Trenches feature of LifeSciVC
Drug development is complex. So is running a business. Sometimes, the work of doing both can make your head spin. In my …
Tags: From The Trenches Leadership
Neuro-Immunology: The Promise Of A Differentiated Approach To Neurodegenerative Disease
Source: lifescivc.com | Published: Wed, 05 Jun 2024 11:00:30 +0000
By Ivana Magovčević-Liebisch, CEO of Vigil Neuroscience, as part of the From The Trenches feature of LifeSciVC
In the last decade, our industry has made great strides in combating cancer by harnessing the body’s own immune system. As it was …
Tags: Drug discovery From The Trenches Science & Medicine neurodegeneration neuroimmunology neuroinflammation TREM2
A Molecular Biologist’s Advice For Life
Source: lifescivc.com | Published: Mon, 01 Jul 2024 10:25:46 +0000
Having just turned 50, I’ve been reflecting on my first half-century of late… many fun and impactful moments, a few regrets, and a life I’ve tried to live to the fullest. One thread that has run throughout it has been …
Tags: Bioentrepreneurship Leadership Personal The Human Element
Looking for Opportunities to Accelerate Clinical Research in Rare Diseases
Source: lifescivc.com | Published: Wed, 17 Jul 2024 11:00:40 +0000
By Mike Cloonan, Chief Executive Officer of Sionna Therapeutics, as part of the From The Trenches feature of LifeSciVC
The drug development process in rare diseases is rife with challenges, especially when companies target significant differentiation or first-in-class targets. Identifying …
Tags: Business Development From The Trenches Portfolio news Rare Diseases Science & Medicine CFTR Cystic Fibrosis NBD1 Sionna Therapeutics
Medicinal Chemistry In The Age Of Artificial Intelligence
Source: lifescivc.com | Published: Mon, 05 Aug 2024 11:00:53 +0000
By Peter Tummino, CSO of Nimbus Therapeutics, as part of the From The Trenches feature of LifeSciVC
“Over the next five to 10 years, our goal is to become a company that’s leading the world in personalized medicines, a company …
Tags: Drug discovery From The Trenches R&D Productivity
A Biotech Midsummer’s Madness
Source: lifescivc.com | Published: Wed, 21 Aug 2024 15:30:17 +0000
By Arthur Tzianabos, CEO of Lifordi Immunotherapeutics, as part of the From The Trenches feature of LifeSciVC
Greetings from Lake Winnipesaukee in NH where I am at this time every year. It’s midsummer and vacation time for me and the …
Tags: Bioentrepreneurship From The Trenches Strategy
Reflections On My Experience As A Board Member
Source: lifescivc.com | Published: Wed, 18 Sep 2024 11:00:01 +0000
By Ivana Magovčević-Liebisch, CEO of Vigil Neuroscience, as part of the From The Trenches feature of LifeSciVC
In an industry where boom and bust cycles occur regularly and 90 percent of drug candidates fail to reach the market, an outstanding …
Tags: Boards and governance From The Trenches Leadership
Biotech Risk Cycles: Assets And Platforms
Source: lifescivc.com | Published: Mon, 28 Oct 2024 10:00:52 +0000
Today’s market likes products. Platforms aren’t in vogue anymore. Investors, especially in the public markets, only want late stage de-risked assets. Pharma only seems to be buying these kinds of assets. VCs need to focus on clinical stage companies. Or …
Tags: Biotech financing Biotech investment themes Capital efficiency Capital markets Exits IPOs M&As
Tell the UK’s research regulator to do more on clinical trial transparency
Source: www.alltrials.net | Published: Mon, 29 Jul 2019 13:41:09 +0000
The UK body that oversees health research is writing a new strategy on clinical trial transparency and it wants to hear opinions on it. The Health Research Authority (HRA) says its strategy aims to “make transparency easy, make compliance clear and make information public.” It has opened a public consultation on the strategy and some […]
Tags: News
UK universities and NHS trusts that flout the rules on clinical trials identified in report to Parliament
Source: www.alltrials.net | Published: Thu, 24 Oct 2019 00:19:34 +0000
An AllTrials report for the House of Commons Science and Technology Select Committee this week has found that 33 NHS trust sponsors and six UK universities are reporting none of their clinical trial results, while others have gone from 0% to 100% following an announcement from the Select Committee in January that universities and NHS […]
Tags: News Uncategorized
AllTrials guide to asking academic institutions about missing results
Source: www.alltrials.net | Published: Fri, 17 Jan 2020 11:29:12 +0000
When university and hospital trusts were called to the UK parliament last year to answer questions on why they were not following the rules on reporting results, we saw how effective the questioning from politicians was. Those of you who watched the parliamentary session saw the pressure the university representatives were put under. Because the politicians asked […]
Tags: News
Hundreds of clinical trials ruled to be breaking the law
Source: www.alltrials.net | Published: Thu, 05 Mar 2020 11:46:11 +0000
A judge in New York has ruled that hundreds of clinical trials registered on ClinicalTrials.gov are breaking the law by not reporting results. The ruling came in a court case launched against the US Department of Health and Human Services by two plaintiffs, a family doctor and a professor of journalism. The case focused on […]
Tags: News
Clinical Trial Enrollment, ASCO 2013 Edition
Source: www.placebocontrol.com | Published: Thu, 30 May 2013 17:43:00 +0000

Even by the already-painfully-embarrassingly-low standards of clinical trial enrollment in general, patient enrollment in cancer clinical trials is slow. Horribly slow. In many cancer trials, randomizing one patient every three or four months isn't bad at all – in fact, it's par for the course. The most commonly-cited number is that only 3% of cancer patients participate in a trial – and although exact details of how that number is measured are remarkably difficult to pin down, it certainly can't be too far from reality.

Ultimately, the cost of slow enrollment is borne almost entirely by patients; their payment takes the form of fewer new therapies and less evidence to support their treatment decisions. So when a couple dozen thousand of the world's top oncologists fly into Chicago to meet, you'd figure that improving accrual would be high on everyone’s agenda. You can't run your trial without patients, after all.

But every year, the annual ASCO meeting underdelivers in new ideas for getting more patients into trials. I suppose this is a consequence of ASCO's members-only focus: getting the oncologists themselves to address patient accrual is a bit like asking NASCAR drivers to tackle the problems of aerodynamics, engine design, and fuel chemistry. Nonetheless, every year, a few brave souls do try. Here is a quick rundown of accrual-related abstracts at this year’s meeting, conveniently sorted into 3 logical categories:

1. As Lord Kelvin may or may not have said, “If you cannot measure it, you cannot improve it.”

Abstract e15572: Inadequate data availability on clinical trial accrual and its effect on progress in cancer research

Probably the most sensible of this year's crop, because rather than trying to make something out of nothing, the authors measure exactly how pervasive the nothing is.
Specifically, they attempt to obtain fairly basic patient accrual data for the last three years' worth of clinical trials in kidney cancer. Out of 108 trials identified, they managed to get – via search and direct inquiries with the trial sponsors – basic accrual data for only 43 (40%). That certainly qualifies as “terrible”, though the authors content themselves with “poor”.

Interestingly, exactly zero of the 32 industry-sponsored trials responded to the authors' initial survey. This fits with my impression that pharma companies continue to think of accrual data as proprietary, though what sort of business advantage it gives them is unclear. Any one company will have only run a small fraction of these studies, greatly limiting their ability to draw anything resembling a valid conclusion.

Abstract TPS6645: Predictors of accrual success for cooperative group trials: The Cancer and Leukemia Group B (Alliance) experience

CALGB investigators look at 110 trials over the past 10 years to see if they can identify any predictive markers of successful enrollment. Unfortunately, the trials themselves are pretty heterogeneous (accrual periods ranged from 6 months to 8.8 years), so finding a consistent marker for successful trials would seem unlikely. And, in fact, none of the usual suspects (e.g., startup time, disease prevalence) appears to have been significant. The exception was provision of medication by the study, which was positively associated with successful enrollment.

The major limitation with this study, apart from the variability of trials measured, is in its definition of “successful”, which is simply the total number of planned enrolled patients. Under both of their definitions, a slow-enrolling trial that drags on for years before finally reaching its goal is successful, whereas if that same trial had been stopped early it is counted as unsuccessful.
While that sometimes may be the case, it's easy to imagine situations where allowing a slow trial to drag on is a painful waste of resources – especially if results are delayed enough to bring their relevance into question. Even worse, though, is that a trial’s enrollment goal is itself a prediction. The trial steering committee determines how many sites, and what resources, will be needed to hit the number needed for analysis. So in the end, this study is attempting to identify predictors of successful predictions, and there is no reason to believe that the initial enrollment predictions were made with any consistent methodology.

2. If you don't know, maybe ask somebody?

Abstract 8592: Strategies to overcome barriers to accrual (BtA) to NCI-sponsored clinical trials: A project of the NCI-Myeloma Steering Committee Accrual Working Group (NCI-MYSC AWG)

Abstract 1596: Rapid online feedback to improve clinical trial accrual: CODEL anaplastic glioma (AG) (NCCTG/Alliance N0577) as a model

With these two abstracts we celebrate and continue the time-honored tradition of alchemy, whereby we transmute base opinion into golden data. The magic number appears to be 100: if you've got 3 digits' worth of doctors telling you how they feel, that must be worth something.

In the first abstract, a working group is formed to identify and vote on the major barriers to accrual in oncology trials. Then – and this is where the magic happens – that same group is asked to identify and vote on possible ways to overcome those barriers.

In the second, a diverse assortment of community oncologists were given an online survey to provide feedback on the design of a phase 3 trial in light of recent new data. The abstract doesn't specify who was initially sent the survey, so we cannot tell the response rate, or compare survey responders to the general population (I'll take a wild guess and go with “massive response bias”). Market research is sometimes useful.
But what cancer clinical trials do not need right now are more surveys and working groups. The “strategies” listed in the first abstract are part of the same cluster of ideas that have been on the table for years now, with no appreciable increase in trial accrual.

3. The obligatory “What the What?” abstract

Abstract 6564: Minority accrual on a prospective study targeting a diverse U.S. breast cancer population: An analysis of Wake Forest CCOP research base protocol 97609

The force with which my head hit my desk after reading this abstract made me concerned that it had left permanent scarring. If this had been re-titled “Poor Measurement of Accrual Factors Leads to Inaccurate Accrual Reporting”, would it still have been accepted for this year’s meeting? That's certainly a more accurate title.

Let’s review: a trial intends to enroll both white and minority patients. Whites enroll much faster, leading to a period where only minority patients are recruited. Then, according to the authors, “an almost 4-fold increase in minority accrual raises question of accrual disparity.” So, sites will only recruit minority patients when they have no choice?

But wait: the number of sites wasn't the same during the two periods, and start-up times were staggered. Adjusting for actual site time, the average minority accrual rate was 0.60 patients/site/month in the first part and 0.56 in the second. So the apparent 4-fold increase was entirely an artifact of bad math.

This would be horribly embarrassing were it not for the fact that bad math seems to be endemic in clinical trial enrollment. Failing to adjust for start-up time and number of sites is so routine that not doing it is grounds for a presentation.

The bottom line

What we need now is to rigorously (and prospectively) compare and measure accrual interventions. We have lots of candidate ideas, and there is no need for more retrospective studies, working groups, or opinion polls to speculate on which ones will work best.
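As a concrete illustration of the site-time adjustment discussed above, here is a minimal sketch. The site counts and calendar durations are hypothetical inventions for illustration; only the roughly 0.60 and 0.56 patients/site/month figures come from the abstract as reported:

```python
# Site-time-adjusted accrual: patients per site per month of active enrollment.
# Shows how a raw monthly rate can appear to jump ~4-fold between two periods
# while the per-site accrual rate is essentially flat.

def raw_monthly_rate(patients: int, calendar_months: float) -> float:
    """Unadjusted rate: total patients divided by calendar time."""
    return patients / calendar_months

def adjusted_rate(patients: int, site_months: float) -> float:
    """Adjusted rate: site_months is the sum, over all sites, of the
    months each site was actually open to enrollment."""
    return patients / site_months

# Hypothetical period 1: 5 sites open for 10 months (50 site-months), 30 patients.
# Hypothetical period 2: 25 sites open for 2 months (50 site-months), 28 patients.
p1_raw = raw_monthly_rate(30, 10)   # 3.0 patients/month
p2_raw = raw_monthly_rate(28, 2)    # 14.0 patients/month: an apparent ~4.7x jump
p1_adj = adjusted_rate(30, 50)      # 0.60 patients/site/month
p2_adj = adjusted_rate(28, 50)      # 0.56 patients/site/month: essentially flat
```

With staggered start-ups, the raw comparison mostly measures how many sites happened to be open, not how willing those sites were to enroll.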
Where possible, accrual interventions should themselves be randomized to minimize confounding variables which prevent accurate assessment. Data needs to be uniformly and completely collected. In other words, the standards that we already use for clinical trials need to be applied to the enrollment measures we use to engage patients to participate in those trials.

This is not an optional consideration. It is an ethical obligation we have to cancer patients: we need to assure that we are doing all we can to maximize the rate at which we generate new evidence and test new therapies.

[Image credit: Logarithmic turtle accrual rates courtesy of Flickr user joleson.]

Tags: ASCO ethics oncology trials patient recruitment trial delays trial design
Can FDA's New Transparency Survive Avandia?
Source: www.placebocontrol.com | Published: Wed, 05 Jun 2013 03:53:00 +0000

PDUFA V commitments signal a strong commitment to tolerance of open debate in the face of uncertainty.

I can admit to a rather powerful lack of enthusiasm when reading about interpersonal squabbles. It’s even worse in the scientific world: when I read about debates getting mired in personal attacks I tend to simply stop reading and move on to something else. However, the really interesting part of this week’s meeting of an FDA joint Advisory Committee to discuss the controversial diabetes drug Avandia – at least in the sense of likely long-term impact – is not the scientific question under discussion, but the surfacing and handling of the raging interpersonal battle going on right now inside the Division of Cardiovascular and Renal Products. So I'll have to swallow my distaste and follow along with the drama.

[Image caption: Two words that make us mistrust Duke: Anil Potti. Christian Laettner.]

Not that the scientific question at hand – does Avandia pose significant heart risks? – isn't interesting. It is. But if there’s one thing that everyone seems to agree on, it’s that we don’t have good data on the topic. Despite the re-adjudication of RECORD, no one trusts its design (and, ironically, the one trial with a design to rigorously answer the question was halted after intense pressure, despite an AdComm recommendation that it continue). And no one seems particularly enthused about changing the current status of Avandia: in all likelihood it will continue to be permitted to be marketed under heavy restrictions. Rather than changing the future of diabetes, I suspect the committee will be content to let us slog along the same mucky trail.

The really interesting question, one that will potentially impact CDER for years to come, is how it can function with frothing, open dissent among its staffers.
As has been widely reported, FDA reviewer Tom Marciniak has written a rather wild and vitriolic assessment of the RECORD trial, excoriating most everyone involved. In a particularly stunning passage, Marciniak appears to claim that the entire output of anyone working at Duke University cannot be trusted because of the fraud committed by Duke cancer researcher Anil Potti:

“I would have thought that the two words ‘Anil Potti’ are sufficient for convincing anyone that Duke University is a poor choice for a contractor whose task it is to confirm the integrity of scientific research.”

(One wonders how far Marciniak is willing to take his guilt-by-association theme. Are the words “Cheng Yi Liang” sufficient to convince us that all FDA employees, including Marciniak, are poor choices for deciding matters relating to publicly-traded companies? Should I not comment on government activities because I’m a resident of Illinois (my two words: “Rod Blagojevich”)?)

Rather than censoring or reprimanding Marciniak, his supervisors have taken the extraordinary step of letting him publicly air his criticisms, and then they have in turn publicly criticized his methods and approach. I have been unable to think of a similar situation at any regulatory agency. The tolerance for dissent being displayed by FDA is, I believe, completely unprecedented.

And that’s the cliffhanger for me: can the FDA’s commitment to transparency extend so far as to accommodate public disagreements about its own approval decisions? Can it do so even when the disagreements take an extremely nasty and inappropriate tone? Rather than considering that open debate is a good thing, will journalists jump on the drama and portray agency leadership as weak and indecisive? Will the usual suspects in Congress be able to exploit this disagreement for their own political gain? How many House subcommittees will be summoning Janet Woodcock in the coming weeks?
I think what Bob Temple and Norman Stockbridge are doing is a tremendous experiment in open government. If they can pull it off, it could force other agencies to radically rethink how they go about crafting and implementing regulations. However, I also worry that it is politically simply not a viable approach, and that the agency will ultimately be seriously hurt by attacks from the media and legislators.

Where is this coming from?

As part of its recent PDUFA V commitment, the FDA put out a fascinating draft document, Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision-Making. It didn't get a lot of attention when first published back in February (few FDA documents do). However, it lays out a rather bold vision for how the FDA can acknowledge the existence of uncertainty in its evaluation of new drugs. Its proposed structure even envisions an open and honest accounting of divergent interpretations of data:

[Image caption: When they're frothing at the mouth, even Atticus doesn't let them publish a review]

“A framework for benefit-risk decision-making that summarizes the relevant facts, uncertainties, and key areas of judgment, and clearly explains how these factors influence a regulatory decision, can greatly inform and clarify the regulatory discussion. Such a framework can provide transparency regarding the basis of conflicting recommendations made by different parties using the same information.” (Emphasis mine.)

Of course, the structured framework here is designed to reflect rational disagreement. Marciniak’s scattershot insults are in many ways a terrible first case for trying out a new level of transparency. The draft framework notes that safety issues, like Avandia, are some of the major areas of uncertainty in the regulatory process.
Contrast this vision of coolly and systematically addressing uncertainties with the sad reality of Marciniak’s attack:

“In contrast to the prospective and highly planned studies of effectiveness, safety findings emerge from a wide range of sources, including spontaneous adverse event reports, epidemiology studies, meta-analyses of controlled trials, or in some cases from randomized, controlled trials. However, even controlled trials, where the evidence of an effect is generally most persuasive, can sometimes provide contradictory and inconsistent findings on safety as the analyses are in many cases not planned and often reflect multiple testing. A systematic approach that specifies the sources of evidence, the strength of each piece of evidence, and draws conclusions that explain how the uncertainty weighed on the decision, can lead to more explicit communication of regulatory decisions. We anticipate that this work will continue beyond FY 2013.”

I hope that work will continue beyond 2013. Thoughtful, open discussions of real uncertainties are one of the most worthwhile goals FDA can aspire to, even if it means having to learn how to do so without letting the Marciniaks of the world scuttle the whole endeavor.

[Update June 6: Further bolstering the idea that the AdComm is just as much about FDA's ability to transparently manage differences of expert opinion in the face of uncertain data, CDER Director Janet Woodcock posted this note on the FDA's blog. She's pretty explicit about the bigger picture:

“There have been, and continue to be, differences of opinion and scientific disputes, which is not uncommon within the agency, stemming from varied conclusions about the existing data, not only with Avandia, but with other FDA-regulated products.
At FDA, we actively encourage and welcome robust scientific debate on the complex matters we deal with — as such a transparent approach ensures the scientific input we need, enriches the discussions, and enhances our decision-making.”

I agree, and hope she can pull it off.]

Tags: data quality drug safety Duke FDA GSK Tom Marciniak transparency
Pediatric Trial Enrollment (Shameless DIA Self-Promotion, Part 1)
Source: www.placebocontrol.com | Published: Wed, 19 Jun 2013 22:19:00 +0000

[Fair Warning: I have generally tried to keep this blog separate from my corporate existence, but am making an exception for two quick posts about the upcoming DIA 2013 Annual Meeting.]

Improving Enrollment in Pediatric Clinical Trials

Logistically, ethically, and emotionally, involving children in medical research is greatly different from the same research in adults. Some of the toughest clinical trials I've worked on, across a number of therapeutic areas, have been pediatric ones. They challenge you to come up with different approaches to introducing and explaining clinical research – approaches that have to work for doctors, kids, and parents simultaneously.

On Thursday, June 27, Don Sickler, one of my team members, will be chairing a session titled “Parents as Partners: Engaging Caregivers for Pediatric Trials”. It should be a good session.

Joining Don are 2 people I've had the pleasure of working with in the past. Both of them combine strong knowledge of clinical research with a massive amount of positive energy and enthusiasm (no doubt a big part of what makes them successful). However, they also differ in one key aspect: what they work on.

One of them – Tristen Moors from Hyperion Therapeutics – works on an ultra-rare condition, Urea Cycle Disorder, a disease affecting only a few hundred children every year. On the other hand, Dr. Ann Edmunds is an ENT working in a thriving private practice. I met her because she was consistently the top enroller in a number of trials relating to tympanostomy tube insertion. Surgery to place “t-tubes” is one of the most common and routine outpatient surgeries there is, with an estimated half million kids getting tubes each year.

Each presents a special challenge: for rare conditions, how do you even find enough patients?
For routine procedures, how do you convince parents to complicate their (and their children’s) lives by signing up for a multi-visit, multi-procedure trial? Ann and Tristen have spent a lot of time tackling these issues, and should have some great advice to give.

For more information on the session, here’s Don’s posting on our news blog.

Tags: DIA pediatric trials
Preview of Enrollment Analytics: Moving Beyond the Funnel (Shameless DIA Self-Promotion, Part 2)
Source: www.placebocontrol.com | Published: Fri, 21 Jun 2013 16:59:00 +0000

Are we looking at our enrollment data in the right way? I will be chairing a session on Tuesday on this topic, joined by a couple of great presenters (Diana Chung from Gilead and Gretchen Goller from PRA). Here's a short preview of the session:

Hope to see you there. It should be a great discussion.

Session Details: June 25, 1:45 PM – 3:15 PM; Session Number: 241; Room Number: 205B

1. Enrollment Analytics: Moving Beyond the Funnel – Paul Ivsin, VP, Consulting Director, CAHG Clinical Trials
2. Use of Analytics for Operational Planning – Diana Chung, MSc, Associate Director, Clinical Operations, Gilead
3. Using Enrollment Data to Communicate Effectively with Sites – Gretchen Goller, MA, Senior Director, Patient Access and Retention Services, PRA

Tags: DIA metrics patient recruitment
Brazen Scofflaws? Are Pharma Companies Really Completely Ignoring FDAAA?
Source: www.placebocontrol.com | Published: Wed, 31 Jul 2013 16:54:00 +0000

Results reporting requirements are pretty clear. Maybe critics should re-check their methods?

Ben Goldacre has rather famously described the clinical trial reporting requirements in the Food and Drug Administration Amendments Act of 2007 as a “fake fix” that was being thoroughly “ignored” by the pharmaceutical industry.

[Image caption: Pharma: breaking the law in broad daylight?]

He makes this sweeping, unconditional proclamation about the industry and its regulators on the basis of a single study in the BMJ, blithely ignoring the fact that a) the authors of the study admitted that they could not adequately determine the number of studies that were meeting FDAAA requirements, and b) a subsequent FDA review identified only 15 trials potentially out of compliance, out of a pool of thousands.

Despite the fact that the FDA, which has access to more data, says that only a tiny fraction of studies are potentially noncompliant, Goldacre's frequently repeated claims that the law is being ignored seem to have caught on in the general run of journalistic and academic discussions about FDAAA.

And now there appears to be additional support for the idea that a large percentage of studies are noncompliant with FDAAA results reporting requirements, in the form of a new study in the Journal of Clinical Oncology: “Public Availability of Results of Trials Assessing Cancer Drugs in the United States” by Thi-Anh-Hoa Nguyen, et al. In it, the authors report even lower levels of FDAAA compliance – a mere 20% of randomized clinical trials met requirements of posting results on clinicaltrials.gov within one year. Unsurprisingly, the JCO results were immediately picked up and circulated uncritically by the usual suspects.
I have to admit not knowing much about pure academic and cooperative group trial operations, but I do know a lot about industry-run trials – simply put, I find the data as presented in the JCO study impossible to believe. Everyone I work with in pharma trials is painfully aware of the regulatory environment they work in. FDAAA compliance is a given, a no-brainer: large internal legal and compliance teams are everywhere, ensuring that the letter of the law is followed in clinical trial conduct. If anything, pharma sponsors are twitchily over-compliant with these kinds of regulations (for example, most still adhere to 100% verification of source documentation – sending monitors to physically examine every single record of every single enrolled patient – even after the FDA explicitly told them they didn't have to).

I realize that’s anecdotal evidence, but when such behavior is so pervasive, it’s difficult to buy into data that says it’s not happening at all. The idea that all pharmaceutical companies are ignoring a highly visible law that’s been on the books for 6 years is extraordinary. Are they really so brazenly breaking the rules? And is FDA abetting them by disseminating incorrect information? Those are extraordinary claims, and would seem to require extraordinary evidence. The BMJ study had clear limitations that make its implications entirely unclear. Is the JCO article any better?

Some Issues

In fact, there appear to be at least two major issues that may have seriously compromised the JCO findings:

1. Studies that were certified as being eligible for delayed reporting requirements, but do not have their certification date listed.

The study authors make what I believe to be a completely unwarranted assumption:

“In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.”
It’s unclear to me why the authors think the certifications “should be” publicly available. In re-reading FDAAA section 801, I don’t see any reference to that being a requirement. I suppose I could have missed it, but the authors provide a citation to a page that clearly does not list any such requirement. But their methodology assumes that all trials that have a certification will have it posted: If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov. If a sponsor gets approval from FDA to delay reporting (as is routine for all drugs that are either not approved for any indication, or being studied for a new indication – i.e., the overwhelming majority of pharma drug trials), but doesn't post that approval on the registry, the JCO authors deem that trial “noncompliant”. This is not warranted: the company may have simply chosen not to post the certification despite being entirely FDAAA compliant. 2. Studies that were previously certified for delayed reporting and subsequently reported results It is hard to tell how the authors treated this rather substantial category of trials. If a trial was certified for delayed results reporting, but then subsequently published results, the certification date becomes difficult to find. Indeed, it appears that, in cases where there were results, the authors simply looked at the time from study completion to results posting. In effect, this would re-classify almost every single one of these trials from compliant to non-compliant. 
Consider this example trial:
- Phase 3 trial completes January 2010
- Certification of delayed results obtained December 2010 (compliant)
- FDA approval June 2013
- Results posted July 2013 (compliant)
In looking at the JCO paper's methods section, it really appears that this trial would be classified as reporting results 3.5 years after completion, and therefore be considered noncompliant with FDAAA. In fact, this trial is entirely kosher, and would be extremely typical of many phase 2 and 3 trials in industry. Time for Some Data Transparency The above two concerns may, in fact, be non-issues. They certainly appear to be implied in the JCO paper, but the wording isn't terribly detailed and could easily be giving me the wrong impression. However, if either or both of these issues are real, they may affect the vast majority of "noncompliant" trials in this study. Given that most clinical trials are looking either at new drugs or at new indications for existing drugs, these two issues may entirely explain the gap between the JCO study and the unequivocal FDA statements that contradict it. I hope that, given the importance of transparency in research, the authors will be willing to post their data set publicly so that others can review their assumptions and independently verify their conclusions. It would be more than a bit ironic otherwise. [Image credit: Shameless lawlessness via Flickr user willytronics.] Thi-Anh-Hoa Nguyen, Agnes Dechartres, Soraya Belgherbi, and Philippe Ravaud (2013). Public Availability of Results of Trials Assessing Cancer Drugs in the United States. Journal of Clinical Oncology. DOI: 10.1200/JCO.2012.46.9577 Full Article Ben Goldacre clinicaltrials.gov FDA FDAAA JCO oncology trials pharma legislation Pharmalot transparency
d Counterfeit Drugs in Clinical Trials? By www.placebocontrol.com Published On :: Thu, 08 Aug 2013 03:02:00 +0000 This morning I ran across a bit of a coffee-spitter: in the middle of an otherwise opaquely underinformative press release from TransCelerate Biopharma about the launch of their "Comparator Network" - which will perhaps streamline member companies' ability to obtain drugs from each other for clinical trials using active comparator arms - the CEO of the consortium, Dalvir Gill, drops a rather remarkable quote: "Locating and accessing these comparators at the right time, in the right quantities and with the accompanying drug stability and regulatory information we need, doesn't always happen efficiently. This is further complicated by infiltration of the commercial drug supply chain by counterfeit drugs. With the activation of our Comparator Network the participating TransCelerate companies will be able to source these comparator drugs directly from each other, be able to secure supply when they need it in the quantities they need, have access to drug data and totally mitigate the risk of counterfeit drugs in that clinical trial." [Emphasis added.] I have to admit to being a little floored by the idea that there is any sort of risk, in industry-run clinical trials, of counterfeit medication "infiltration". Does Gill know something that the rest of us don't? Or is this just an awkward slap at perceived competition – innuendo against the companies that currently manage clinical trial comparator drug supply? Or an attempt at depicting the trials of non-TransCelerate members as risky and prone to fraud? Whichever it is, it could use some explaining. Thinking I might have missed something, I did do a quick literature search to see if I could come across any references to counterfeits in trials. 
Google Scholar and PubMed produced no useful results, but Wikipedia helpfully noted in its entry on counterfeit medications: Counterfeit drugs have even been known to have been involved in clinical drug trials.[citation needed] And on that point, I think we can agree: Citation needed. I hope the folks at Transcelerate will oblige. Full Article active comparators drug safety operations TransCelerate
d Every Unhappy PREA Study is Unhappy in its Own Way By www.placebocontrol.com Published On :: Wed, 04 Sep 2013 04:59:00 +0000 “Children are not small adults.” We invoke this saying, in a vague and hand-wavy manner, whenever we talk about the need to study drugs in pediatric populations. It’s an interesting idea, but it really cries out for further elaboration. If they’re not small adults, what are they? Are pediatric efficacy and safety totally uncorrelated with adult efficacy and safety? Or are children actually kind of like small adults in certain important ways? Pediatric post-marketing studies have been completed for over 200 compounds in the years since BPCA (2002, offering a reward of 6 months extra market exclusivity/patent life to any drug conducting requested pediatric studies) and PREA (2007, giving FDA power to require pediatric studies) were enacted. I think it is fair to say that at this point, it would be nice to have some sort of comprehensive idea of how FDA views the risks associated with treating children with medications tested only on adults. Are they in general less efficacious? More? Is PK in children predictable from adult studies a reasonable percentage of the time, or does it need to be recharacterized with every drug? Essentially, my point is that BPCA/PREA is a pretty crude tool: it is both too broad in setting what is basically a single standard for all new adult medications, and too vague as to what exactly that standard is. In fact, a 2008 published review from FDA staffers and a 2012 Institute of Medicine report both show one clear trend: in a significant majority of cases, pediatric studies resulted in validating the adult medication in children, mostly with predictable dose and formulation adjustments (77 of 108 compounds (71%) in the FDA review, and 27 of 45 (60%) in the IOM review, had label changes that simply reflected that use of the drug was acceptable in younger patients). 
So, it seems, most of the time, children are in fact not terribly unlike small adults. But it’s also true that the percentages of studies that show lack of efficacy, or bring to light a new safety issue with the drug’s use in children, are well above zero. There is some extremely important information here. To paraphrase John Wanamaker: we know that half our PREA studies are a waste of time; we just don’t know which half. This would seem to me to be the highest regulatory priority – to be able to predict which new drugs will work as expected in children, and which may truly require further study. After a couple hundred compounds have gone through this process, we really ought to be better positioned to understand how certain pharmacological properties might increase or decrease the risk of drugs behaving differently than expected in children. Unfortunately, neither the FDA nor the IOM papers venture any hypotheses about this – both end up providing long lists of examples of certain points, but no explanatory mechanisms that might enable us to engage in some predictive risk assessment. While FDASIA did not advance PREA in terms of more rigorously defining the scope of pediatric requirements (or, better yet, requiring FDA to do so), it did address one lingering concern by requiring that FDA publish non-compliance letters for sponsors that do not meet their commitments. (PREA, like FDAAA, is a bit plagued by lingering suspicions that it’s widely ignored by industry.) The first batch of letters and responses has been published, and it offers some early insights into the problems engendered by the nebulous nature of PREA and its implementation. These examples, unfortunately, are still a bit opaque – we will need to wait for the FDA responses to the sponsors to see if some of the counter-claims are deemed credible. 
In addition, there are a few references to prior deferral requests, but the details of the request (and rationales for the subsequent FDA denials) do not appear to be publicly available. You can read FDA’s take on the new postings on their blog, or in the predictably excellent coverage from Alec Gaffney at RAPS. Looking through the first 4 drugs publicly identified for noncompliance, the clear trend is that there is no trend. All these PREA requirements have been missed for dramatically different reasons. Here’s a quick rundown of the drugs at issue – and, more interestingly, the sponsor responses: 1. Renvela - Genzyme (full response) Genzyme appears to be laying responsibility for the delay firmly at FDA’s feet here, basically claiming that FDA continued to pile on new requirements over time: Genzyme’s correspondence with the FDA regarding pediatric plans and design of this study began in 2006 and included a face to face meeting with FDA in May 2009. Genzyme submitted 8 revisions of the pediatric study design based on feedback from FDA including that received in 4 General Advice Letters. The Advice Letter dated February 17, 2011 contained further recommendations on the study design, yet still required the final clinical study report by December 31, 2011. This highlights one of PREA’s real problems: the requirements as specified in most drug approval letters are not specific enough to fully dictate the study protocol. Instead, there is a lot of back and forth between the sponsor and FDA, and it seems that FDA does not always fully account for their own contribution to delays in getting studies started. 2. Hectorol - Genzyme (full response) In this one, Genzyme blames the FDA not for too much feedback, but for none at all: On December 22, 2010, Genzyme submitted a revised pediatric development plan (Serial No. 212) which was intended to address FDA feedback and concerns that had been received to date. This submission included proposed protocol HECT05310. [...] 
At this time, Genzyme has not received feedback from the FDA on the protocol included in the December 22, 2010 submission. If this is true, it appears extremely embarrassing for FDA. Have they really not provided feedback in over 2.5 years, and yet are still sending noncompliance letters to the sponsor? It will be very interesting to see an FDA response to this. 3. Cleviprex – The Medicines Company (full response) This is the only case where the pharma company appears to be clearly trying to game the system a bit. According to their response: Recognizing that, due to circumstances beyond the company’s control, the pediatric assessment could not be completed by the due date, The Medicines Company notified FDA in September 2010, and sought an extension. At that time, it was FDA’s view that no extensions were available. Following the passage of FDASIA, which specifically authorizes deferral extensions, the company again sought a deferral extension in December 2012. So, after hearing in 2010 that they had to move forward, the company promptly waited 2 years to ask for another extension. During that time, the letter seems to imply that they did not try to move the study forward at all, preferring to roll the dice and wait for changing laws to help them get out from under the obligation. 4. Twinject/Adrenaclick – Amedra (full response) The details of this one are heavily redacted, but it may also be a bit of gamesmanship from the sponsor. After purchasing the injectors, Amedra asked for a deferral. When the deferral was denied, they simply asked for the requirements to be waived altogether. That seems backwards, but perhaps there's a good reason for it. --- Clearly, 4 drugs is not a sufficient sample to say anything definitive, especially when we don't have FDA's take on the sponsor responses. However, it is interesting that these 4 cases seem to reflect an overall pattern with BPCA and PREA - results are scattershot and anecdotal. 
We could all clearly benefit from a more systematic assessment of why some of these trials succeed and others don't, with a goal of someday soon abandoning one-size-fits-all regulation and focusing resources where they will do the most good. Full Article BPCA FDA FDAAA FDASIA IOM pediatric trials PREA transparency
d Questionable Enrollment Math(s) - the Authors Respond By www.placebocontrol.com Published On :: Fri, 20 Sep 2013 04:09:00 +0000 The authors of the study I blogged about on Monday were kind enough to post a lengthy comment, responding in part to some of the issues I raised. I thought their response was interesting, and so reprint it in its entirety below, interjecting my own reactions as well. There were a number of points you made in your blog and the title of questionable maths was what caught our eye and so we reply on facts and provide context. Firstly, this is a UK study where the vast majority of UK clinical trials take place in the NHS. It is about patient involvement in mental health studies - an area where recruitment is difficult because of stigma and discrimination. I agree, in hindsight, that I should have titled the piece “questionable maths” rather than my Americanized “questionable math”. Otherwise, I think this is fine, although I’m not sure that anything here differs from my post. 1. Tripling of studies - You dispute NIHR figures recorded on a national database and support your claim with a lone anecdote - hardly data that provides confidence. The reason we can improve recruitment is that NIHR has a Clinical Research Network which provides extra staff, within the NHS, to support high quality clinical studies and has improved recruitment success. To be clear, I did not “dispute” the figures so much as I expressed sincere doubt that those figures correspond with an actual increase in actual patients consenting to participate in actual UK studies. The anecdote explains why I am skeptical – it's a bit like I've been told there was a magnitude 8 earthquake in Chicago, but neither I nor any of my neighbors felt anything. There are many reasons why reported numbers can increase in the absence of an actual increase. 
It’s worth noting that my lack of confidence in the NIHR's claims appears to be shared by the 2 UK-based experts quoted by Applied Clinical Trials in the article I linked to. 2. Large database: We have the largest database of detailed study information and patient involvement data - I have trawled the world for a bigger one and NIMH say there certainly isn't one in the USA. This means few places where patient impact can actually be measured 3. Number of studies: The database has 374 studies which showed among other results that service user involvement increased over time probably following changes by funders e.g. NIHR requests information in the grant proposal on how service users have been and will be involved - one of the few national funders to take this issue seriously. As far as I can tell, neither of these points is in dispute. 4. Analysis of patient involvement involves the 124 studies that have completed. You cannot analyse recruitment success unless then. I agree you cannot analyze recruitment success in studies that have not yet completed. My objection is that in both the KCL press release and the NIHR-authored Guardian article, the only number mentioned is 374, and references to the recruitment success findings came immediately after references to that number. For example: Published in the British Journal of Psychiatry, the researchers analysed 374 studies registered with the Mental Health Research Network (MHRN). Studies which included collaboration with service users in designing or running the trial were 1.63 times more likely to recruit to target than studies which only consulted service users. Studies which involved more partnerships - a higher level of Patient and Public Involvement (PPI) - were 4.12 times more likely to recruit to target. The above quote clearly implies that the recruitment conclusions were based on an analysis of 374 studies – a sample 3 times larger than the sample actually used. I find this disheartening. 
The complexity measure was developed following a Delphi exercise with clinicians, clinical academics and study delivery staff to include variables likely to be barriers to recruitment. It predicts delivery difficulty (meeting recruitment & delivery staff time). But of course you know all that as it was in the paper. Yes, I did know this, and yes, I know it because it was in the paper. In fact, that’s all I know about this measure, which is what led me to characterize it as “arbitrary and undocumented”. To believe that all aspects of protocol complexity that might negatively affect enrollment have been adequately captured and weighted in a single 17-point scale requires a leap of faith that I am not, at the moment, able to make. The extraordinary claim that all complexity issues have been accounted for in this model requires extraordinary evidence, and “we conducted a Delphi exercise” does not suffice. 6. All studies funded by NIHR partners were included – we only excluded studies funded without peer review, not won competitively. For the involvement analysis we excluded industry studies because of not being able to contact end users and where inclusion compromised our analysis reliability due to small group sizes. It’s only that last bit I was concerned about. Specifically, the 11 studies that were excluded due to being in “clinical groups” that were too small, despite the fact that “clinical groups” appear to have been excluded as non-significant from the final model of recruitment success. (Also: am I being whooshed here? In a discussion of "questionable math" the authors' enumeration goes from 4 to 6. I’m going to take the miscounting here as a sly attempt to see if I’m paying attention...) I am sure you are aware of the high standing of the journal and its robust peer review. We understand that our results must withstand the scrutiny of other scientists but many of your comments were unwarranted. 
This is the first in the world to investigate patient involvement impact. No other databases apart from the one held by the NIHR Mental Health Research Network is available to test – we only wish they were. I hope we can agree that peer review – no matter how "high standing" the journal – is not a shield against concern and criticism. Despite the length of your response, I’m still at a loss as to which of my comments specifically were unwarranted. In fact, I feel that I noted very clearly that my concerns about the study’s limitations were minuscule compared to my concerns about the extremely inaccurate way that the study has been publicized by the authors, KCL, and the NIHR. Even if I conceded every possible criticism of the study itself, there remains the fact that in public statements, you:
- Misstated an odds ratio of 4 as “4 times more likely to”
- Overstated the recruitment success findings as being based on a sample 3 times larger than it actually was
- Re-interpreted, without reservation, a statistical association as a causal relationship
- Misstated the difference between the patient involvement categories as being a matter of merely “involving just one or two patients in the study team”
And you did these consistently and repeatedly – in Dr Wykes's blog post, in the KCL press release, and in the NIHR-written Guardian article. To use the analogy from my previous post: if a pharmaceutical company had committed these acts in public statements about a new drug, public criticism would have been loud and swift. Your comment on the media coverage of odds ratios is an issue that scientists need to overcome (there is even a section in Wikipedia). It's highly unfair to blame "media coverage" for the use of an odds ratio as if it were a relative risk ratio. In fact, the first instance of "4 times more likely" appears in Dr Wykes's own blog post. It's repeated in the KCL press release, so you yourselves appear to have been the source of the error. 
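Since the dispute here is over what an odds ratio of 4.12 actually means for readers, it is worth working the arithmetic once. This is my own illustration, not the authors' code: it uses their reported odds ratio and assumes a baseline recruitment success rate of 33%.

```python
# Translate an odds ratio into the implied risk ratio at a given baseline rate.
p0 = 0.33                  # assumed baseline probability of recruiting to target
odds0 = p0 / (1 - p0)      # baseline odds, about 0.49
odds1 = odds0 * 4.12       # apply the reported odds ratio
p1 = odds1 / (1 + odds1)   # implied probability, about 0.67
risk_ratio = p1 / p0       # about 2.0 -- not 4.12

print(round(p1, 2), round(risk_ratio, 2))  # 0.67 2.03
```

At this base rate, "4.12 times the odds" works out to roughly "twice as likely" – which is precisely why reporting an odds ratio as "4 times more likely" overstates the finding.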
You point out the base rate issue but of course in a logistic regression you also take into account all the other variables that may impinge on the outcome prior to assessing the effects of our key variable patient involvement - as we did – and showed that the odds ratio is 4.12 - So no dispute about that. We have followed up our analysis to produce a statement that the public will understand. Using the following equations:
Model-predicted recruitment, lowest level of involvement: exp(2.489 - .193*8.8 - 1.477)/(1 + exp(2.489 - .193*8.8 - 1.477)) = 0.33
Model-predicted recruitment, highest level of involvement: exp(2.489 - .193*8.8 - 1.477 + 1.415)/(1 + exp(2.489 - .193*8.8 - 1.477 + 1.415)) = 0.67
For a study of typical complexity without a follow up increasing involvement from the lowest to the highest levels increased recruitment from 33% to 66% i.e. a doubling. So then, you agree that your prior use of “4 times more likely” was not true? Would you be willing to concede that in more or less direct English? This is important and is the first time that impact has been shown for patient involvement on the study success. Luckily in the UK we have a network that now supports clinicians to be involved and a system for ensuring study feasibility. The addition of patient involvement is the additional bonus that allows recruitment to increase over time and so cutting down the time for treatments to get to patients. No, and no again. This study shows an association in a model. The gap between that and a causal relationship is far too vast to gloss over in this manner. In summary, I thank the authors for taking the time to respond, but I feel they've overreacted to my concerns about the study, and seriously underreacted to my more important concerns about their public overhyping of the study. I believe this study provides useful, though limited, data about the potential relationship between patient engagement and enrollment success. 
On the other hand, I believe the public positioning of the study by its authors and their institutions has been exaggerated and distorted in clearly unacceptable ways. I would ask the authors to seriously consider issuing public corrections on the 4 points listed above. Full Article NIHR patient recruitment trial delays UK trials
d Half of All Trials Unpublished* By www.placebocontrol.com Published On :: Wed, 04 Dec 2013 17:48:00 +0000 (*For certain possibly nonstandard uses of the word "unpublished") This is an odd little study. Instead of looking at registered trials and following them through to publication, this study starts with a random sample of phase 3 and 4 drug trials that already had results posted on ClinicalTrials.gov - so in one, very obvious sense, none of the trials in this study went unpublished. Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals, by Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, and Philippe Ravaud. But here the authors are concerned with publication in medical journals, and they were only able to locate journal articles covering about half (297/594) of trials with registered results. It's hard to know what to make of these results, exactly. Some of the "missing" trials may be published in the future (a possibility the authors acknowledge), some may have been rejected by one or more journals (FDAAA requires posting the results to ClinicalTrials.gov, but it certainly doesn't require journals to accept trial reports), and some may be pre-FDAAA trials that sponsors have retroactively added to ClinicalTrials.gov even though development on the drug has ceased. It would have been helpful had the authors reported journal publication rates stratified by the year the trials completed - this would have at least given us some hints regarding the above. More than anything I still find it absolutely bizarre that in a study this small, the entire dataset is not published for review. One potential concern is the search methodology used by the authors to match posted and published trials. 
If the easy routes (link to article already provided in ClinicalTrials.gov, or NCT number found in a PubMed search) failed, a manual search was performed: The articles identified through the search had to match the corresponding trial in terms of the information registered at ClinicalTrials.gov (i.e., same objective, same sample size, same primary outcome, same location, same responsible party, same trial phase, and same sponsor) and had to present results for the primary outcome. So it appears that a reviewer had to score the journal article as an exact match on 8 criteria in order for the trial to be considered the same. That could easily lead to exclusion of journal articles on the basis of very insubstantial differences. The authors provide no detail on this; and again, that would be easy to verify if the study dataset were published. The reason I harp on this, and worry about the matching methodology, is that two of the authors of this study were also involved in a methodologically opaque and flawed study of clinical trial results, published in the JCO. In that study, as well, the authors appeared to use an incorrect methodology to identify published clinical trials. When I pointed the issues out, the corresponding author merely reiterated what was already (insufficiently) in the paper's Methodology section. I find it strange beyond belief, and more than a little hypocritical, that researchers would use a public, taxpayer-funded database as the basis of their studies, and yet refuse to provide their data for public review. There are no technological or logistical issues preventing this kind of sharing, and there is an obvious ethical point in favor of transparency. But if the authors are reasonably close to correct in their results, I'm not sure what to make of this study. 
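To see how brittle exact matching on this many fields can be, consider a toy sketch. The records and sponsor name below are entirely hypothetical; the field list mirrors the criteria quoted above, and the all-fields-must-agree rule is my reading of the paper's methods, not their actual code.

```python
# The "same X" fields the reviewers were required to match (per the quoted methods).
FIELDS = ["objective", "sample_size", "primary_outcome", "location",
          "responsible_party", "phase", "sponsor"]

def exact_match(registry_entry, journal_article):
    # Strict rule: every field must agree exactly for the pair to count as a match.
    return all(registry_entry[f] == journal_article[f] for f in FIELDS)

# Invented example: the same trial, but the article reports the analyzed
# population rather than the enrolled one.
registry_entry = {"objective": "overall survival", "sample_size": 412,
                  "primary_outcome": "OS at 24 months", "location": "United States",
                  "responsible_party": "sponsor", "phase": "3",
                  "sponsor": "Acme Pharma"}  # hypothetical sponsor name
journal_article = dict(registry_entry, sample_size=398)

print(exact_match(registry_entry, journal_article))  # False
```

One insubstantial discrepancy excludes the pair, and the trial would be counted as "unpublished" – which is exactly the exclusion risk described above.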
The Nature article covering this study contends that [T]he [ClinicalTrials.gov] database was never meant to replace journal publications, which often contain longer descriptions of methods and results and are the basis for big reviews of research on a given drug. I suppose that some journal articles have better methodology sections, although this is far from universally true (and, like this study here, these methods are often quite opaquely described and don't support replication). As for results, I don't believe that's the case. In this study, the opposite was true: ClinicalTrials.gov results were generally more complete than journal results. And I have no idea why the registry wouldn't surpass journals as a more reliable and complete source of information for "big reviews". Perhaps it is a function of my love of getting my hands dirty digging into the data, but if we are witnessing a turning point where journal articles take a distant back seat to the ClinicalTrials.gov registry, I'm enthused. ClinicalTrials.gov is public, free, and contains structured data; journal articles are expensive, unparsable, and generally written in painfully unclear language. To me, there's really no contest. Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, & Philippe Ravaud (2013). Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals. PLoS Medicine. DOI: 10.1371/journal.pmed.1001566 Full Article clinicaltrials.gov FDAAA PLoS transparency
d Patient Recruitment: Taking the Low Road By www.placebocontrol.com Published On :: Thu, 19 Dec 2013 22:36:00 +0000 The Wall Street Journal has an interesting article on the use of “Big Data” to identify and solicit potential clinical trial participants. The premise is that large consumer data aggregators like Experian can target patients with certain diseases through correlations with non-health behavior. Examples given include “a preference for jazz” being associated with arthritis and “shopping online for clothes” being an indicator of obesity. We've seen this story before. In this way, allegedly, clinical trial patient recruitment companies can more narrowly target their solicitations* for patients to enroll in clinical trials. In the spirit of full disclosure, I should mention that I was interviewed by the reporter of this article, although I am not quoted. My comments generally ran along three lines, none of which really fit in with the main storyline of the article: I am highly skeptical that these analyses are actually effective at locating patients These methods aren't really new – they’re the same tactics that direct marketers have been using for years Most importantly, the clinical trials community can – and should – be moving towards open and collaborative patient engagement. Relying on tactics like consumer data snooping and telemarketing is an enormous step backwards. The first point is this: certainly some diseases have correlates in the real world, but these correlates tend to be pretty weak, and are therefore unreliable predictors of disease. Maybe it’s true that those struggling with obesity tend to buy more clothes online (I don’t know if it’s true or not – honestly it sounds a bit more like an association built on easy stereotypes than on hard data). But many obese people will not shop online (they will want to be sure the clothes actually fit), and vast numbers of people with low or average BMIs will shop for clothes online. 
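The first point is really a base-rate problem, and a quick Bayes' rule sketch shows why. Every number below is invented for illustration; the point is how little a weak behavioral correlate adds over simply knowing the population prevalence.

```python
# Positive predictive value of a weak behavioral "signal" for a condition.
# All numbers are hypothetical.
prevalence = 0.30          # P(condition) in the targeted population
p_signal_given_pos = 0.60  # P(shops online for clothes | condition)
p_signal_given_neg = 0.50  # P(shops online for clothes | no condition)

# Bayes' rule: P(condition | signal)
ppv = (prevalence * p_signal_given_pos) / (
    prevalence * p_signal_given_pos + (1 - prevalence) * p_signal_given_neg
)

print(round(ppv, 2))               # 0.34: barely above the 0.30 base rate
print(round(ppv / prevalence, 2))  # lift of about 1.13 over random targeting
```

Even granting a generous 10-point behavioral gap between the two groups, targeting on the signal barely beats mailing people at random.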
So the consumer data will tend to have very low predictive value. The claims that liking jazz and owning cats are predictive of having arthritis are even more tenuous. These correlates are going to be several times weaker than basic demographic information like age and gender. And for more complex conditions, these associations fall apart. Marketers claim to solve this by factoring a complex web of associations through a magical black box – the WSJ article mentions that they “applied a computed algorithm” to flag patients. Having seen behind the curtain on a few of these magic algorithms, I can confidently say that they are underwhelming in their sophistication. Hand-wavy references to Big Data and Algorithms are just the tools used to impress pharma clients. (The downside to that, of course, is that you can’t help but come across as big-brotherish – see this coverage from Forbes for a taste of what happens when people accept these claims uncritically.) But the effectiveness of these data slice-n-dicing activities is perhaps beside the point. They are really just a thin cover for old-fashioned boiler room tactics: direct mail and telemarketing. When I got my first introduction to direct marketing in the 90’s, it was the exact same program – get lead lists from big companies like Experian, then aggressively mail and call until you get a response. The limited effectiveness and old-school aggressiveness of these programs is nicely illustrated in the article by one person’s experience: Larna Godsey, of Wichita, Kan., says she received a dozen phone calls about a diabetes drug study over the past year from a company that didn't identify itself. Ms. Godsey, 63, doesn't suffer from the disease, but she has researched it on the Internet and donated to diabetes-related causes. "I don't know if it's just a coincidence or if they're somehow getting my information," says Ms. Godsey, who filed a complaint with the FTC this year. 
The article notes that one recruitment company, Acurian, has been the subject of over 500 FTC complaints regarding its tactics. It’s clear that Big Data is just the latest buzzword lipstick on the telemarketing pig. And that’s the real shame of it. We have arrived at an unprecedented opportunity for patients, researchers, and private industry to come together and discuss, as equals, research priorities and goals. Online patient communities like Inspire and PatientsLikeMe have created new mechanisms to share clinical trial opportunities and even create new studies. Dedicated disease advocates have jumped right into the world of clinical research, with groups like the Cystic Fibrosis Foundation and Michael J. Fox Foundation no longer content with raising research funds, but actively leading the design and operations of new studies. Some – not yet enough – pharmaceutical companies have embraced the opportunity to work more openly and honestly with patient groups. The scandal of stories like this is not the Wizard of Oz histrionics of secret computer algorithms, but that we as an industry continue to take the low road and resort to questionable boiler room tactics. It’s past time for the entire patient recruitment industry to drop the sleaze and move into the 21st century. I would hope that patient groups and researchers will come together as well to vigorously oppose these kinds of tactics when they encounter them. (*According to the article, Acurian "has said that calls related to medical studies aren't advertisements as defined by law," so we can agree to call them "solicitations".) Full Article Big Data CFF direct mail direct to patient ethics Inspire MJFF patient engagement patient recruitment PatientsLikeMe telemarketing
d Can a Form Letter from FDA "Blow Your Mind"? By www.placebocontrol.com Published On :: Mon, 06 Jan 2014 22:14:00 +0000 Adam Feuerstein appears to be a generally astute observer of the biotech scene. As a finance writer, he's accosted daily with egregiously hyped claims from small drug companies and their investors, and I think he tends to do an excellent job of spotting cases where breathless excitement is unaccompanied by substantive information. However, Feuerstein's healthy skepticism seems to have abandoned him last year in the case of a biotech called Sarepta Therapeutics, who released some highly promising - but also incredibly limited - data on their treatment for Duchenne muscular dystrophy. After a disappointing interaction with the FDA, Sarepta's stock dropped, and Feuerstein appeared to realize that he'd lost some objectivity on the topic. However, with the new year comes new optimism, and Feuerstein seems to be back to squinting hard at tea leaves - this time in the case of a form letter from the FDA. He claims that the contents of the letter will "blow your mind". To him, the key passage is: We understand that you feel that eteplirsen is highly effective, and may be confused by what you have read or heard about FDA's actions on eteplirsen. Unfortunately, the information reported in the press or discussed in blogs does not necessarily reflect FDA's position. FDA has reached no conclusions about the possibility of using accelerated approval for any new drug for the treatment of Duchenne muscular dystrophy, and for eteplirsen in particular. Feuerstein appears to think that the fact that FDA "has reached no conclusions" may mean that it may be "changing its mind". To which he adds: "Wow!" [Image: Adam Feuerstein - this time, too much froth, not enough coffee?] I'm not sure why he thinks that. As far as I can tell, the FDA will never reach a conclusion like this before it's gone through the actual review process. 
After all, if FDA already knows the answer before the full review, what would the point of the review even be? It would seem a tremendous waste of agency resources. Not to mention how non-level the playing field would be if some companies were given early yes/no decisions while others had to go through a full review. It seems fair to ask: is this a substantive change by FDA review teams, or would it be their standard response to any speculation about whether and how they would approve or reject a new drug submission? Can Feuerstein point to other cases where FDA has given a definitive yes or no on an application before the application was ever filed? I suspect not, but am open to seeing examples. A more plausible theory for this letter is that the FDA is attempting a bit of damage control. It is not permitted to share anything specific it said or wrote to Sarepta about the drug, and has come under some serious criticism for “rejecting” Sarepta’s Accelerated Approval submission. The agency has been sensitive to the DMD community, even going so far as to have Janet Woodcock and Bob Temple meet with DMD parents and advocates last February. Sarepta has effectively positioned FDA as the reason for its delay in approval, but no letters have actually been published, so the conversation has been a bit one-sided. This letter appears to be an attempt at balancing perspectives a bit, although the FDA is still hamstrung by its restriction on relating any specific communications. Ultimately, this is a form letter that contains no new information: FDA has reached no conclusions because FDA is not permitted to reach conclusions until it has completed a fair and thorough review, which won't happen until the drug is actually submitted for approval. We talk about "transparency" in terms of releasing clinical trials data, but to me there is a great case to be made for increased regulatory transparency. 
Routine publication of most FDA correspondence and meeting results (including such things as Complete Response letters, which explain FDA's thinking when it rejects new applications) would go a long way towards improving public understanding of the drug review and approval process. Full Article accelerated approval Adam Feuerstein DMD FDA Sarepta Therapeutics transparency
d Megafund versus Megalosaurus: Funding Drug Development By www.placebocontrol.com Published On :: Mon, 13 Jan 2014 04:36:00 +0000 This new 10-minute TEDMED talk is getting quite a bit of attention: (if embedded video does not work, try the TED site itself.) In it, Roger Stein claims to have created an approach to advancing drugs through clinical trials that will "fundamentally change the way research for cancer and lots of other things gets done". Because the costs of bringing a drug to market are so high, time from discovery to marketing is so long, and the chances of success of any individual drug are so grim, betting on any individual drug is foolish, according to Stein. Instead, risks for a large number of potential assets should be pooled, with the eventual winners paying for the losers. To do this, Stein proposes what he calls a "megafund" - a large collection of assets (candidate therapies). Through some modeling and simulations, Stein suggests some of the qualities of an ideal megafund: it would need in the neighborhood of $3-15 billion to acquire and manage 80-150 drugs. A fund of this size and with these assets would be able to provide an equity yield of about 12%, which would be "right in the investment sweet spot of pension funds and 401(k) plans". Here's what I find striking about those numbers: let's compare Stein's Megafund to everyone's favorite Megalosaurus, the old-fashioned Big Pharma dinosaur sometimes known as Pfizer:

- Funding: Megafund, $3-15 billion; Megalosaurus, $9 billion estimated 2013 R&D spend
- Assets: Megafund, 80-150; Megalosaurus, 81 (in pipeline, plus many more in preclinical)
- Return on Equity: Megafund, 12% (estimated); Megalosaurus, 9.2% (last 10 years) to 13.2% (last 5)

Since Pfizer's a dinosaur, it can't possibly compete with the sleek, modern Megafund, right? Right? These numbers look remarkably similar. Pfizer - and a number of its peers - are spending a Megafund-sized budget each year to shepherd through a Megafund-sized number of compounds. 
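Stein's pooling argument, to be fair, is mathematically sound as far as it goes: many independent long-shot bets give a far more predictable return than one. A toy Monte Carlo makes the point (the 5% success probability and 30x payoff are invented round numbers for illustration, not Stein's figures):

```python
import random

# Toy Monte Carlo of the "megafund" pooling argument: one drug is a long-shot
# bet, but a portfolio of many independent long shots has a much steadier
# per-dollar return. Success probability and payoff are hypothetical.
P_SUCCESS = 0.05   # chance any one candidate reaches market
PAYOFF = 30.0      # return multiple on cost for a winner
random.seed(42)

def fund_return(n_drugs, trials=10_000):
    """Mean and standard deviation of per-dollar return for an n-drug fund."""
    outcomes = []
    for _ in range(trials):
        wins = sum(random.random() < P_SUCCESS for _ in range(n_drugs))
        outcomes.append(wins * PAYOFF / n_drugs)  # each drug costs 1 unit
    mean = sum(outcomes) / trials
    var = sum((x - mean) ** 2 for x in outcomes) / trials
    return mean, var ** 0.5

for n in (1, 10, 100):
    mean, sd = fund_return(n)
    print(f"{n:3d} drugs: mean return {mean:.2f}x, std dev {sd:.2f}")
```

The expected return is identical at every fund size; only the volatility shrinks as drugs are added. Which is exactly why the comparison above matters: a big pharma pipeline of 80+ compounds is already capturing that diversification benefit.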
(Note that many of Pfizer's peers have substantially fewer drugs in their published pipelines, but they own many times more compounds - the pipeline is just the drugs they've elected to file an IND on.) What am I missing here? I understand that a fund is not a company, and there may be some benefits to decoupling asset management decisions from actual operations, but this won't be a tremendous gain, and would presumably be at least partially offset by increased transaction costs (Megafund has to source, contract, manage, and audit vendors to design and run all its trials, after all, and I don't know why I'd think it could do that any more cheaply than Big Pharma can). And having a giant drug pipeline's go/no go decisions made by "financial engineers" rather than pharma industry folks would seem like a scenario that's only really seen as an upgrade by the financial engineers themselves. A tweet from V.S. Schulz pointed me to a post on Derek Lowe's In the Pipeline blog, which led to a link to this paper by Stein and 2 others in Nature Biotechnology from a year and a half ago. The authors spend most of their time differentiating themselves from other structures in the technical, financial details rather than explaining why a megafund would work better at finding new drugs. However, they definitely think this is qualitatively different from existing pharma companies, and offer a couple reasons. First, [D]ebt financing can be structured to be more “patient” than private or public equity by specifying longer maturities; 10- to 20-year maturities are not atypical for corporate bonds. ... Such long horizons contrast sharply with the considerably shorter horizons of venture capitalists, and the even shorter quarterly earnings cycle and intra-daily price fluctuations faced by public companies. I'm not sure where this line of thought is coming from. 
Certainly all big pharma companies' plans extend decades into the future - there may be quarterly earnings reports to file, but that's a force exerted far more on sales and marketing teams than on drug development. The financing of pharmaceutical development is already extremely long term. Even in the venture-backed world, Stein and team are wrong if they believe there is pervasive pressure to magically deliver drugs in record time. Investors and biotech management are both keenly aware of the tradeoffs between speed and regulatory success. Even this week's came-from-nowhere Cinderella story, Intercept Pharmaceuticals, was founded with venture money over a decade ago - these "longer maturities" are standard issue in biotech. We aren't making iPhone apps here, guys. Second, Although big pharma companies are central to the later stages of drug development and the marketing and distributing of approved drugs, they do not currently play as active a role at the riskier preclinical and early stages of development Again, I'm unsure why this is supposed to be so. Of Pfizer's 81 pipeline compounds, 55 are in Phase 1 or 2 - a ratio that's pretty heavy on early, risky projects, and that's not too different from industry as a whole. Pfizer does not publish data on the number of compounds it currently has undergoing preclinical testing, but there's no clear reason I can think of to assume it's a small number. So, is Megafund truly a revolutionary idea, or is it basically a mathematical deck-chair-rearrangement for the "efficiencies of scale" behemoths we've already got? [Image: the world's first known dino, Megalosaurus, via Wikipedia.] Full Article big pharma drug development Pfizer
d These Words Have (Temporarily) Relocated By www.placebocontrol.com Published On :: Tue, 18 Mar 2014 14:17:00 +0000 Near the end of last year, I had the bright idea of starting a second blog, Placebo Lead-In, to capture a lot of smaller items that I found interesting but wasn't going to work up into a full-blown, 1000 word post. According to Murphy’s Law, or the Law of Unintended Consequences, or the Law of Biting Off More Than You Can Chew, or some such similar iron rule of the universe, what happened next should have been predictable. First, my team at CAHG Trials launched a new blog, First Patient In. FPI is dedicated to an open discussion of patient recruitment ideas, and I’m extremely proud of what we've published so far. Next, I was invited to be a guest blogger for the upcoming Partnerships in Clinical Trials Conference. Suddenly, I've gone from 1 blog to 4. And while my writing output appears to have increased, it definitely hasn't quadrupled. So this blog has been quiet for a bit too long as a result. The good news is that the situation is temporary - Partnerships will actually happen at the end of this month. (If you’re going: drop me a line and let’s meet. If you’re not: you really should come and join us!) My contributions to FPI will settle into a monthly post, as I have a fascinating and clever team to handle most of the content. In case you've missed it, then, here is a brief summary of my posts elsewhere over the past 2 months. First Patient In How to Catalyze a Clinical Trial - My inaugural post introducing the blog and its purpose Video: Predicting Referral Conversion in Clinical Trial Advertising - A somewhat technical but very important topic, how to visualize and model the “real time” results of recruitment advertising at the sites. 
The Crystal Ball is on the Fritz - What to do with a broken enrollment feasibility process, and how asking will never be as good as measuring Partnerships in Clinical Trials The New Breed of Clinical Trial Matchmakers - A (hopefully pretty complete, thanks to knowledgeable commenters) listing of services looking to match interested patients to clinical trials Rethinking Patient Enrollment, in One Graphic - The perils of predictability in site-based enrollment Seize the Data! Will Big Data Save Us from Ourselves? - My take on what I consider to be the large and serious obstacles in the way of “Big Data” solutions for patient recruitment Please take a look, and I will see you back here soon. [Photo credit: detour sign via Flickr user crossley] Full Article Big Data metrics patient engagement patient recruitment site relationship management
d Patient Centered Trials - Your Thoughts Needed By www.placebocontrol.com Published On :: Thu, 31 Jul 2014 17:43:00 +0000 The good folks down at eyeforpharma have asked me to write a few blog posts in the run-up to their Patient Centered Clinical Trials conference in Boston this September. In my second article - Buzzword Innovation: The Patient Centricity “Fad” and the Token Patient - I went over some concerns I have regarding the sudden burst of enthusiasm for patient centricity in the clinical trial world. Apparently, that hit a nerve – in an email, Ulrich Neumann tells me that “your last post elicited quite a few responses in my inbox (varied, some denouncing it as a fad, others strongly protesting the notion, hailing it as the future).” In preparing my follow-up post, I’ve spoken to a couple people on the leading edge of patient engagement: Abbe Steel, CEO of HealthiVibe, which is focused on bringing greater patient input into the earliest stages of trial design through focus groups and patient surveys; and Casey Quinlan, co-founder of Patients for Clinical Research, which aims to be a force in patient education and engagement for clinical trials. In addition to their thoughts, eyeforpharma is keenly interested in hearing from more people. They've even posted a survey – from Ulrich: To get a better idea of what other folks think of the idea, I am sending out a little ad hoc survey. Only 4 questions (so people hopefully do it). Added benefit: there is a massive 50% one-time discount for completed surveys until Friday connected to it as an incentive. So, here are two things for you to do: complete the survey and share your thoughts, then come to the conference and tell us all exactly what you think. Look forward to seeing you there. [Conflict of Interest Disclosure: I am attending the Patient Centered Clinical Trials conference. Having everyone saying the same thing at such conferences conflicts with my ability to find them interesting.] 
Full Article eyeforpharma HealthiVibe patient engagement PFCR
d Waiver of Informed Consent - proposed changes in the 21st Century Cures Act By www.placebocontrol.com Published On :: Tue, 14 Jul 2015 20:21:00 +0000 Adam Feuerstein points out - and expresses considerable alarm over - an overlooked clause in the 21st Century Cures Act: Waive informed consent requirement for clinical trials?!?! Unbelievable but true if #Path2Cures becomes law. pic.twitter.com/dqmWPpxPdE — Adam Feuerstein (@adamfeuerstein) July 14, 2015 In another tweet, he suggests that the act will "decimate" informed consent in drug trials. Subsequent responses and retweets did nothing to clarify the situation, and if anything tended to spread, rather than address, Feuerstein's confusion. Below is a quick recap of the current regulatory context and a real-life example of where the new wording may be helpful. In short, though, I think it's safe to say:

- Waiving informed consent is not new; it's already permitted under current regs
- The standards for obtaining a waiver of consent are stringent
- They may, in fact, be too stringent in a small number of situations
- The act may, in fact, be helpful in those situations
- Feuerstein may, in fact, need to chill out a little bit

(For the purposes of this discussion, I’m talking about drug trials, but I believe the device trial situation is parallel.) Section 505(i) - the section this act proposes to amend - instructs the Secretary of Health and Human Services to promulgate rules regarding clinical research. 
Subsection 4 addresses informed consent: …the manufacturer, or the sponsor of the investigation, requir[e] that experts using such drugs for investigational purposes certify to such manufacturer or sponsor that they will inform any human beings to whom such drugs, or any controls used in connection therewith, are being administered, or their representatives, that such drugs are being used for investigational purposes and will obtain the consent of such human beings or their representatives, except where it is not feasible or it is contrary to the best interests of such human beings. [emphasis mine] Note that this section already recognizes situations where informed consent may be waived for practical or ethical reasons. These rules were in fact promulgated under 45 CFR part 46, section 116. The relevant bit – as far as this conversation goes – regards circumstances under which informed consent might be fully or partially waived. Specifically, there are 4 criteria, all of which need to be met:

(1) The research involves no more than minimal risk to the subjects;
(2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
(3) The research could not practicably be carried out without the waiver or alteration; and
(4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

In practice, this is an especially difficult set of criteria to meet for most studies. Criterion (1) rules out most “conventional” clinical trials, because the hallmarks of those trials (use of an investigational medicine, randomization of treatment, blinding of treatment allocation) are all deemed to be more than “minimal risk”. That leaves observational studies – but even many of these cannot clear the bar of criterion (3). That word “practicably” is a doozy. Here’s an all-too-real example from recent personal experience. 
A drug manufacturer wants to understand physicians’ rationales for performing a certain procedure. It seems – but there is little hard data – that a lot of physicians do not strictly follow guidelines on when to perform the procedure. So we devise a study: whenever the procedure is performed, we ask the physician to complete a quick form categorizing why they made their decision. We also ask him or her to transcribe a few pieces of data from the patient chart. Even though the patients aren’t personally identifiable, the collection of medical data qualifies this as a clinical trial. It’s a minimal risk trial, definitely: the trial doesn’t dictate at all what the doctor should do, it just asks him or her to record what they did and why, and supply a bit of medical context for the decision. All told, we estimated 15 minutes of physician time to complete the form. The IRB monitoring the trial, however, denied our request for a waiver of informed consent, since it was “practicable” (not easy, but possible) to obtain informed consent from the patient. Informed consent – even with a slimmed-down form – was going to take a minimum of 30 minutes, so the length of the physician’s involvement tripled. In addition, many physicians opted out of the trial because they felt that the informed consent process added unnecessary anxiety and alarm for their patients, and provided no corresponding benefit. The end result was not surprising: the budget for the trial more than doubled, and enrollment was far below expectations. Which leads to two questions: 1. Did the informed consent appreciably help a single patient in the trial? Very arguably, no. Consenting to being “in” the trial made zero difference in the patients’ care, added time to their stay in the clinic, and possibly added to their anxiety. 2. Was less knowledge collected as a result? Absolutely, yes. The sponsor could have run two studies for the same cost. 
Instead, they ultimately reduced the power of the trial in order to cut losses. Bottom line, it appears that the modifications proposed in the 21st Century Cures Act really only target trials like the one in the example. The language clearly retains criteria 1 and 2 of the current HHS regs, which are the most important from a patient safety perspective, but cuts down the “practicability” requirement, potentially permitting high quality studies to be run with less time and cost. Ultimately, it looks like a very small, but positive, change to the current rules. The rest of the act appears to be a mash-up of some very good and some very bad (or at least not fully thought out) ideas. However, this clause should not be cause for alarm. Full Article 21st Century Cures Act Adam Feuerstein informed consent
d The first paid research subject in written history? By www.placebocontrol.com Published On :: Mon, 21 Nov 2016 20:31:00 +0000 On this date 349 years ago, Samuel Pepys relates in his famous diary a remarkable story about an upcoming medical experiment. As far as I can tell, this is the first written description of a paid research subject. According to his account, the man (who he describes as “a little frantic”) was to be paid to undergo a blood transfusion from a sheep. It was hypothesized that the blood of this calm and docile animal would help to calm the man. Some interesting things to note about this experiment: Equipoise. There is explicit disagreement about what effect the experimental treatment will have: according to Pepys, "some think it may have a good effect upon him as a frantic man by cooling his blood, others that it will not have any effect at all". Results published. An account of the experiment was published just two weeks later in the journal Philosophical Transactions. Medical Privacy. In this subsequent write-up, the research subject is identified as Arthur Coga, a former Cambridge divinity student. According to at least one account, being publicly identified had a bad effect on Coga, as people who had heard of him allegedly succeeded in getting him to spend his stipend on drink (though no sources are provided to confirm this story). Patient Reported Outcome. Coga was apparently chosen because, although mentally ill, he was still considered educated enough to give an accurate description of the treatment effect. Depending on your perspective, this may also be a very early account of the placebo effect, or a classic case of ignoring the patient’s experience. Because even though his report was positive, the clinicians remained skeptical. 
From the journal article: The Man after this operation, as well as in it, found himself very well, and hath given in his own Narrative under his own hand, enlarging more upon the benefit, he thinks, he hath received by it, than we think fit to own as yet. …and in fact, a subsequent diary entry from Pepys mentions meeting Coga, with similarly mixed impressions: “he finds himself much better since, and as a new man, but he is cracked a little in his head”. The amount Coga was paid for his participation? Twenty shillings – at the time, that was exactly one Guinea. [Image credit: Wellcome Images] Full Article benefits of clinical trials ethics Guinea Pigs PRO sample size
d The Streetlight Effect and 505(b)(2) approvals By www.placebocontrol.com Published On :: Sat, 18 Mar 2017 20:35:00 +0000 It is a surprisingly common peril among analysts: we don’t have the data to answer the question we’re interested in, so we answer a related question where we do have data. Unfortunately, the new answer turns out to shed no light on the original interesting question. This is sometimes referred to as the Streetlight Effect – a phenomenon aptly illustrated by Mutt and Jeff over half a century ago: This is the situation that the Tufts Center for the Study of Drug Development seems to have gotten itself into in its latest "Impact Report". It’s worth walking through the process of how an interesting question ends up in an uninteresting answer. So, here’s an interesting question: My company owns a drug that may be approvable through FDA’s 505(b)(2) pathway. What is the estimated time and cost difference between pursuing 505(b)(2) approval and conventional approval? That’s "interesting", I suppose I should add, for a certain subset of folks working in drug development and commercialization. It’s only interesting to that peculiar niche, but for those people I suspect it’s extremely interesting - because it is a real situation that a drug company may find itself in, and there are concrete consequences to the decision. Unfortunately, this is also a really difficult question to answer. As phrased, you'd almost need a randomized trial to answer it. Let’s create a version which is less interesting but easier to answer: What are the overall development time and cost differences between drugs seeking approval via 505(b)(2) and conventional pathways? This is much easier to answer, as pharmaceutical companies could look back on development times and costs of all their compounds, and directly compare the different types. It is, however, a much less useful question. Many new drugs are simply not eligible for 505(b)(2) approval. 
If those drugs are substantially different in any way (riskier, more novel, etc.), then they will change the comparison in highly non-useful ways. [Image: Extreme qualitative differences of 505(b)(2) drugs. Source: Thomson Reuters analysis via RAPS] In fact, in 2014, only 1 drug classified as a New Molecular Entity (NME) went through 505(b)(2) approval, versus 32 that went through conventional approval. And in fact, there are many qualities that set 505(b)(2) drugs apart. So we’re likely to get a lot of confounding factors in our comparison, and it’s unclear how the answer would (or should) guide us if we were truly trying to decide which route to take for a particular new drug. It might help us if we were trying to evaluate a large-scale shift to prioritizing 505(b)(2)-eligible drugs, however. Unfortunately, even this question is apparently too difficult to answer. Instead, the Tufts CSDD chose to ask and answer yet another variant: What is the difference in time that it takes the FDA for its internal review process between 505(b)(2) and conventionally-approved drugs? This question has the supreme virtue of being answerable. In fact, I believe that all of the data you’d need is contained within the approval letter that FDA publishes for each new approved drug. But at the same time, it isn’t a particularly interesting question anymore. The promise of the 505(b)(2) pathway is that it should reduce total development time and cost, but on both those dimensions, the report appears to fall flat. Cost: This analysis says nothing about reduced costs – those savings would mostly come in the form of fewer clinical trials, and this focuses entirely on the FDA review process. Time: FDA review and approval is only a fraction of a drug’s journey from patent to market. In fact, it often takes up less than 10% of the time from initial IND to approval. So any differences in approval times will likely be overshadowed by differences in time spent in development. 
But even more fundamentally, the problem here is that this study gives the appearance of providing an answer to our original question, but in fact is entirely uninformative in this regard. The accompanying press release states: The 505(b)(2) approval pathway for new drug applications in the United States, aimed at avoiding unnecessary duplication of studies performed on a previously approved drug, has not led to shorter approval times. This is more than a bit misleading. The 505(b)(2) statute does not in any way address approval timelines – that’s not its intent. So showing that it hasn’t led to shorter approval times is less of an insight than it is a natural consequence of the law as written. Most importantly, showing that 505(b)(2) drugs had a longer average approval time than conventionally-approved drugs in no way should be interpreted as adding any evidence to the idea that those drugs were slowed down by the 505(b)(2) process itself. Because 505(b)(2) drugs are qualitatively different from other new molecules, this study can’t claim that they would have been developed faster had their owners initially chosen to go the route of conventional approval. In fact, such a decision might have resulted in both increased time in trials and increased approval time. This study simply is not designed to provide an answer to the truly interesting underlying question. [Disclosure: the above review is based entirely on a CSDD press release and summary page. The actual report costs $125, which is well in excess of this blog’s expense limit. It is entirely possible that the report itself contains more-informative insights, and I’ll happily update this post if that should come to my attention.] Full Article 505(b)(2) drug development FDA metrics trial costs Tufts CSDD
d Retention metrics, simplified By www.placebocontrol.com Published On :: Thu, 30 Mar 2017 19:53:00 +0000 [Originally posted on First Patient In] In my experience, most clinical trials do not suffer from significant retention issues. This is a testament to the collaborative good will of most patients who consent to participate, and to the patient-first attitude of most research coordinators. However, in many trials – especially those that last more than a year – the question of whether there is a retention issue will come up at some point while the trial’s still going. This is often associated with a jump in early terminations, which can occur as the first cohort of enrollees has been in the trial for a while. It’s a good question to ask midstream: are we on course to have as many patients fully complete the trial as we’d originally anticipated? However, the way we go about answering the question is often flawed and confusing. Here’s an example: a sponsor came to us with what they thought was a higher rate of early terminations than expected. The main problem? They weren't actually sure. Here’s their data. Can you tell? [Image: the sponsor's original retention graph] If you can, please let me know how! While this chart is remarkably ... full of numbers, it provides no actual insight into when patients are dropping out, and no way that I can tell to project eventual total retention. In addition, measuring the “retention rate” as a simple ratio of active to terminated patients will not provide an accurate benchmark until the trial is almost over. Here's why: patients tend to drop out later in a trial, so as long as you’re enrolling new patients, your retention rate will be artificially high. When enrollment ends, your retention rate will appear to drop rapidly – but this is only because of the artificial lift you had earlier. In fact, that was exactly the problem the sponsor had: when enrollment ended, the retention rate started dropping. 
It’s good to be concerned, but it’s also important to know how to answer the question. Fortunately, in most cases there is a very simple way to get a clear answer – one that’s probably already in use by your biostats team around the corner: the Kaplan-Meier “survival” curve. Here is the same study data, with patient retention simply depicted as a K-M graph. The key difference is that instead of calendar dates, we used the relative measure of each patient’s time in the trial. That way we can easily spot the trends. In this case, we were able to establish quickly that patient drop-outs were accumulating at a relatively small, constant rate, with a higher percentage of drops coinciding with the one-year study visit. Most importantly, we were able to predict quite accurately the eventual number of patients who would complete the trial. And it only took one graph! Full Article metrics patient retention
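For readers who want to see the mechanics, here is a minimal sketch of the Kaplan-Meier retention estimate in plain Python. The data are invented for illustration, and a biostats team would normally use a validated statistics package rather than hand-rolled code; the point is only to show how censoring lets active and completed patients share one curve.

```python
def kaplan_meier(durations, dropped):
    """Kaplan-Meier retention curve.

    durations: time in trial (e.g. days) for each patient
    dropped:   1 if the patient terminated early at that time,
               0 if they completed or are still active (censored)
    Returns a list of (time, retention) steps.
    """
    event_times = sorted({t for t, d in zip(durations, dropped) if d == 1})
    curve, s = [], 1.0
    for t in event_times:
        at_risk = sum(1 for u in durations if u >= t)               # still in trial at t
        d_t = sum(1 for u, d in zip(durations, dropped) if u == t and d == 1)
        s *= 1 - d_t / at_risk                                      # survive this event time
        curve.append((t, s))
    return curve

# Five illustrative patients: two early terminations, three censored.
print(kaplan_meier([5, 10, 10, 20, 30], [1, 1, 0, 0, 0]))
```

Because the x-axis is each patient's own time in the trial rather than a calendar date, newly enrolled and long-finished patients contribute to the same curve, which is what makes midstream projection of the final completion count possible.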
d REMOTE Redux: DTP trials are still hard By www.placebocontrol.com Published On :: Tue, 23 May 2017 17:44:00 +0000 Maybe those pesky sites are good for something after all. It's been six years since Pfizer boldly announced the launch of its "clinical trial in a box". The REMOTE trial was designed to be entirely online, and involved no research sites: study information and consent were delivered via the web, and medications and diaries were shipped directly to patients' homes. Despite the initial fanfare, within a month REMOTE's planned enrollment on ClinicalTrials.gov was quietly reduced from 600 to 283. The smaller trial ended not with a bang but a whimper, having randomized only 18 patients in over a year of recruiting. Still, the allure of direct-to-patient clinical trials remains strong, due to a confluence of two factors. The first is a frenzy of interest in running "patient centric clinical trials": sponsors are scrambling to do something – anything – to show they have shifted to a patient-centered mindset. We cannot seem to agree on what this means (as a great illustration, a recent article in Forbes on "How Patients Are Changing Clinical Trials" contained no specific examples of actual trials that had been changed by patients), but running a trial that directly engages patients wherever they are seems like it could work. The second, less openly discussed factor is sponsors' continuing willingness to heap almost all of the blame for slow-moving studies onto their research sites. If it’s all the sites’ fault – the reasoning goes – then cutting them out of the process should result in trials that are both faster and cheaper. (There are reasons to be skeptical of this, as I have discussed in the past, but the desire to drop all those pesky sites is palpable.) 
However, while a few proof-of-concept studies have been done, no other trial seems to have attempted a full-blown direct-to-patient design. Other pilots have been more successful, but with fairly lightweight protocols. For all its problems, REMOTE was a seriously ambitious project that attempted to package a full interventional clinical trial, not an observational study. In this context, it's great to see published results of the TAPIR Trial in vasculitis, which as far as I can tell is the first real attempt to run a DIY trial of a magnitude similar to REMOTE's. TAPIR was actually two parallel trials, identical in every respect except for their sites: one used a traditional group of 8 sites, while the other was virtual and recruited patients from anywhere in the country. So this was a real-time, head-to-head assessment of site performance. And the results after two full years of active enrollment? Traditional sites: 49 enrolled. Patient centric: 10 enrolled. Even though six years have passed, and online/mobile communications are even more ubiquitous, we still see the exact same struggle to enroll patients. Maybe it’s time to stop blaming the sites? To be fair, the traditional sites didn't exactly set the world on fire either – and I’m guessing the total cost of activating the 8 sites significantly exceeded the cost of setting up the virtual recruitment and patient logistics. But still, the site-less, “patient centric” approach once again came up astonishingly short. Krischer J, Cronholm PF, Burroughs C, McAlear CA, Borchin R, Easley E, Davis T, Kullman J, Carette S, Khalidi N, Koening C, Langford CA, Monach P, Moreland L, Pagnoux C, Specks U, Sreih AG, Ytterberg S, Merkel PA, & Vasculitis Clinical Research Consortium. (2017). Experience With Direct-to-Patient Recruitment for Enrollment Into a Clinical Trial in a Rare Disease: A Web-Based Study. 
Journal of Medical Internet Research, 19(2). PMID: 28246067 Full Article direct to patient DTP patient recruitment research sites site relationship management trial delays
d For good sleep and good health, regulate your exposure to light By www.npr.org Published On :: Sat, 09 Nov 2024 07:58:39 -0500 Your daily light exposure impacts your health. A new study finds that too much light at night and not enough natural light during the day can be harmful. This story first aired on Morning Edition on Nov. 4, 2024. Full Article
d Hospitals face months of IV fluid shortages after Helene damages N.C. factory By www.npr.org Published On :: Sun, 10 Nov 2024 08:34:20 -0500 Hospitals have been forced to innovate with new ways of hydrating patients and giving them medications, after a key factory that produces IV fluid bags flooded during Hurricane Helene. (This story first aired on Morning Edition on Nov. 7, 2024.) Full Article
d A human bird flu case is thought to be found in Canada for the first time By www.npr.org Published On :: Mon, 11 Nov 2024 03:18:54 -0500 A person has tested positive in British Columbia, Canadian health officials said, though the results must be sent to another lab for confirmation. Full Article
d What does a 2nd Trump term mean for the Affordable Care Act? By www.npr.org Published On :: Mon, 11 Nov 2024 04:47:01 -0500 President-elect Donald Trump tried unsuccessfully to get rid of the Affordable Care Act during his first term. What action will he take this time around? Full Article
d More young people are surviving cancer. Then they face a life altered by it By www.npr.org Published On :: Mon, 11 Nov 2024 05:00:00 -0500 More people are getting cancer in their 20s, 30s, and 40s, and surviving, thanks to rapid advancement in care. Many will have decades of life ahead of them, which means they face greater and more complex challenges in survivorship. Lourdes Monje is navigating these waters at age 29. Full Article
d Patrick Dempsey aims to raise awareness of cancer disparities and encourage screening By www.npr.org Published On :: Mon, 11 Nov 2024 05:18:50 -0500 NPR's Leila Fadel talks with actor Patrick Dempsey about his efforts to raise money for cancer treatment and prevention. Full Article
d Remarkably resilient refugees: A teen on his own, a woman who was raped By www.npr.org Published On :: Tue, 12 Nov 2024 07:57:31 -0500 Sudan's civil war has displaced 10 million citizens. Here are profiles of two young people from the most vulnerable groups: an unaccompanied minor caring for twin brothers, a woman who was raped. Full Article
d Online yoga classes prove helpful for back pain in new study By www.npr.org Published On :: Tue, 12 Nov 2024 08:00:00 -0500 Participants reported relief from chronic low back pain and a reduced need for pain-relief medications. Full Article
d Exploring the World of Wonders of Pragmatic Play Online Slots By biosimilarnews.com Published On :: Fri, 16 Feb 2024 09:20:08 +0000 The world of online gambling has witnessed the emergence of a sensational software provider, and among them, Pragmatic Play has managed to capture players' attention with a range of flagship online slots. In this article… The post Exploring the World of Wonders of Pragmatic Play Online Slots appeared first on Biosimilarnews. Full Article News Review Tips & Tricks
d A Collection of "Gacor" Slot Games With the Highest RTP Percentages Today By biosimilarnews.com Published On :: Tue, 12 Mar 2024 04:30:20 +0000 In the ever-evolving world of online gambling, players' search for the best chances of winning leads to a popular phenomenon: collections of "gacor" (frequently paying) slot games with the highest RTP percentages today… The post A Collection of "Gacor" Slot Games With the Highest RTP Percentages Today appeared first on Biosimilarnews. Full Article News
d The Best Online Slot Providers of 2024 By biosimilarnews.com Published On :: Fri, 22 Mar 2024 04:47:49 +0000 The online slot industry continues to flourish and carve out new chapters in the world of online gambling. The year 2024 is witnessing great leaps in innovation and entertainment, especially from the leading providers that… The post The Best Online Slot Providers of 2024 appeared first on Biosimilarnews. Full Article News
d Secret Tips for Winning Easily at "Gacor" Online Slots By biosimilarnews.com Published On :: Wed, 03 Apr 2024 04:51:49 +0000 Uncovering the secret to winning easily at "gacor" online slots is every online gambler's dream. First, pay careful attention to selecting the right slot machine. Choose a machine with a payout rate or… The post Secret Tips for Winning Easily at "Gacor" Online Slots appeared first on Biosimilarnews. Full Article News Tips & Tricks
d The Best and Most Popular "Gacor" Online Slot Gambling Providers of 2024 By biosimilarnews.com Published On :: Fri, 10 May 2024 05:19:52 +0000 As if passing through a time portal, we enter 2024 with a lineup of online slot gambling providers that not only keep us company but also tease the imagination. Every click, every spin of the reels, opens a new chapter… The post The Best and Most Popular "Gacor" Online Slot Gambling Providers of 2024 appeared first on Biosimilarnews. Full Article News