me What is the FDA's role in regulating drugs? (30 seconds) By www.flickr.com Published On :: Tue, 13 Aug 2024 12:15:55 -0700 The U.S. Food and Drug Administration posted a video: You may know that the FDA is responsible for approving new drugs, including prescription, generic, biosimilar, and over-the-counter medicines, and for ensuring that those drugs are safe, of high quality, and work the way they are supposed to. But our work doesn't end there. We continue to monitor the safety and quality of approved drugs in the years that follow. Learn more about our role in regulating these medicines. For more information about the FDA's role in regulating and approving drugs, visit our website at www.fda.gov/drugs/information-consumers-and-patients-drug... Full Article
me How does the FDA approve new drugs? (30 seconds) By www.flickr.com Published On :: Tue, 13 Aug 2024 12:19:23 -0700 The U.S. Food and Drug Administration posted a video: Prescription drugs go through many important steps and phases before we approve them. Research, data, and evidence must demonstrate that the drug is safe and effective for its intended use. Learn more about the FDA's approval process from start to finish. For more information about the FDA's role in regulating and approving drugs, visit our website at www.fda.gov/drugs/information-consumers-and-patients-drug... Full Article
me What does the FDA do after it approves drugs? (30 seconds) By www.flickr.com Published On :: Tue, 13 Aug 2024 12:23:26 -0700 The U.S. Food and Drug Administration posted a video: The FDA continuously monitors real-time data from patients, drug manufacturers, and healthcare professionals, including reports of adverse reactions to prescription drugs. Based on these data, we may update drug labels or, in rare cases, request that a drug be withdrawn from the market. Learn more about the FDA's process for the ongoing monitoring of approved drugs. For more information about the FDA's role in regulating and approving drugs, visit our website at www.fda.gov/drugs/information-consumers-and-patients-drug... Full Article
me Orphan Drugs at 30: Will Success Become Too Expensive? By www.fdamatters.com Published On :: Mon, 14 Jan 2013 19:09:11 +0000 The Orphan Drug Act (ODA) turned 30 this month, demonstrating that good laws really can have an enduring impact. Amidst the celebrations, a reporter asked me a provocative question: can we afford more orphan drugs costing hundreds of thousands of dollars per year? FDA Matters answered “yes.” However, I added a caveat that should worry everyone eager for orphan drugs to succeed. When genomics and personalized medicine become successful, this will multiply the number of rare diseases and the overall cost of orphan drugs, perhaps beyond what the system can bear. Full Article FDA and Congress FDA and Industry Orphan Drugs
me Funding Cutbacks at FDA: A Sequester Primer By www.fdamatters.com Published On :: Thu, 07 Mar 2013 17:59:38 +0000 At a time when FDA’s responsibilities continue to grow rapidly, the agency has been caught in an across-the-board reduction (sequester) in federal discretionary spending, effective March 2, 2013. Although Congress may yet reverse course and restore money to affected federal agencies, this is not considered a high probability. Altogether, FDA will lose about $209 million between now and September 30, 2013. This will reduce inspections, slow drug and device approvals, and restrict implementation of the Food Safety Modernization Act and other recent legislation. Because of the many questions about the process and outcome, this is FDA Matters’ primer on the sequester of FDA funds. Full Article FDA and Congress FDA Appropriations Planning for FDA's Future
me FDA’s Indifferent Attitude Towards the First Amendment By www.fdamatters.com Published On :: Mon, 01 Apr 2013 17:09:04 +0000 The FDA doesn’t care about the First Amendment rights of the companies it regulates. It cares even less about the “free speech” rights of those companies’ sales and marketing representatives. And why should the agency care? One of FDA’s primary missions is to protect the public health and safety of the American people from illegal, adulterated and misbranded products. Doing so involves restraining food, drug, device and cosmetics companies from committing fraudulent and deceptive acts that are not protected by companies’ commercial free speech rights. Nonetheless, FDA Matters envisions opportunities for FDA and industry to broaden permissible product communications. The key is understanding history, not constitutional law. Full Article Drug Approval and Access FDA Accountability and Transparency FDA and Industry FDA Leadership Insight on FDA-regulated Industries
me Minnesota Becomes 18th State to Adopt Consumer Data Privacy Law By cohealthcom.org Published On :: Wed, 05 Jun 2024 15:27:53 +0000 On May 24, Governor Tim Walz signed into law Minnesota's new comprehensive data privacy law, the Minnesota Consumer Data Privacy Act (HF 4757, referred to as the MCDPA). The MCDPA goes into effect on July 31, 2025, with some exceptions for colleges and universities (which have until 2029). The MCDPA is similar to other state privacy laws, […] Full Article Data Privacy Legislative consumer privacy data privacy legislation Minnesota MN Washington Privacy Act
me Rhode Island Governor Vetoes Consumer Data Privacy Act By cohealthcom.org Published On :: Wed, 03 Jul 2024 14:32:46 +0000 On June 25th, Rhode Island Governor Daniel McKee transmitted without signature (effectively a pocket veto) the Rhode Island Data Transparency and Privacy Protection Act (SB 2500 / HB 7787). The act is based on the Washington Privacy Act model but diverges from the prevalent forms of that model in two ways. First, the act contains a unique […] Full Article Data Privacy Legislative consumer privacy data privacy legislation Rhode Island RI veto Washington Privacy Act
me FTC Finalizes “Click-to-Cancel” Rule to Make It Easier for Consumers to End Recurring Subscriptions and Memberships By cohealthcom.org Published On :: Fri, 18 Oct 2024 20:56:52 +0000 Sorry, but you do not have permission to view this content. Full Article Regulatory/FDA Federal Trade Commission final rule FTC Good Rx HBRN Jim Potter PHI privacy
me Former CHC Board Chairs Sharon Callahan and Nick Colucci Named as 2025 MAHF Inductees By cohealthcom.org Published On :: Fri, 25 Oct 2024 20:39:55 +0000 Two former board chairs of the Coalition for Healthcare Communication (CHC) were named as the Medical Advertising Hall of Fame (MAHF) 2025 inductees – Sharon Callahan, former Chief Client Officer at Omnicom Health Group (OHG), and Nick Colucci, former Chairman and CEO of Publicis Health/COO of Publicis Groupe North America. The inductees will be honored […] Full Article CHC News chc MAHF Nick Colucci Sharon Callahan
me Don’t Confuse the Art and Science of Medicine: PCI vs CABG for Left Main Disease By www.cardiobrief.org Published On :: Thu, 20 Aug 2020 15:56:47 +0000 It is often said that medicine is both an art and a science. In an imperfect world this is both inevitable and desirable. But it is extremely important that the two should not be confused with each other. In particular, because the “science” side of the equation has achieved overwhelming prestige and authority, it is...Click here to continue reading... Full Article Interventional Cardiology & Surgery People Places & Events Policy & Ethics CABG guidelines left main disease LMCA PCI
me Should the ACC Have a Live Meeting? By www.cardiobrief.org Published On :: Mon, 25 Jan 2021 17:18:49 +0000 I was surprised to learn from a recent press release that the American College of Cardiology is planning to have some live participants at its annual scientific sessions meeting in May. The college said it is also partnering with a technology company to offer these participants a wearable monitoring device “as an added safety measure...Click here to continue reading... Full Article People Places & Events Policy & Ethics Prevention Epidemiology & Outcomes American College of Cardiology COVID-19 pandemic Scientific Sessions
me EMA recommends approval of aflibercept biosimilars Afqlir and Opuviz By www.gabionline.net Published On :: Wed, 23 Oct 2024 08:39:24 +0000 On 19 September 2024, the European Medicines Agency's (EMA) Committee for Medicinal Products for Human Use (CHMP) adopted a positive opinion, recommending the granting of marketing authorization for two aflibercept biosimilars: Sandoz's Afqlir and Samsung Bioepis's Opuviz. These products are biosimilars of the reference product Eylea, developed by Regeneron and Bayer. Full Article
me Medicines for Europe 23rd Regulatory Affairs Conference 2025 By www.gabionline.net Published On :: Tue, 05 Nov 2024 09:04:58 +0000 23rd Regulatory Affairs Conference 2025, 27‒28 February 2025, Hilton Amsterdam Airport Schiphol, Amsterdam, The Netherlands Full Article
me NPRA Malaysia trials new timelines for variation applications By www.gabionline.net Published On :: Tue, 05 Nov 2024 09:13:55 +0000 In May 2024, Malaysia's National Pharmaceutical Regulatory Agency (NPRA) announced that it will trial new timelines for variation applications for registered pharmaceutical products and traditional medicines and health supplements (TMHS). Full Article
me Bio-Thera and Gedeon Richter partner to commercialize Stelara biosimilar BAT2206 By www.gabionline.net Published On :: Tue, 05 Nov 2024 09:14:16 +0000 In October 2024, China-based Bio-Thera Solutions (Bio-Thera) and Hungary's Gedeon Richter announced that they have reached an exclusive commercialization and license agreement for BAT2206, a biosimilar candidate to Johnson & Johnson's Stelara (ustekinumab). Full Article
me Transforming healthcare: CinnaGen's leadership in follow-on biologicals/biosimilars development and market expansion By www.gabionline.net Published On :: Fri, 08 Nov 2024 13:18:16 +0000 Abstract: CinnaGen, the largest biopharmaceutical company in the MENA region, is a leader in developing follow-on biologicals/biosimilars. Dr Haleh Hamedifar, Chairperson of CinnaGen, spoke to GaBI (Generics and Biosimilars Initiative) about the company's strategic focus, which includes expanding its product portfolio, entering highly regulated global markets, and advancing affordable treatments for conditions such as multiple sclerosis and immunological diseases, transforming healthcare in underserved regions. Keywords: Biosimilars, clinical development, commercialization, MENA Full Article
me Government 'miscommunicated' PPE stock levels to pharmacies during first COVID-19 wave, MPs told By www.pharmaceutical-journal.com Published On :: Wed, 10 Feb 2021 12:17 GMT The government implied wholesalers had more personal protective equipment in stock than was the case during the first wave of the COVID-19 pandemic, the Healthcare Distribution Association has said. Full Article
me Pharmacy negotiators in talks over plans to distribute COVID-19 treatments in primary care By www.pharmaceutical-journal.com Published On :: Wed, 10 Feb 2021 15:25 GMT The Pharmaceutical Services Negotiating Committee is in talks with the government over potential plans to distribute COVID-19 treatments in primary care. Full Article
me Prime minister vows to reimburse community pharmacy's COVID-19 costs 'as soon as possible' By www.pharmaceutical-journal.com Published On :: Thu, 11 Feb 2021 12:29 GMT Community pharmacies should be reimbursed for their additional costs during the COVID-19 pandemic “as soon as possible”, the prime minister has told The Pharmaceutical Journal. Full Article
me Regulator looking at 'flexibility' that would allow overseas candidates to sit registration assessment By www.pharmaceutical-journal.com Published On :: Thu, 11 Feb 2021 14:53 GMT The General Pharmaceutical Council has said it is “double, treble, quadruple-checking” for any “flexibility” that would allow all overseas candidates to sit the March 2021 registration assessment exam in their countries of residence. Full Article
me Nearly 200 women were prescribed valproate during pregnancy between April 2018 and September 2020 By www.pharmaceutical-journal.com Published On :: Fri, 12 Feb 2021 15:47 GMT Some 180 women were prescribed valproate, a medicine used to treat epilepsy and bipolar disorder, during their pregnancy within a 2.5-year period, NHS data has revealed. Full Article
me Overseas candidates will be allowed to sit registration assessment remotely, regulator says By www.pharmaceutical-journal.com Published On :: Tue, 16 Feb 2021 12:05 GMT The General Pharmaceutical Council has said most candidates living in countries with a two-hour or more time difference from the UK will be able to apply to sit the registration assessment at home. Full Article
me Chiesi launches postal asthma inhaler recycling scheme By www.pharmaceutical-journal.com Published On :: Wed, 17 Feb 2021 14:34 GMT The UK’s first postal inhaler recycling scheme has been launched by pharmaceutical company Chiesi to support a more sustainable way of living for people with respiratory illnesses. Full Article
me Stars and Scars… Some Lessons Learned About Leadership By lifescivc.com Published On :: Thu, 02 May 2024 11:00:41 +0000 By Arthur O. Tzianabos, PhD, CEO of Lifordi Immunotherapeutics, as part of the From the Trenches feature of LifeSciVC As the biotech industry continues to pick up steam, I have been getting a number of phone calls from folks in The post Stars and Scars… Some Lessons Learned About Leadership appeared first on LifeSciVC. Full Article Corporate Culture From The Trenches Leadership
me A Primer on Early-Stage Biotech VC By lifescivc.com Published On :: Wed, 31 Jul 2024 11:00:55 +0000 By Aimee Raleigh, Principal at Atlas Venture, as part of the From The Trenches feature of LifeSciVC From the outside, one might assume all biotech venture capital (VC) firms are more similar than different. However, once you look under the The post A Primer on Early-Stage Biotech VC appeared first on LifeSciVC. Full Article Biotech startup advice From The Trenches Talent
me Medicinal Chemistry In The Age Of Artificial Intelligence By lifescivc.com Published On :: Mon, 05 Aug 2024 11:00:53 +0000 By Peter Tummino, CSO of Nimbus Therapeutics, as part of the From The Trenches feature of LifeSciVC “Over the next five to 10 years, our goal is to become a company that’s leading the world in personalized medicines, a company The post Medicinal Chemistry In The Age Of Artificial Intelligence appeared first on LifeSciVC. Full Article Drug discovery From The Trenches R&D Productivity
me A Biotech Midsummer’s Madness By lifescivc.com Published On :: Wed, 21 Aug 2024 15:30:17 +0000 By Arthur Tzianabos, CEO of Lifordi Immunotherapeutics, as part of the From The Trenches feature of LifeSciVC Greetings from Lake Winnipesaukee in NH where I am at this time every year. It’s midsummer and vacation time for me and the The post A Biotech Midsummer’s Madness appeared first on LifeSciVC. Full Article Bioentrepreneurship From The Trenches Strategy
me Keeping It Simple: What Really Matters For Emerging Enterprises By lifescivc.com Published On :: Wed, 04 Sep 2024 11:00:46 +0000 By Ankit Mahadevia, chairman of Spero Therapeutics, as part of the From The Trenches feature of LifeSciVC A common theme in startup literature is that by cutting a range of unnecessary tasks, a step-change in results will follow. I’ve found The post Keeping It Simple: What Really Matters For Emerging Enterprises appeared first on LifeSciVC. Full Article Bioentrepreneurship Biotech startup advice Corporate Culture From The Trenches
me Reflections On My Experience As A Board Member By lifescivc.com Published On :: Wed, 18 Sep 2024 11:00:01 +0000 By Ivana Magovčević-Liebisch, CEO of Vigil Neuroscience, as part of the From The Trenches feature of LifeSciVC In an industry where boom and bust cycles occur regularly and 90 percent of drug candidates fail to reach the market, an outstanding The post Reflections On My Experience As A Board Member appeared first on LifeSciVC. Full Article Boards and governance From The Trenches Leadership
me ESMO Reflections: Glimmers of Hope with the Next Wave of I-O Therapies? By lifescivc.com Published On :: Thu, 10 Oct 2024 11:00:02 +0000 By Jonathan Montagu, CEO of HotSpot Therapeutics, as part of the From The Trenches feature of LifeSciVC HotSpot’s trip to Barcelona for the recent European Society of Medical Oncology (ESMO) Annual Meeting was no ‘European Vacation,’ but it was certainly The post ESMO Reflections: Glimmers of Hope with the Next Wave of I-O Therapies? appeared first on LifeSciVC. Full Article From The Trenches Portfolio news Science & Medicine
me UK universities and NHS trusts that flout the rules on clinical trials identified in report to Parliament By www.alltrials.net Published On :: Thu, 24 Oct 2019 00:19:34 +0000 An AllTrials report for the House of Commons Science and Technology Select Committee this week has found that 33 NHS trust sponsors and six UK universities are reporting none of their clinical trial results, while others have gone from 0% to 100% following an announcement from the Select Committee in January that universities and NHS […] Full Article News Uncategorized
me Clinical Trial Enrollment, ASCO 2013 Edition By www.placebocontrol.com Published On :: Thu, 30 May 2013 17:43:00 +0000 Even by the already-painfully-embarrassingly-low standards of clinical trial enrollment in general, patient enrollment in cancer clinical trials is slow. Horribly slow. In many cancer trials, randomizing one patient every three or four months isn't bad at all – in fact, it's par for the course. The most commonly-cited number is that only 3% of cancer patients participate in a trial – and although exact details of how that number is measured are remarkably difficult to pin down, it certainly can't be too far from reality. Ultimately, the cost of slow enrollment is borne almost entirely by patients; their payment takes the form of fewer new therapies and less evidence to support their treatment decisions. So when a couple dozen thousand of the world's top oncologists fly into Chicago to meet, you'd figure that improving accrual would be high on everyone's agenda. You can't run your trial without patients, after all. But every year, the annual ASCO meeting underdelivers on new ideas for getting more patients into trials. I suppose this is a consequence of ASCO's members-only focus: getting the oncologists themselves to address patient accrual is a bit like asking NASCAR drivers to tackle the problems of aerodynamics, engine design, and fuel chemistry. Nonetheless, every year, a few brave souls do try. Here is a quick rundown of accrual-related abstracts at this year's meeting, conveniently sorted into 3 logical categories: 1. As Lord Kelvin may or may not have said, "If you cannot measure it, you cannot improve it." Abstract e15572: Inadequate data availability on clinical trial accrual and its effect on progress in cancer research Probably the most sensible of this year's crop, because rather than trying to make something out of nothing, the authors measure exactly how pervasive the nothing is. 
Specifically, they attempt to obtain fairly basic patient accrual data for the last three years' worth of clinical trials in kidney cancer. Out of 108 trials identified, they managed to get – via search and direct inquiries with the trial sponsors – basic accrual data for only 43 (40%). That certainly qualifies as “terrible”, though the authors content themselves with “poor”. Interestingly, exactly zero of the 32 industry-sponsored trials responded to the authors' initial survey. This fits with my impression that pharma companies continue to think of accrual data as proprietary, though what sort of business advantage it gives them is unclear. Any one company will have only run a small fraction of these studies, greatly limiting their ability to draw anything resembling a valid conclusion. Abstract TPS6645: Predictors of accrual success for cooperative group trials: The Cancer and Leukemia Group B (Alliance) experience CALGB investigators look at 110 trials over the past 10 years to see if they can identify any predictive markers of successful enrollment. Unfortunately, the trials themselves are pretty heterogeneous (accrual periods ranged from 6 months to 8.8 years), so finding a consistent marker for successful trials would seem unlikely. And, in fact, none of the usual suspects (e.g., startup time, disease prevalence) appears to have been significant. The exception was provision of medication by the study, which was positively associated with successful enrollment. The major limitation with this study, apart from the variability of trials measured, is in its definition of “successful”, which is simply the total number of planned enrolled patients. Under both of their definitions, a slow-enrolling trial that drags on for years before finally reaching its goal is successful, whereas if that same trial had been stopped early it is counted as unsuccessful. 
While that sometimes may be the case, it's easy to imagine situations where allowing a slow trial to drag on is a painful waste of resources – especially if results are delayed enough to bring their relevance into question. Even worse, though, is that a trial’s enrollment goal is itself a prediction. The trial steering committee determines how many sites, and what resources, will be needed to hit the number needed for analysis. So in the end, this study is attempting to identify predictors of successful predictions, and there is no reason to believe that the initial enrollment predictions were made with any consistent methodology. 2. If you don't know, maybe ask somebody? Abstract 8592: Strategies to overcome barriers to accrual (BtA) to NCI-sponsored clinical trials: A project of the NCI-Myeloma Steering Committee Accrual Working Group (NCI-MYSC AWG) Abstract 1596: Rapid online feedback to improve clinical trial accrual: CODEL anaplastic glioma (AG) (NCCTG/Alliance N0577) as a model With these two abstracts we celebrate and continue the time-honored tradition of alchemy, whereby we transmute base opinion into golden data. The magic number appears to be 100: if you've got 3 digits' worth of doctors telling you how they feel, that must be worth something. In the first abstract, a working group is formed to identify and vote on the major barriers to accrual in oncology trials. Then – and this is where the magic happens – that same group is asked to identify and vote on possible ways to overcome those barriers. In the second, a diverse assortment of community oncologists were given an online survey to provide feedback on the design of a phase 3 trial in light of recent new data. The abstract doesn't specify who was initially sent the survey, so we cannot tell response rate, or compare survey responders to the general population (I'll take a wild guess and go with “massive response bias”). Market research is sometimes useful. 
But what cancer clinical trials do not need right now are more surveys and working groups. The "strategies" listed in the first abstract are part of the same cluster of ideas that have been on the table for years now, with no appreciable increase in trial accrual. 3. The obligatory "What the What?" abstract Abstract 6564: Minority accrual on a prospective study targeting a diverse U.S. breast cancer population: An analysis of Wake Forest CCOP research base protocol 97609 The force with which my head hit my desk after reading this abstract made me concerned that it had left permanent scarring. If this had been re-titled "Poor Measurement of Accrual Factors Leads to Inaccurate Accrual Reporting", would it still have been accepted for this year's meeting? That's certainly a more accurate title. Let's review: a trial intends to enroll both white and minority patients. Whites enroll much faster, leading to a period where only minority patients are recruited. Then, according to the authors, "an almost 4-fold increase in minority accrual raises question of accrual disparity." So, sites will only recruit minority patients when they have no choice? But wait: the number of sites wasn't the same during the two periods, and start-up times were staggered. Adjusting for actual site time, the average minority accrual rate was 0.60 patients/site/month in the first part and 0.56 in the second. So the apparent 4-fold increase was entirely an artifact of bad math. This would be horribly embarrassing were it not for the fact that bad math seems to be endemic in clinical trial enrollment. Failing to adjust for start-up time and number of sites is so routine that not doing it is grounds for a presentation. The bottom line What we need now is to rigorously (and prospectively) compare and measure accrual interventions. We have lots of candidate ideas, and there is no need for more retrospective studies, working groups, or opinion polls to speculate on which ones will work best. 
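The site-time adjustment that tripped up the abstract's authors is simple to apply: normalize each period's accrual to patients per site-month rather than comparing raw totals. A minimal sketch, using hypothetical site counts and durations chosen to reproduce the 0.60 and 0.56 patients/site/month figures above (the abstract does not report the actual numbers):

```python
# Raw accrual totals mislead when site counts and active periods differ;
# normalizing to patients per site-month removes the artifact.
# The site counts and durations below are hypothetical illustrations.

def rate_per_site_month(patients: int, sites: int, months: float) -> float:
    """Accrual rate adjusted for number of active sites and their time on study."""
    return patients / (sites * months)

# Hypothetical Period 1: 5 sites open for 20 months enroll 60 minority patients.
# Hypothetical Period 2: 20 sites open for 5 months enroll 56 minority patients.
raw_rate_1 = 60 / 20   # 3.0 patients/month overall
raw_rate_2 = 56 / 5    # 11.2 patients/month: a deceptive ~4-fold "increase"

adj_rate_1 = rate_per_site_month(60, 5, 20)   # 0.60 patients/site/month
adj_rate_2 = rate_per_site_month(56, 20, 5)   # 0.56 patients/site/month

print(f"Raw: {raw_rate_2 / raw_rate_1:.1f}x apparent increase")       # 3.7x
print(f"Adjusted: {adj_rate_2 / adj_rate_1:.2f}x (essentially flat)")  # 0.93x
```

The raw ratio suggests a dramatic jump; the adjusted rates show enrollment per site was essentially unchanged, which is exactly the artifact described above.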
Where possible, accrual interventions should themselves be randomized to minimize confounding variables which prevent accurate assessment. Data needs to be uniformly and completely collected. In other words, the standards that we already use for clinical trials need to be applied to the enrollment measures we use to engage patients to participate in those trials. This is not an optional consideration. It is an ethical obligation we have to cancer patients: we need to assure that we are doing all we can to maximize the rate at which we generate new evidence and test new therapies. [Image credit: Logarithmic turtle accrual rates courtesy of Flikr user joleson.] Full Article ASCO ethics oncology trials patient recruitment trial delays trial design
me Pediatric Trial Enrollment (Shameless DIA Self-Promotion, Part 1) By www.placebocontrol.com Published On :: Wed, 19 Jun 2013 22:19:00 +0000 [Fair Warning: I have generally tried to keep this blog separate from my corporate existence, but am making an exception for two quick posts about the upcoming DIA 2013 Annual Meeting.] Improving Enrollment in Pediatric Clinical Trials Logistically, ethically, and emotionally, involving children in medical research is greatly different from the same research in adults. Some of the toughest clinical trials I've worked on, across a number of therapeutic areas, have been pediatric ones. They challenge you to come up with different approaches to introducing and explaining clinical research – approaches that have to work for doctors, kids, and parents simultaneously. On Thursday June 27, Don Sickler, one of my team members, will be chairing a session titled “Parents as Partners: Engaging Caregivers for Pediatric Trials”. It should be a good session. Joining Don are 2 people I've had the pleasure of working with in the past. Both of them combine strong knowledge of clinical research with a massive amount of positive energy and enthusiasm (no doubt a big part of what makes them successful). However, they also differ in one key aspect: what they work on. One of them – Tristen Moors from Hyperion Therapeutics - works on an ultra-rare condition, Urea Cycle Disorder, a disease affecting only a few hundred children every year. On the other hand, Dr. Ann Edmunds is an ENT working in a thriving private practice. I met her because she was consistently the top enroller in a number of trials relating to tympanostomy tube insertion. Surgery to place “t-tubes” is one of the most common and routine outpatients surgeries there is, with an estimated half million kids getting tubes each year. Each presents a special challenge: for rare conditions, how do you even find enough patients? 
For routine procedures, how do you convince parents to complicate their (and their children’s) lives by signing up for a multi-visit, multi-procedure trial? Ann and Tristen have spent a lot of time tackling these issues, and should have some great advice to give. For more information on the session, here’s Don’s posting on our news blog. Full Article DIA pediatric trials
me Preview of Enrollment Analytics: Moving Beyond the Funnel (Shameless DIA Self-Promotion, Part 2) By www.placebocontrol.com Published On :: Fri, 21 Jun 2013 16:59:00 +0000 Are we looking at our enrollment data in the right way? I will be chairing a session on Tuesday on this topic, joined by a couple of great presenters (Diana Chung from Gilead and Gretchen Goller from PRA). Here's a short preview of the session. Hope to see you there. It should be a great discussion.
Session Details: June 25, 1:45PM - 3:15PM | Session Number: 241 | Room Number: 205B
1. Enrollment Analytics: Moving Beyond the Funnel – Paul Ivsin, VP, Consulting Director, CAHG Clinical Trials
2. Use of Analytics for Operational Planning – Diana Chung, MSc, Associate Director, Clinical Operations, Gilead
3. Using Enrollment Data to Communicate Effectively with Sites – Gretchen Goller, MA, Senior Director, Patient Access and Retention Services, PRA
Full Article DIA metrics patient recruitment
me Questionable Enrollment Math at the UK's NIHR By www.placebocontrol.com Published On :: Mon, 16 Sep 2013 18:04:00 +0000 There has been considerable noise coming out of the UK lately about successes in clinical trial enrollment. First, a couple months ago came the rather dramatic announcement that clinical trial participation in the UK had "tripled over the last 6 years". That announcement, by the chief executive of the National Institute of Health Research's Clinical Research Network, was quickly and uncritically picked up by the media. [Image caption: Sweet creature of bombast: is Sir John writing press releases for the NIHR?] That immediately caught my attention. In large, global trials, most pharmaceutical companies I've worked with can do a reasonable job of predicting accrual levels in a given country. I like to think that if participation rates in any given country had jumped that heavily, I'd have heard something. (To give an example, looking at a quite-typical study I worked on a few years ago: UK sites were overall slightly below the global average. The highest-enrolling countries were about 2.5 times as fast. So, a 3-fold increase in accruals would have catapulted the UK from below average to the fastest-enrolling country in the world.) Further inquiry, however, failed to turn up any evidence that the reported tripling actually corresponded to more human beings enrolled in clinical trials. Instead, there is some reason to believe that all we witnessed was increased reporting of trial participation numbers. Now we have a new source of wonder, and a new giant multiplier coming out of the UK. As the Director of the NIHR's Mental Health Research Network, Til Wykes, put it in her blog coverage of her own paper: Our research on the largest database of UK mental health studies shows that involving just one or two patients in the study team means studies are 4 times more likely to recruit successfully. Again, amazing! And not just a tripling – a quadrupling! 
Understand: I spend a lot of my time trying to convince study teams to take a more patient-focused approach to clinical trial design and execution. I desperately want to believe this study, and I would love having hard evidence to bring to my clients. At first glance, the data set seems robust. From the King's College press release: Published in the British Journal of Psychiatry, the researchers analysed 374 studies registered with the Mental Health Research Network (MHRN). Studies which included collaboration with service users in designing or running the trial were 1.63 times more likely to recruit to target than studies which only consulted service users. Studies which involved more partnerships - a higher level of Patient and Public Involvement (PPI) - were 4.12 times more likely to recruit to target. But here the first crack appears. It's clear from the paper that the analysis of recruitment success was not based on 374 studies, but rather a much smaller subset of 124 studies. That's not mentioned in either of the above-linked articles. And at this point, we have to stop, set aside our enthusiasm, and read the full paper. And at this point, critical doubts begin to spring up, pretty much everywhere. First and foremost: I don’t know any nice way to say this, but the "4 times more likely" line is, quite clearly, a fiction. What is reported in the paper is a 4.12 odds ratio between "low involvement" studies and "high involvement" studies (more on those terms in just a bit). Odds ratios are often used in reporting differences between groups, but they are unequivocally not the same as "times more likely than". This is not a technical statistical quibble. 
The authors unfortunately don’t provide the actual success rates for different kinds of studies, but here is a quick example that, given other data they present, is probably reasonably close:

A Studies: 16 successful out of 20. Probability of success: 80%. Odds of success: 4 to 1.
B Studies: 40 successful out of 80. Probability of success: 50%. Odds of success: 1 to 1.

From the above, it’s reasonable to conclude that A studies are 60% more likely to be successful than B studies (the A studies are 1.6 times as likely to succeed). However, the odds ratio is 4.0, similar to the difference in the paper. It makes no sense to say that A studies are 4 times more likely to succeed than B studies. This is elementary stuff. I’m confident that everyone involved in the conduct and analysis of the MHRN paper knows this already. So why would Dr Wykes write this? I don’t know; it's baffling. Maybe someone with more knowledge of the politics of British medicine can enlighten me. If a pharmaceutical company had promoted a drug with this math, the warning letters and fines would be flying in the door fast. And rightly so. But if a government leader says it, it just gets recycled verbatim. The other part of Dr Wykes's statement is almost equally confusing. She claims that the enrollment benefit occurs when "involving just one or two patients in the study team". However, involving one or two patients would seem to correspond to either the lowest ("patient consultation") or the middle level of reported patient involvement (“researcher initiated collaboration”). In fact, the "high involvement" categories that are supposed to be associated with enrollment success are studies that were either fully designed by patients, or were initiated by patients and researchers equally. So, if there is truly a causal relationship at work here, improving enrollment would not be merely a function of adding a patient or two to the conversation.
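The distinction is easy to verify numerically. Here's a quick sketch using the hypothetical A/B numbers from the example above (these are illustrative counts, not the paper's actual data):

```python
# Hypothetical A/B study counts from the example above (not the paper's data).
a_success, a_total = 16, 20
b_success, b_total = 40, 80

p_a = a_success / a_total   # 0.80
p_b = b_success / b_total   # 0.50

# "Times more likely" is the relative risk: a ratio of probabilities.
relative_risk = p_a / p_b   # 1.6 -> A studies are 60% more likely to succeed

# The odds ratio is a ratio of odds, a very different quantity.
odds_ratio = (p_a / (1 - p_a)) / (p_b / (1 - p_b))   # 4.0

print(relative_risk, odds_ratio)
```

The 4.0 is what a logistic regression reports; reading it as "4 times more likely" mistakes it for the relative risk of 1.6.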
There are a number of other frustrating aspects of this study as well. It doesn't actually measure patient involvement in any specific research program, but uses just 3 broad categories (that the researchers specified at the beginning of each study). It uses an arbitrary and undocumented 17-point scale to measure "study complexity", which collapses and quite likely underweights many critical factors into a single number. The enrollment analysis excluded 11 studies because they weren't adequate for a factor that was later deemed non-significant. And probably the most frustrating facet of the paper is that the authors share absolutely no descriptive data about the studies involved in the enrollment analysis. It would be completely impossible to attempt to replicate its methods or verify its analysis. Do the authors believe that "Public Involvement" is only good when it’s not focused on their own work? However, my feelings about the study and paper are an insignificant fraction of the frustration I feel about the public portrayal of the data by people who should clearly know better. After all, limited evidence is still evidence, and every study can add something to our knowledge. But the public misrepresentation of the evidence by leaders in the area can only do us harm: it has the potential to actively distort research priorities and funding. Why This Matters We all seem to agree that research is too slow. Low clinical trial enrollment wastes time, money, and the health of patients who need better treatment options. However, what's also clear is that we lack reliable evidence on what activities enable us to accelerate the pace of enrollment without sacrificing quality. If we are serious about improving clinical trial accrual, we owe it to our patients to demand robust evidence for what works and what doesn’t. 
Relying on weak evidence that we've already solved the problem ("we've tripled enrollment!") or have a method to magically solve it ("PPI quadrupled enrollment!") will cause us to divert significant time, energy, and human health into areas that are politically favored but less than certain to produce benefit. And the overhyping of those results by research leadership compounds that problem substantially. NIHR leadership should reconsider its approach to public discussion of its research, and practice what it preaches: critical assessment of the data. [Update Sept. 20: The authors of the study have posted a lengthy comment below. My follow-up is here.] [Image via flickr user Elliot Brown.] Ennis L, & Wykes T (2013). Impact of patient involvement in mental health research: longitudinal study. The British journal of psychiatry : the journal of mental science PMID: 24029538 Full Article NIHR patient recruitment trial delays UK trials
me Questionable Enrollment Math(s) - the Authors Respond By www.placebocontrol.com Published On :: Fri, 20 Sep 2013 04:09:00 +0000 The authors of the study I blogged about on Monday were kind enough to post a lengthy comment, responding in part to some of the issues I raised. I thought their response was interesting, and so reprint it in its entirety below, interjecting my own reactions as well. There were a number of points you made in your blog and the title of questionable maths was what caught our eye and so we reply on facts and provide context. Firstly, this is a UK study where the vast majority of UK clinical trials take place in the NHS. It is about patient involvement in mental health studies - an area where recruitment is difficult because of stigma and discrimination. I agree, in hindsight, that I should have titled the piece “questionable maths” rather than my Americanized “questionable math”. Otherwise, I think this is fine, although I’m not sure that anything here differs from my post. 1. Tripling of studies - You dispute NIHR figures recorded on a national database and support your claim with a lone anecdote - hardly data that provides confidence. The reason we can improve recruitment is that NIHR has a Clinical Research Network which provides extra staff, within the NHS, to support high quality clinical studies and has improved recruitment success. To be clear, I did not “dispute” the figures so much as I expressed sincere doubt that those figures correspond with an actual increase in actual patients consenting to participate in actual UK studies. The anecdote explains why I am skeptical – it's a bit like I've been told there was a magnitude 8 earthquake in Chicago, but neither I nor any of my neighbors felt anything. There are many reasons why reported numbers can increase in the absence of an actual increase. 
It’s worth noting that my lack of confidence in the NIHR's claims appears to be shared by the 2 UK-based experts quoted by Applied Clinical Trials in the article I linked to. 2. Large database: We have the largest database of detailed study information and patient involvement data - I have trawled the world for a bigger one and NIMH say there certainly isn't one in the USA. This means few places where patient impact can actually be measured 3. Number of studies: The database has 374 studies which showed among other results that service user involvement increased over time probably following changes by funders e.g. NIHR requests information in the grant proposal on how service users have been and will be involved - one of the few national funders to take this issue seriously. As far as I can tell, neither of these points is in dispute. 4. Analysis of patient involvement involves the 124 studies that have completed. You cannot analyse recruitment success unless then. I agree you cannot analyze recruitment success in studies that have not yet completed. My objection is that in both the KCL press release and the NIHR-authored Guardian article, the only number mentioned is 374, and references to the recruitment success findings came immediately after references to that number. For example: Published in the British Journal of Psychiatry, the researchers analysed 374 studies registered with the Mental Health Research Network (MHRN). Studies which included collaboration with service users in designing or running the trial were 1.63 times more likely to recruit to target than studies which only consulted service users. Studies which involved more partnerships - a higher level of Patient and Public Involvement (PPI) - were 4.12 times more likely to recruit to target. The above quote clearly implies that the recruitment conclusions were based on an analysis of 374 studies – a sample 3 times larger than the sample actually used. I find this disheartening.
The complexity measure was developed following a Delphi exercise with clinicians, clinical academics and study delivery staff to include variables likely to be barriers to recruitment. It predicts delivery difficulty (meeting recruitment & delivery staff time). But of course you know all that as it was in the paper. Yes, I did know this, and yes, I know it because it was in the paper. In fact, that’s all I know about this measure, which is what led me to characterize it as “arbitrary and undocumented”. To believe that all aspects of protocol complexity that might negatively affect enrollment have been adequately captured and weighted in a single 17-point scale requires a leap of faith that I am not, at the moment, able to make. The extraordinary claim that all complexity issues have been accounted for in this model requires extraordinary evidence, and “we conducted a Delphi exercise” does not suffice. 6. All studies funded by NIHR partners were included – we only excluded studies funded without peer review, not won competitively. For the involvement analysis we excluded industry studies because of not being able to contact end users and where inclusion compromised our analysis reliability due to small group sizes. It’s only that last bit I was concerned about. Specifically, the 11 studies that were excluded due to being in “clinical groups” that were too small, despite the fact that “clinical groups” appear to have been excluded as non-significant from the final model of recruitment success. (Also: am I being whooshed here? In a discussion of "questionable math" the authors' enumeration goes from 4 to 6. I’m going to take the miscounting here as a sly attempt to see if I’m paying attention...) I am sure you are aware of the high standing of the journal and its robust peer review. We understand that our results must withstand the scrutiny of other scientists but many of your comments were unwarranted. 
This is the first in the world to investigate patient involvement impact. No other databases apart from the one held by the NIHR Mental Health Research Network is available to test – we only wish they were. I hope we can agree that peer review – no matter how "high standing" the journal – is not a shield against concern and criticism. Despite the length of your response, I’m still at a loss as to which of my comments specifically were unwarranted. In fact, I feel that I noted very clearly that my concerns about the study’s limitations were minuscule compared to my concerns about the extremely inaccurate way that the study has been publicized by the authors, KCL, and the NIHR. Even if I conceded every possible criticism of the study itself, there remains the fact that in public statements, you Misstated an odds ratio of 4 as “4 times more likely to” Overstated the recruitment success findings as being based on a sample 3 times larger than it actually was Re-interpreted, without reservation, a statistical association as a causal relationship Misstated the difference between the patient involvement categories as being a matter of merely “involving just one or two patients in the study team” And you did these consistently and repeatedly – in Dr Wykes's blog post, in the KCL press release, and in the NIHR-written Guardian article. To use the analogy from my previous post: if a pharmaceutical company had committed these acts in public statements about a new drug, public criticism would have been loud and swift. Your comment on the media coverage of odds ratios is an issue that scientists need to overcome (there is even a section in Wikipedia). It's highly unfair to blame "media coverage" for the use of an odds ratio as if it were a relative risk ratio. In fact, the first instance of "4 times more likely" appears in Dr Wykes's own blog post. It's repeated in the KCL press release, so you yourselves appear to have been the source of the error. 
You point out the base rate issue but of course in a logistic regression you also take into account all the other variables that may impinge on the outcome prior to assessing the effects of our key variable patient involvement - as we did – and showed that the odds ratio is 4.12 - So no dispute about that. We have followed up our analysis to produce a statement that the public will understand. Using the following equations:

Model predicted recruitment, lowest level of involvement: exp(2.489-.193*8.8-1.477)/(1+exp(2.489-.193*8.8-1.477)) = 0.33
Model predicted recruitment, highest level of involvement: exp(2.489-.193*8.8-1.477+1.415)/(1+exp(2.489-.193*8.8-1.477+1.415)) = 0.67

For a study of typical complexity without a follow up increasing involvement from the lowest to the highest levels increased recruitment from 33% to 66% i.e. a doubling. So then, you agree that your prior use of “4 times more likely” was not true? Would you be willing to concede that in more or less direct English? This is important and is the first time that impact has been shown for patient involvement on the study success. Luckily in the UK we have a network that now supports clinicians to be involved and a system for ensuring study feasibility. The addition of patient involvement is the additional bonus that allows recruitment to increase over time and so cutting down the time for treatments to get to patients. No, and no again. This study shows an association in a model. The gap between that and a causal relationship is far too vast to gloss over in this manner. In summary, I thank the authors for taking the time to respond, but I feel they've overreacted to my concerns about the study, and seriously underreacted to my more important concerns about their public overhyping of the study. I believe this study provides useful, though limited, data about the potential relationship between patient engagement and enrollment success.
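For what it's worth, the logistic-model arithmetic in the authors' comment checks out, and it's easy to reproduce with the coefficients they give:

```python
import math

def logistic(x):
    """Inverse logit: converts a linear predictor into a probability."""
    return math.exp(x) / (1 + math.exp(x))

# Coefficients as reported in the authors' comment above.
linear_low = 2.489 - 0.193 * 8.8 - 1.477   # typical complexity, no follow-up
linear_high = linear_low + 1.415           # add the high-involvement coefficient

lo = logistic(linear_low)    # predicted recruitment, lowest involvement
hi = logistic(linear_high)   # predicted recruitment, highest involvement

# The odds ratio between the two is exp(1.415), the reported 4.12 ...
odds_ratio = (hi / (1 - hi)) / (lo / (1 - lo))
print(round(lo, 2), round(hi, 2), round(odds_ratio, 2))
```

So the 4.12 odds ratio corresponds to predicted recruitment success rising from about 33% to 67%: a doubling of probability, not a quadrupling.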
On the other hand, I believe the public positioning of the study by its authors and their institutions has been exaggerated and distorted in clearly unacceptable ways. I would ask the authors to seriously consider issuing public corrections on the 4 points listed above. Full Article NIHR patient recruitment trial delays UK trials
me Patient Recruitment: Taking the Low Road By www.placebocontrol.com Published On :: Thu, 19 Dec 2013 22:36:00 +0000 The Wall Street Journal has an interesting article on the use of “Big Data” to identify and solicit potential clinical trial participants. The premise is that large consumer data aggregators like Experian can target patients with certain diseases through correlations with non-health behavior. Examples given include “a preference for jazz” being associated with arthritis and “shopping online for clothes” being an indicator of obesity. We've seen this story before. In this way, allegedly, clinical trial patient recruitment companies can more narrowly target their solicitations* for patients to enroll in clinical trials. In the spirit of full disclosure, I should mention that I was interviewed by the reporter of this article, although I am not quoted. My comments generally ran along three lines, none of which really fit in with the main storyline of the article: I am highly skeptical that these analyses are actually effective at locating patients These methods aren't really new – they’re the same tactics that direct marketers have been using for years Most importantly, the clinical trials community can – and should – be moving towards open and collaborative patient engagement. Relying on tactics like consumer data snooping and telemarketing is an enormous step backwards. The first point is this: certainly some diseases have correlates in the real world, but these correlates tend to be pretty weak, and are therefore unreliable predictors of disease. Maybe it’s true that those struggling with obesity tend to buy more clothes online (I don’t know if it’s true or not – honestly it sounds a bit more like an association built on easy stereotypes than on hard data). But many obese people will not shop online (they will want to be sure the clothes actually fit), and vast numbers of people with low or average BMIs will shop for clothes online. 
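To see why weak correlates make poor predictors, consider a quick back-of-the-envelope Bayes calculation (all numbers here are invented purely for illustration):

```python
# Hypothetical numbers, purely illustrative: suppose 30% of adults are obese,
# and "shops online for clothes" really is (weakly) associated with obesity:
# 60% of obese people do it, versus 45% of everyone else.
prevalence = 0.30
p_marker_given_obese = 0.60
p_marker_given_not = 0.45

# Bayes' rule: P(obese | marker)
p_marker = (prevalence * p_marker_given_obese
            + (1 - prevalence) * p_marker_given_not)
ppv = prevalence * p_marker_given_obese / p_marker

print(round(ppv, 2))
```

Even granting a real association, the flagged group is only barely enriched over the 30% base rate, which is why lead lists built this way are mostly noise.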
So the consumer data will tend to have very low predictive value. The claims that liking jazz and owning cats are predictive of having arthritis are even more tenuous. These correlates are going to be several times weaker than basic demographic information like age and gender. And for more complex conditions, these associations fall apart. Marketers claim to solve this by factoring a complex web of associations through a magical black box – the WSJ article mentions that they “applied a computed algorithm” to flag patients. Having seen behind the curtain on a few of these magic algorithms, I can confidently say that they are underwhelming in their sophistication. Hand-wavy references to Big Data and Algorithms are just the tools used to impress pharma clients. (The down side to that, of course, is that you can’t help but come across as big brotherish – see this coverage from Forbes for a taste of what happens when people accept these claims uncritically.) But the effectiveness of these data slice-n-dicing activities is perhaps beside the point. They are really just a thin cover for old-fashioned boiler room tactics: direct mail and telemarketing. When I got my first introduction to direct marketing in the 90’s, it was the exact same program – get lead lists from big companies like Experian, then aggressively mail and call until you get a response. The limited effectiveness and old-school aggressiveness of these programs is nicely illustrated in the article by one person’s experience: Larna Godsey, of Wichita, Kan., says she received a dozen phone calls about a diabetes drug study over the past year from a company that didn't identify itself. Ms. Godsey, 63, doesn't suffer from the disease, but she has researched it on the Internet and donated to diabetes-related causes. "I don't know if it's just a coincidence or if they're somehow getting my information," says Ms. Godsey, who filed a complaint with the FTC this year.
The article notes that one recruitment company, Acurian, has been the subject of over 500 FTC complaints regarding its tactics. It’s clear that Big Data is just the latest buzzword lipstick on the telemarketing pig. And that’s the real shame of it. We have arrived at an unprecedented opportunity for patients, researchers, and private industry to come together and discuss, as equals, research priorities and goals. Online patient communities like Inspire and PatientsLikeMe have created new mechanisms to share clinical trial opportunities and even create new studies. Dedicated disease advocates have jumped right into the world of clinical research, with groups like the Cystic Fibrosis Foundation and Michael J. Fox Foundation no longer content with raising research funds, but actively leading the design and operations of new studies. Some – not yet enough – pharmaceutical companies have embraced the opportunity to work more openly and honestly with patient groups. The scandal of stories like this is not the Wizard of Oz histrionics of secret computer algorithms, but that we as an industry continue to take the low road and resort to questionable boiler room tactics. It’s past time for the entire patient recruitment industry to drop the sleaze and move into the 21st century. I would hope that patient groups and researchers will come together as well to vigorously oppose these kinds of tactics when they encounter them. (*According to the article, Acurian "has said that calls related to medical studies aren't advertisements as defined by law," so we can agree to call them "solicitations".) Full Article Big Data CFF direct mail direct to patient ethics Inspire MJFF patient engagement patient recruitment PatientsLikeMe telemarketing
me Megafund versus Megalosaurus: Funding Drug Development By www.placebocontrol.com Published On :: Mon, 13 Jan 2014 04:36:00 +0000 This new 10-minute TEDMED talk is getting quite a bit of attention: (if embedded video does not work, try the TED site itself.) In it, Roger Stein claims to have created an approach to advancing drugs through clinical trials that will "fundamentally change the way research for cancer and lots of other things gets done". Because the costs of bringing a drug to market are so high, time from discovery to marketing is so long, and the chances of success of any individual drug are so grim, betting on any individual drug is foolish, according to Stein. Instead, risks for a large number of potential assets should be pooled, with the eventual winners paying for the losers. To do this, Stein proposes what he calls a "megafund" - a large collection of assets (candidate therapies). Through some modeling and simulations, Stein suggests some of the qualities of an ideal megafund: it would need in the neighborhood of $3-15 billion to acquire and manage 80-150 drugs. A fund of this size and with these assets would be able to provide an equity yield of about 12%, which would be "right in the investment sweet spot of pension funds and 401(k) plans". Here's what I find striking about those numbers: let's compare Stein's Megafund to everyone's favorite Megalosaurus, the old-fashioned Big Pharma dinosaur sometimes known as Pfizer:

Funding: Megafund (Stein) $3-15 billion; Megalosaurus (Pfizer) $9 billion estimated 2013 R&D spend
Assets: Megafund 80-150; Pfizer 81 in pipeline (plus many more in preclinical)
Return on equity: Megafund 12% (estimated); Pfizer 9.2% (last 10 years) to 13.2% (last 5)

Since Pfizer's a dinosaur, it can't possibly compete with the sleek, modern Megafund, right? Right? These numbers look remarkably similar. Pfizer - and a number of its peers - are spending a Megafund-sized budget each year to shepherd through a Megafund-sized number of compounds.
(Note that many of Pfizer's peers have substantially fewer drugs in their published pipelines, but they own many times more compounds - the pipeline is just the drugs they've elected to file an IND on.) What am I missing here? I understand that a fund is not a company, and there may be some benefits to decoupling asset management decisions from actual operations, but this won't be a tremendous gain, and would presumably be at least partially offset by increased transaction costs (Megafund has to source, contract, manage, and audit vendors to design and run all its trials, after all, and I don't know why I'd think it could do that any more cheaply than Big Pharma can). And having a giant drug pipeline's go/no go decisions made by "financial engineers" rather than pharma industry folks would seem like a scenario that's only really seen as an upgrade by the financial engineers themselves. A tweet from V.S. Schulz pointed me to a post on Derek Lowe's In the Pipeline blog, which led to a link to this paper by Stein and 2 others in Nature Biotechnology from a year and a half ago. The authors spend most of their time differentiating themselves from other structures in the technical, financial details rather than explaining why megafund would work better at finding new drugs. However, they definitely think this is qualitatively different from existing pharma companies, and offer a couple reasons. First, [D]ebt financing can be structured to be more “patient” than private or public equity by specifying longer maturities; 10- to 20-year maturities are not atypical for corporate bonds. ... Such long horizons contrast sharply with the considerably shorter horizons of venture capitalists, and the even shorter quarterly earnings cycle and intra-daily price fluctuations faced by public companies. I'm not sure where this line of thought is coming from.
Certainly all big pharma companies' plans extend decades into the future - there may be quarterly earnings reports to file, but that's a force exerted far more on sales and marketing teams than on drug development. The financing of pharmaceutical development is already extremely long term. Even in the venture-backed world, Stein and team are wrong if they believe there is pervasive pressure to magically deliver drugs in record time. Investors and biotech management are both keenly aware of the tradeoffs between speed and regulatory success. Even this week's came-from-nowhere Cinderella story, Intercept Pharmaceuticals, was founded with venture money over a decade ago - these "longer maturities" are standard issue in biotech. We aren't making iPhone apps here, guys. Second, Although big pharma companies are central to the later stages of drug development and the marketing and distributing of approved drugs, they do not currently play as active a role at the riskier preclinical and early stages of development Again, I'm unsure why this is supposed to be so. Of Pfizer's 81 pipeline compounds, 55 are in Phase 1 or 2 - a ratio that's pretty heavy on early, risky projects, and that's not too different from industry as a whole. Pfizer does not publish data on the number of compounds it currently has undergoing preclinical testing, but there's no clear reason I can think of to assume it's a small number. So, is Megafund truly a revolutionary idea, or is it basically a mathematical deck-chair-rearrangement for the "efficiencies of scale" behemoths we've already got? [Image: the world's first known dino, Megalosaurus, via Wikipedia.] Full Article big pharma drug development Pfizer
me Waiver of Informed Consent - proposed changes in the 21st Century Cures Act By www.placebocontrol.com Published On :: Tue, 14 Jul 2015 20:21:00 +0000 Adam Feuerstein points out - and expresses considerable alarm over - an overlooked clause in the 21st Century Cures Act: Waive informed consent requirement for clinical trials?!?! Unbelievable but true if #Path2Cures becomes law. pic.twitter.com/dqmWPpxPdE — Adam Feuerstein (@adamfeuerstein) July 14, 2015 In another tweet, he suggests that the act will "decimate" informed consent in drug trials. Subsequent responses and retweets did nothing to clarify the situation, and if anything tended to spread, rather than address, Feuerstein's confusion. Below is a quick recap of the current regulatory context and a real-life example of where the new wording may be helpful. In short, though, I think it's safe to say: Waiving informed consent is not new; it's already permitted under current regs The standards for obtaining a waiver of consent are stringent They may, in fact, be too stringent in a small number of situations The act may, in fact, be helpful in those situations Feuerstein may, in fact, need to chill out a little bit (For the purposes of this discussion, I’m talking about drug trials, but I believe the device trial situation is parallel.) Section 505(i) - the section this act proposes to amend - instructs the Secretary of Health and Human Services to promulgate rules regarding clinical research.
Subsection 4 addresses informed consent: …the manufacturer, or the sponsor of the investigation, requir[ing] that experts using such drugs for investigational purposes certify to such manufacturer or sponsor that they will inform any human beings to whom such drugs, or any controls used in connection therewith, are being administered, or their representatives, that such drugs are being used for investigational purposes and will obtain the consent of such human beings or their representatives, except where it is not feasible or it is contrary to the best interests of such human beings. [emphasis mine] Note that this section already recognizes situations where informed consent may be waived for practical or ethical reasons. These rules were in fact promulgated under 45 CFR part 46, section 116. The relevant bit – as far as this conversation goes – regards circumstances under which informed consent might be fully or partially waived. Specifically, there are 4 criteria, all of which need to be met: (1) The research involves no more than minimal risk to the subjects; (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects; (3) The research could not practicably be carried out without the waiver or alteration; and (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation. In practice, this is an especially difficult set of criteria to meet for most studies. Criterion (1) rules out most “conventional” clinical trials, because the hallmarks of those trials (use of an investigational medicine, randomization of treatment, blinding of treatment allocation) are all deemed to be more than “minimal risk”. That leaves observational studies – but even many of these cannot clear the bar of criterion (3). That word “practicably” is a doozy. Here’s an all-too-real example from recent personal experience.
A drug manufacturer wants to understand physicians’ rationales for performing a certain procedure. It seems – but there is little hard data – that a lot of physicians do not strictly follow guidelines on when to perform the procedure. So we devise a study: whenever the procedure is performed, we ask the physician to complete a quick form categorizing why they made their decision. We also ask him or her to transcribe a few pieces of data from the patient chart. Even though the patients aren’t personally identifiable, the collection of medical data qualifies this as a clinical trial. It’s a minimal risk trial, definitely: the trial doesn’t dictate at all what the doctor should do, it just asks him or her to record what they did and why, and supply a bit of medical context for the decision. All told, we estimated 15 minutes of physician time to complete the form. The IRB monitoring the trial, however, denied our request for a waiver of informed consent, since it was “practicable” (not easy, but possible) to obtain informed consent from the patient. Informed consent – even with a slimmed-down form – was going to take a minimum of 30 minutes, so the length of the physician’s involvement tripled. In addition, many physicians opted out of the trial because they felt that the informed consent process added unnecessary anxiety and alarm for their patients, and provided no corresponding benefit. The end result was not surprising: the budget for the trial more than doubled, and enrollment was far below expectations. Which leads to two questions: 1. Did the informed consent appreciably help a single patient in the trial? Very arguably, no. Consenting to being “in” the trial made zero difference in the patients’ care, added time to their stay in the clinic, and possibly added to their anxiety. 2. Was less knowledge collected as a result? Absolutely, yes. The sponsor could have run two studies for the same cost. 
Instead, they ultimately reduced the power of the trial in order to cut losses. Bottom line, it appears that the modifications proposed in the 21st Century Cures Act really only target trials like the one in the example. The language clearly retains criteria 1 and 2 of the current HHS regs, which are the most important from a patient safety perspective, but cuts down the “practicability” requirement, potentially permitting high quality studies to be run with less time and cost. Ultimately, it looks like a very small, but positive, change to the current rules. The rest of the act appears to be a mash-up of some very good and some very bad (or at least not fully thought out) ideas. However, this clause should not be cause for alarm. Full Article 21st Century Cures Act Adam Feuerstein informed consent
me Retention metrics, simplified By www.placebocontrol.com Published On :: Thu, 30 Mar 2017 19:53:00 +0000 [Originally posted on First Patient In] In my experience, most clinical trials do not suffer from significant retention issues. This is a testament to the collaborative good will of most patients who consent to participate, and to the patient-first attitude of most research coordinators. However, in many trials – especially those that last more than a year – the question of whether there is a retention issue will come up at some point while the trial’s still going. This is often associated with a jump in early terminations, which can occur once the first cohort of enrollees has been in the trial for a while. It’s a good question to ask midstream: are we on course to have as many patients fully complete the trial as we’d originally anticipated? However, the way we go about answering the question is often flawed and confusing. Here’s an example: a sponsor came to us with what they thought was a higher rate of early terminations than expected. The main problem? They weren’t actually sure. Here’s their data. Can you tell? [Original retention graph] If you can, please let me know how! While this chart is remarkably ... full of numbers, it provides no actual insight into when patients are dropping out, and no way that I can tell to project eventual total retention. In addition, measuring the “retention rate” as a simple ratio of active to terminated patients will not provide an accurate benchmark until the trial is almost over. Here's why: patients tend to drop out later in a trial, so as long as you’re enrolling new patients, your retention rate will be artificially high. When enrollment ends, your retention rate will appear to drop rapidly – but this is only because of the artificial lift you had earlier. In fact, that was exactly the problem the sponsor had: when enrollment ended, the retention rate started dropping.
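The “artificial lift” described above is easy to see in a toy simulation. All of the numbers below are hypothetical, and the drop-out behavior is deliberately extreme for clarity:

```python
# Toy simulation (hypothetical numbers) of why a naive "retention rate"
# looks fine while enrollment is ongoing and then plunges afterwards.

def naive_retention(enroll_months, months_to_dropout, now):
    """Fraction of patients enrolled by calendar month `now` who have not
    yet dropped out. For simplicity, every patient drops out after
    spending `months_to_dropout` months in the trial (an extreme case)."""
    enrolled = [m for m in enroll_months if m <= now]
    dropped = [m for m in enrolled if now - m >= months_to_dropout]
    return 1 - len(dropped) / len(enrolled)

# 24 patients enroll one per month; each drops out after 12 months on study.
enrollment = list(range(24))
print(naive_retention(enrollment, 12, 6))   # mid-enrollment: 1.0, looks perfect
print(naive_retention(enrollment, 12, 23))  # enrollment just ended: 0.5
print(naive_retention(enrollment, 12, 40))  # long after enrollment: 0.0
```

Nothing about the patients’ behavior changed over those months; only the calendar-based measurement did.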
It’s good to be concerned, but it’s also important to know how to answer the question. Fortunately, there is a very simple way to get a clear answer in most cases – one that’s probably already in use by your biostats team around the corner: the Kaplan-Meier “survival” curve. Here is the same study data, but with patient retention depicted as a K-M graph. [Kaplan-Meier retention graph] The key difference is that instead of calendar dates, we used the relative measure of time in the trial for each patient. That way we can easily spot where the trends are. In this case, we were able to establish quickly that patient drop-outs were accumulating at a relatively small, constant rate, with a higher percentage of drops coinciding with the one-year study visit. Most importantly, we were able to very accurately predict the eventual number of patients who would complete the trial. And it only took one graph! Full Article metrics patient retention
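The Kaplan-Meier estimate the post describes can be computed in a few lines. The sketch below uses hypothetical data (a real analysis would typically use a library such as lifelines); each patient contributes their time in the trial and whether that time ended in a drop-out:

```python
# Minimal Kaplan-Meier retention curve in pure Python (hypothetical data).
# Each patient contributes (days_in_trial, dropped_out); patients who are
# still active or who completed the trial are censored (dropped_out=False).
from collections import Counter

def kaplan_meier(observations):
    """Return [(t, S(t))]: the estimated probability of remaining in the
    trial beyond each time t at which at least one drop-out occurred."""
    drop_counts = Counter(t for t, dropped in observations if dropped)
    n_at_risk = len(observations)
    curve, surv = [], 1.0
    for t in sorted({t for t, _ in observations}):
        d = drop_counts.get(t, 0)
        if d:  # at each drop-out time, multiply survival by (1 - d / n_at_risk)
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        # everyone whose observation ends at time t leaves the risk set
        n_at_risk -= sum(1 for ot, _ in observations if ot == t)
    return curve

# Hypothetical data: drop-outs at day 5 and day 12; censored at 10, 12, 20.
data = [(5, True), (10, False), (12, True), (12, False), (20, False)]
curve = kaplan_meier(data)  # drop at day 5 -> S = 0.8; day 12 -> S ~ 0.53
```

Because time is measured per patient rather than by calendar date, patients still active simply contribute censored observations, which is what lets the curve be read midstream without the artificial lift of the naive ratio.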
me A human bird flu case is thought to be found in Canada for the first time By www.npr.org Published On :: Mon, 11 Nov 2024 03:18:54 -0500 A person has tested positive in British Columbia, Canadian health officials said, though the results must be sent to another lab for confirmation. Full Article
me What does a 2nd Trump term mean for the Affordable Care Act? By www.npr.org Published On :: Mon, 11 Nov 2024 04:47:01 -0500 President-elect Donald Trump tried unsuccessfully to get rid of the Affordable Care Act during his first term. What action will he take this time around? Full Article
me Is AI Search a Medical Misinformation Disaster? By spectrum.ieee.org Published On :: Thu, 13 Jun 2024 13:00:04 +0000 Last month when Google introduced its new AI search tool, called AI Overviews, the company seemed confident that it had tested the tool sufficiently, noting in the announcement that “people have already used AI Overviews billions of times through our experiment in Search Labs.” The tool doesn’t just return links to Web pages, as in a typical Google search, but returns an answer that it has generated based on various sources, which it links to below the answer. But immediately after the launch, users began posting examples of extremely wrong answers, including a pizza recipe that included glue and the interesting fact that a dog has played in the NBA.

While the pizza recipe is unlikely to convince anyone to squeeze on the Elmer’s, not all of AI Overviews’ extremely wrong answers are so obvious, and some have the potential to be quite harmful. Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory, and has a new book out about the online propagandists who “turn lies into reality.” She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke to her about whether AI search is likely to bring an onslaught of erroneous medical advice to unwary users.

I know you’ve been tracking disinformation on the Web for many years. Do you expect the introduction of AI-augmented search tools like Google’s AI Overviews to make the situation worse or better?

Renée DiResta: It’s a really interesting question. There are a couple of policies that Google has had in place for a long time that appear to be in tension with what’s coming out of AI-generated search.
That’s made me feel like part of this is Google trying to keep up with where the market has gone. There’s been an incredible acceleration in the release of generative AI tools, and we are seeing Big Tech incumbents trying to make sure that they stay competitive. I think that’s one of the things that’s happening here. We have long known that hallucinations are a thing that happens with large language models. That’s not new. It’s the deployment of them in a search capacity that I think has been rushed and ill-considered, because people expect search engines to give them authoritative information. That’s the expectation you have on search, whereas you might not have that expectation on social media.

There are plenty of examples of comically poor results from AI search, things like how many rocks we should eat per day [a response drawn from an Onion article]. But I’m wondering if we should be worried about more serious medical misinformation. I came across one blog post about Google’s AI Overviews responses about stem-cell treatments. The problem there seemed to be that the AI search tool was sourcing its answers from disreputable clinics that were offering unproven treatments. Have you seen other examples of that kind of thing?

DiResta: I have. It’s returning information synthesized from the data that it’s trained on. The problem is that it does not seem to be adhering to the same standards that have long gone into how Google thinks about returning search results for health information. What I mean by that is that Google has, for upwards of 10 years at this point, had a search policy called Your Money or Your Life. Are you familiar with that?

I don’t think so.

DiResta: Your Money or Your Life acknowledges that for queries related to finance and health, Google has a responsibility to hold search results to a very high standard of care, and it’s paramount to get the information correct.
People are coming to Google with sensitive questions and they’re looking for information to make materially impactful decisions about their lives. They’re not there for entertainment when they’re asking a question about how to respond to a new cancer diagnosis, for example, or what sort of retirement plan they should be subscribing to. So you don’t want content farms and random Reddit posts and garbage to be the results that are returned. You want to have reputable search results.

That framework of Your Money or Your Life has informed Google’s work on these high-stakes topics for quite some time. And that’s why I think it’s disturbing for people to see the AI-generated search results regurgitating clearly wrong health information from low-quality sites that perhaps happened to be in the training data.

So it seems like AI Overviews is not following that same policy, or at least that’s what it appears like from the outside?

DiResta: That’s how it appears from the outside. I don’t know how they’re thinking about it internally. But those screenshots you’re seeing (a lot of these instances are being traced back to an isolated social media post, or to a clinic that’s disreputable but does exist) are out there on the Internet. It’s not simply making things up. But it’s also not returning what we would consider to be a high-quality result in formulating its response.

I saw that Google responded to some of the problems with a blog post saying that it is aware of these poor results and is trying to make improvements. I can read you the one bullet point that addressed health. It said, “For topics like news and health, we already have strong guardrails in place.
In the case of health, we launched additional triggering refinements to enhance our quality protections.” Do you know what that means?

DiResta: That blog post is an explanation that [AI Overviews] isn’t simply hallucinating; the fact that it’s pointing to URLs is supposed to be a guardrail, because that enables the user to go and follow the result to its source. This is a good thing. They should be including those sources for transparency and so that outsiders can review them. However, it is also a fair bit of onus to put on the audience, given the trust that Google has built up over time by returning high-quality results in its health information search rankings.

I know one topic that you’ve tracked over the years has been disinformation about vaccine safety. Have you seen any evidence of that kind of disinformation making its way into AI search?

DiResta: I haven’t, though I imagine outside research teams are now testing results to see what appears. Vaccines have been so much a focus of the conversation around health misinformation for quite some time that I imagine Google has had people looking specifically at that topic in internal reviews, whereas some of these other topics might be less in the forefront of the minds of the quality teams tasked with checking whether bad results are being returned.

What do you think Google’s next moves should be to prevent medical misinformation in AI search?

DiResta: Google has a perfectly good policy to pursue. Your Money or Your Life is a solid ethical guideline to incorporate into this manifestation of the future of search. So it’s not that I think there’s a new and novel ethical grounding that needs to happen. I think it’s more ensuring that the ethical grounding that exists remains foundational to the new AI search tools. Full Article Ai search Google Disinformation Generative ai Large language models Health Medicine Search