
Nearly 200 women were prescribed valproate during pregnancy between April 2018 and September 2020

Some 180 women were prescribed valproate, a medicine used to treat epilepsy and bipolar disorder, during their pregnancy over a 2.5-year period, NHS data has revealed.





Everything you should know about the coronavirus pandemic

The latest information about the novel coronavirus identified in Wuhan, China, and advice on how pharmacists can help concerned patients and the public.





Semaglutide effective for weight loss in non-diabetic adults, research suggests

The type 2 diabetes mellitus drug semaglutide is effective for weight loss in non-diabetic overweight or obese adults, when taken alongside a reduced-calorie diet and exercise, researchers have found.





MHRA to consult on making two progestogen-only contraceptives available without a prescription

Consultations on the reclassification of two progestogen-only contraceptive pills from prescription-only to pharmacy medicines have been launched.





Everything you need to know about the COVID-19 therapy trials

Researchers around the world are working at record speed to find the best ways to treat and prevent COVID-19, from investigating the possibility of repurposing existing drugs to searching for novel therapies against the virus.





Overseas candidates will be allowed to sit registration assessment remotely, regulator says

The General Pharmaceutical Council has said most candidates living in countries with a two-hour or more time difference from the UK will be able to apply to sit the registration assessment at home.





Health boards say around half of pharmacies have expressed interest in providing COVID-19 vaccines

Around half of Wales’ community pharmacies have expressed interest to health boards in providing COVID-19 vaccinations as part of the national programme.





New drug cuts the risk of death in bladder cancer by 30% compared with chemotherapy, study suggests

A new type of drug that targets chemotherapy directly to cancer cells reduces the risk of death from the most common type of bladder cancer by 30%, a phase III trial in the New England Journal of Medicine has suggested.





Half of asthma patients in the UK overusing SABAs, study finds

More than half of patients with asthma in the UK are “potentially overusing” short-acting β2-agonists, according to research.





RPS pays tribute to pharmacy law and ethics pioneer Joy Wingfield

The Royal Pharmaceutical Society has expressed its sadness at the death of Joy Wingfield, honorary professor of Pharmacy Law and Ethics at the University of Nottingham.





IFM’s Hat Trick and Reflections On Option-To-Buy M&A

Today IFM Therapeutics announced the acquisition of IFM Due, one of its subsidiaries, by Novartis. Back in Sept 2019, IFM granted Novartis the right to acquire IFM Due as part of an “option to buy” collaboration around cGAS-STING antagonists for

The post IFM’s Hat Trick and Reflections On Option-To-Buy M&A appeared first on LifeSciVC.





The Biotech Startup Contraction Continues… And That’s A Good Thing

Venture creation in biotech is witnessing a sustained contraction. After the pandemic bubble’s over-indulgence, the venture ecosystem appears to have reset its pace of launching new startups. According to the latest Pitchbook data, venture creation in biotech hit its slowest

The post The Biotech Startup Contraction Continues… And That’s A Good Thing appeared first on LifeSciVC.





Stars and Scars… Some Lessons Learned About Leadership

By Arthur O. Tzianabos, PhD, CEO of Lifordi Immunotherapeutics, as part of the From the Trenches feature of LifeSciVC As the biotech industry continues to pick up steam, I have been getting a number of phone calls from folks in

The post Stars and Scars… Some Lessons Learned About Leadership appeared first on LifeSciVC.





Mariana Oncology’s Radiopharm Platform Acquired By Novartis

Novartis recently announced the acquisition of Mariana Oncology, an emerging biotech focused on advancing a radioligand therapeutics platform, for up to $1.75 billion in upfronts and future milestones. The capstone of its three short years of operations, this acquisition represents

The post Mariana Oncology’s Radiopharm Platform Acquired By Novartis appeared first on LifeSciVC.





Boiling It Down: Conveying Complexity For Decision-makers

By Ankit Mahadevia, former CEO of Spero Therapeutics, as part of the From The Trenches feature of LifeSciVC Drug development is complex. So is running a business. Sometimes, the work of doing both can make your head spin. In my

The post Boiling It Down: Conveying Complexity For Decision-makers appeared first on LifeSciVC.





To B or Not to (Series) B

By Robert Clarke, CEO of Kinaset Therapeutics, as part of the From The Trenches feature of LifeSciVC Strategic considerations of when and how to consider raising additional capital to support clinical development in an improving but still volatile market. As

The post To B or Not to (Series) B appeared first on LifeSciVC.






A Primer on Early-Stage Biotech VC

By Aimee Raleigh, Principal at Atlas Venture, as part of the From The Trenches feature of LifeSciVC From the outside, one might assume all biotech venture capital (VC) firms are more similar than different. However, once you look under the

The post A Primer on Early-Stage Biotech VC appeared first on LifeSciVC.





A Biotech Midsummer’s Madness

By Arthur Tzianabos, CEO of Lifordi Immunotherapeutics, as part of the From The Trenches feature of LifeSciVC Greetings from Lake Winnipesaukee in NH where I am at this time every year. It’s midsummer and vacation time for me and the

The post A Biotech Midsummer’s Madness appeared first on LifeSciVC.





Reflections On My Experience As A Board Member

By Ivana Magovčević-Liebisch, CEO of Vigil Neuroscience, as part of the From The Trenches feature of LifeSciVC In an industry where boom and bust cycles occur regularly and 90 percent of drug candidates fail to reach the market, an outstanding

The post Reflections On My Experience As A Board Member appeared first on LifeSciVC.





Biotech Risk Cycles: Assets And Platforms

Today’s market likes products. Platforms aren’t in vogue anymore. Investors, especially in the public markets, only want late stage de-risked assets. Pharma only seems to be buying these kinds of assets. VCs need to focus on clinical stage companies. Or

The post Biotech Risk Cycles: Assets And Platforms appeared first on LifeSciVC.





AllTrials guide to asking academic institutions about missing results

When university and hospital trusts were called to the UK parliament last year to answer questions on why they were not following the rules on reporting results, we saw how effective the questioning from politicians was. Those of you who watched the parliamentary session saw the pressure the university representatives were put under. Because the politicians asked […]





Half of US clinical trials are breaking the law on reporting results

New research has shown that the majority of clinical trials which should be following the US law on reporting results aren’t. Less than half (41%) of clinical trial results were reported on time and 1 in 3 trials (36%) remain unreported. The research also found that clinical trials sponsored by companies are the most likely […]





Hundreds of clinical trials ruled to be breaking the law

A judge in New York has ruled that hundreds of clinical trials registered on ClinicalTrials.gov are breaking the law by not reporting results. The ruling came in a court case launched against the US Department of Health and Human Services by two plaintiffs, a family doctor and a professor of journalism. The case focused on […]





Preview of Enrollment Analytics: Moving Beyond the Funnel (Shameless DIA Self-Promotion, Part 2)


Are we looking at our enrollment data in the right way?


I will be chairing a session on Tuesday on this topic, joined by a couple of great presenters (Diana Chung from Gilead and Gretchen Goller from PRA).

Here's a short preview of the session:



Hope to see you there. It should be a great discussion.

Session Details:

June 25, 1:45PM - 3:15PM

  • Session Number: 241
  • Room Number: 205B


1. Enrollment Analytics: Moving Beyond the Funnel
Paul Ivsin
VP, Consulting Director
CAHG Clinical Trials

2. Use of Analytics for Operational Planning
Diana Chung, MSc
Associate Director, Clinical Operations
Gilead

3. Using Enrollment Data to Communicate Effectively with Sites
Gretchen Goller, MA
Senior Director, Patient Access and Retention Services
PRA






Brazen Scofflaws? Are Pharma Companies Really Completely Ignoring FDAAA?

Results reporting requirements are pretty clear. Maybe critics should re-check their methods?

Ben Goldacre has rather famously described the clinical trial reporting requirements in the Food and Drug Administration Amendments Act of 2007 as a “fake fix” that was being thoroughly “ignored” by the pharmaceutical industry.

Pharma: breaking the law in broad daylight?
He makes this sweeping, unconditional proclamation about the industry and its regulators on the basis of  a single study in the BMJ, blithely ignoring the fact that a) the authors of the study admitted that they could not adequately determine the number of studies that were meeting FDAAA requirements and b) a subsequent FDA review that identified only 15 trials potentially out of compliance, out of a pool of thousands.


Despite the fact that the FDA, which has access to more data, says that only a tiny fraction of studies are potentially noncompliant, Goldacre's frequently repeated claim that the law is being ignored seems to have caught on in the general run of journalistic and academic discussions about FDAAA.

And now there appears to be additional support for the idea that a large percentage of studies are noncompliant with FDAAA results reporting requirements, in the form of a new study in the Journal of Clinical Oncology: "Public Availability of Results of Trials Assessing Cancer Drugs in the United States" by Thi-Anh-Hoa Nguyen, et al. In it, the authors report even lower levels of FDAAA compliance – a mere 20% of randomized clinical trials met the requirement of posting results on clinicaltrials.gov within one year.

Unsurprisingly, the JCO results were immediately picked up and circulated uncritically by the usual suspects.

I have to admit not knowing much about pure academic and cooperative group trial operations, but I do know a lot about industry-run trials – simply put, I find the data as presented in the JCO study impossible to believe. Everyone I work with in pharma trials is painfully aware of the regulatory environment they work in. FDAAA compliance is a given, a no-brainer: large internal legal and compliance teams are everywhere, ensuring that the letter of the law is followed in clinical trial conduct. If anything, pharma sponsors are twitchily over-compliant with these kinds of regulations (for example, most still adhere to 100% verification of source documentation – sending monitors to physically examine every single record of every single enrolled patient - even after the FDA explicitly told them they didn't have to).

I realize that’s anecdotal evidence, but when such behavior is so pervasive, it’s difficult to buy into data that says it’s not happening at all. The idea that all pharmaceutical companies are ignoring a highly visible law that’s been on the books for 6 years is extraordinary. Are they really so brazenly breaking the rules? And is FDA abetting them by disseminating incorrect information?

Those are extraordinary claims, and would seem to require extraordinary evidence. The BMJ study had clear limitations that make its implications entirely unclear. Is the JCO article any better?

Some Issues


In fact, there appear to be at least two major issues that may have seriously compromised the JCO findings:

1. Studies that were certified as being eligible for delayed reporting requirements, but do not have their certification date listed.

The study authors make what I believe to be a completely unwarranted assumption:

In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.

It’s unclear to me why the authors think the certifications “should be” publicly available. In re-reading FDAAA section 801, I don’t see any reference to that being a requirement. I suppose I could have missed it, but the authors provide a citation to a page that clearly does not list any such requirement.

But their methodology assumes that all trials that have a certification will have it posted:

If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov.

If a sponsor gets approval from FDA to delay reporting (as is routine for all drugs that are either not approved for any indication, or being studied for a new indication – i.e., the overwhelming majority of pharma drug trials), but doesn't post that approval on the registry, the JCO authors deem that trial “noncompliant”. This is not warranted: the company may have simply chosen not to post the certification despite being entirely FDAAA compliant.

2. Studies that were previously certified for delayed reporting and subsequently reported results

It is hard to tell how the authors treated this rather-substantial category of trials. If a trial was certified for delayed results reporting, but then subsequently published results, the certification date becomes difficult to find. Indeed, it appears that in cases where results were posted, the authors simply looked at the time from study completion to results posting. In effect, this would re-classify almost every single one of these trials from compliant to non-compliant. Consider this example trial:


  • Phase 3 trial completes January 2010
  • Certification of delayed results obtained December 2010 (compliant)
  • FDA approval June 2013
  • Results posted July 2013 (compliant)


In looking at the JCO paper's methods section, it really appears that this trial would be classified as reporting results 3.5 years after completion, and therefore be considered noncompliant with FDAAA. In fact, this trial is entirely kosher, and would be extremely typical for many phase 2 and 3 trials in industry.
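The difference between the two readings can be sketched in a few lines. This is purely illustrative: the dates are the hypothetical trial's above, and the flat one-year windows stand in for the FDAAA deadlines as described in this post, not the statute's exact day-counting rules.

```python
from datetime import date

# Hypothetical trial from the example above (illustrative dates).
completion    = date(2010, 1, 31)
certification = date(2010, 12, 1)   # delayed-reporting certification obtained
results       = date(2013, 7, 1)    # results posted after FDA approval

ONE_YEAR = 365  # simplified stand-in for the statutory one-year window

# Naive reading (what the JCO methods appear to do): measure only the
# gap from study completion to results posting.
naive_compliant = (results - completion).days <= ONE_YEAR

# Certification-aware reading: a certification filed within a year of
# completion lawfully defers the results deadline.
certified_on_time = (certification - completion).days <= ONE_YEAR

print(naive_compliant)    # False -> the trial gets counted as noncompliant
print(certified_on_time)  # True  -> yet it was actually compliant all along
```

The same trial flips from "noncompliant" to "compliant" depending solely on whether the certification date enters the calculation, which is the crux of the objection.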

Time for Some Data Transparency


The above two concerns may, in fact, be non-issues. They certainly appear to be implied in the JCO paper, but the wording isn't terribly detailed and could easily be giving me the wrong impression.

However, if either or both of these issues are real, they may affect the vast majority of "noncompliant" trials in this study. Given the fact that most clinical trials are either looking at new drugs, or looking at new indications for already-approved drugs, these two issues may entirely explain the gap between the JCO study and the unequivocal FDA statements that contradict it.

I hope that, given the importance of transparency in research, the authors will be willing to post their data set publicly so that others can review their assumptions and independently verify their conclusions. It would be more than a bit ironic otherwise.

[Image credit: Shameless lawlessness via Flickr user willytronics.]


Thi-Anh-Hoa Nguyen, Agnes Dechartres, Soraya Belgherbi, and Philippe Ravaud (2013). Public Availability of Results of Trials Assessing Cancer Drugs in the United States. Journal of Clinical Oncology. DOI: 10.1200/JCO.2012.46.9577





Questionable Enrollment Math at the UK's NIHR

There has been considerable noise coming out of the UK lately about successes in clinical trial enrollment.

First, a couple months ago came the rather dramatic announcement that clinical trial participation in the UK had "tripled over the last 6 years". That announcement, by the chief executive of the National Institute for Health Research's Clinical Research Network, was quickly and uncritically picked up by the media.

[Image caption: Sweet creature of bombast: is Sir John writing press releases for the NIHR?]

That immediately caught my attention. In large, global trials, most pharmaceutical companies I've worked with can do a reasonable job of predicting accrual levels in a given country. I like to think that if participation rates in any given country had jumped that heavily, I’d have heard something.

(To give an example: looking at a quite-typical study I worked on a few years ago: UK sites were overall slightly below the global average. The highest-enrolling countries were about 2.5 times as fast. So, a 3-fold increase in accruals would have catapulted the UK from below average to the fastest-enrolling country in the world.)

Further inquiry, however, failed to turn up any evidence that the reported tripling actually corresponded to more human beings enrolled in clinical trials. Instead, there is some reason to believe that all we witnessed was increased reporting of trial participation numbers.

Now we have a new source of wonder, and a new giant multiplier coming out of the UK. As the Director of the NIHR's Mental Health Research Network, Til Wykes, put it in her blog coverage of her own paper:
Our research on the largest database of UK mental health studies shows that involving just one or two patients in the study team means studies are 4 times more likely to recruit successfully.
Again, amazing! And not just a tripling – a quadrupling!

Understand: I spend a lot of my time trying to convince study teams to take a more patient-focused approach to clinical trial design and execution. I desperately want to believe this study, and I would love having hard evidence to bring to my clients.

At first glance, the data set seems robust. From the King's College press release:
Published in the British Journal of Psychiatry, the researchers analysed 374 studies registered with the Mental Health Research Network (MHRN).
Studies which included collaboration with service users in designing or running the trial were 1.63 times more likely to recruit to target than studies which only consulted service users.  Studies which involved more partnerships - a higher level of Patient and Public Involvement (PPI) - were 4.12 times more likely to recruit to target.
But here the first crack appears. It's clear from the paper that the analysis of recruitment success was not based on 374 studies, but rather a much smaller subset of 124 studies. That's not mentioned in either of the above-linked articles.

And at this point, we have to stop, set aside our enthusiasm, and read the full paper. And at this point, critical doubts begin to spring up, pretty much everywhere.

First and foremost: I don’t know any nice way to say this, but the "4 times more likely" line is, quite clearly, a fiction. What is reported in the paper is a 4.12 odds ratio between "low involvement" studies and "high involvement" studies (more on those terms in just a bit).  Odds ratios are often used in reporting differences between groups, but they are unequivocally not the same as "times more likely than".

This is not a technical statistical quibble. The authors unfortunately don’t provide the actual success rates for different kinds of studies, but here is a quick example that, given other data they present, is probably reasonably close:

  • A Studies: 16 successful out of 20 
    • Probability of success: 80% 
    • Odds of success: 4 to 1
  • B Studies: 40 successful out of 80
    • Probability of success: 50%
    • Odds of success: 1 to 1

From the above, it’s reasonable to conclude that A studies are 60% more likely to be successful than B studies (the A studies are 1.6 times as likely to succeed). However, the odds ratio is 4.0, similar to the difference in the paper. It makes no sense to say that A studies are 4 times more likely to succeed than B studies.
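The arithmetic in that example can be checked in a few lines (the counts are the illustrative ones above, not data from the MHRN paper):

```python
# Illustrative counts from the example above (not data from the paper).
a_success, a_total = 16, 20
b_success, b_total = 40, 80

p_a = a_success / a_total            # probability of success for A: 0.80
p_b = b_success / b_total            # probability of success for B: 0.50

risk_ratio = p_a / p_b               # how many "times as likely" A is vs B

odds_a = p_a / (1 - p_a)             # odds of success for A: 4 to 1
odds_b = p_b / (1 - p_b)             # odds of success for B: 1 to 1
odds_ratio = odds_a / odds_b

print(round(risk_ratio, 2))   # 1.6 -> A studies are 1.6x as likely to succeed
print(round(odds_ratio, 2))   # 4.0 -> yet the odds ratio is 4
```

The gap between the two numbers is the whole point: an odds ratio of 4 is compatible with a far more modest difference in actual success rates.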

This is elementary stuff. I’m confident that everyone involved in the conduct and analysis of the MHRN paper knows this already. So why would Dr Wykes write this? I don’t know; it's baffling. Maybe someone with more knowledge of the politics of British medicine can enlighten me.

If a pharmaceutical company had promoted a drug with this math, the warning letters and fines would be flying in the door fast. And rightly so. But if a government leader says it, it just gets recycled verbatim.

The other part of Dr Wykes's statement is almost equally confusing. She claims that the enrollment benefit occurs when "involving just one or two patients in the study team". However, involving one or two patients would seem to correspond to either the lowest ("patient consultation") or the middle level of reported patient involvement (“researcher initiated collaboration”). In fact, the "high involvement" categories that are supposed to be associated with enrollment success are studies that were either fully designed by patients, or were initiated by patients and researchers equally. So, if there is truly a causal relationship at work here, improving enrollment would not be merely a function of adding a patient or two to the conversation.

There are a number of other frustrating aspects of this study as well. It doesn't actually measure patient involvement in any specific research program, but uses just 3 broad categories (that the researchers specified at the beginning of each study). It uses an arbitrary and undocumented 17-point scale to measure "study complexity", which collapses and quite likely underweights many critical factors into a single number. The enrollment analysis excluded 11 studies because they weren't adequate for a factor that was later deemed non-significant. And probably the most frustrating facet of the paper is that the authors share absolutely no descriptive data about the studies involved in the enrollment analysis. It would be completely impossible to attempt to replicate its methods or verify its analysis. Do the authors believe that "Public Involvement" is only good when it’s not focused on their own work?

However, my feelings about the study and paper are an insignificant fraction of the frustration I feel about the public portrayal of the data by people who should clearly know better. After all, limited evidence is still evidence, and every study can add something to our knowledge. But the public misrepresentation of the evidence by leaders in the area can only do us harm: it has the potential to actively distort research priorities and funding.

Why This Matters

We all seem to agree that research is too slow. Low clinical trial enrollment wastes time, money, and the health of patients who need better treatment options.

However, what's also clear is that we lack reliable evidence on what activities enable us to accelerate the pace of enrollment without sacrificing quality. If we are serious about improving clinical trial accrual, we owe it to our patients to demand robust evidence for what works and what doesn’t. Relying on weak evidence that we've already solved the problem ("we've tripled enrollment!") or have a method to magically solve it ("PPI quadrupled enrollment!") will cause us to divert significant time, energy, and human health into areas that are politically favored but less than certain to produce benefit. And the overhyping of those results by research leadership compounds that problem substantially. NIHR leadership should reconsider its approach to public discussion of its research, and practice what it preaches: critical assessment of the data.

[Update Sept. 20: The authors of the study have posted a lengthy comment below. My follow-up is here.]
 
[Image via Flickr user Elliot Brown.]


Ennis L & Wykes T (2013). Impact of patient involvement in mental health research: longitudinal study. The British Journal of Psychiatry. PMID: 24029538






Questionable Enrollment Math(s) - the Authors Respond

The authors of the study I blogged about on Monday were kind enough to post a lengthy comment, responding in part to some of the issues I raised. I thought their response was interesting, and so reprint it in its entirety below, interjecting my own reactions as well.

There were a number of points you made in your blog and the title of questionable maths was what caught our eye and so we reply on facts and provide context.

Firstly, this is a UK study where the vast majority of UK clinical trials take place in the NHS. It is about patient involvement in mental health studies - an area where recruitment is difficult because of stigma and discrimination.

I agree, in hindsight, that I should have titled the piece “questionable maths” rather than my Americanized “questionable math”. Otherwise, I think this is fine, although I’m not sure that anything here differs from my post.

1. Tripling of studies - You dispute NIHR figures recorded on a national database and support your claim with a lone anecdote - hardly data that provides confidence. The reason we can improve recruitment is that NIHR has a Clinical Research Network which provides extra staff, within the NHS, to support high quality clinical studies and has improved recruitment success.

To be clear, I did not “dispute” the figures so much as I expressed sincere doubt that those figures correspond with an actual increase in actual patients consenting to participate in actual UK studies. The anecdote explains why I am skeptical – it's a bit like I've been told there was a magnitude 8 earthquake in Chicago, but neither I nor any of my neighbors felt anything. There are many reasons why reported numbers can increase in the absence of an actual increase. It’s worth noting that my lack of confidence in the NIHR's claims appears to be shared by the 2 UK-based experts quoted by Applied Clinical Trials in the article I linked to.

2. Large database: We have the largest database of detailed study information and patient involvement data - I have trawled the world for a bigger one and NIMH say there certainly isn't one in the USA. This means few places where patient impact can actually be measured
3. Number of studies: The database has 374 studies which showed among other results that service user involvement increased over time probably following changes by funders e.g. NIHR requests information in the grant proposal on how service users have been and will be involved - one of the few national funders to take this issue seriously.

As far as I can tell, neither of these points is in dispute.

4. Analysis of patient involvement involves the 124 studies that have completed. You cannot analyse recruitment success unless then.

I agree you cannot analyze recruitment success in studies that have not yet completed. My objection is that in both the KCL press release and the NIHR-authored Guardian article, the only number mentioned is 374, and references to the recruitment success findings came immediately after references to that number. For example:

Published in the British Journal of Psychiatry, the researchers analysed 374 studies registered with the Mental Health Research Network (MHRN).
Studies which included collaboration with service users in designing or running the trial were 1.63 times more likely to recruit to target than studies which only consulted service users.  Studies which involved more partnerships - a higher level of Patient and Public Involvement (PPI) - were 4.12 times more likely to recruit to target.

The above quote clearly implies that the recruitment conclusions were based on an analysis of 374 studies – a sample 3 times larger than the sample actually used. I find this disheartening.

The complexity measure was developed following a Delphi exercise with clinicians, clinical academics and study delivery staff to include variables likely to be barriers to recruitment. It predicts delivery difficulty (meeting recruitment & delivery staff time). But of course you know all that as it was in the paper.

Yes, I did know this, and yes, I know it because it was in the paper. In fact, that’s all I know about this measure, which is what led me to characterize it as “arbitrary and undocumented”. To believe that all aspects of protocol complexity that might negatively affect enrollment have been adequately captured and weighted in a single 17-point scale requires a leap of faith that I am not, at the moment, able to make. The extraordinary claim that all complexity issues have been accounted for in this model requires extraordinary evidence, and “we conducted a Delphi exercise” does not suffice.  

6. All studies funded by NIHR partners were included – we only excluded studies funded without peer review, not won competitively. For the involvement analysis we excluded industry studies because of not being able to contact end users and where inclusion compromised our analysis reliability due to small group sizes.

It’s only that last bit I was concerned about. Specifically, the 11 studies that were excluded due to being in “clinical groups” that were too small, despite the fact that “clinical groups” appear to have been excluded as non-significant from the final model of recruitment success.

(Also: am I being whooshed here? In a discussion of "questionable math" the authors' enumeration goes from 4 to 6. I’m going to take the miscounting here as a sly attempt to see if I’m paying attention...)

I am sure you are aware of the high standing of the journal and its robust peer review. We understand that our results must withstand the scrutiny of other scientists but many of your comments were unwarranted. This is the first in the world to investigate patient involvement impact. No other databases apart from the one held by the NIHR Mental Health Research Network is available to test – we only wish they were.

I hope we can agree that peer review – no matter how "high standing" the journal – is not a shield against concern and criticism. Despite the length of your response, I’m still at a loss as to which of my comments specifically were unwarranted.

In fact, I feel that I noted very clearly that my concerns about the study’s limitations were minuscule compared to my concerns about the extremely inaccurate way that the study has been publicized by the authors, KCL, and the NIHR. Even if I conceded every possible criticism of the study itself, there remains the fact that in public statements, you
  1. Misstated an odds ratio of 4 as “4 times more likely to”
  2. Overstated the recruitment success findings as being based on a sample 3 times larger than it actually was
  3. Re-interpreted, without reservation, a statistical association as a causal relationship
  4. Misstated the difference between the patient involvement categories as being a matter of merely “involving just one or two patients in the study team”
And you did these consistently and repeatedly – in Dr Wykes's blog post, in the KCL press release, and in the NIHR-written Guardian article.

To use the analogy from my previous post: if a pharmaceutical company had committed these acts in public statements about a new drug, public criticism would have been loud and swift.

Your comment on the media coverage of odds ratios is an issue that scientists need to overcome (there is even a section in Wikipedia).

It's highly unfair to blame "media coverage" for the use of an odds ratio as if it were a relative risk ratio. In fact, the first instance of "4 times more likely" appears in Dr Wykes's own blog post. It's repeated in the KCL press release, so you yourselves appear to have been the source of the error.

You point out the base rate issue but of course in a logistic regression you also take into account all the other variables that may impinge on the outcome prior to assessing the effects of our key variable patient involvement - as we did – and showed that the odds ratio is 4.12 - So no dispute about that. We have followed up our analysis to produce a statement that the public will understand. Using the following equations:
Model-predicted recruitment, lowest level of involvement: exp(2.489 - 0.193*8.8 - 1.477) / (1 + exp(2.489 - 0.193*8.8 - 1.477)) = 0.33
Model-predicted recruitment, highest level of involvement: exp(2.489 - 0.193*8.8 - 1.477 + 1.415) / (1 + exp(2.489 - 0.193*8.8 - 1.477 + 1.415)) = 0.67
For a study of typical complexity without a follow up increasing involvement from the lowest to the highest levels increased recruitment from 33% to 66% i.e. a doubling.
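Translating the authors' own equations into code makes the distinction plain. The coefficients below (intercept 2.489, complexity slope -0.193 at the mean complexity of 8.8, follow-up term -1.477, involvement term 1.415) are taken directly from their reply; nothing else is assumed:

```python
import math

def inv_logit(x):
    """Logistic function: converts log-odds to a probability."""
    return math.exp(x) / (1 + math.exp(x))

# Linear predictor for a study of typical complexity (8.8), no follow-up,
# lowest level of patient involvement -- exactly as in the authors' equations.
base = 2.489 - 0.193 * 8.8 - 1.477

p_low = inv_logit(base)            # predicted recruitment, lowest involvement (~0.33)
p_high = inv_logit(base + 1.415)   # predicted recruitment, highest involvement (~0.67)

odds_ratio = math.exp(1.415)       # ~4.12, the figure the authors reported
relative_risk = p_high / p_low     # ~2.0: a doubling, not "4 times more likely"
```

The model's odds ratio really is about 4.12, but at these base rates that corresponds to roughly twice the predicted recruitment probability, which is exactly why presenting it as "4 times more likely" misleads.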

So then, you agree that your prior use of “4 times more likely” was not true? Would you be willing to concede that in more or less direct English?

This is important and is the first time that impact has been shown for patient involvement on the study success.
Luckily in the UK we have a network that now supports clinicians to be involved and a system for ensuring study feasibility.
The addition of patient involvement is the additional bonus that allows recruitment to increase over time and so cutting down the time for treatments to get to patients.

No, and no again. This study shows an association in a model. The gap between that and a causal relationship is far too vast to gloss over in this manner.

In summary, I thank the authors for taking the time to respond, but I feel they've overreacted to my concerns about the study, and seriously underreacted to my more important concerns about their public overhyping of the study.

I believe this study provides useful, though limited, data about the potential relationship between patient engagement and enrollment success. On the other hand, I believe the public positioning of the study by its authors and their institutions has been exaggerated and distorted in clearly unacceptable ways. I would ask the authors to seriously consider issuing public corrections on the 4 points listed above.






Brave New Biopharm Blogging

Although a few articles on this site are older, I really only began blogging in earnest about 15 months ago. However, I suppose that's long enough that I can count myself as at least somewhat established, and take a moment to welcome and encourage some interesting newcomers to the scene.
 
Bloggers in dank basements, their natural habitat.
There are 3 relative newcomers that I've found really interesting, all with very different perspectives on drug development and clinical research:


The Big Pharma insider.
With the exception of John LaMattina (the former Pfizer exec who regularly provides seriously thought provoking ideas over on Forbes), I don’t know of anyone from the ranks of Big Pharma who writes both consistently and well. Which is a shame, given how many major past, current, and future therapies pass through those halls.

Enter Frank David, the Director of Strategy at AstraZeneca's Oncology Innovative Medicines unit. Frank started his Pharmagellan blog this April, and has been putting out a couple thoughtful perspective pieces a month since then.

Frank also gets my vote for most under-followed Twitter account in the industry, as he’s putting out a steady stream of interesting material.


Getting trials done.
Clinical operations – the actual execution of the clinical trials we all talk about – is seriously underrepresented in the blogosphere. There are a number of industry blogs, but none that aren’t trying first and foremost to sell you something.

I met Nadia Bracken on my last trip out to the San Francisco bay area. To say Nadia is driven is to make a rather silly understatement. Nadia is driven. She thinks fast and she talks fast. ClinOps Toolkit is a blog (or resource? or community?) that is still very much in development, but I think it holds a tremendous amount of potential. People working in ClinOps should be embracing her, and those of us who depend on operations teams getting the job done should keep a close eye on the website.


Watching the money.
I am not a stock trader. I am a data person, and data says trust big sample sizes. And, honestly, I just don't have the time.

But that doesn't stop me from realizing that a lot of great insight about drug development – especially when it concerns small biotechs – is coming from the investment community. So I tend to follow a number of financial writers, as I've found that they do a much better job of digging through the hype than can ever be expected of the mainstream media.

One stock writer who I've been following for a while is Andrew Goodwin, who maintains the Biotech Due Diligence website and blog. Andrew clearly has a great grasp on a number of topics, so when he described a new blog as a “must-have addition” to one's reading list, I had to take a look.

And the brand-new-this-month blog, by David Sable at Special Situations Fund, does seem like a great read. David looks both at the corporate dynamics and scientific stories of biotechs with a firmly skeptical view. I know most blogs this new will not be around 6 months from now (and David admits as much in his opening post), but I’m hoping this one lasts.

. . . . .

So, I encourage you to take a look at the above 3 blogs. I'm happy to see more and diverse perspectives on the drug development process starting to emerge, and hope that all 3 of these authors stick around for quite a while – we need their ideas.



[Bloggerhole photo courtesy of Flikr user second_mouse.]





Half of All Trials Unpublished*

(*For certain possibly nonstandard uses of the word "unpublished")

This is an odd little study. Instead of looking at registered trials and following them through to publication, this study starts with a random sample of phase 3 and 4 drug trials that already had results posted on ClinicalTrials.gov - so in one, very obvious sense, none of the trials in this study went unpublished.

Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals
Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, Philippe Ravaud



But here the authors are concerned with publication in medical journals, and they were only able to locate journal articles covering about half (297/594) of trials with registered results. 

It's hard to know what to make of these results, exactly. Some of the "missing" trials may be published in the future (a possibility the authors acknowledge), some may have been rejected by one or more journals (FDAAA requires posting the results to ClinicalTrials.gov, but it certainly doesn't require journals to accept trial reports), and some may be pre-FDAAA trials that sponsors have retroactively added to ClinicalTrials.gov even though development on the drug has ceased.

It would have been helpful had the authors reported journal publication rates stratified by the year the trials completed - this would have at least given us some hints regarding the above. More than anything, I still find it absolutely bizarre that in a study this small, the entire dataset is not published for review.

One potential concern is the search methodology used by the authors to match posted and published trials. If the easy routes (link to article already provided in ClinicalTrials.gov, or NCT number found in a PubMed search) failed, a manual search was performed:
The articles identified through the search had to match the corresponding trial in terms of the information registered at ClinicalTrials.gov (i.e., same objective, same sample size, same primary outcome, same location, same responsible party, same trial phase, and same sponsor) and had to present results for the primary outcome. 
So it appears that a reviewer had to score the journal article as an exact match on 8 criteria in order for the trial to be considered the same. That could easily lead to exclusion of journal articles on the basis of very insubstantial differences. The authors provide no detail on this; and again, that would be easy to verify if the study dataset were published. 

The reason I harp on this, and worry about the matching methodology, is that two of the authors of this study were also involved in a methodologically opaque and flawed study about clinical trial results posted in the JCO. In that study, as well, the authors appeared to use an incorrect methodology to identify published clinical trials. When I pointed the issues out, the corresponding author merely reiterated what was already (insufficiently) in the paper's Methodology section.

I find it strange beyond belief, and more than a little hypocritical, that researchers would use a public, taxpayer-funded database as the basis of their studies, and yet refuse to provide their data for public review. There are no technological or logistical issues preventing this kind of sharing, and there is an obvious ethical point in favor of transparency.

But if the authors are reasonably close to correct in their results, I'm not sure what to make of this study. 

The Nature article covering this study contends that
[T]he [ClinicalTrials.gov] database was never meant to replace journal publications, which often contain longer descriptions of methods and results and are the basis for big reviews of research on a given drug.
I suppose that some journal articles have better methodology sections, although this is far from universally true (and, like this study here, these methods are often quite opaquely described and don't support replication). As for results, I don't believe that's the case. In this study, the opposite was true: ClinicalTrials.gov results were generally more complete than journal results. And I have no idea why the registry wouldn't surpass journals as a more reliable and complete source of information for "big reviews".

Perhaps it is a function of my love of getting my hands dirty digging into the data, but if we are witnessing a turning point where journal articles take a distant back seat to the ClinicalTrials.gov registry, I'm enthused. ClinicalTrials.gov is public, free, and contains structured data; journal articles are expensive, unparsable, and generally written in painfully unclear language. To me, there's really no contest. 

Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, & Philippe Ravaud (2013). Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals PLoS Medicine DOI: 10.1371/journal.pmed.1001566





Can a Form Letter from FDA "Blow Your Mind"?

Adam Feuerstein appears to be a generally astute observer of the biotech scene. As a finance writer, he's accosted daily with egregiously hyped claims from small drug companies and their investors, and I think he tends to do an excellent job of spotting cases where breathless excitement is unaccompanied by substantive information.


However, Feuerstein's healthy skepticism seems to have abandoned him last year in the case of a biotech called Sarepta Therapeutics, who released some highly promising - but also incredibly limited - data on their treatment for Duchenne muscular dystrophy. After a disappointing interaction with the FDA, Sarepta's stock dropped, and Feuerstein appeared to realize that he'd lost some objectivity on the topic.


But with the new year comes new optimism, and Feuerstein seems to be back to squinting hard at tea leaves - this time in the case of a form letter from the FDA.


He claims that the contents of the letter will "blow your mind". To him, the key passage is:


We understand that you feel that eteplirsen is highly effective, and may be confused by what you have read or heard about FDA's actions on eteplirsen. Unfortunately, the information reported in the press or discussed in blogs does not necessarily reflect FDA's position. FDA has reached no conclusions about the possibility of using accelerated approval for any new drug for the treatment of Duchenne muscular dystrophy, and for eteplirsen in particular.


Feuerstein appears to think that the fact that FDA "has reached no conclusions" may mean that it may be "changing its mind". To which he adds: "Wow!"
Adam Feuerstein: This time, too much froth, not enough coffee?


I'm not sure why he thinks that. As far as I can tell, the FDA will never reach a conclusion like this before it's gone through the actual review process. After all, if FDA already knows the answer before the full review, what would the point of the review even be? It would seem a tremendous waste of agency resources. Not to mention how non-level the playing field would be if some companies were given early yes/no decisions while others had to go through a full review.


It seems fair to ask: is this a substantive change by FDA review teams, or would it be their standard response to any speculation about whether and how they would approve or reject a new drug submission? Can Feuerstein point to other cases where FDA has given a definitive yes or no on an application before the application was ever filed? I suspect not, but am open to seeing examples.


A more plausible theory for this letter is that the FDA is attempting a bit of damage control. It is not permitted to share anything specific it said or wrote to Sarepta about the drug, and has come under some serious criticism for “rejecting” Sarepta’s Accelerated Approval submission. The agency has been sensitive to the DMD community, even going so far as to have Janet Woodcock and Bob Temple meet with DMD parents and advocates last February. Sarepta has effectively positioned FDA as the reason for its delay in approval, but no letters have actually been published, so the conversation has been a bit one-sided. This letter appears to be an attempt at balancing perspectives a bit, although the FDA is still hamstrung by its restriction on relating any specific communications.

Ultimately, this is a form letter that contains no new information: FDA has reached no conclusions because FDA is not permitted to reach conclusions until it has completed a fair and thorough review, which won't happen until the drug is actually submitted for approval.

We talk about "transparency" in terms of releasing clinical trials data, but to me there is a great case to be made for increased regulatory transparency as well. Routine publication of most FDA correspondence and meeting results (including such things as Complete Response letters, which explain FDA's thinking when it rejects new applications) would go a long way towards improving public understanding of the drug review and approval process.





Will Your Family Make You a Better Trial Participant?

It is becoming increasingly accepted within the research community that patient engagement leads to a host of positive outcomes – most importantly (at least practically speaking) improved clinical trial recruitment and retention.

But while we can all agree that "patient engagement is good" in a highly general sense, we don't have much consensus on what the implications of that idea might be. There is precious little hard evidence about how to either attract engaged patients, or how we might effectively turn "regular patients" into "engaged patients".

That latter point - that we could improve trial enrollment and completion rates by converting the (very large) pool of less-engaged patients - is a central tenet of the mHealth movement in clinical trials. Since technology can now accompany us almost anywhere, it would seem that we have an unprecedented opportunity to reach out and connect with current and potential trial participants.

However, there are signs that this promised revolution in patient engagement hasn't come about. From the decline of new apps being downloaded to the startlingly high rate of people abandoning their wearable health devices, there's a growing body of evidence suggesting that we aren't in fact making very good progress towards increasing engagement. We appear to have underestimated the inertia of the disengaged patient.

So what can we do? We know people like their technology, but if they're not using it to engage with their healthcare decisions, we're no better off as a result.

Daniel Calvert, in a recent blog post at Parallel 6, offers an intriguing solution: he suggests we go beyond the patient and engage their wider group of loved ones. By engaging what Calvert calls the Support Circle - those people most likely to "encourage the health and well being of that patient as they undergo a difficult period of their life" - trial teams will find themselves with a more supported, and therefore more engaged, participant, with corresponding benefits to enrollment and retention. 

Calvert outlines a number of potential mechanisms to get spouses, children, and other loved ones involved in the trial process:
During the consent process the patient can invite their support team in with them. A mobile application can be put on their phones enabling encouraging messages, emails, and texts to be sent. Loved ones can see if their companion or family member did indeed take today’s medication or make last Monday’s appointment. Gamification offers badges or pop-ups: “Two months of consecutive appointments attended” or “perfect eDiary log!” Loved ones can see those notifications, like/comment, and constantly encourage the patients. 
Supporting materials can also be included in the Support Circle application. There are a host of unknown terms to patients and their team. Glossaries, videos, FAQs, contact now, and so much more can be made available at their fingertips.
I have to admit I'm fascinated by Calvert's idea. I want him to be right: the picture of supportive, encouraging, loving spouses and children standing by to help a patient get through a clinical trial is an attractive one. So is the idea that they're just waiting for us to include them - all we need to do is a bit of digital communication with them to get them fully on board as members of the study team.

The problem, however, remains: we have absolutely no evidence that this approach will work. There is no data showing that it is superior to other approaches to engage trial patients.

(In fact, we may even have some indirect evidence that it may hinder enrollment: in trials that require active caregiver participation, such as those in Alzheimer's Disease, caregivers are believed to often contribute to the barriers to patient enrollment).

Calvert's idea is a good one, and it's worthy of consideration. More importantly, it's worthy of being rigorously tested against other recruitment and retention approaches. We have a lot of cool new technologies, and even more great ideas - we're not lacking for those. What we're lacking is hard data showing us how these things perform. What we especially need is comparative data showing how new tactics work relative to other approaches.

Over 5 years ago, I wrote a blog post bemoaning the sloppy approaches we take in trial recruitment - a fact made all the more painfully ironic by the massive intellectual rigor of the trials themselves. I'm not at all sure that we've made any real progress in those 5 years.

In my next post, I'll outline what I believe are some of the critical steps we need to take to improve the current situation, and start bringing some solid evidence to the table along with our ideas.

[Photo credit: Flikr user Matthew G, "Love (of technology)"]








The first paid research subject in written history?

On this date 349 years ago, Samuel Pepys relates in his famous diary a remarkable story about an upcoming medical experiment. As far as I can tell, this is the first written description of a paid research subject.


According to his account, the man (who he describes as “a little frantic”) was to be paid to undergo a blood transfusion from a sheep. It was hypothesized that the blood of this calm and docile animal would help to calm the man.

Some interesting things to note about this experiment:
  • Equipoise. There is explicit disagreement about what effect the experimental treatment will have: according to Pepys, "some think it may have a good effect upon him as a frantic man by cooling his blood, others that it will not have any effect at all".
  • Results published. An account of the experiment was published just two weeks later in the journal Philosophical Transactions.
  • Medical Privacy. In this subsequent write-up, the research subject is identified as Arthur Coga, a former Cambridge divinity student. According to at least one account, being publicly identified had a bad effect on Coga, as people who had heard of him allegedly succeeded in getting him to spend his stipend on drink (though no sources are provided to confirm this story).
  • Patient Reported Outcome. Coga was apparently chosen because, although mentally ill, he was still considered educated enough to give an accurate description of the treatment effect. 
Depending on your perspective, this may also be a very early account of the placebo effect, or a classic case of ignoring the patient’s experience. Because even though his report was positive, the clinicians remained skeptical. From the journal article:
The Man after this operation, as well as in it, found himself very well, and hath given in his own Narrative under his own hand, enlarging more upon the benefit, he thinks, he hath received by it, than we think fit to own as yet.
…and in fact, a subsequent diary entry from Pepys mentions meeting Coga, with similarly mixed impressions: “he finds himself much better since, and as a new man, but he is cracked a little in his head”.

The amount Coga was paid for his participation? Twenty shillings – at the time, that was exactly one Guinea.

[Image credit: Wellcome Images]








Establishing efficacy - without humans?

The decade following passage of FDAAA has been one of easing standards for drug approvals in the US, most notably with the advent of “breakthrough” designation created by FDASIA in 2012 and the 21st Century Cures Act in 2016.

Although, as of this writing, there is no nominee for FDA Commissioner, it appears to be safe to say that the current administration intends to accelerate the pace of deregulation, mostly through further lowering of approval requirements. In fact, some of the leading contenders for the position are on record as supporting a return to pre-Kefauver-Harris days, when drug efficacy was not even considered for approval.

Build a better mouse model, and pharma will beat a path to your door - no laws needed.

In this context, it is at least refreshing to read a proposal to increase efficacy standards. This comes from two bioethicists at McGill University, who make the somewhat-startling case for a higher degree of efficacy evaluation before a drug begins any testing in humans.
We contend that a lack of emphasis on evidence for the efficacy of drug candidates is all too common in decisions about whether an experimental medicine can be tested in humans. We call for infrastructure, resources and better methods to rigorously evaluate the clinical promise of new interventions before testing them on humans for the first time.
The authors propose some sort of centralized clearinghouse to evaluate efficacy more rigorously. It is unclear what standards they envision this new multispecialty review body applying when green-lighting a drug to enter human testing. Instead they propose three questions:
  • What is the likelihood that the drug will prove clinically useful?
  • Assume the drug works in humans. What is the likelihood of observing the preclinical results?
  • Assume the drug does not work in humans. What is the likelihood of observing the preclinical results?
These seem like reasonable questions, I suppose – and are likely questions that are already being asked of preclinical data. They certainly do not rise to the level of providing a clear standard for regulatory approval, though perhaps it’s a reasonable place to start.
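For what it's worth, the three questions do map onto a standard Bayesian calculation: the first supplies a prior, and the second and third together form a likelihood ratio for the preclinical data. A toy sketch (every number here is invented for illustration; the comment itself proposes no quantities):

```python
def posterior_probability(prior, p_data_if_works, p_data_if_not):
    """Bayes' rule: P(drug works in humans | observed preclinical results)."""
    numerator = prior * p_data_if_works
    return numerator / (numerator + (1 - prior) * p_data_if_not)

# Invented inputs: a 10% prior of clinical usefulness; the observed preclinical
# results are likely if the drug works (0.8) but, given how poorly preclinical
# models predict human outcomes, still fairly likely if it doesn't (0.4).
posterior = posterior_probability(0.10, 0.8, 0.4)  # ~0.18
```

Framed this way, the weakness of the proposal is easy to see: unless the likelihood ratio (here 0.8/0.4 = 2) is far from 1, even encouraging preclinical data moves the probability of success only modestly - and estimating that ratio accurately is precisely the thing nobody yet knows how to do.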

The most obvious counterargument here is one that the authors curiously don’t pick up on at all: if we had the ability to accurately (or even semiaccurately) predict efficacy preclinically, pharma sponsors would already be doing it. The comment notes: “More-thorough assessments of clinical potential before trials begin could lower failure rates and drug-development costs.” And it’s hard not to agree: every pharmaceutical company would love to have even an incrementally-better sense of whether their early pipeline drugs will be shown to work as hoped.

The authors note
Commercial interests cannot be trusted to ensure that human trials are launched only when the case for clinical potential is robust. We believe that many FIH studies are launched on the basis of flimsy, underscrutinized evidence.
However, they do not produce any evidence that industry is in any way deliberately underperforming their preclinical work, merely that preclinical efficacy is often difficult to reproduce and is poorly correlated with drug performance in humans.

Pharmaceutical companies have many times more candidate compounds than they can possibly afford to put into clinical trials. Figuring out how to lower failure rates – or at least the total cost of failure - is a prominent industry obsession, and efficacy remains the largest source of late-stage trial failure. This quest to “fail faster” has resulted in larger and more expensive phase 2 trials, and even to increased efficacy testing in some phase 1 trials. And we do this not because of regulatory pressure, but because of hopes that these efforts will save overall costs. So it seems beyond probable that companies would immediately invest more in preclinical efficacy testing, if such testing could be shown to have any real predictive power. But generally speaking, it does not.

As a general rule, we don’t need regulations that are firmly aligned with market incentives; we need regulations if and when we think those incentives might run counter to the general good. In this case, there are already incredibly strong market incentives to improve preclinical assessments. Where companies have attempted exactly that with limited success, it would seem quixotic to think that regulatory fiat will accomplish more.

(One further point. The authors try to link the need for preclinical efficacy testing to the 2016 Bial tragedy. This seems incredibly tenuous: the authors speculate that perhaps trial participants would not have been harmed and killed if Bial had been required to produce more evidence of BIA 10-2474’s clinical efficacy before embarking on their phase 1 trials. But that would have been entirely coincidental in this case: even if the drug had shown more evidence of therapeutic promise, the tragedy still would have happened, because it had nothing at all to do with the drug’s efficacy.

This is to some extent a minor nitpick, since the argument in favor of earlier efficacy testing does not depend on a link to Bial. However, I bring it up because a) the authors dedicate the first four paragraphs of their comment to the link, and b) there appears to be a minor trend of using the death and injuries of that trial to justify an array of otherwise-unrelated initiatives. This seems like a trend we should discourage.)

[Update 2/23: I posted this last night, not realizing that only a few hours earlier, John LaMattina had published on this same article. His take is similar to mine, in that he is suspicious of the idea that pharmaceutical companies would knowingly push ineffective drugs up their pipeline.]

Kimmelman, J., & Federico, C. (2017). Consider drug efficacy before first-in-human trials Nature, 542 (7639), 25-27 DOI: 10.1038/542025a





The Streetlight Effect and 505(b)(2) approvals

It is a surprisingly common peril among analysts: we don’t have the data to answer the question we’re interested in, so we answer a related question where we do have data. Unfortunately, the new answer turns out to shed no light on the original interesting question.

This is sometimes referred to as the Streetlight Effect – a phenomenon aptly illustrated by Mutt and Jeff over half a century ago:


This is the situation that the Tufts Center for the Study of Drug Development seems to have gotten itself into in its latest "Impact Report".  It’s worth walking through the process of how an interesting question ends up in an uninteresting answer.

So, here’s an interesting question:
My company owns a drug that may be approvable through FDA’s 505(b)(2) pathway. What is the estimated time and cost difference between pursuing 505(b)(2) approval and conventional approval?
That’s "interesting", I suppose I should add, for a certain subset of folks working in drug development and commercialization. It’s only interesting to that peculiar niche, but for those people I suspect it’s extremely interesting - because it is a real situation that a drug company may find itself in, and there are concrete consequences to the decision.

Unfortunately, this is also a really difficult question to answer. As phrased, you'd almost need a randomized trial to answer it. Let’s create a version which is less interesting but easier to answer:
What are the overall development time and cost differences between drugs seeking approval via 505(b)(2) and conventional pathways?
This is much easier to answer, as pharmaceutical companies could look back on development times and costs of all their compounds, and directly compare the different types. It is, however, a much less useful question. Many new drugs are simply not eligible for 505(b)(2) approval. If those drugs are substantially different in any way (riskier, more novel, etc.), then they will change the comparison in highly non-useful ways. In fact, in 2014, only 1 drug classified as a New Molecular Entity (NME) went through 505(b)(2) approval, versus 32 that went through conventional approval. And in fact, there are many qualities that set 505(b)(2) drugs apart.

[Chart: Extreme qualitative differences of 505(b)(2) drugs. Source: Thomson Reuters analysis via RAPS]

So we’re likely to get a lot of confounding factors in our comparison, and it’s unclear how the answer would (or should) guide us if we were truly trying to decide which route to take for a particular new drug. It might help us if we were trying to evaluate a large-scale shift to prioritizing 505(b)(2) eligible drugs, however.

Unfortunately, even this question is apparently too difficult to answer. Instead, the Tufts CSDD chose to ask and answer yet another variant:
What is the difference in time that it takes the FDA for its internal review process between 505(b)(2) and conventionally-approved drugs?
This question has the supreme virtue of being answerable. In fact, I believe that all of the data you’d need is contained within the approval letter that FDA publishes for each new approved drug.

But at the same time, it isn’t a particularly interesting question anymore. The promise of the 505(b)(2) pathway is that it should reduce total development time and cost, but on both those dimensions, the report appears to fall flat.
  • Cost: This analysis says nothing about reduced costs – those savings would mostly come in the form of fewer clinical trials, and this focuses entirely on the FDA review process.
  • Time: FDA review and approval is only a fraction of a drug’s journey from patent to market. In fact, it often takes up less than 10% of the time from initial IND to approval. So any differences in approval times will likely easily be overshadowed by differences in time spent in development. 
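Some rough, purely illustrative arithmetic (these round numbers are my own, not from the report) shows why the review window is the wrong place to look for 505(b)(2)'s benefits:

```python
# Invented round numbers: ~8 years from IND to submission, ~10 months of FDA review.
development_months = 8 * 12  # 96
review_months = 10

# FDA review as a share of the total IND-to-approval timeline (under 10%).
review_share = review_months / (development_months + review_months)

# A three-month swing in review time moves the end-to-end total by only ~3%...
review_swing_share = 3 / (development_months + review_months)
# ...while skipping even one year-long trial acts on the 96-month piece.
```

Under these assumptions, even a large relative difference in review times is dwarfed by modest differences in the development phase, which is where 505(b)(2)'s promised savings (fewer duplicated studies) would actually show up.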
But even more fundamentally, the problem here is that this study gives the appearance of providing an answer to our original question, but in fact is entirely uninformative in this regard. The accompanying press release states:
The 505(b)(2) approval pathway for new drug applications in the United States, aimed at avoiding unnecessary duplication of studies performed on a previously approved drug, has not led to shorter approval times.
This is more than a bit misleading. The 505(b)(2) statute does not in any way address approval timelines – that’s not its intent. So showing that it hasn’t led to shorter approval times is less of an insight than it is a natural consequence of the law as written.

Most importantly, showing that 505(b)(2) drugs had a longer average approval time than conventionally approved drugs should not be read as evidence that those drugs were slowed down by the 505(b)(2) process itself. Because 505(b)(2) drugs are qualitatively different from other new molecules, this study cannot claim that they would have been developed faster had their owners initially chosen the route of conventional approval. In fact, such a decision might have resulted in both increased time in trials and increased approval time.
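The confounding point can be made concrete with a toy simulation (all numbers here are invented for illustration and are not drawn from the report): suppose drugs that take the 505(b)(2) route come from a population with intrinsically longer baseline reviews, while the pathway itself saves every drug that uses it a fixed amount of review time. A naive comparison of averages still makes the pathway look slower.

```python
import random

random.seed(0)

# Toy model, purely illustrative. Each drug has an intrinsic baseline review
# time; 505(b)(2) drugs are assumed to be qualitatively different -- drawn
# here from a population with longer baselines -- while the pathway itself
# shaves a fixed number of months off any drug that uses it.

PATHWAY_SAVINGS = 3.0  # months saved by 505(b)(2); an assumed figure

def review_time(baseline_months, uses_505b2):
    return baseline_months - (PATHWAY_SAVINGS if uses_505b2 else 0.0)

# Assumed baselines: conventional drugs ~10 months, 505(b)(2) drugs ~16.
conventional = [review_time(random.gauss(10, 2), False) for _ in range(10_000)]
b2 = [review_time(random.gauss(16, 2), True) for _ in range(10_000)]

mean = lambda xs: sum(xs) / len(xs)
print(f"conventional mean: {mean(conventional):.1f} months")
print(f"505(b)(2) mean:    {mean(b2):.1f} months")
# The 505(b)(2) average comes out *longer* (~13 vs. ~10 months) even though
# the pathway saved every one of those drugs 3 months -- the naive
# cross-pathway comparison is confounded by drug type.
```

The point of the sketch is only that the observed gap in averages says nothing about the counterfactual for any individual drug, which is exactly the inference the press release invites.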

This study simply is not designed to provide an answer to the truly interesting underlying question.

[Disclosure: the above review is based entirely on a CSDD press release and summary page. The actual report costs $125, which is well in excess of this blog’s expense limit. It is entirely possible that the report itself contains more-informative insights, and I’ll happily update this post if that should come to my attention.]




b

A human bird flu case is thought to have been found in Canada for the first time

A person has tested positive in British Columbia, Canadian health officials said, though the results must be sent to another lab for confirmation.




b

What does a 2nd Trump term mean for the Affordable Care Act?

President-elect Donald Trump tried unsuccessfully to get rid of the Affordable Care Act during his first term. What action will he take this time around?




b

More young people are surviving cancer. Then they face a life altered by it

More people are getting cancer in their 20s, 30s, and 40s, and surviving, thanks to rapid advancement in care. Many will have decades of life ahead of them, which means they face greater and more complex challenges in survivorship. Lourdes Monje is navigating these waters at age 29.




b

Remarkably resilient refugees: A teen on his own, a woman who was raped

Sudan's civil war has displaced 10 million citizens. Here are profiles of two young people from the most vulnerable groups: an unaccompanied minor caring for his twin brothers, and a woman who was raped.




b

Online yoga classes prove helpful for back pain in new study

Participants reported relief from chronic low back pain and a reduced need for pain-relief medications.




b

Chronic itch is miserable. Scientists are just scratching the surface

Journalist Annie Lowrey has a rare disease that causes a near-constant itch that doesn't respond to most treatments. She likens the itchiness to a car alarm: "You can't stop thinking about it."




b

Tiny Biosensor Unlocks the Secrets of Sweat



Sweat: We all do it. It plays an essential role in controlling body temperature by cooling the skin through evaporation. But it can also carry salts and other molecules out of the body in the process. In medieval Europe, people would lick babies; if the skin was salty, they knew that serious illness was likely. (We now know that salty skin can be an indicator for cystic fibrosis.)

Scientists continue to study how the materials in sweat can reveal details about an individual’s health, but often they must rely on samples gathered from subjects during strenuous exercise in order to obtain volumes large enough for analysis.

Now researchers in China have developed a wearable sensor system that can collect and process small amounts of sweat while providing continuous detection. They have named the design a “skin-interfaced intelligent graphene nanoelectronic” patch, or SIGN for short. The researchers, who described their work in a paper published in Advanced Functional Materials, did not respond to IEEE Spectrum’s interview requests.

The SIGN sensor patch relies on three separate components to accomplish its task. First, the sweat must be transported from the skin into microfluidic chambers. Next, a special membrane removes impurities from the fluid. Finally, this liquid is delivered to a bioreceptor that can be tuned to detect different metabolites.

The transport system relies on a combination of hydrophilic (water-attracting) and hydrophobic (water-repelling) materials. This system can move aqueous solutions along microchannels, even against gravity. This makes it possible to transport small samples with precision, regardless of the device’s orientation.

The fluid is transported to a Janus membrane, where impurities are blocked. This means that the sample that reaches the sensor is more likely to produce accurate results.

Finally, the purified sweat arrives at a flexible biosensor. This graphene sensor is activated by enzymes designed to detect the desired biomarker. The result is a transistor that can accurately measure the amount of the biomarker in the sample.

At its center, the system has a membrane that removes impurities from sweat and a biosensor that detects biomarkers. Harbin Institute of Technology/Shenyang Aerospace University

One interesting feature of the SIGN patch is that it can provide continuous measurements. The researchers tested the device through multiple cycles of samples with known concentrations of a target biomarker, and it was about as accurate after five cycles as it was after just one. This result suggests that it could be worn over an extended period without having to be replaced.

Continuous measurements can provide useful longitudinal data. However, Tess Skyrme, a senior technology analyst at the research firm IDTechEx, points out that continuous devices can have very different sampling rates. “Overall, the right balance of efficient, comfortable, and granular data collection is necessary to disrupt the market,” she says, noting that devices also need to optimize “battery life, calibration, and data accuracy.”

The researchers have focused on lactate—a metabolite that can be used to assess a person’s levels of exercise and fatigue—as the initial biomarker to be detected. This function is of particular interest to athletes, but it can also be used to monitor the health status of workers in jobs that require strenuous physical activity, especially in hazardous or extreme working conditions.

Not all experts are convinced that biomarkers in sweat can provide accurate health data. Jason Heikenfeld, director of the Novel Device Lab at the University of Cincinnati, has pivoted his research on wearable biosensing from sweat to the interstitial fluid between blood vessels and cells. “Sweat glucose and lactate are way inferior to measures that can be made in interstitial fluid with devices like glucose monitors,” he tells Spectrum.

The researchers also developed a package to house the sensor. It’s designed to minimize power consumption, using a low-power microcontroller, and it includes a Bluetooth communications chip to transmit data wirelessly from the SIGN patch. The initial design provides for 2 hours of continuous use without charging, or up to 20 hours in standby mode.




b

“Snake-like” Probe Images Arteries from Within



Neurosurgeon Vitor Mendes Pereira has grown accustomed to treating brain aneurysms with only blurry images for guidance.

Equipped with a rough picture of the labyrinthine network of arteries in the brain, he does his best to insert mesh stents or coils of platinum wire—interventions intended to promote clotting and to seal off a bulging blood vessel.

The results are not always perfect. Without a precise window into the arterial architecture at the aneurysm site, Pereira says that he and other neurovascular specialists occasionally misplace these implants, leaving patients at a heightened risk of stroke, clotting, inflammation, and life-threatening ruptures. But a new fiber-optic imaging probe offers hope for improved outcomes.


According to Pereira’s early clinical experience, the technology—a tiny snake-like device that winds its way through the intricate maze of brain arteries and, using spirals of light, captures high-resolution images from the inside-out—provides an unprecedented level of structural detail that enhances the ability of clinicians to troubleshoot implant placement and better manage disease complications.

“We can see a lot more information that was not accessible before,” says Pereira, director of endovascular research and innovation at St. Michael’s Hospital in Toronto. “This is, for us, an incredible step forward.”

And not just for brain aneurysms. In a report published today in Science Translational Medicine, Pereira and his colleagues describe their first-in-human experience using the platform to guide treatment for 32 people with strokes, artery hardening, and various other conditions arising from aberrant blood vessels in the brain.

Whereas before, with technologies such as CT scans, MRIs, ultrasounds, and x-rays, clinicians had a satellite-like view of the brain’s vascular network, now they have a Google Street View-like perspective, complete with in-depth views of artery walls, plaques, immune cell aggregates, implanted device positions, and more.

“The amount of detail you could get you would never ever see with any other imaging modality,” says Adnan Siddiqui, a neurosurgeon at the University at Buffalo, who was not involved in the research. “This technology holds promise to be able to really transform the way we evaluate success or failure of our procedures, as well as to diagnose complications before they occur.”

A Decade of Innovation

The new fiber-optic probe is flexible enough to snake through the body’s arteries and provide previously unavailable information to surgeons. Pereira et al./Science Translational Medicine

The new imaging platform is the brainchild of Giovanni Ughi, a biomedical engineer at the University of Massachusetts’ Chan Medical School in Worcester. About a decade ago, he set out to adapt a technique called optical coherence tomography (OCT) for imaging inside the brain’s arteries.

OCT relies on the backscattering of near-infrared light to create cross-sectional images with micrometer-scale spatial resolution. Although OCT had long been used in clinical settings to generate pictures from the back of the eye and from inside the arteries that supply blood to the heart, the technology had proven difficult to adapt for brain applications owing to several technical challenges.

One major challenge is that the fiber-optic probes used in the technology are typically quite stiff, making them too rigid to twist and bend through the convoluted passageways of the brain’s vasculature. Additionally, the torque cables—traditionally used to rotate the OCT lens to image surrounding vessels and devices in three dimensions as the probe retracts—were too large to fit inside the catheters that are telescopically advanced into the brain’s arteries to address blockages or other vascular issues.

“We had to invent a new technology,” Ughi explains. “Our probe had to be very, very flexible, but also very, very small to be compatible with the clinical workflow.”

To achieve these design criteria, Ughi and his colleagues altered the properties of the glass at the heart of their fiber-optic cables, devised a new system of rotational control that does away with torque cables, miniaturized the imaging lens, and made a number of other engineering innovations.

The end result: a slender probe, about the size of a fine wire, that spins 250 times per second, snapping images as it glides back through the blood vessel. Researchers flush out blood cells with a tablespoon of liquid, then manually or automatically retract the probe, revealing a section of the artery about the length of a lip balm tube.


Clinical Confirmation

After initial testing in rabbits, dogs, pigs, and human cadavers, Ughi’s team sent the device to two clinical groups: Pereira’s in Toronto and Pedro Lylyk’s at the Sagrada Familia Clinic in Buenos Aires, Argentina. Across the two groups, neurosurgeons treated the 32 participants in the latest study, snaking the imaging probe through the patients’ groins or wrists and into their brains.

The procedure was safe and well tolerated across different anatomies, underlying disease conditions, and levels of complexity of prior interventions. Moreover, the information provided frequently led to actionable insights—in one case, prompting clinicians to prescribe anti-platelet drugs when hidden clots were discovered; in another, aiding in the proper placement of stents that were not flush against the arterial wall.

“We were successful in every single case,” Ughi says. “So, this was a huge confirmation that the technology is ready to move forward.”

“We can see a lot more information that was not accessible before.” —Vitor Mendes Pereira, St. Michael’s Hospital

A startup called Spryte Medical aims to do just that. According to founder and CEO David Kolstad, the company is in discussions with regulatory authorities in Europe, Japan, and the United States to determine the steps necessary to bring the imaging probe to market.

At the same time, Spryte—with Ughi as senior director of advanced development and software engineering—is working on machine learning software to automate the image analysis process, thus simplifying diagnostics and treatment planning for clinicians.

Bolstered by the latest data, cerebrovascular specialists like Siddiqui now say they are chomping at the bit to get their hands on the imaging probe once it clears regulatory approval.

“I’m really impressed,” Siddiqui says. “This is a tool that many of us who do these procedures wish they had.”




b

Noninvasive Spinal Stimulation Gets a (Current) Boost



In 2010, Melanie Reid fell off a horse and was paralyzed below the shoulders.

“You think, ‘I am where I am; nothing’s going to change,’” she said. But many years after her accident, she participated in a medical trial of a new, noninvasive rehabilitative device that can deliver more electrical stimulation than similar devices without harming the user. For Reid, use of the device has led to small improvements in her ability to use her hands and meaningful changes to her daily life.

“Everyone thinks with spinal injury all you want to do is be able to walk again, but if you’re a tetraplegic or quadriplegic, what matters most is working hands,” said Reid, a columnist for The Times, as part of a press briefing. “There’s no miracles in spinal injury, but tiny gains can be life-changing.”

For the study, Reid used a new noninvasive therapeutic device produced by Onward Medical. The device, ARC-EX (“EX” indicating “external”), uses electrodes placed along the spine near the site of injury—in the case of quadriplegia, the neck—to promote nerve activity and growth during physical-therapy exercises. The goal is not only to increase motor function while the device is attached and operating, but also to improve the long-term effectiveness of rehabilitation drills. A study focused on arm and hand abilities in patients with quadriplegia was published 20 May in Nature Medicine.

Researchers have been investigating electrical stimulation as a treatment for spinal cord injury for roughly 40 years, but “one of the innovations in this system is using a very high-frequency waveform,” said coauthor Chet Moritz, a neurotechnologist at the University of Washington. The ARC-EX uses a 10-kilohertz carrier frequency overlay, which researchers think may numb the skin beneath the electrode, allowing patients to tolerate five times as much amperage as from similar exploratory devices. For Reid, this manifested as a noticeable “buzz,” which felt strange, but not painful.
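To picture what such a kilohertz-carrier stimulus looks like, here is a minimal sketch. The waveform shape, burst rate, and sample rate are all assumptions for illustration only; Onward has not published ARC-EX’s actual waveform parameters here. The sketch simply gates a 10-kilohertz sinusoidal carrier with a low-frequency on/off envelope.

```python
import math

# Illustrative only: a 10 kHz sinusoidal carrier gated by a low-frequency
# on/off envelope. The burst rate and sample rate are invented parameters,
# not ARC-EX specifications.
CARRIER_HZ = 10_000    # the carrier frequency the article describes
BURST_HZ = 30          # hypothetical stimulation burst rate
SAMPLE_RATE = 200_000  # samples per second for this sketch

def stimulus(t):
    """Carrier amplitude at time t (seconds), gated by the burst envelope."""
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
    gate = 1.0 if math.sin(2 * math.pi * BURST_HZ * t) > 0 else 0.0
    return carrier * gate

# One burst period: the carrier is present in the first half-cycle of the
# envelope and silent in the second half.
wave = [stimulus(i / SAMPLE_RATE) for i in range(SAMPLE_RATE // BURST_HZ)]
```

The sketch shows only the signal’s shape, not its physiology: the article’s researchers hypothesize that the skin cannot follow the 10-kilohertz oscillation, which may be why patients tolerate larger currents.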

The study included 60 participants across 14 sites around the world. Each participant undertook two months of standard physical therapy, followed by two months of therapy combined with the ARC-EX. Although aspects of treatment such as electrode placement were fairly standardized, the current amplitude was personalized to each patient, and sometimes individual exercises, said Moritz.

The ARC-EX uses a 10-kilohertz current to provide stronger stimulation for people with spinal cord injuries.

Over 70 percent of patients showed an increase in at least one measurement of both strength and function between standard therapy and ARC-EX therapy. In addition, 87 percent of study participants noted some improvement in quality of life in a follow-up questionnaire. No major safety concerns tied to the device or rehabilitation process were reported.

Onward will seek approval from the U.S. Food and Drug Administration for the device by the end of 2024, said study coauthor Grégoire Courtine, a neuroscientist and cofounder of Onward Medical. Onward is also working on an implantable spinal stimulator called ARC-IM; other prosthetic approaches, such as robotic exoskeletons, are being investigated elsewhere. ARC-EX was presented as a potentially important cost-accessible, noninvasive treatment option, especially in the critical window for recovery a year or so after a spinal cord injury. However, the price to insurers or patients of a commercial version is still subject to negotiation.

The World Health Organization says there are over 15 million people with spinal cord injuries. Moritz estimates that around 90 percent of patients, even many with no movement in their hands, could benefit from the new therapy.

Dimitry Sayenko, who studies spinal cord injury recovery at Houston Methodist and was not involved in the study, praised the relatively large sample size and clear concern for patient safety. But he stresses that the mechanisms underlying spinal stimulation are not well understood. “So far it’s literally plug and play,” said Sayenko. “We don’t understand what’s happening under the electrodes for sure—we can only indirectly assume or speculate.”

The new study supports the idea that noninvasive spinal cord stimulation can provide some benefit to some people but was not designed to help predict who will benefit, precisely how people will benefit, or how to optimize care. The study authors acknowledged the limited scope and need for further research, which might help turn currently “tiny gains” into what Sayenko calls “larger, more dramatic, robust effects.”




b

Bionic Eye Gets a New Lease on Life



The future of an innovative retinal implant and dozens of its users just got brighter, after Science, a bioelectronics startup run by Neuralink’s cofounder, Max Hodak, acquired Pixium’s technology at the last minute.

Pixium Vision, whose Prima system to tackle vision loss is implanted in 47 people across Europe and the United States, was in danger of disappearing completely until Science stepped in to buy the French company’s assets in April, for an undisclosed amount.

Pixium has been developing Prima for a decade, building on work by Daniel Palanker, a professor of ophthalmology at Stanford University. The 2-by-2-millimeter implant is surgically placed under the retina, where it turns infrared data from camera-equipped glasses into pulses of electricity. These replace signals generated by photoreceptor rods and cones, which are damaged in people suffering from age-related macular degeneration (AMD).

Early feasibility studies in the E.U. and the United States suggested Prima was safe and potentially effective, but Pixium ran out of money last November before the final results of a larger, multiyear pivotal trial in Europe.

“It’s very important to us to avoid another debacle like Argus II.”

With the financial and legal clock ticking down, the trial data finally arrived in March this year. “And the results from that were just pretty stunning,” says Max Hodak, Science’s founder and CEO, in his first interview since the acquisition.

Although neither Pixium nor Science has yet released the full dataset, Hodak shared with IEEE Spectrum videos of three people using Prima, each of them previously unable to read or recognize faces due to AMD. The videos show them slowly but fluently reading a hardback book, filling in a crossword puzzle, and playing cards.

“This is legit ‘form vision’ that I don’t think any device has ever done,” says Hodak. Form vision is the ability to recognize visual elements as parts of a larger object. “It’s this type of data that convinced us. And from there we were like, this should get to patients.”

As well as buying the Prima technology, Hodak says that Science will hire the majority of Pixium’s 35 engineering and regulatory staff, in a push to get the technology approved in Europe as quickly as possible.

The Prima implant receives visual data and is powered by near-infrared signals beamed from special spectacles. Pixium

Another priority is supporting existing Prima patients, says Lloyd Diamond, Pixium’s outgoing CEO. “It’s very important to us to avoid another debacle like Argus II,” he says, referring to another retinal implant whose manufacturer went out of business in 2022, leaving users literally in the dark.

Diamond is excited to be working with Science, which is based in Silicon Valley with a chip foundry in North Carolina. “They have a very deep workforce in software development, in electronic development, and in biologic research,” he says. “And there are probably only a few foundries in the world that could manufacture an implant such as ours. Being able to internalize part of that process is a very big advantage.”

Hodak hopes that a first-generation Prima product could quickly be upgraded with a wide-angle camera and the latest electronics. “We think that there’s one straight shrink, where we’ll move to smaller pixels and get higher visual acuity,” he says. “After that, we’ll probably move to a 3D electrode design, where we’ll be able to get closer to single-cell resolution.” That could deliver even sharper artificial vision.

In parallel, Science will continue Pixium’s discussions with the FDA in the United States about advancing a clinical trial there.

The success of Prima is critical, says Hodak, who started Science in 2021 after leaving Neuralink, a brain-computer interface company he cofounded with Elon Musk. “Elon can do whatever he wants for as long as he wants, but we need something that can finance future development,” he says. “Prima is big enough in terms of impact to patients and society that it is capable of helping us finance the rest of our ambitions.”

These include a next-generation Prima device, which Hodak says he is already talking about with Palanker, and a second visual prosthesis, currently called the Science Eye. This will tackle retinitis pigmentosa, a condition affecting peripheral vision—the same condition targeted by Second Sight’s ill-fated Argus II device.

“The Argus II just didn’t work that well,” says Hodak. “In the end, it was a pure bridge to nowhere.” Like the Argus II and Prima, the Science Eye relies on camera glasses and an implant, but with the addition of optogenetic therapy. This uses a genetically engineered virus to deliver a gene to specific optic nerve cells in the retina, making them light-sensitive at a particular wavelength. A tiny implanted display with a resolution sharper than an iPhone screen then enables fine control over the newly sensitized cells.

That system is still undergoing animal trials, but Hodak is almost ready to pull the trigger on its first human clinical studies, likely in Australia and New Zealand.

“In the long term, I think precision optogenetics will be more powerful than Prima’s electrical stimulation,” he says. “But we’re agnostic about which approach works to restore vision.”

One thing he does believe vehemently, unlike Musk, is that the retina is the best place to put an implant. Neuralink and Cortigent (the successor company of Second Sight) are both working on prosthetics that target the brain’s visual cortex.

“There’s a lot that you can do in cortex, but vision is not one of them,” says Hodak. He thinks the visual cortex is too complex, too distributed, and too difficult to access surgically to be useful.

“As long as the optic nerve is intact, the retina is the ideal place to think about restoring vision to the brain,” he says. “This is all a question of effect size. If someone has been in darkness for a decade, with no light, no perception, and you can give them any type of visual stimulus, they’re going to be into it. The Pixium patients can intuitively read, and that was really what convinced us that this was worth picking up and pursuing.”