
NHS England lowers threshold for COVID-19 vaccination site applications

Community pharmacies able to administer up to 400 COVID-19 vaccines per week can now apply to become designated vaccination sites, NHS England has said.





New drug cuts the risk of death in bladder cancer by 30% compared with chemotherapy, study suggests

A new type of drug that targets chemotherapy directly to cancer cells reduces the risk of death from the most common type of bladder cancer by 30%, a phase III trial in the New England Journal of Medicine has suggested.





RPS pays tribute to pharmacy law and ethics pioneer Joy Wingfield

The Royal Pharmaceutical Society has expressed its sadness at the death of Joy Wingfield, honorary professor of Pharmacy Law and Ethics at the University of Nottingham.





IFM’s Hat Trick and Reflections On Option-To-Buy M&A

Today IFM Therapeutics announced the acquisition of IFM Due, one of its subsidiaries, by Novartis. Back in Sept 2019, IFM granted Novartis the right to acquire IFM Due as part of an “option to buy” collaboration around cGAS-STING antagonists for

The post IFM’s Hat Trick and Reflections On Option-To-Buy M&A appeared first on LifeSciVC.





Lessons From A Private Funding Round: Science, Relationships, And Experience

By Mike Cloonan, CEO of Sionna Therapeutics, as part of the From The Trenches feature of LifeSciVC An insightful piece on this blog following the JPM healthcare conference noted the “refreshing burst of enthusiasm” in the biotech sector. It’s true

The post Lessons From A Private Funding Round: Science, Relationships, And Experience appeared first on LifeSciVC.





Deconstructing the Diligence Process: An Approach to Vetting New Product Theses

By Aimee Raleigh, Principal at Atlas Venture, as part of the From The Trenches feature of LifeSciVC Ever wondered what goes into diligencing a new idea, program, company, or platform? While each diligence is unique and every investor will have

The post Deconstructing the Diligence Process: An Approach to Vetting New Product Theses appeared first on LifeSciVC.





Pharmacology: The Anchor for Nearly Every Diligence

By Haojing Rong and Aimee Raleigh, as part of the From The Trenches feature of LifeSciVC This blog post is the second in a series on key diligence concepts and questions. If you missed the intro blog post yesterday, click

The post Pharmacology: The Anchor for Nearly Every Diligence appeared first on LifeSciVC.





The Biotech Startup Contraction Continues… And That’s A Good Thing

Venture creation in biotech is witnessing a sustained contraction. After the pandemic bubble’s over-indulgence, the venture ecosystem appears to have reset its pace of launching new startups. According to the latest Pitchbook data, venture creation in biotech hit its slowest

The post The Biotech Startup Contraction Continues… And That’s A Good Thing appeared first on LifeSciVC.





Stars and Scars… Some Lessons Learned About Leadership

By Arthur O. Tzianabos, PhD, CEO of Lifordi Immunotherapeutics, as part of the From the Trenches feature of LifeSciVC As the biotech industry continues to pick up steam, I have been getting a number of phone calls from folks in

The post Stars and Scars… Some Lessons Learned About Leadership appeared first on LifeSciVC.





Mariana Oncology’s Radiopharm Platform Acquired By Novartis

Novartis recently announced the acquisition of Mariana Oncology, an emerging biotech focused on advancing a radioligand therapeutics platform, for up to $1.75 billion in upfronts and future milestones. The capstone of its three short years of operations, this acquisition represents

The post Mariana Oncology’s Radiopharm Platform Acquired By Novartis appeared first on LifeSciVC.





Biotech Risk Cycles: Assets And Platforms

Today’s market likes products. Platforms aren’t in vogue anymore. Investors, especially in the public markets, only want late stage de-risked assets. Pharma only seems to be buying these kinds of assets. VCs need to focus on clinical stage companies. Or

The post Biotech Risk Cycles: Assets And Platforms appeared first on LifeSciVC.





Tell the UK’s research regulator to do more on clinical trial transparency

The UK body that oversees health research is writing a new strategy on clinical trial transparency and it wants to hear opinions on it. The Health Research Authority (HRA) says its strategy aims to “make transparency easy, make compliance clear and make information public.” It has opened a public consultation on the strategy and some […]





UK universities and NHS trusts that flout the rules on clinical trials identified in report to Parliament

An AllTrials report for the House of Commons Science and Technology Select Committee this week has found that 33 NHS trust sponsors and six UK universities are reporting none of their clinical trial results, while others have gone from 0% to 100% following an announcement from the Select Committee in January that universities and NHS […]





Can FDA's New Transparency Survive Avandia?

PDUFA V commitments signal a strong tolerance of open debate in the face of uncertainty.

I can admit to a rather powerful lack of enthusiasm when reading about interpersonal squabbles. It’s even worse in the scientific world: when I read about debates getting mired in personal attacks I tend to simply stop reading and move on to something else.

However, the really interesting part of this week’s meeting of an FDA joint Advisory Committee to discuss the controversial diabetes drug Avandia – at least in the sense of likely long-term impact – is not the scientific question under discussion, but the surfacing and handling of the raging interpersonal battle going on right now inside the Division of Cardiovascular and Renal Products. So I'll have to swallow my distaste and follow along with the drama.

[Image caption: Two words that make us mistrust Duke: Anil Potti / Christian Laettner]

Not that the scientific question at hand – does Avandia pose significant heart risks? – isn't interesting. It is. But if there’s one thing that everyone seems to agree on, it’s that we don’t have good data on the topic. Despite the re-adjudication of RECORD, no one trusts its design (and, ironically, the one trial with a design to rigorously answer the question was halted after intense pressure, despite an AdComm recommendation that it continue).  And no one seems particularly enthused about changing the current status of Avandia: in all likelihood it will continue to be permitted to be marketed under heavy restrictions. Rather than changing the future of diabetes, I suspect the committee will be content to let us slog along the same mucky trail.

The really interesting question, which will potentially impact CDER for years to come, is how it can function with frothing, open dissent among its staffers. As has been widely reported, FDA reviewer Tom Marciniak has written a rather wild and vitriolic assessment of the RECORD trial, excoriating nearly everyone involved. In a particularly stunning passage, Marciniak appears to claim that the entire output of anyone working at Duke University cannot be trusted because of the fraud committed by Duke cancer researcher Anil Potti:
I would have thought that the two words “Anil Potti” are sufficient for convincing anyone that Duke University is a poor choice for a contractor whose task it is to confirm the integrity of scientific research. 
(One wonders how far Marciniak is willing to take his guilt-by-association theme. Are the words “Cheng Yi Liang” sufficient to convince us that all FDA employees, including Marciniak, are poor choices for deciding matters relating to publicly-traded companies? Should I not comment on government activities because I’m a resident of Illinois (my two words: “Rod Blagojevich”)?)

Rather than censoring or reprimanding Marciniak, his supervisors have taken the extraordinary step of letting him publicly air his criticisms, and then they have in turn publicly criticized his methods and approach.

I have been unable to think of a similar situation at any regulatory agency. The tolerance for dissent being displayed by FDA is, I believe, completely unprecedented.

And that’s the cliffhanger for me: can the FDA’s commitment to transparency extend so far as to accommodate public disagreements about its own approval decisions? Can it do so even when the disagreements take an extremely nasty and inappropriate tone?

  • Rather than treating open debate as a good thing, will journalists jump on the drama and portray agency leadership as weak and indecisive?
  • Will the usual suspects in Congress be able to exploit this disagreement for their own political gain? How many House subcommittees will be summoning Janet Woodcock in the coming weeks?

I think what Bob Temple and Norman Stockbridge are doing is a tremendous experiment in open government. If they can pull it off, it could force other agencies to radically rethink how they go about crafting and implementing regulations. However, I also worry that it is politically simply not a viable approach, and that the agency will ultimately be seriously hurt by attacks from the media and legislators.

Where is this coming from?

As part of its recent PDUFA V commitment, the FDA put out a fascinating draft document, Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision-Making. It didn't get a lot of attention when first published back in February (few FDA documents do). However, it lays out a rather bold vision for how the FDA can acknowledge the existence of uncertainty in its evaluation of new drugs. Its proposed structure even envisions an open and honest accounting of divergent interpretations of data:
[Image caption: When they're frothing at the mouth, even Atticus doesn't let them publish a review]
A framework for benefit-risk decision-making that summarizes the relevant facts, uncertainties, and key areas of judgment, and clearly explains how these factors influence a regulatory decision, can greatly inform and clarify the regulatory discussion. Such a framework can provide transparency regarding the basis of conflicting recommendations made by different parties using the same information.
(Emphasis mine.)

Of course, the structured framework here is designed to reflect rational disagreement. Marciniak’s scattershot insults are in many ways a terrible first case for trying out a new level of transparency.

The draft framework notes that safety issues, like Avandia, are some of the major areas of uncertainty in the regulatory process. Contrast this vision of coolly and systematically addressing uncertainties with the sad reality of Marciniak’s attack:
In contrast to the prospective and highly planned studies of effectiveness, safety findings emerge from a wide range of sources, including spontaneous adverse event reports, epidemiology studies, meta-analyses of controlled trials, or in some cases from randomized, controlled trials. However, even controlled trials, where the evidence of an effect is generally most persuasive, can sometimes provide contradictory and inconsistent findings on safety as the analyses are in many cases not planned and often reflect multiple testing. A systematic approach that specifies the sources of evidence, the strength of each piece of evidence, and draws conclusions that explain how the uncertainty weighed on the decision, can lead to more explicit communication of regulatory decisions. We anticipate that this work will continue beyond FY 2013.
I hope that work will continue beyond 2013. Thoughtful, open discussions of real uncertainties are one of the most worthwhile goals FDA can aspire to, even if it means having to learn how to do so without letting the Marciniaks of the world scuttle the whole endeavor.

[Update June 6: Further bolstering the idea that the AdCom is just as much about FDA's ability to transparently manage differences of expert opinion in the face of uncertain data, CDER Director Janet Woodcock posted this note on the FDA's blog. She's pretty explicit about the bigger picture:
There have been, and continue to be, differences of opinion and scientific disputes, which is not uncommon within the agency, stemming from varied conclusions about the existing data, not only with Avandia, but with other FDA-regulated products. 
At FDA, we actively encourage and welcome robust scientific debate on the complex matters we deal with — as such a transparent approach ensures the scientific input we need, enriches the discussions, and enhances our decision-making.
I agree, and hope she can pull it off.]





Preview of Enrollment Analytics: Moving Beyond the Funnel (Shameless DIA Self-Promotion, Part 2)


Are we looking at our enrollment data in the right way?


I will be chairing a session on Tuesday on this topic, joined by a couple of great presenters (Diana Chung from Gilead and Gretchen Goller from PRA).

Here's a short preview of the session:



Hope to see you there. It should be a great discussion.

Session Details:

June 25, 1:45PM - 3:15PM

  • Session Number: 241
  • Room Number: 205B


1. Enrollment Analytics: Moving Beyond the Funnel
Paul Ivsin
VP, Consulting Director
CAHG Clinical Trials

2. Use of Analytics for Operational Planning
Diana Chung, MSc
Associate Director, Clinical Operations
Gilead

3. Using Enrollment Data to Communicate Effectively with Sites
Gretchen Goller, MA
Senior Director, Patient Access and Retention Services
PRA






Brazen Scofflaws? Are Pharma Companies Really Completely Ignoring FDAAA?

Results reporting requirements are pretty clear. Maybe critics should re-check their methods?

Ben Goldacre has rather famously described the clinical trial reporting requirements in the Food and Drug Administration Amendments Act of 2007 as a “fake fix” that was being thoroughly “ignored” by the pharmaceutical industry.

[Image caption: Pharma: breaking the law in broad daylight?]
He makes this sweeping, unconditional proclamation about the industry and its regulators on the basis of a single study in the BMJ, blithely ignoring the facts that a) the authors of the study admitted they could not adequately determine the number of studies that were meeting FDAAA requirements, and b) a subsequent FDA review identified only 15 trials potentially out of compliance, out of a pool of thousands.


Despite the fact that the FDA, which has access to more data, says that only a tiny fraction of studies are potentially noncompliant, Goldacre's frequently repeated claim that the law is being ignored seems to have caught on in the general run of journalistic and academic discussions about FDAAA.

And now there appears to be additional support for the idea that a large percentage of studies are noncompliant with FDAAA results reporting requirements, in the form of a new study in the Journal of Clinical Oncology: "Public Availability of Results of Trials Assessing Cancer Drugs in the United States" by Thi-Anh-Hoa Nguyen, et al. In it, the authors report even lower levels of FDAAA compliance – a mere 20% of randomized clinical trials met requirements of posting results on clinicaltrials.gov within one year.

Unsurprisingly, the JCO results were immediately picked up and circulated uncritically by the usual suspects.

I have to admit not knowing much about pure academic and cooperative group trial operations, but I do know a lot about industry-run trials – simply put, I find the data as presented in the JCO study impossible to believe. Everyone I work with in pharma trials is painfully aware of the regulatory environment they work in. FDAAA compliance is a given, a no-brainer: large internal legal and compliance teams are everywhere, ensuring that the letter of the law is followed in clinical trial conduct. If anything, pharma sponsors are twitchily over-compliant with these kinds of regulations (for example, most still adhere to 100% verification of source documentation – sending monitors to physically examine every single record of every single enrolled patient - even after the FDA explicitly told them they didn't have to).

I realize that’s anecdotal evidence, but when such behavior is so pervasive, it’s difficult to buy into data that says it’s not happening at all. The idea that all pharmaceutical companies are ignoring a highly visible law that’s been on the books for 6 years is extraordinary. Are they really so brazenly breaking the rules? And is FDA abetting them by disseminating incorrect information?

Those are extraordinary claims, and would seem to require extraordinary evidence. The BMJ study had clear limitations that make its implications entirely unclear. Is the JCO article any better?

Some Issues


In fact, there appear to be at least two major issues that may have seriously compromised the JCO findings:

1. Studies that were certified as being eligible for delayed reporting requirements, but do not have their certification date listed.

The study authors make what I believe to be a completely unwarranted assumption:

In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.

It’s unclear to me why the authors think the certifications “should be” publicly available. In re-reading FDAAA section 801, I don’t see any reference to that being a requirement. I suppose I could have missed it, but the authors provide a citation to a page that clearly does not list any such requirement.

But their methodology assumes that all trials that have a certification will have it posted:

If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov.

If a sponsor gets approval from FDA to delay reporting (as is routine for all drugs that are either not approved for any indication, or being studied for a new indication – i.e., the overwhelming majority of pharma drug trials), but doesn't post that approval on the registry, the JCO authors deem that trial “noncompliant”. This is not warranted: the company may have simply chosen not to post the certification despite being entirely FDAAA compliant.

2. Studies that were previously certified for delayed reporting and subsequently reported results

It is hard to tell how the authors treated this rather substantial category of trials. If a trial was certified for delayed results reporting but then subsequently published results, the certification date becomes difficult to find. Indeed, it appears that, in cases where results were posted, the authors simply looked at the time from study completion to results posting. In effect, this would re-classify almost every single one of these trials from compliant to noncompliant. Consider this example trial:


  • Phase 3 trial completes January 2010
  • Certification of delayed results obtained December 2010 (compliant)
  • FDA approval June 2013
  • Results posted July 2013 (compliant)


In looking at the JCO paper's methods section, it really appears that this trial would be classified as reporting results 3.5 years after completion, and therefore be considered noncompliant with FDAAA. In fact, this trial is entirely kosher, and would be extremely typical for many phase 2 and 3 trials in industry.
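To make the arithmetic concrete, here is a minimal sketch (the day-of-month values and variable names are my own assumptions, not the JCO authors' actual method) contrasting the two ways of classifying the example trial above:

```python
from datetime import date

# Illustrative dates for the example trial (day-of-month values are assumptions)
completion = date(2010, 1, 31)       # Phase 3 trial completes January 2010
certification = date(2010, 12, 15)   # delayed-reporting certification obtained December 2010
results_posted = date(2013, 7, 15)   # results posted July 2013, after June 2013 FDA approval

ONE_YEAR = 366  # days; a generous one-year reporting window

# Apparent approach in the JCO paper: time from completion to results posting
naive_days = (results_posted - completion).days
naive_compliant = naive_days <= ONE_YEAR  # ~3.5 years looks noncompliant

# FDAAA-aware check: a certification filed within a year defers the deadline,
# so posting results only after approval is still compliant
fdaaa_compliant = (certification - completion).days <= ONE_YEAR

print(naive_compliant, fdaaa_compliant)  # False True
```

The same trial flips from noncompliant to compliant depending solely on whether the certification date is taken into account.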

Time for Some Data Transparency


The above two concerns may, in fact, be non-issues. They certainly appear to be implied in the JCO paper, but the wording isn't terribly detailed and could easily be giving me the wrong impression.

However, if either or both of these issues are real, they may affect the vast majority of "noncompliant" trials in this study. Given that most clinical trials are looking either at new drugs or at new indications for approved drugs, these two issues may entirely explain the gap between the JCO study and the unequivocal FDA statements that contradict it.

I hope that, given the importance of transparency in research, the authors will be willing to post their data set publicly so that others can review their assumptions and independently verify their conclusions. It would be more than a bit ironic otherwise.

[Image credit: Shameless lawlessness via Flickr user willytronics.]


Thi-Anh-Hoa Nguyen, Agnes Dechartres, Soraya Belgherbi, and Philippe Ravaud (2013). Public Availability of Results of Trials Assessing Cancer Drugs in the United States JOURNAL OF CLINICAL ONCOLOGY DOI: 10.1200/JCO.2012.46.9577





Can a Form Letter from FDA "Blow Your Mind"?

Adam Feuerstein appears to be a generally astute observer of the biotech scene. As a finance writer, he's accosted daily with egregiously hyped claims from small drug companies and their investors, and I think he tends to do an excellent job of spotting cases where breathless excitement is unaccompanied by substantive information.


However, Feuerstein's healthy skepticism seems to have abandoned him last year in the case of a biotech called Sarepta Therapeutics, which released some highly promising - but also incredibly limited - data on its treatment for Duchenne muscular dystrophy. After a disappointing interaction with the FDA, Sarepta's stock dropped, and Feuerstein appeared to realize that he'd lost some objectivity on the topic.


However, with the new year comes new optimism, and Feuerstein seems to be back to squinting hard at tea leaves - this time in the case of a form letter from the FDA.


He claims that the contents of the letter will "blow your mind". To him, the key passage is:


We understand that you feel that eteplirsen is highly effective, and may be confused by what you have read or heard about FDA's actions on eteplirsen. Unfortunately, the information reported in the press or discussed in blogs does not necessarily reflect FDA's position. FDA has reached no conclusions about the possibility of using accelerated approval for any new drug for the treatment of Duchenne muscular dystrophy, and for eteplirsen in particular.


Feuerstein appears to think that the fact that FDA "has reached no conclusions" may mean that it may be "changing its mind". To which he adds: "Wow!"
[Image caption: Adam Feuerstein: This time, too much froth, not enough coffee?]


I'm not sure why he thinks that. As far as I can tell, the FDA will never reach a conclusion like this before it's gone through the actual review process. After all, if FDA already knows the answer before the full review, what would the point of the review even be? It would seem a tremendous waste of agency resources. Not to mention how non-level the playing field would be if some companies were given early yes/no decisions while others had to go through a full review.


It seems fair to ask: is this a substantive change by FDA review teams, or would it be their standard response to any speculation about whether and how they would approve or reject a new drug submission? Can Feuerstein point to other cases where FDA has given a definitive yes or no on an application before the application was ever filed? I suspect not, but am open to seeing examples.


A more plausible theory for this letter is that the FDA is attempting a bit of damage control. It is not permitted to share anything specific it said or wrote to Sarepta about the drug, and has come under some serious criticism for “rejecting” Sarepta’s Accelerated Approval submission. The agency has been sensitive to the DMD community, even going so far as to have Janet Woodcock and Bob Temple meet with DMD parents and advocates last February. Sarepta has effectively positioned FDA as the reason for its delay in approval, but no letters have actually been published, so the conversation has been a bit one-sided. This letter appears to be an attempt at balancing perspectives a bit, although the FDA is still hamstrung by its restriction on relating any specific communications.

Ultimately, this is a form letter that contains no new information: FDA has reached no conclusions because FDA is not permitted to reach conclusions until it has completed a fair and thorough review, which won't happen until the drug is actually submitted for approval.

We talk about "transparency" in terms of releasing clinical trial data, but to me there is a great case to be made for increased regulatory transparency. Routine publication of most FDA correspondence and meeting results (including such things as Complete Response letters, which explain FDA's thinking when it rejects new applications) would go a long way towards improving public understanding of the drug review and approval process.





Waiver of Informed Consent - proposed changes in the 21st Century Cures Act

Adam Feuerstein points out - and expresses considerable alarm over - an overlooked clause in the 21st Century Cures Act:


In another tweet, he suggests that the act will "decimate" informed consent in drug trials. Subsequent responses and retweets did nothing to clarify the situation, and if anything tended to spread, rather than address, Feuerstein's confusion.

Below is a quick recap of the current regulatory context and a real-life example of where the new wording may be helpful. In short, though, I think it's safe to say:


  1. Waiving informed consent is not new; it's already permitted under current regs
  2. The standards for obtaining a waiver of consent are stringent
  3. They may, in fact, be too stringent in a small number of situations
  4. The act may, in fact, be helpful in those situations
  5. Feuerstein may, in fact, need to chill out a little bit


(For the purposes of this discussion, I’m talking about drug trials, but I believe the device trial situation is parallel.)

Section 505(i) - the section this act proposes to amend - instructs the Secretary of Health and Human Services to promulgate rules regarding clinical research. Subsection 4 addresses informed consent:

…the manufacturer, or the sponsor of the investigation, requir[e] that experts using such drugs for investigational purposes certify to such manufacturer or sponsor that they will inform any human beings to whom such drugs, or any controls used in connection therewith, are being administered, or their representatives, that such drugs are being used for investigational purposes and will obtain the consent of such human beings or their representatives, except where it is not feasible or it is contrary to the best interests of such human beings.

[emphasis  mine]

Note that this section already recognizes situations where informed consent may be waived for practical or ethical reasons.

These rules were in fact promulgated under 45 CFR part 46, section 116. The relevant bit – as far as this conversation goes – regards circumstances under which informed consent might be fully or partially waived. Specifically, there are 4 criteria, all of which need to be met:

 (1) The research involves no more than minimal risk to the subjects;
 (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
 (3) The research could not practicably be carried out without the waiver or alteration; and
 (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

In practice, this is an especially difficult set of criteria to meet for most studies. Criterion (1) rules out most “conventional” clinical trials, because the hallmarks of those trials (use of an investigational medicine, randomization of treatment, blinding of treatment allocation) are all deemed to be more than “minimal risk”. That leaves observational studies – but even many of these cannot clear the bar of criterion (3).
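Put schematically, the four criteria operate as a strict conjunction; the function and argument names below are mine, purely for illustration:

```python
# Sketch of the 45 CFR 46.116 waiver test: all four criteria must hold
# for an IRB to grant a waiver of informed consent (names are illustrative).
def waiver_permitted(minimal_risk: bool,
                     rights_and_welfare_unaffected: bool,
                     not_practicable_without_waiver: bool,
                     debrief_when_appropriate: bool) -> bool:
    return all([minimal_risk,
                rights_and_welfare_unaffected,
                not_practicable_without_waiver,
                debrief_when_appropriate])

# A minimal-risk observational study where the IRB judges consent
# "practicable" fails criterion (3), so the waiver is denied:
print(waiver_permitted(True, True, False, True))  # False
```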

That word “practicably” is a doozy.

Here’s an all-too-real example from recent personal experience. A drug manufacturer wants to understand physicians’ rationales for performing a certain procedure. It seems – but there is little hard data – that a lot of physicians do not strictly follow guidelines on when to perform the procedure. So we devise a study: whenever the procedure is performed, we ask the physician to complete a quick form categorizing why they made their decision. We also ask him or her to transcribe a few pieces of data from the patient chart.

Even though the patients aren’t personally identifiable, the collection of medical data qualifies this as a clinical trial.

It’s a minimal risk trial, definitely: the trial doesn’t dictate at all what the doctor should do, it just asks them to record what they did and why, and to supply a bit of medical context for the decision. All told, we estimated 15 minutes of physician time to complete the form.

The IRB monitoring the trial, however, denied our request for a waiver of informed consent, since it was “practicable” (not easy, but possible) to obtain informed consent from the patient.  Informed consent – even with a slimmed-down form – was going to take a minimum of 30 minutes, so the length of the physician’s involvement tripled. In addition, many physicians opted out of the trial because they felt that the informed consent process added unnecessary anxiety and alarm for their patients, and provided no corresponding benefit.

The end result was not surprising: the budget for the trial more than doubled, and enrollment was far below expectations.

Which leads to two questions:

1. Did the informed consent appreciably help a single patient in the trial? Very arguably, no. Consenting to being “in” the trial made zero difference in the patients’ care, added time to their stay in the clinic, and possibly added to their anxiety.
2. Was less knowledge collected as a result? Absolutely, yes. The sponsor could have run two studies for the same cost. Instead, they ultimately reduced the power of the trial in order to cut losses.


Bottom line, it appears that the modifications proposed in the 21st Century Cures Act really only target trials like the one in the example. The language clearly retains criteria 1 and 2 of the current HHS regs, which are the most important from a patient safety perspective, but cuts down the “practicability” requirement, potentially permitting high quality studies to be run with less time and cost.

Ultimately, it looks like a very small, but positive, change to the current rules.

The rest of the act appears to be a mash-up of some very good and some very bad (or at least not fully thought out) ideas. However, this clause should not be cause for alarm.





Will Your Family Make You a Better Trial Participant?

It is becoming increasingly accepted within the research community that patient engagement leads to a host of positive outcomes – most importantly (at least practically speaking) improved clinical trial recruitment and retention.

But while we can all agree that "patient engagement is good" in a highly general sense, we don't have much consensus on what the implications of that idea might be. There is precious little hard evidence about how to either attract engaged patients, or how we might effectively turn "regular patients" into "engaged patients".

That latter point - that we could improve trial enrollment and completion rates by converting the (very large) pool of less-engaged patients - is a central tenet of the mHealth movement in clinical trials. Since technology can now accompany us almost anywhere, it would seem that we have an unprecedented opportunity to reach out and connect with current and potential trial participants.

However, there are signs that this promised revolution in patient engagement hasn't come about. From the decline in downloads of new apps to the startlingly high rate at which people abandon their wearable health devices, there's a growing body of evidence suggesting that we aren't in fact making very good progress towards increasing engagement. We appear to have underestimated the inertia of the disengaged patient.

So what can we do? We know people like their technology, but if they're not using it to engage with their healthcare decisions, we're no better off as a result.

Daniel Calvert, in a recent blog post at Parallel 6 offers an intriguing solution: he suggests we go beyond the patient and engage their wider group of loved ones. By engaging what Calvert calls the Support Circle - those people most likely to "encourage the health and well being of that patient as they undergo a difficult period of their life" - trial teams will find themselves with a more supported, and therefore more engaged, participant, with corresponding benefits to enrollment and retention. 

Calvert outlines a number of potential mechanisms to get spouses, children, and other loved ones involved in the trial process:
During the consent process the patient can invite their support team in with them. A mobile application can be put on their phones enabling encouraging messages, emails, and texts to be sent. Loved ones can see if their companion or family member did indeed take today’s medication or make last Monday’s appointment. Gamification offers badges or pop-ups: “Two months of consecutive appointments attended” or “perfect eDiary log!” Loved ones can see those notifications, like/comment, and constantly encourage the patients. 
Supporting materials can also be included in the Support Circle application. There are a host of unknown terms to patients and their team. Glossaries, videos, FAQs, contact now, and so much more can be made available at their fingertips.
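Mechanically, none of this is exotic. Here is a minimal sketch of the adherence-streak and notification logic such an app might use; all names, rules, and thresholds below are hypothetical illustrations, not taken from Parallel 6's actual product:

```python
from datetime import date, timedelta

def adherence_streak(log_dates, today):
    """Count consecutive days (ending today) on which a dose was logged."""
    streak = 0
    day = today
    while day in log_dates:
        streak += 1
        day -= timedelta(days=1)
    return streak

def circle_notifications(patient, log_dates, today, badge_every=30):
    """Build the messages a Support Circle member would see today."""
    msgs = []
    streak = adherence_streak(log_dates, today)
    if today in log_dates:
        msgs.append(f"{patient} took today's medication.")
    if streak and streak % badge_every == 0:
        msgs.append(f"Badge earned: {streak} consecutive days!")
    return msgs

# 30 straight days of logged doses ending March 30
logs = {date(2017, 3, 1) + timedelta(days=i) for i in range(30)}
print(circle_notifications("Pat", logs, date(2017, 3, 30)))
```

The interesting design question is not the code but whether loved ones seeing these notifications actually changes patient behavior, which is exactly the evidence gap discussed below.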
I have to admit I'm fascinated by Calvert's idea. I want him to be right: the picture of supportive, encouraging, loving spouses and children standing by to help a patient get through a clinical trial is an attractive one. So is the idea that they're just waiting for us to include them - all we need to do is a bit of digital communication with them to get them fully on board as members of the study team.

The problem, however, remains: we have absolutely no evidence that this approach will work. There is no data showing that it is superior to other approaches to engage trial patients.

(In fact, we may even have some indirect evidence that it could hinder enrollment: in trials that require active caregiver participation, such as those in Alzheimer's disease, the burden placed on caregivers is believed to be a contributing barrier to patient enrollment.)

Calvert's idea is a good one, and it's worthy of consideration. More importantly, it's worthy of being rigorously tested against other recruitment and retention approaches. We have a lot of cool new technologies, and even more great ideas - we're not lacking for those. What we're lacking is hard data showing us how these things perform. What we especially need is comparative data showing how new tactics work relative to other approaches.

Over 5 years ago, I wrote a blog post bemoaning the sloppy approaches we take in trial recruitment - a fact made all the more painfully ironic by the massive intellectual rigor of the trials themselves. I'm not at all sure that we've made any real progress in those 5 years.

In my next post, I'll outline what I believe are some of the critical steps we need to take to improve the current situation, and start bringing some solid evidence to the table along with our ideas.

[Photo credit: Flickr user Matthew G, "Love (of technology)"]







an

Establishing efficacy - without humans?

The decade following passage of FDAAA has been one of easing standards for drug approvals in the US, most notably with the advent of “breakthrough” designation created by FDASIA in 2012 and the 21st Century Cures Act in 2016.

Although, as of this writing, there is no nominee for FDA Commissioner, it appears to be safe to say that the current administration intends to accelerate the pace of deregulation, mostly through further lowering of approval requirements. In fact, some of the leading contenders for the position are on record as supporting a return to pre-Kefauver-Harris days, when drug efficacy was not even considered for approval.

Build a better mouse model, and pharma will
beat a path to your door - no laws needed.

In this context, it is at least refreshing to read a proposal to increase efficacy standards. This comes from two bioethicists at McGill University, who make the somewhat-startling case for a higher degree of efficacy evaluation before a drug begins any testing in humans.
We contend that a lack of emphasis on evidence for the efficacy of drug candidates is all too common in decisions about whether an experimental medicine can be tested in humans. We call for infrastructure, resources and better methods to rigorously evaluate the clinical promise of new interventions before testing them on humans for the first time.
The authors propose some sort of centralized clearinghouse to evaluate efficacy more rigorously. It is unclear what standards they envision this new multispecialty review body applying when green-lighting a drug to enter human testing. Instead, they propose three questions:
  • What is the likelihood that the drug will prove clinically useful?
  • Assume the drug works in humans. What is the likelihood of observing the preclinical results?
  • Assume the drug does not work in humans. What is the likelihood of observing the preclinical results?
These seem like reasonable questions, I suppose – and are likely questions that are already being asked of preclinical data. They certainly do not rise to the level of providing a clear standard for regulatory approval, though perhaps it’s a reasonable place to start.
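Read together, the three questions are the ingredients of a Bayesian update: the first supplies a prior, and the second and third supply a likelihood ratio for the preclinical data. A minimal sketch, with purely illustrative numbers:

```python
def posterior_clinical_promise(prior, p_data_if_works, p_data_if_not):
    """Combine the reviewers' three questions via Bayes' rule.
    prior            -- Q1: P(drug proves clinically useful)
    p_data_if_works  -- Q2: P(observed preclinical results | drug works)
    p_data_if_not    -- Q3: P(observed preclinical results | drug doesn't work)
    Returns P(drug works | preclinical results)."""
    num = prior * p_data_if_works
    return num / (num + (1 - prior) * p_data_if_not)

# Illustrative only: a 10% prior and a 3:1 likelihood ratio
print(round(posterior_clinical_promise(0.10, 0.60, 0.20), 3))  # 0.25
```

Even a 3:1 likelihood ratio only moves a 10% prior to a 25% posterior, which underscores how weakly typical preclinical evidence constrains clinical promise.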

The most obvious counterargument here is one that the authors curiously don’t pick up on at all: if we had the ability to accurately (or even semiaccurately) predict efficacy preclinically, pharma sponsors would already be doing it. The comment notes: “More-thorough assessments of clinical potential before trials begin could lower failure rates and drug-development costs.” And it’s hard not to agree: every pharmaceutical company would love to have even an incrementally-better sense of whether their early pipeline drugs will be shown to work as hoped.

The authors note
Commercial interests cannot be trusted to ensure that human trials are launched only when the case for clinical potential is robust. We believe that many FIH studies are launched on the basis of flimsy, underscrutinized evidence.
However, they do not produce any evidence that industry is in any way deliberately underperforming their preclinical work, merely that preclinical efficacy is often difficult to reproduce and is poorly correlated with drug performance in humans.

Pharmaceutical companies have many times more candidate compounds than they can possibly afford to put into clinical trials. Figuring out how to lower failure rates – or at least the total cost of failure - is a prominent industry obsession, and efficacy remains the largest source of late-stage trial failure. This quest to “fail faster” has resulted in larger and more expensive phase 2 trials, and even to increased efficacy testing in some phase 1 trials. And we do this not because of regulatory pressure, but because of hopes that these efforts will save overall costs. So it seems beyond probable that companies would immediately invest more in preclinical efficacy testing, if such testing could be shown to have any real predictive power. But generally speaking, it does not.
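The economics here are easy to make concrete. The toy two-stage model below (all probabilities and costs are hypothetical, in arbitrary $M units) shows why any preclinical screen with real predictive power would pay for itself without regulatory prompting:

```python
def expected_cost_per_approval(p_eff, phase2_cost, phase3_cost, p_other=0.8):
    """Expected trial spend per approved drug in a simple two-stage model.
    p_eff   -- fraction of clinical candidates that are truly efficacious
    p_other -- success probability from non-efficacy causes
    Efficacy failures are assumed to be caught in phase 2."""
    p_approval = p_eff * p_other
    # every candidate pays for phase 2; only efficacious ones reach phase 3
    expected_spend = phase2_cost + p_eff * phase3_cost
    return expected_spend / p_approval

# Illustrative: a screen that doubles the fraction of truly active drugs
# entering the clinic cuts cost per approval by roughly a quarter.
baseline = expected_cost_per_approval(0.3, 50, 150)
screened = expected_cost_per_approval(0.6, 50, 150)
print(round(baseline), round(screened))  # 396 292
```

That kind of saving is exactly what sponsors already chase; the model just makes visible how directly better preclinical prediction would translate into money.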

As a general rule, we don't need regulations that are firmly aligned with market incentives; we need regulations if and when we think those incentives might run counter to the general good. In this case, there are already incredibly strong market incentives to improve preclinical assessments. Where companies have already tried and met with limited success, it seems quixotic to think that regulatory fiat will accomplish more.

(One further point. The authors try to link the need for preclinical efficacy testing to the 2016 Bial tragedy. This seems incredibly tenuous: the authors speculate that perhaps trial participants would not have been harmed and killed if Bial had been required to produce more evidence of BIA 10-2474's clinical efficacy before embarking on their phase 1 trials. But that would have been entirely coincidental in this case: even if the drug had shown more evidence of therapeutic promise, the tragedy still would have happened, because it had nothing at all to do with the drug's efficacy.

This is to some extent a minor nitpick, since the argument in favor of earlier efficacy testing does not depend on a link to Bial. However, I bring it up because a) the authors dedicate the first four paragraphs of their comment to the link, and b) there appears to be a minor trend of using the death and injuries of that trial to justify an array of otherwise-unrelated initiatives. This seems like a trend we should discourage.)

[Update 2/23: I posted this last night, not realizing that only a few hours earlier, John LaMattina had published on this same article. His take is similar to mine, in that he is suspicious of the idea that pharmaceutical companies would knowingly push ineffective drugs up their pipeline.]

Kimmelman, J., & Federico, C. (2017). Consider drug efficacy before first-in-human trials Nature, 542 (7639), 25-27 DOI: 10.1038/542025a




an

The Streetlight Effect and 505(b)(2) approvals

It is a surprisingly common peril among analysts: we don’t have the data to answer the question we’re interested in, so we answer a related question where we do have data. Unfortunately, the new answer turns out to shed no light on the original interesting question.

This is sometimes referred to as the Streetlight Effect – a phenomenon aptly illustrated by Mutt and Jeff over half a century ago:


This is the situation that the Tufts Center for the Study of Drug Development seems to have gotten itself into in its latest "Impact Report".  It’s worth walking through the process of how an interesting question ends up in an uninteresting answer.

So, here’s an interesting question:
My company owns a drug that may be approvable through FDA’s 505(b)(2) pathway. What is the estimated time and cost difference between pursuing 505(b)(2) approval and conventional approval?
That’s "interesting", I suppose I should add, for a certain subset of folks working in drug development and commercialization. It’s only interesting to that peculiar niche, but for those people I suspect it’s extremely interesting - because it is a real situation that a drug company may find itself in, and there are concrete consequences to the decision.

Unfortunately, this is also a really difficult question to answer. As phrased, you'd almost need a randomized trial to answer it. Let’s create a version which is less interesting but easier to answer:
What are the overall development time and cost differences between drugs seeking approval via 505(b)(2) and conventional pathways?
This is much easier to answer, as pharmaceutical companies could look back on development times and costs of all their compounds, and directly compare the different types. It is, however, a much less useful question. Many new drugs are simply not eligible for 505(b)(2) approval. If those drugs are substantially different in any way (riskier, more novel, etc.), then they will change the comparison in highly non-useful ways. In fact, in 2014, only 1 drug classified as a New Molecular Entity (NME) went through 505(b)(2) approval, versus 32 that went through conventional approval. And in fact, there are many qualities that set 505(b)(2) drugs apart.

[Image: Extreme qualitative differences of 505(b)(2) drugs. Source: Thomson Reuters analysis via RAPS]

So we’re likely to get a lot of confounding factors in our comparison, and it’s unclear how the answer would (or should) guide us if we were truly trying to decide which route to take for a particular new drug. It might help us if we were trying to evaluate a large-scale shift to prioritizing 505(b)(2) eligible drugs, however.

Unfortunately, even this question is apparently too difficult to answer. Instead, the Tufts CSDD chose to ask and answer yet another variant:
What is the difference in time that it takes the FDA for its internal review process between 505(b)(2) and conventionally-approved drugs?
This question has the supreme virtue of being answerable. In fact, I believe that all of the data you'd need is contained within the approval letter that FDA publishes for each new approved drug.

But at the same time, it isn’t a particularly interesting question anymore. The promise of the 505(b)(2) pathway is that it should reduce total development time and cost, but on both those dimensions, the report appears to fall flat.
  • Cost: This analysis says nothing about reduced costs – those savings would mostly come in the form of fewer clinical trials, and this focuses entirely on the FDA review process.
  • Time: FDA review and approval is only a fraction of a drug’s journey from patent to market. In fact, it often takes up less than 10% of the time from initial IND to approval. So any differences in approval times will likely easily be overshadowed by differences in time spent in development. 
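The arithmetic behind that second bullet is worth making explicit. With illustrative figures (not taken from the report) of roughly ten years from IND to approval and ten months of FDA review, review is well under 10% of the journey, and a month or two of difference in it barely moves the total:

```python
def share_of_timeline(review_months, development_months):
    """Fraction of the total IND-to-approval timeline spent in FDA review."""
    return review_months / (review_months + development_months)

# Hypothetical round numbers: ~110 months in development, ~10 in review
dev, review = 110, 10
print(f"review share: {share_of_timeline(review, dev):.0%}")
# even a 2-month swing in review time moves the total timeline very little
print(f"2-month swing: {2 / (dev + review):.1%} of the total")
```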
But even more fundamentally, the problem here is that this study gives the appearance of providing an answer to our original question, but in fact is entirely uninformative in this regard. The accompanying press release states:
The 505(b)(2) approval pathway for new drug applications in the United States, aimed at avoiding unnecessary duplication of studies performed on a previously approved drug, has not led to shorter approval times.
This is more than a bit misleading. The 505(b)(2) statute does not in any way address approval timelines – that's not its intent. So showing that it hasn't led to shorter approval times is less of an insight than it is a natural consequence of the law as written.

Most importantly, showing that 505(b)(2) drugs had a longer average approval time than conventionally-approved drugs in no way should be interpreted as adding any evidence to the idea that those drugs were slowed down by the 505(b)(2) process itself. Because 505(b)(2) drugs are qualitatively different from other new molecules, this study can’t claim that they would have been developed faster had their owners initially chosen to go the route of conventional approval. In fact, such a decision might have resulted in both increased time in trials and increased approval time.
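This is classic confounding, and a toy simulation makes it vivid. Below, an invented "complexity" property drives both the pathway a drug takes and its review time; the pathway itself has zero causal effect, yet the naive comparison of means makes 505(b)(2) look about four months slower (all numbers are fabricated for illustration):

```python
import random

random.seed(0)

def simulate(n=10_000):
    """Confounding demo: a latent drug property determines both pathway
    choice and review time; the pathway adds nothing causally."""
    times = {"505(b)(2)": [], "conventional": []}
    for _ in range(n):
        complexity = random.random()                      # latent property
        pathway = "505(b)(2)" if complexity > 0.5 else "conventional"
        review_time = 10 + 8 * complexity + random.gauss(0, 1)  # months
        times[pathway].append(review_time)
    return {k: sum(v) / len(v) for k, v in times.items()}

means = simulate()
print(means)  # 505(b)(2) appears ~4 months slower despite no causal effect
```

A naive analyst would conclude the pathway slows drugs down; the only honest conclusion is that different kinds of drugs take different pathways.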

This study simply is not designed to provide an answer to the truly interesting underlying question.

[Disclosure: the above review is based entirely on a CSDD press release and summary page. The actual report costs $125, which is well in excess of this blog's expense limit. It is entirely possible that the report itself contains more-informative insights, and I'll happily update this post if that should come to my attention.]




an

For good sleep and good health, regulate your exposure to light

Your daily light exposure impacts your health. A new study finds that too much light at night and not enough natural light during the day can be harmful. This story first aired on Morning Edition on Nov. 4, 2024.




an

A human bird flu case is thought to be found in Canada for the first time

A person has tested positive in British Columbia, Canadian health officials said, though the results must be sent to another lab for confirmation.




an

What does a 2nd Trump term mean for the Affordable Care Act?

President-elect Donald Trump tried unsuccessfully to get rid of the Affordable Care Act during his first term. What action will he take this time around?




an

More young people are surviving cancer. Then they face a life altered by it

More people are getting cancer in their 20s, 30s, and 40s, and surviving, thanks to rapid advancement in care. Many will have decades of life ahead of them, which means they face greater and more complex challenges in survivorship. Lourdes Monje is navigating these waters at age 29.




an

Patrick Dempsey aims to raise awareness of cancer disparities and encourage screening

NPR's Leila Fadel talks with actor Patrick Dempsey about his efforts to raise money for cancer treatment and prevention.




an

Remarkably resilient refugees: A teen on his own, a woman who was raped

Sudan's civil war has displaced 10 million citizens. Here are profiles of two young people from the most vulnerable groups: an unaccompanied minor caring for twin brothers, a woman who was raped.








an

MRI Sheds Its Shielding and Superconducting Magnets



Magnetic resonance imaging (MRI) has revolutionized healthcare by providing radiation-free, non-invasive 3-D medical images. However, MRI scanners often consume 25 kilowatts or more to power magnets producing magnetic fields up to 1.5 tesla. These requirements typically limit scanners' use to specialized centers and departments in hospitals.

A University of Hong Kong team has now unveiled a low-power, highly simplified, full-body MRI device. With the help of artificial intelligence, the new scanner only requires a compact 0.05 T magnet and can run off a standard wall power outlet, requiring only 1,800 watts during operation. The researchers say their new AI-enabled machine can produce clear, detailed images on par with those from high-power MRI scanners currently used in clinics, and may one day help greatly improve access to MRI worldwide.

To generate images, MRI applies a magnetic field to align the poles of the body’s protons in the same direction. An MRI scanner then probes the body with radio waves, knocking the protons askew. When the radio waves turn off, the protons return to their original alignment, transmitting radio signals as they do so. MRI scanners receive these signals, converting them into images.
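The radio frequency involved scales linearly with field strength via the Larmor relation f = γ·B0, with γ ≈ 42.58 MHz/T for protons. That is why a 0.05 T scanner operates near 2 MHz while a 1.5 T scanner operates near 64 MHz:

```python
GAMMA_PROTON = 42.577  # MHz per tesla, proton gyromagnetic ratio

def larmor_mhz(b0_tesla):
    """Resonant (Larmor) frequency of hydrogen protons at field B0."""
    return GAMMA_PROTON * b0_tesla

# field strengths mentioned in this article: ULF, clinical, next-generation
for b0 in (0.05, 1.5, 7.0):
    print(f"{b0:>4} T -> {larmor_mhz(b0):7.2f} MHz")
```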

More than 150 million MRI scans are conducted worldwide annually, according to the Organization for Economic Cooperation and Development. However, despite five decades of development, clinical MRI procedures remain out of reach for more than two-thirds of the world’s population, especially in low- and middle-income countries. For instance, whereas the United States has 40 scanners per million inhabitants, in 2016 there were only 84 MRI units serving West Africa’s population of more than 370 million.

This disparity largely stems from the high costs and specialized settings required for standard MRI scanners. They use powerful superconducting magnets that require a lot of space, power, and specialized infrastructure. They also need rooms shielded from radio interference, further adding to hardware costs, restricting their mobility, and hampering their availability in other medical settings.

Scientists around the globe have already been exploring low-cost MRI scanners that operate at ultra-low-field (ULF) strengths of less than 0.1 T. These devices may consume much less power and prove potentially portable enough for bedside use. Indeed, as the Hong Kong team notes, MRI development initially focused on low fields of about 0.05 T, until the introduction of the first whole-body 1.5 T superconducting scanner by General Electric in 1983.

The new MRI scanner (top left) is smaller than conventional scanners, and does away with bulky RF shielding and superconducting magnetics. The new scanner’s imaging resolution is on par with conventional scanners (bottom).Ed X. Wu/The University of Hong Kong

Current ULF MRI scanners often rely on AI to help reconstruct images from the signals they gather using relatively weak magnetic fields. However, until now, these devices were limited to imaging only the brain, extremities, or single organs, Udunna Anazodo, an assistant professor of neurology and neurosurgery at McGill University in Montreal who did not take part in the work, notes in a review of the new study.

The Hong Kong team has now developed a whole-body ULF MRI scanner in which patients are placed between two permanent neodymium iron boron magnet plates—one above the body and the other below. Although these permanent magnets are far weaker than superconducting magnets, they are low-cost, readily available, and don't require liquid-helium cooling to superconducting temperatures. In addition, the amount of energy ULF MRI scanners deposit into the body is roughly one-thousandth that from conventional scanners, making heat generation during imaging much less of a concern, Anazodo notes in her review. ULF MRI is also much quieter than regular MRI, which may help with pediatric scanning, she adds.

The new machine consists of two units, each roughly the size of a hospital gurney. One unit houses the MRI device, while the other supports the patient’s body as it slides into the scanner.

To account for radio interference from both the outside environment and the ULF MRI’s own electronics, the scientists deployed 10 small sensor coils around the scanner and inside the electronics cabinet to help the machine detect potentially disruptive radio signals. They also employed deep learning AI methods to help reconstruct images even in the presence of strong noise. They say this eliminates the need for shielding against radio waves, making the new device far more portable than conventional MRI.
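The reference-coil idea can be illustrated with a much simpler classical technique than the team's deep-learning reconstruction: fit the primary channel as a linear mix of the reference coils and subtract the fitted part, on the assumption that the wanted signal is uncorrelated with the interference. The sketch below uses synthetic data and is only an analogy for what the actual system does:

```python
import numpy as np

rng = np.random.default_rng(42)

def cancel_interference(primary, references):
    """Least-squares interference removal: estimate how the reference
    coils couple into the primary channel and subtract that component."""
    w, *_ = np.linalg.lstsq(references, primary, rcond=None)
    return primary - references @ w

t = np.linspace(0, 1, 2000)
mri = 0.1 * np.sin(2 * np.pi * 85 * t)          # toy "wanted" signal
noise_srcs = rng.standard_normal((2000, 3))      # external interference
primary = mri + noise_srcs @ np.array([1.5, -0.7, 0.4])
cleaned = cancel_interference(primary, noise_srcs)

# interference residual drops by orders of magnitude after subtraction
print(np.std(primary - mri), np.std(cleaned - mri))
```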

In tests on 30 healthy volunteers, the device captured detailed images of the brain, spine, abdomen, heart, lung, and extremities. Scanning each of these targets took eight minutes or less at voxel sizes of roughly 2 by 2 by 8 millimeters. In Anazodo's review, she notes the new machine produced image qualities comparable to those of conventional MRI scanners.

“It’s the beginning of a multidisciplinary endeavor to advance an entirely new class of simple, patient-centric and computing-powered point-of-care diagnostic imaging device,” says Ed Wu, a professor and chair of biomedical engineering at the University of Hong Kong.

The researchers used standard off-the-shelf electronics. All in all, they estimate hardware costs at about US $22,000. (According to imaging equipment company Block Imaging in Holt, Michigan, entry-level MRI scanners start at $225,000, and advanced premium machines can cost $500,000 or more.)

The prototype scanner’s magnet assembly is relatively heavy, weighing about 1,300 kilograms. (This is still lightweight compared to a typical clinical MRI scanner, which can weigh up to 17 tons, according to New York University’s Langone Health center.) The scientists note that optimizing the hardware could reduce the magnet assembly’s weight to about 600 kilograms, which would make the entire scanner mobile.

The researchers note their new device is not meant to replace conventional high-magnetic-field MRI. For instance, a 2023 study notes that next-generation MRI scanners using powerful 7 T magnets could yield a resolution of just 0.35 millimeters. Instead, ULF MRI can complement existing MRI by going to places that can’t host standard MRI devices, such as intensive care units and community clinics.

In an email, Anazodo adds that this new Hong Kong work is just one of a number of exciting ULF MRI scanners under development. For instance, she notes that Gordon Sarty at the University of Saskatchewan and his colleagues are developing a device that is potentially even lighter, cheaper, and more portable than the Hong Kong machine, which they are researching for use in whole-body imaging on the International Space Station.

Wu and his colleagues detailed their findings online 10 May in the journal Science.

This article appears in the July 2024 print issue as “Compact MRI Ditches Superconducting Magnets.”




an

Noise Cancellation for Your Brain



Elemind, a 5-year-old startup based in Cambridge, Mass., today unveiled a US $349 wearable for neuromodulation, the company’s first product. According to cofounder and CEO Meredith Perry, the technology tracks the oscillation of brain waves using electroencephalography (EEG) sensors that detect the electrical activity of the brain and then influence those oscillations using bursts of sound delivered via bone conduction.

Elemind’s first application for this wearable aims to suppress alpha waves to help induce sleep. There are other wearables on the market that monitor brain waves and, through biofeedback, encourage users to actively modify their alpha patterns. Elemind’s headband appears to be the first device to use sound to directly influence the brain waves of a passive user.

In a clinical trial, says Perry [no relation to author], 76 percent of subjects fell asleep more quickly. Those who did see a difference averaged 48 percent less time to progress from awake to asleep. The results were similar to those of comparable trials of pharmaceutical sleep aids, Perry indicated.

“For me,” Perry said, “it cuts through my rumination, quiets my thinking. It’s like noise cancellation for the brain.”

I briefly tested Elemind’s headband in May. I found it comfortable, with a thick cushioned band that sits across the forehead connected to a stretchy elastic loop to keep it in place. In the band are multiple EEG electrodes, a processor, a three-axis accelerometer, a rechargeable lithium-polymer battery, and custom electronics that gather the brain’s electrical signals, estimate their phase, and generate pink noise through a bone-conduction speaker. The whole thing weighs about 60 grams—about as much as a small kiwi fruit.

My test conditions were far from optimal for sleep: early afternoon, a fairly bright conference room, a beanbag chair as bed, and a vent blowing. And my test lasted just 4 minutes. I can say that I didn’t find the little bursts of pink noise (white noise without the higher frequencies) unpleasant. And since I often wear an eye mask, feeling fabric on my face wasn’t disturbing. It wasn’t the time or place to try for sound sleep, but I—and the others in the room—noted that after 2 minutes I was yawning like crazy.

How Elemind tweaks brain waves

What was going on in my brain? Briefly, different brain states are associated with different frequencies of waves. Someone who is relaxed with eyes closed but not asleep produces alpha waves at around 10 hertz. As they drift off to sleep, the alpha waves are supplanted by theta waves, at around 5 Hz. Eventually, the delta waves of deep sleep show up at around 1 Hz.

Ryan Neely, Elemind’s vice president of science and research, explains: “As soon as you put the headband on,” he says, “the EEG system starts running. It uses straightforward signal processing with bandpass filtering to isolate the activity in the 8- to 12-Hz frequency range—the alpha band.”

“Then,” Neely continues, “our algorithm looks at the filtered signal to identify the phase of each oscillation and determines when to generate bursts of pink noise.”
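That pipeline (bandpass to the alpha band, then instantaneous phase estimation) can be sketched offline in a few lines. This is a textbook FFT-based version on synthetic data; Elemind's real-time, causal algorithm is necessarily more sophisticated and is not public:

```python
import numpy as np

FS = 250  # Hz, a typical EEG sampling rate (illustrative)

def alpha_phase(eeg, lo=8.0, hi=12.0):
    """Isolate the 8-12 Hz alpha band and return its instantaneous phase.
    Keeping only the positive alpha-band FFT bins (doubled) yields the
    analytic band-limited signal, whose angle is the phase."""
    n = eeg.size
    freqs = np.fft.fftfreq(n, d=1 / FS)
    spec = np.fft.fft(eeg)
    keep = (freqs >= lo) & (freqs <= hi)
    analytic = np.fft.ifft(np.where(keep, 2 * spec, 0))
    return np.angle(analytic)

# synthetic 10 Hz "alpha" rhythm plus noise; troughs sit at phase +/- pi
t = np.arange(0, 4, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)
phase = alpha_phase(eeg)
troughs = np.where(np.abs(np.abs(phase) - np.pi) < 0.2)[0]
print(f"{troughs.size} near-trough samples found in 4 s of toy EEG")
```

In the real device the phase estimate (plus the measured stimulus-to-response delay) decides when to fire each burst of pink noise.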

To help a user fall asleep more quickly [top], bursts of pink noise are timed to generate a brain response that is out of phase with alpha waves and so suppresses them. To enhance deep sleep [bottom], the pink noise is timed to generate a brain response that is in phase with delta waves.Source: Elemind

These auditory stimuli, he explains, create ripples in the waves coming from the brain. Elemind’s system tries to align these ripples with a particular phase in the wave. Because there is a gap between the stimulus and the evoked response, Elemind tested its system on 21 people and calculated the average delay, taking that into account when determining when to trigger a sound.

To induce sleep, Elemind’s headband targets the trough in the alpha wave, the point at which the brain is most excitable, Neely says.

“You can think of the alpha rhythm as a gate for communication between different areas of the brain,” he says. “By interfering with that communication, that coordination between different brain areas, you can disrupt patterns, like the ruminations that keep you awake.”

With these alpha waves suppressed, Neely says, the slower oscillations, like the theta waves of light sleep, take over.

Elemind doesn’t plan to stop there. The company plans to add an algorithm that addresses delta waves, the low-frequency 0.5- to 2-Hz waves characteristic of deep sleep. Here, Elemind’s technology will attempt to amplify this pattern with the intent of improving sleep quality.

Is this safe? Yes, Neely says, because auditory stimulation is self-limiting. “Your brain waves have a natural space they can occupy,” he explains, “and this stimulation just moved it within that natural space, unlike deep-brain stimulation, which can move the brain activity outside natural parameters.”

Going beyond sleep to sedation, memory, and mental health

Applications may eventually go beyond inducing and enhancing sleep. Researchers at the University of Washington and McGill University have completed a clinical study to determine if Elemind’s technology can be used to increase the pain threshold of subjects undergoing sedation. The results are being prepared for peer review.

Elemind is also working with a team involving researchers at McGill and the Leuven Brain Institute to determine if the technology can enhance memory consolidation in deep sleep and perhaps have some usefulness for people with mild cognitive impairment and other memory disorders.

Neely would love to see more applications investigated in the future.

“Inverse alpha stimulation [enhancing instead of suppressing the signal] could increase arousal,” he says. “That’s something I’d love to look into. And looking into mental-health treatment would be interesting, because phase coupling between the different brain regions appears to be an important factor in depression and anxiety disorders.”

Perry, who previously founded the wireless power startup UBeam, cofounded Elemind with four university professors with expertise in neuroscience, optogenetics, biomedical engineering, and artificial intelligence. The company has $12 million in funding to date and currently has 13 employees.

Preorders at $349 start today for beta units, and Elemind expects to start general sales later this year. The company will offer customers an optional membership at $7 to $13 monthly that will allow cloud storage of sleep data and access to new apps as they are released.





Origami Helps Implant Sensors in Bio-Printed Tissue



In the United States alone, more than 100,000 people currently need a lifesaving organ transplant. Instead of waiting for donors, one way to solve this crisis in the future is to assemble replacement organs with bio-printing—3D printing that uses inks containing living cells. Scientists in Israel have found that origami techniques could help fold sensors into bio-printed materials to help determine whether they are behaving safely and properly.

Although bio-printing something as complex as a human organ is still a distant possibility, there are a host of near-term applications for the technique. For example, in drug research, scientists can bio-print living, three-dimensional tissues with which to examine the effects of various compounds.

Ideally, researchers would like to embed sensors within bio-printed items to keep track of how well they are behaving. However, the three-dimensional nature of bio-printed objects makes it difficult to lodge sensors within them in a way that can monitor every part of the structures.

“It will, hopefully in the future, allow us to monitor and assess 3D biostructures before we would like to transplant them.” —Ben Maoz, Tel Aviv University

Now scientists have developed a 3D platform inspired by origami that can help embed sensors in bio-printed objects in precise locations. “It will, hopefully in the future, allow us to monitor and assess 3D biostructures before we would like to transplant them,” says Ben Maoz, a professor of biomedical engineering at Tel Aviv University in Israel.

The new platform is a silicone rubber device that can fold around a bio-printed structure. The prototype holds a commercial array of 3D electrodes to capture electrical signals. It also possesses other electrodes that can measure electrical resistance, which can reveal how permeable cells are to various medications. A custom 3D software model can tailor the design of the origami and all the electrodes so that the sensors can be placed in specific locations in the bio-printed object.

The scientists tested their device on bio-printed clumps of brain cells. The research team also grew a layer of cells onto the origami that mimicked the blood-brain barrier, a cell layer that protects the brain from undesirable substances that the body’s blood might be carrying. By folding this combination of origami and cells onto the bio-printed structures, Maoz and his colleagues were able to monitor neural activity within the brain cells and see how their synthetic blood-brain barrier might interfere with medications intended to treat brain diseases.

Maoz says the new device can incorporate many types of sensors beyond electrodes, such as temperature or acidity sensors. It can also incorporate flowing liquid to supply oxygen and nutrients to cells, the researchers note.

Currently, this device “will mainly be used for research and not for clinical use,” Maoz says. Still, it could “significantly contribute to drug development—assessing drugs that are relevant to the brain.”

The researchers say they can use their origami device with any type of 3D tissue. For example, Maoz says they can use it on bio-printed structures made from patient cells “to help with personalized medicine and drug development.”

The origami platform could also help embed devices that can modify bio-printed objects. For instance, many artificially grown tissues function better if they are placed under the kinds of physical stresses they might normally experience within the body, and the origami platform could integrate gadgets that can exert such mechanical forces on bio-printed structures. “This can assist in accelerating tissue maturation, which might be relevant to clinical applications,” Maoz says.

The scientists detailed their findings in the 26 June issue of Advanced Science.





Next-Gen Brain Implant Uses a Graphene Chip



A Barcelona-based startup called Inbrain Neuroelectronics has produced a novel brain implant made of graphene and is gearing up for its first in-human test this summer.

The technology is a type of brain-computer interface. BCIs have garnered interest because they record signals from the brain and transmit them to a computer for analysis. They have been used for medical diagnostics, as communication devices for people who can’t speak, and to control external equipment, including robotic limbs. But Inbrain intends to transform its BCI technology into a therapeutic tool for patients with neurological issues such as Parkinson’s disease.

Because Inbrain’s chip is made of graphene, the neural interface has some interesting properties, including the ability to be used to both record from and stimulate the brain. That bidirectionality comes from addressing a key problem with the metallic chips typically used in BCI technology: Faradaic reactions. Faradaic reactions are a particular type of electrochemical process that occurs between a metal electrode and an electrolyte solution. As it so happens, neural tissue is largely composed of aqueous electrolytes. Over time, these Faradaic reactions reduce the effectiveness of the metallic chips.

That’s why Inbrain replaced the metals typically used in such chips with graphene, a material with great electrical conductivity. “Metals have Faraday reactions that actually make all the electrons interact with each other, degrading their effectiveness...for transmitting signals back to the brain,” said Carolina Aguilar, CEO and cofounder of Inbrain.

Because graphene is essentially carbon and not a metal, Aguilar says the chip can inject 200 times as much charge without creating a Faradaic reaction. As a result, the material is stable over the millions of pulses of stimulation required of a therapeutic tool. While Inbrain is not yet testing the chip for brain stimulation, the company expects to reach that goal in due time.

The graphene-based chip is produced on a wafer using traditional semiconductor technology, according to Aguilar. At clean-room facilities, Inbrain fabricates a 10-micrometer-thick chip. The chip consists of what Aguilar terms “graphene dots” (not to be confused with graphene quantum dots) that range in size from 25 to 300 micrometers. “This micrometer scale allows us to get that unique resolution on the decoding of the signals from the brain, and also provides us with the micrometric stimulation or modulation of the brain,” added Aguilar.

Testing the Graphene-Based BCI

The first test of the platform in a human patient will soon be performed at the University of Manchester, in England, where it will serve as an interface during the resection of a brain tumor. When resecting a tumor, surgeons must ensure that they don’t damage areas like the brain’s language centers so the patient isn’t impaired after the surgery. “The chip is positioned during the tumor resection so that it can read, at a very high resolution, the signals that tell the surgeon where there is a tumor and where there is not a tumor,” says Aguilar. That should enable the surgeons to extract the tumor with micrometric precision while preserving functional areas like speech and cognition.

Aguilar added, “We have taken this approach for our first human test because it is a very reliable and quick path to prove the safety of graphene, but also demonstrate the potential of what it can do in comparison to metal technology that is used today.”

Aguilar stresses that the Inbrain team has already tested the graphene-based chip’s biocompatibility. “We have been working for the last three years in biocompatibility through various safety studies in large animals,” said Aguilar. “So now we can have these green lights to prove an additional level of safety with humans.”

While this test of the chip at Manchester is aimed at aiding in brain tumor surgery, the same technology could eventually be used to help Parkinson’s patients. Toward this aim, Inbrain’s system was granted Breakthrough Device Designation last September from the U.S. Food & Drug Administration as an adjunctive therapy for treating Parkinson’s disease. “For Parkinson’s treatment, we have been working on different preclinical studies that have shown reasonable proof of superiority versus current commercial technology in the [reduction] of Parkinson’s disease symptoms,” said Aguilar.

For treating Parkinson’s, Inbrain’s chip connects with the nigrostriatal pathway in the brain that is critical for movements. The chip will first decode the intention message from the brain that triggers a step or the lifting of the arm—something that a typical BCI can do. But Inbrain’s chip, with its micrometric precision, can also decode pathological biomarkers related to Parkinson’s symptoms, such as tremors, rigidity, and freezing of the gait.

By measuring these biomarkers with great precision, Inbrain’s technology can determine how well a patient’s current drug regimen is working. This first iteration of the chip doesn’t treat the symptoms of Parkinson’s directly; instead, it makes it possible to better target and reduce the amount of drugs used in treatment.

“Parkinson’s patients take huge amounts of drugs that have to be changed over time just to keep up with the growing resistance patients develop to the power of the drug,” said Aguilar. “We can reduce it at least 50 percent and hopefully in the future more as our devices become precise.”





Biocompatible Mic Could Lead to Better Cochlear Implants



Cochlear implants—the neural prosthetic cousins of standard hearing aids—can be a tremendous boon for people with profound hearing loss. But many would-be users are turned off by the device’s cumbersome external hardware, which must be worn to process signals passing through the implant. So researchers have been working to make a cochlear implant that sits entirely inside the ear, to restore speech and sound perception without the lifestyle restrictions imposed by current devices.

A new biocompatible microphone offers a bridge to such fully internal cochlear implants. About the size of a grain of rice, the microphone is made from a flexible piezoelectric material that directly measures the sound-induced motion of the eardrum. The tiny microphone’s sensitivity matches that of today’s best external hearing aids.

Cochlear implants create a novel pathway for sounds to reach the brain. An external microphone and processor, worn behind the ear or on the scalp, collect and translate incoming sounds into electrical signals, which get transmitted to an electrode that’s surgically implanted in the cochlea, deep within the inner ear. There, the electrical signals directly stimulate the auditory nerve, sending information to the brain to interpret as sound.

But, says Hideko Heidi Nakajima, an associate professor of otolaryngology at Harvard Medical School and Massachusetts Eye and Ear, “people don’t like the external hardware.” They can’t wear it while sleeping, or while swimming or doing many other forms of exercise, and so many potential candidates forgo the device altogether. What’s more, incoming sound goes directly into the microphone and bypasses the outer ear, which would otherwise perform the key functions of amplifying sound and filtering noise. “Now the big idea is instead to get everything—processor, battery, microphone—inside the ear,” says Nakajima. But even in clinical trials of fully internal designs, the microphone’s sensitivity—or lack thereof—has remained a roadblock.

Nakajima, along with colleagues from MIT, Harvard, and Columbia University, fabricated a cantilever microphone that senses the motion of a bone attached behind the eardrum called the umbo. Sound entering the ear canal causes the umbo to vibrate unidirectionally, with a displacement 10 times as great as other nearby bones. The tip of the “UmboMic” touches the umbo, and the umbo’s movements flex the material and produce an electrical charge through the piezoelectric effect. These electrical signals can then be processed and transmitted to the auditory nerve. “We’re using what nature gave us, which is the outer ear,” says Nakajima.

Why a cochlear implant needs low-noise, low-power electronics

Making a biocompatible microphone that can detect the eardrum’s minuscule movements isn’t easy, however. Jeff Lang, a professor of electrical engineering at MIT who jointly led the work, points out that only certain materials are tolerated by the human body. Another challenge is shielding the device from internal electronics to reduce noise. And then there’s long-term reliability. “We’d like an implant to last for decades,” says Lang.

In tests of the implantable microphone prototype, a laser beam measures the umbo’s motion, which gets transferred to the sensor tip. JEFF LANG & HEIDI NAKAJIMA

The researchers settled on a triangular design for the 3-by-3-millimeter sensor made from two layers of polyvinylidene fluoride (PVDF), a biocompatible piezoelectric polymer, sandwiched between layers of flexible, electrode-patterned polymer. When the cantilever tip bends, one PVDF layer produces a positive charge and the other produces a negative charge—taking the difference between the two cancels much of the noise. The triangular shape provides the most uniform stress distribution within the bending cantilever, maximizing the displacement it can undergo before it breaks. “The sensor can detect sounds below a quiet whisper,” says Lang.
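The noise cancellation described here is a form of differential sensing, which can be sketched numerically. (This is an illustrative model with made-up numbers, not the team’s actual signal chain; the function name and values are assumptions.)

```python
import random

random.seed(0)

def pvdf_layers(signal, noise_amp=0.5):
    """Model the two PVDF layers: bending produces equal and opposite
    signal charges, while interference appears identically on both."""
    noise = random.uniform(-noise_amp, noise_amp)  # common-mode noise
    return signal + noise, -signal + noise

signal = 0.01  # arbitrary units; stands in for a whisper-level deflection
top, bottom = pvdf_layers(signal)

# Differencing the layers doubles the signal and cancels the shared noise.
recovered = (top - bottom) / 2
```

Because the interference appears equally on both layers, subtracting them rejects it no matter how large it is, which is why the cantilever can resolve signals far below the noise floor of a single layer.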

Emma Wawrzynek, a graduate student at MIT, says that working with PVDF is tricky because it loses its piezoelectric properties at high temperatures, and most fabrication techniques involve heating the sample. “That’s a challenge especially for encapsulation,” which involves encasing the device in a protective layer so it can remain safely in the body, she says. The group had success by gradually depositing titanium and gold onto the PVDF while using a heat sink to cool it. That approach created a shielding layer that protects the charge-sensing electrodes from electromagnetic interference.

The other tool for improving a microphone’s performance is, of course, amplifying the signal. “On the electronics side, a low-noise amp is not necessarily a huge challenge to build if you’re willing to spend extra power,” says Lang. But, according to MIT graduate student John Zhang, cochlear implant manufacturers try to limit power for the entire device to 5 milliwatts, and just 1 mW for the microphone. “The trade-off between noise and power is hard to hit,” Zhang says. He and fellow student Aaron Yeiser developed a custom low-noise, low-power charge amplifier that outperformed commercially available options.

“Our goal was to perform better than or at least equal the performance of high-end capacitive external microphones,” says Nakajima. For leading external hearing-aid microphones, that means sensitivity down to a sound pressure level of 30 decibels—the equivalent of a whisper. In tests of the UmboMic on human cadavers, the researchers implanted the microphone and amplifier near the umbo, input sound through the ear canal, and measured what got sensed. Their device reached 30 decibels over the frequency range from 100 hertz to 6 kilohertz, which is the standard for cochlear implants and hearing aids and covers the frequencies of human speech. “But adding the outer ear’s filtering effects means we’re doing better [than traditional hearing aids], down to 10 dB, especially in speech frequencies,” says Nakajima.
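Those decibel figures can be grounded with the standard definition of sound pressure level, which references 20 micropascals in air. (A generic textbook conversion, not code from the study.)

```python
P_REF = 20e-6  # standard SPL reference pressure in air, in pascals

def spl_to_pressure(db_spl):
    """Convert a level in dB SPL to absolute sound pressure in pascals."""
    return P_REF * 10 ** (db_spl / 20)

whisper = spl_to_pressure(30)  # the 30 dB benchmark for external hearing aids
faint = spl_to_pressure(10)    # the 10 dB reached in speech frequencies
```

Since every 20 dB step is a tenfold pressure ratio, reaching 10 dB instead of 30 dB means sensing pressures ten times smaller.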

Plenty of testing lies ahead, at the bench and on sheep before an eventual human trial. But if their UmboMic passes muster, the team hopes that it will help more than 1 million people worldwide go about their lives with a new sense of sound.

The work was published on 27 June in the Journal of Micromechanics and Microengineering.





Cat's Eye Camera Can See Through Camouflage



Did that rock move, or is it a squirrel crossing the road? Tracking objects that look a lot like their surroundings is a big problem for many autonomous vision systems. AI algorithms can solve this camouflage problem, but they take time and computing power. A new camera designed by researchers in South Korea provides a faster solution. The camera takes inspiration from the eyes of a cat, using two modifications that let it distinguish objects from their background, even at night.

“In the future … a variety of intelligent robots will require the development of vision systems that are best suited for their specific visual tasks,” says Young Min Song, a professor of electrical engineering and computer science at Gwangju Institute of Science and Technology and one of the camera’s designers. Song’s recent research has been focused on using the “perfectly adapted” eyes of animals to enhance camera hardware, allowing for specialized cameras for different jobs. For example, fish eyes have wider fields of view as a consequence of their curved retinas. Cats may be common and easy to overlook, he says, but their eyes actually offer a lot of inspiration.

This particular camera copied two adaptations from cats’ eyes: their vertical pupils and a reflective structure behind their retinas. Combined, these allowed the camera to be 10 percent more accurate at distinguishing camouflaged objects from their backgrounds and 52 percent more efficient at absorbing incoming light.

Using a vertical pupil to narrow focus

While conventional cameras can clearly see the foreground and background of an image, the slitted pupils of a cat focus directly on a target, preventing it from blending in with its surroundings. Kim et al./Science Advances

In conventional camera systems, when there is adequate light, the aperture—the camera’s version of a pupil—is small and circular. This structure allows for a large depth of field (the distance between the closest and farthest objects in focus), clearly seeing both the foreground and the background. By contrast, cat eyes narrow to a vertical pupil during the day. This shifts the focus to a target, distinguishing it more clearly from the background.
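The aperture-versus-depth-of-field trade-off follows from the thin-lens model. A rough sketch (the focal length, aperture sizes, and circle of confusion here are illustrative assumptions, not parameters of the actual camera):

```python
def total_dof(focal_mm, aperture_mm, subject_m, coc_mm=0.03):
    """Approximate total depth of field for a thin lens, valid while the
    subject is closer than the hyperfocal distance."""
    n = focal_mm / aperture_mm        # f-number: smaller aperture -> larger n
    d = subject_m * 1000.0            # subject distance in mm
    h = focal_mm ** 2 / (n * coc_mm)  # approximate hyperfocal distance in mm
    near = h * d / (h + d)
    far = h * d / (h - d)             # requires d < h
    return (far - near) / 1000.0      # total in-focus depth, in meters

wide_pupil = total_dof(50, 10, 2)   # large opening: shallow depth of field
narrow_slit = total_dof(50, 2, 2)   # small opening: much deeper focus
```

A vertical slit is wide along its long axis, so along that axis it behaves like the wide-pupil case: a shallow depth of field that keeps the target sharp while the background blurs away.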

The researchers 3D printed a vertical slit to use as an aperture for their camera. They tested the vertical slit using seven computer vision algorithms designed to track moving objects. The vertical slit increased contrast between a target object and its background, even if they were visually similar. It beat the conventional camera on five of the seven tests, and in the two tests where it performed worse, the two cameras’ accuracies were within 10 percent of each other.

Using a reflector to gather additional light

Cats can see more clearly at night than conventional cameras due to reflectors in their eyes that bring extra light to their retinas. Kim et al./Science Advances

Cat eyes have an in-built reflector, called a tapetum lucidum, which sits behind the retina. It reflects light that passes through the retina back at it, so it can process both the incoming light and reflected light, giving felines superior night vision. You can see this biological adaptation yourself by looking at a cat’s eyes at night: they will glow.

The researchers created an artificial version of this biological structure by placing a silver reflector under each photodiode in the camera. Photodiodes without a reflector generated current when more than 1.39 watts per square meter of light fell on them, while photodiodes with a reflector activated with 0.007 W/m² of light. That means the photodiode could generate an image with about 1/200th the light.
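The “about 1/200th” figure follows directly from the two activation thresholds quoted above:

```python
bare_threshold = 1.39     # W/m², activation without a reflector
mirror_threshold = 0.007  # W/m², activation with the silver reflector

# Roughly a 199x reduction in the light needed to register an image.
improvement = bare_threshold / mirror_threshold
```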

Each photodiode was placed above a reflector and joined by metal electrodes to create a curved image sensor. Kim et al./Science Advances

To decrease visual aberrations (imperfections in the way the lens of the camera focuses light), Song and his team opted to create a curved image sensor, like the back of the human eye. A standard image sensor chip won’t work in such a setup, because it’s rigid and flat; instead, a curved sensor often relies on many individual photodiodes arranged on a curved substrate. A common problem with such curved sensors is that they require ultrathin silicon photodiodes, which inherently absorb less light than a standard imager’s pixels. But reflectors behind each photodiode in the artificial cat’s eye compensated for this, enabling the researchers to create a curved imager without sacrificing light absorption.

Together, vertical slits and reflectors led to a camera that could see more clearly in the dark and isn’t fooled by camouflage. “Applying these two characteristics to autonomous vehicles or intelligent robots could naturally improve their ability to see objects more clearly at night and to identify specific targets more accurately,” says Song. He foresees this camera being used for self-driving cars or drones in complex urban environments.

Song’s lab is continuing to work on using biological solutions to solve artificial vision problems. Currently, they are developing devices that mimic how brains process images, hoping to one day combine them with their biologically inspired cameras. The goal, says Song, is to “mimic the neural systems of nature.”

Song and his colleagues’ work was published this week in the journal Science Advances.

This article appears in the November 2024 print issue.





Stretchy Wearables Can Now Heal Themselves



If you’ve ever tried to get a bandage to stick to your elbow, you understand the difficulty in creating wearable devices that attach securely to the human body. Add digital electronic circuitry, and the problem becomes more complicated. Now include the need for the device to fix breaks and damage automatically—and let’s make it biodegradable while we’re at it—and many researchers would throw up their hands in surrender.

Fortunately, an international team led by researchers at Korea University Graduate School of Converging Science and Technology (KU-KIST) persevered, and has developed conductor materials that it claims are stretchable, self-healing, and biocompatible. Their project was described this month in the journal Science Advances.

The biodegradable conductor offers a new approach to patient monitoring and delivering treatments directly to the tissues and organs where they are needed. For example, a smart patch made of these materials could measure motion, temperature, and other biological data. The material could also be used to create sensor patches that can be implanted inside the body, and even mounted on the surface of internal organs. The biocompatible materials can be designed to degrade after a period of time, eliminating the need for an invasive procedure to remove the sensor later.

“This new technology is a glimpse at the future of remote healthcare,” says Robert Rose, CEO of Rose Strategic Partners, LLC. “Remote patient monitoring is an industry still in its early stages, but already we are seeing the promise of what is not only possible, but close on the horizon. Imagine a device implanted at a surgical site to monitor and report your internal healing progress. If it is damaged, the device can heal itself, and when the job is done, it simply dissolves. It sounds like science fiction, but it’s now science fact.”

Self-healing elastics

After being cut, a ribbonlike film was able to heal itself in about 1 minute. Suk-Won Hwang

The system relies on two different layers of flexible material, both self-healing: one is for conduction and the other is an elastomer layer that serves as a substrate to support the sensors and circuitry needed to collect data. The conductor layer is based on a substance known by the acronym PEDOT:PSS, short for poly(3,4-ethylenedioxythiophene) polystyrene sulfonate. It’s a conductive polymer widely used in making flexible displays and touch panels, as well as wearable devices. The research team used additives, including polyethylene glycol and glycol, to increase the polymer’s conductivity and its ability to automatically repair damage such as cuts or tears.

In order to conform to curved tissues and survive typical body motion, the substrate layer must be extremely flexible. The researchers based it on elastomers that can match the shape of curved tissues, such as skin or individual organs.

These two layers stick to each other, thanks to chemical bonds that can connect the polymer chains of the plastic films in each layer. Combined, these materials create a system that is flexible and stretchable. In testing, the researchers showed that the materials could survive stretching up to 500 percent.

The self-healing function arises from the material’s ability to reconnect to itself when cut or otherwise damaged. This feature is based on a chemical process called disulfide metathesis: polymer molecules containing pairs of linked sulfur atoms, called disulfides, can re-form after being severed. In these disulfide-shuffling reactions, disulfide bonds break and then re-form, not necessarily between the original partners. According to the KU-KIST researchers, after being cut, their material was able to recover conductivity in its circuits within about two minutes without any intervention. The material was also tested for bending, twisting, and its ability to function both in air and under water.

This approach offers many advantages over other flexible electronics designs. For example, silver nanowires and carbon nanotubes have been used as the basis for stretchable devices, but they can be brittle and lack the self-healing properties of the KU-KIST materials. Other materials such as liquid metals can self-heal, but they are typically difficult to handle and integrate into wearable circuitry.

As a demonstration, the team created a multifunction device, approximately 4.5 square centimeters in area, that included humidity, temperature, and pressure sensors. In spite of being cut in four separate locations, it was able to heal itself and continue to provide sensor readings.

Implant tested in a rat

To take the demonstration a step further, the researchers created a 1.8-cm² device that was attached to a rat’s bladder. The device was designed to wrap around the bladder and then adhere to itself, so no adhesives or sutures were required to attach the sensor to the bladder. The team chose the bladder for their experiments because, under normal conditions, its size can change by 300 percent.

The device incorporated both electrodes and pressure sensors, which were able to detect changes in the bladder pressure. The electrodes could detect bladder voiding, through electromyography signals, as well as stimulate the bladder to induce urination. As with the initial demonstration, intentional damage to the device’s circuitry healed on its own, without intervention.

The biocompatible and biodegradable nature of the materials is important because it means that devices fabricated with them can be worn on the skin, as well as implanted within the body. The fact that the materials are biodegradable means that implants would not need a second surgical procedure to remove them. They could be left in place after serving their purpose, and they would be absorbed by the body.

According to Suk-Won Hwang, assistant professor at KU-KIST, a few hurdles remain on the path to commercialization. “We need to test the biocompatibility of some of the materials used in the conductor and substrate layers. While scalable production appears to be feasible, the high cost of disulfide derivatives might make the technology too expensive, aside from some special applications,” he says. “Biocompatibility testing and material synthesis optimization will take one to two years, at least.”





Dean Kamen Says Inventing Is Easy, but Innovating Is Hard



This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

Over the past 20 years, technological advances have enabled inventors to go from strength to strength. And yet, according to the legendary inventor Dean Kamen, innovation has stalled. Kamen made a name for himself with inventions including the first portable insulin pump for diabetics, an advanced wheelchair that can climb steps, and the Segway mobility device. Here, he talks about his plan for enabling innovators.

How has inventing changed since you started in the 1990s?

Dean Kamen: Kids all over the world can now be inventing in the world of synthetic biology the way we played with Tinkertoys and Erector Sets and Lego. I used to put pins and smelly formaldehyde in frogs in high school. Today in high school, kids will do experiments that would have won you the Nobel Prize in Medicine 40 years ago. But none of those kids are likely in any short time to be on the market with a pharmaceutical that will have global impact. Today, while invention is getting easier and easier, I think there are some aspects of innovation that have gotten much more difficult.

Can you explain the difference?

Kamen: Most people think those two words mean the same thing. Invention is coming up with an idea or a thing or a process that has never been done that way before. [Thanks to] more access to technology and 3D printers and simulation programs and virtual ways to make things, the threshold to be able to create something new and different has dramatically lowered.

Historically, inventions were only the starting point to get to innovation. And I’ll define an innovation as something that reached a scale where it impacted a piece of the world, or transformed it: the wheel, steam, electricity, Internet. Getting an invention to the scale it needs to be to become an innovation has gotten easier—if it’s software. But if it’s sophisticated technology that requires mechanical or physical structure in a very competitive world? It’s getting harder and harder to do due to competition, due to global regulatory environments.

[For example,] in proteomics [the study of proteins] and genomics and biomedical engineering, the invention part is, believe it or not, getting a little easier because we know so much, because there are development platforms now to do it. But getting a biotech product cleared by the Food and Drug Administration is getting more expensive and time consuming, and the risks involved are making the investment community much more likely to invest in the next version of Angry Birds than curing cancer.

A lot of ink has been spilled about how AI is changing inventing. Why hasn’t that helped?

Kamen: AI is an incredibly valuable tool. As long as the value you’re looking for is to be able to collect massive amounts of data and being able to process that data effectively. That’s very different than what a lot of people believe, which is that AI is inventing and creating from whole cloth new and different ideas.

How are you using AI to help with innovation?

Kamen: Every medical school has incredibly brilliant professors and grad students with petri dishes. “Look, I can make nephrons. We can grow people a new kidney. They won’t need dialysis.” But they only have petri dishes full of the stuff. And the scale they need is hundreds and hundreds of liters.

I started a not-for-profit called ARMI—the Advanced Regenerative Manufacturing Institute—to help make it practical to manufacture human cells, tissues, and organs. We are using artificial intelligence to speed up our development processes and eliminate going down frustratingly long and expensive [dead-end] paths. We figure out how to bring tissue manufacturing to scale. We build the bioreactors, sensor technologies, robotics, and controls. We’re going to put them together and create an industry that can manufacture hundreds of thousands of replacement kidneys, livers, pancreases, lungs, blood, bone, you name it.

So ARMI’s purpose is to help would-be innovators?

Kamen: We are not going to make a product. We’re not even going to make a whole company. We’re going to create baseline core technologies that will enable all sorts of products and companies to emerge to create an entire new industry. It will be an innovation in health care that will lower costs because cures are much cheaper than chronic treatments. We have to break down the barriers so that these fantastic inventions can become global innovations.

This article appears in the November 2024 print issue as “The Inventor’s Inventor.”





Crop Parasites Can Be Deterred by “Electric Fences”



Imagine you’re a baby cocoa plant, just unfurling your first tentative roots into the fertile, welcoming soil.

Somewhere nearby, a predator stirs. It has no ears to hear you, no eyes to see you. But it knows where you are, thanks in part to the weak electric field emitted by your roots.

It is microscopic, but it’s not alone. By the thousands, the creatures converge, slithering through the waterlogged soil, propelled by their flagella. If they reach you, they will use fungal-like hyphae to penetrate and devour you from the inside. They’re getting closer. You’re a plant. You have no legs. There’s no escape.

But just before they fall upon you, they hesitate. They seem confused. Then, en masse, they swarm off in a different direction, lured by a more attractive electric field. You are safe. And they will soon be dead.

If Eleonora Moratto and Giovanni Sena get their way, this is the future of crop pathogen control.

Many variables are involved in the global food crisis, but among the worst are the pests that devastate food crops, ruining up to 40 percent of their yield before they can be harvested. One of these—the little protist in the example above, an oomycete formally known as Phytophthora palmivora—has a US $1 billion appetite for economic staples like cocoa, palm, and rubber.

There is currently no chemical defense that can vanquish these creatures without poisoning the rest of the (often beneficial) organisms living in the soil. So Moratto and Sena, along with their colleagues in Sena’s group at Imperial College London, settled on a non-traditional approach: They exploited P. palmivora’s electric sense, which can be spoofed.

All plant roots that have been measured to date generate external ion flux, which translates into a very weak electric field. Decades of evidence suggests that this signal is an important target for predators’ navigation systems. However, it remains a matter of some debate how much their predators rely on plants’ electrical signatures to locate them, as opposed to chemical or mechanical information. Last year, Moratto and Sena’s group found that P. palmivora spores are attracted to the positive electrode of a cell generating current densities of 1 ampere per square meter. “The spores followed the electric field,” says Sena, suggesting that a similar mechanism helps them find natural bioelectric fields emitted by roots in the soil.

That got the researchers wondering: Might such an artificial electric field override the protists’ other sensory inputs, and scramble their compasses as they tried to use plant roots’ much weaker electrical output?

To test the idea, the researchers developed two ways to protect plant roots using a constant vertical electric field. They cultivated two common snacks for P. palmivora—a flowering plant related to cabbage and mustard, and a legume often used as a livestock feed plant—in tubes in a hydroponic solution.

Two electric-field configurations were tested: a “global” vertical field [left] and a field generated by two small nearby electrodes. The global field proved to be slightly more effective. Eleonora Moratto

In the first assay, the researchers sandwiched the plant roots between rows of electrodes above and below, which completely engulfed them in a “global” vertical field. For the second set, the field was generated using two small electrodes a short distance away from the plant, creating current densities on the order of 10 A/m2. Then they unleashed the protists.
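The reported current densities lend themselves to a quick back-of-envelope check. In a conductive medium, Ohm’s law relates current density to field strength as E = J/σ. The sketch below uses an assumed conductivity of about 1 siemens per meter for the hydroponic nutrient solution; that figure is not reported in the article, so the results are order-of-magnitude estimates only.

```python
# Back-of-envelope sketch: estimate the electric field E that a given
# current density J produces in a conductive medium, via E = J / sigma.
# SIGMA is an ASSUMED, typical conductivity for a hydroponic nutrient
# solution -- the article does not report the actual value.

def field_strength(current_density_a_per_m2: float,
                   conductivity_s_per_m: float) -> float:
    """Return the electric field magnitude in V/m, E = J / sigma."""
    return current_density_a_per_m2 / conductivity_s_per_m

SIGMA = 1.0  # S/m, assumed conductivity of the hydroponic solution

# ~1 A/m^2 in last year's attraction experiment, ~10 A/m^2 here:
for j in (1.0, 10.0):
    e = field_strength(j, SIGMA)
    print(f"J = {j:4.1f} A/m^2  ->  E ~ {e:.1f} V/m (assuming sigma = {SIGMA} S/m)")
```

Under that assumed conductivity, the 10 A/m² local setup corresponds to a field on the order of 10 V/m, roughly ten times the field used in last year’s attraction experiment.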

Compared with the control group, both methods successfully diverted a significant portion of the predators away from the plant roots. They swarmed the positive electrode, where—since zoospores can’t survive for longer than about 2 to 3 hours without a host—they presumably starved to death. Or worse. Neil Gow, whose research presented some of the first evidence for zoospore electrosensing, has other theories about their fate. “Applied electrical fields generate toxic products and steep pH gradients near and around the electrodes due to the electrolysis of water,” he says. “The tropism towards the electrode might be followed by killing or immobilization due to the induced pH gradients.”

Not only did the technique prevent infestation, but some evidence indicates that it may also mitigate existing infections. The researchers published their results in August in Scientific Reports.

The global electric field was marginally more successful than the local. However, it would be harder to translate from lab conditions into a (literal) field trial in soil. The local electric field setup would be easy to replicate: “All you have to do is stick the little plug into the soil next to the crop you want to protect,” says Sena.

Moratto and Sena say this is a proof of concept that demonstrates a basis for a new, pesticide-free way to protect food crops. (Sena likens the technique to the decoys used by fighter jets to draw away incoming missiles by mimicking the signals of the original target.) They are now looking for funding to expand the project. The first step is testing the local setup in soil; the next is to test the approach on Phytophthora infestans, a meaner, scarier cousin of P. palmivora.

P. infestans attacks a more varied diet of crops—you may be familiar with its work during the Irish potato famine. The close genetic similarity between the two species makes P. infestans another promising candidate for electrical pest control. This investigation, however, may require more funding, as P. infestans research can be undertaken only under more stringent laboratory security protocols.

The work at Imperial ties into the broader—and somewhat charged—debate around electrostatic ecology; that is, the extent to which creatures including ticks make use of heretofore poorly understood electrical mechanisms to orient themselves and in other ways enhance their survival. “Most people still aren’t aware that naturally occurring electricity can play an ecological role,” says Sam England, a behavioral ecologist with Berlin’s Natural History Museum. “So I suspect that once these electrical phenomena become more well known and understood, they will inspire a greater number of practical applications like this one.”





Gandhi Inspired a New Kind of Engineering



This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

The teachings of Mahatma Gandhi were arguably India’s greatest contribution to the 20th century. Raghunath Anant Mashelkar has borrowed some of that wisdom to devise a frugal new form of innovation he calls “Gandhian engineering.” Coming from humble beginnings, Mashelkar is driven to ensure that the benefits of science and technology are shared more equally. He sums up his philosophy with the epigram “more from less for more.” This engineer has led India’s preeminent R&D organization, the Council of Scientific and Industrial Research, and he has advised successive governments.

What was the inspiration for Gandhian engineering?

Raghunath Anant Mashelkar: There are two quotes of Gandhi’s that were influential. The first was, “The world has enough for everyone’s need, but not enough for everyone’s greed.” He was saying that when resources are exhaustible, you should get more from less. He also said the benefits of science must reach all, even the poor. If you put them together, it becomes “more from less for more.”

My own life experience inspired me, too. I was born to a very poor family, and my father died when I was six. My mother was illiterate and brought me to Mumbai in search of a job. Two meals a day was a challenge, and I walked barefoot until I was 12 and studied under streetlights. So it also came from my personal experience of suffering because of a lack of resources.

How does Gandhian engineering differ from existing models of innovation?

Mashelkar: Conventional engineering is market or curiosity driven, but Gandhian engineering is application and impact driven. We look at the end user and what we want to achieve for the betterment of humanity.

Most engineering is about getting more from more. Take an iPhone: They keep creating better models and charging higher prices. For the poor it is less from less: Conventional engineering looks at removing features as the only way to reduce costs.

In Gandhian engineering, the idea is not to create affordable [second-rate] products, but to make high technology work for the poor. So we reinvent the product from the ground up. While the standard approach aims for premium price and high margins, Gandhian engineering will always look at affordable price, but high volumes.

The Jaipur foot is a light, durable, and affordable prosthetic. Gurinder Osan/AP

What is your favorite example of Gandhian engineering?

Mashelkar: My favorite is the Jaipur foot. Normally, a sophisticated prosthetic foot costs a few thousand dollars, but the Jaipur foot does it for [US] $20. And it’s very good technology; there is a video of a person wearing a Jaipur foot climbing a tree, and you can see the flexibility is like a normal foot. Then he runs one kilometer in 4 minutes, 30 seconds.

What is required for Gandhian engineering to become more widespread?

Mashelkar: In our young people, we see innovation and we see passion, but compassion is the key. We also need more soft funding [grants or zero-interest loans], because venture capital companies often turn out to be “vulture capital” in a way, because they want immediate returns.

We need a shift in the mindset of businesses—they can make money not just from premium products for those at the top of the pyramid, but also products with affordable excellence designed for large numbers of people.

This article appears in the November 2024 print issue as “The Gandhi Inspired Inventor.”





For this Stanford Engineer, Frugal Invention Is a Calling



Manu Prakash spoke with IEEE Spectrum shortly after returning to Stanford University from a month aboard a research vessel off the coast of California, where he was testing tools to monitor oceanic carbon sequestration. The associate professor conducts fieldwork around the world to better understand the problems he’s working on, as well as the communities that will be using his inventions.


Prakash develops imaging instruments and diagnostic tools, often for use in global health and environmental sciences. His devices typically cost radically less than conventional equipment—he aims for reductions of two or more orders of magnitude. Whether he’s working on pocketable microscopes, mosquito or plankton monitors, or an autonomous malaria diagnostic platform, Prakash always includes cost and access as key aspects of his engineering. He calls this philosophy “frugal science.”

Why should we think about science frugally?

Manu Prakash: To me, when we are trying to ask and solve problems and puzzles, it becomes important: In whose hands are we putting these solutions? A frugal approach to solving the problem is the difference between 1 percent of the population or billions of people having access to that solution.

Lack of access creates these kinds of barriers in people’s minds, where they think they can or cannot approach a kind of problem. It’s important that we as scientists or just citizens of this world create an environment that feels that anybody has a chance to make important inventions and discoveries if they put their heart to it. The entrance to all that is dependent on tools, but those tools are just inaccessible.

How did you first encounter the idea of “frugal science”?

Prakash: I grew up in India and lived with very little access to things. And I got my Ph.D. at MIT. I was thinking about this stark difference in worlds that I had seen and lived in, so when I started my lab, it was almost a commitment to [asking]: What does it mean when we make access one of the critical dimensions of exploration? So, I think a lot of the work I do is primarily driven by curiosity, but access brings another layer of intellectual curiosity.

How do you identify a problem that might benefit from frugal science?

Prakash: Frankly, it’s hard to find a problem that would not benefit from access. The question to ask is “Where are the neglected problems that we as a society have failed to tackle?” We do a lot of work in diagnostics. A lot [of our solutions] beat the conventional methods that are neither cost effective nor any good. It’s not about cutting corners; it’s about deeply understanding the problem—better solutions at a fraction of the cost. It does require invention. For that order of magnitude change, you really have to start fresh.

Where does your involvement with an invention end?

Prakash: Inventions are part of our soul. Your involvement never ends. I just designed the 415th version of Foldscope [a low-cost “origami” microscope]. People only know it as version 3. We created Foldscope a long time ago; then I realized that nobody was going to provide access to it. So we went back and invented the manufacturing process for Foldscope to scale it. We made the first 100,000 Foldscopes in the lab, which led to millions of Foldscopes being deployed.

So it’s continuous. If people are scared of this, they should never invent anything [laughs], because once you invent something, it’s a lifelong project. You don’t put it aside; the project doesn’t put you aside. You can try to, but that’s not really possible if your heart is in it. You always see problems. Nothing is ever perfect. That can be ever consuming. It’s hard. I don’t want to minimize this process in any way or form.





Pregnant and Empowered: Why Trust is the Latest Form of Member Engagement

Three ways health plans can engage, connect with, and delight their pregnant members to nurture goodwill, earn long-term trust, and foster loyal relationships that last.

The post Pregnant and Empowered: Why Trust is the Latest Form of Member Engagement appeared first on MedCity News.





Through Early Discussions About Elder Care, Doctors Can Empower Seniors to Age in Place

The vast majority of older adults want to age at home. To support that goal, doctors should encourage them to consider their care options — long before they need assistance.






The Startup Economy is Turbulent. Here’s How Founders Can Recognize and Avoid Common Pitfalls

While startups in highly regulated industries like healthcare and finance are almost certain to face heightened scrutiny, there are controllable factors that can offset these challenges.






FDA Takes Step Toward Removal of Ineffective Decongestants From the Market

The FDA has proposed removing oral phenylephrine from its guidelines for over-the-counter drugs due to inefficacy as a decongestant. Use of this ingredient in cold and allergy medicines grew after a federal law required that pseudoephedrine-containing products be kept behind pharmacy counters.






There’s an Opportunity for More Providers to Partner with the 988 Lifeline, Execs Say

Two executives at behavioral health care companies discussed why it’s important for provider organizations to partner with the 988 Suicide & Crisis Lifeline during a panel at the Behavioral Health Tech conference.






Driving Genetic Testing Adoption and Improved Patient Care through Health Data Intelligence

By fostering collaboration and seamless data integration into healthcare systems, the industry is laying the groundwork for a future in which “personalized medicine” is so commonplace within clinical practice that we will just start calling it “medicine.”






‘Serial Killing’ Cell Therapy From Autolus Lands FDA Approval in Blood Cancer

Autolus Therapeutics’ Aucatzyl is now FDA approved for treating advanced cases of B-cell precursor acute lymphoblastic leukemia. While it goes after the same target as Gilead Sciences’ Tecartus, Autolus engineered its CAR T-therapy with properties that could improve safety, efficacy, and durability.
