
Tell the UK’s research regulator to do more on clinical trial transparency

The UK body that oversees health research is writing a new strategy on clinical trial transparency and it wants to hear opinions on it. The Health Research Authority (HRA) says its strategy aims to “make transparency easy, make compliance clear and make information public.” It has opened a public consultation on the strategy and some […]





UK universities and NHS trusts that flout the rules on clinical trials identified in report to Parliament

An AllTrials report for the House of Commons Science and Technology Select Committee this week has found that 33 NHS trust sponsors and six UK universities are reporting none of their clinical trial results, while others have gone from 0% to 100% following an announcement from the Select Committee in January that universities and NHS […]





AllTrials guide to asking academic institutions about missing results

When university and hospital trusts were called to the UK parliament last year to answer questions on why they were not following the rules on reporting results, we saw how effective the questioning from politicians was. Those of you who watched the parliamentary session saw the pressure the university representatives were put under. Because the politicians asked […]





Half of US clinical trials are breaking the law on reporting results

New research has shown that the majority of clinical trials which should be following the US law on reporting results aren’t. Less than half (41%) of clinical trial results were reported on time and 1 in 3 trials (36%) remain unreported. The research also found that clinical trials sponsored by companies are the most likely […]





Hundreds of clinical trials ruled to be breaking the law

A judge in New York has ruled that hundreds of clinical trials registered on ClinicalTrials.gov are breaking the law by not reporting results. The ruling came in a court case launched against the US Department of Health and Human Services by two plaintiffs, a family doctor and a professor of journalism. The case focused on […]





Clinical Trial Enrollment, ASCO 2013 Edition

Even by the already-painfully-embarrassingly-low standards of clinical trial enrollment in general, patient enrollment in cancer clinical trials is slow. Horribly slow. In many cancer trials, randomizing one patient every three or four months isn't bad at all – in fact, it's par for the course. The most
commonly-cited number is that only 3% of cancer patients participate in a trial – and although exact details of how that number is measured are remarkably difficult to pin down, it certainly can't be too far from reality.

Ultimately, the cost of slow enrollment is borne almost entirely by patients; their payment takes the form of fewer new therapies and less evidence to support their treatment decisions.

So when a couple dozen thousand of the world's top oncologists fly into Chicago to meet, you'd figure that improving accrual would be high on everyone’s agenda. You can't run your trial without patients, after all.

But every year, the annual ASCO meeting underdelivers in new ideas for getting more patients into trials. I suppose this is a consequence of ASCO's members-only focus: getting the oncologists themselves to address patient accrual is a bit like asking NASCAR drivers to tackle the problems of aerodynamics, engine design, and fuel chemistry.

Nonetheless, every year, a few brave souls do try. Here is a quick rundown of accrual-related abstracts at this year’s meeting, conveniently sorted into 3 logical categories:

1. As Lord Kelvin may or may not have said, “If you cannot measure it, you cannot improve it.”


Probably the most sensible of this year's crop, because rather than trying to make something out of nothing, the authors measure exactly how pervasive the nothing is. Specifically, they attempt to obtain fairly basic patient accrual data for the last three years' worth of clinical trials in kidney cancer. Out of 108 trials identified, they managed to get – via search and direct inquiries with the trial sponsors – basic accrual data for only 43 (40%).

That certainly qualifies as “terrible”, though the authors content themselves with “poor”.

Interestingly, exactly zero of the 32 industry-sponsored trials responded to the authors' initial survey. This fits with my impression that pharma companies continue to think of accrual data as proprietary, though what sort of business advantage it gives them is unclear. Any one company will have only run a small fraction of these studies, greatly limiting their ability to draw anything resembling a valid conclusion.


CALGB investigators look at 110 trials over the past 10 years to see if they can identify any predictive markers of successful enrollment. Unfortunately, the trials themselves are pretty heterogeneous (accrual periods ranged from 6 months to 8.8 years), so finding a consistent marker for successful trials would seem unlikely.

And, in fact, none of the usual suspects (e.g., startup time, disease prevalence) appears to have been significant. The exception was provision of medication by the study, which was positively associated with successful enrollment.

The major limitation of this study, apart from the variability of the trials measured, is its definition of “successful”, which is simply reaching the total number of planned enrolled patients. Under that definition, a slow-enrolling trial that drags on for years before finally reaching its goal counts as successful, whereas if the same trial had been stopped early it would have been counted as unsuccessful. While that sometimes may be the case, it's easy to imagine situations where allowing a slow trial to drag on is a painful waste of resources – especially if results are delayed enough to bring their relevance into question.

Even worse, though, is that a trial’s enrollment goal is itself a prediction. The trial steering committee determines how many sites, and what resources, will be needed to hit the number needed for analysis. So in the end, this study is attempting to identify predictors of successful predictions, and there is no reason to believe that the initial enrollment predictions were made with any consistent methodology.

2. If you don't know, maybe ask somebody?



With these two abstracts we celebrate and continue the time-honored tradition of alchemy, whereby we transmute base opinion into golden data. The magic number appears to be 100: if you've got 3 digits' worth of doctors telling you how they feel, that must be worth something.

In the first abstract, a working group is formed to identify and vote on the major barriers to accrual in oncology trials. Then – and this is where the magic happens – that same group is asked to identify and vote on possible ways to overcome those barriers.

In the second, a diverse assortment of community oncologists were given an online survey to provide feedback on the design of a phase 3 trial in light of recent new data. The abstract doesn't specify who was initially sent the survey, so we cannot tell the response rate or compare survey responders to the broader population (I'll take a wild guess and go with “massive response bias”).

Market research is sometimes useful. But what cancer clinical trials do not need right now are more surveys and working groups. The “strategies” listed in the first abstract are part of the same cluster of ideas that have been on the table for years now, with no appreciable increase in trial accrual.

3. The obligatory “What the What?” abstract



The force with which my head hit my desk after reading this abstract made me concerned that it had left permanent scarring.

If this had been re-titled “Poor Measurement of Accrual Factors Leads to Inaccurate Accrual Reporting”, would it still have been accepted for this year’s meeting? That's certainly a more accurate title.

Let’s review: a trial intends to enroll both white and minority patients. Whites enroll much faster, leading to a period where only minority patients are recruited. Then, according to the authors, “an almost 4-fold increase in minority accrual raises question of accrual disparity.” So, sites will only recruit minority patients when they have no choice?

But wait: the number of sites wasn't the same during the two periods, and start-up times were staggered. Adjusting for actual site time, the average minority accrual rate was 0.60 patients/site/month in the first part and 0.56 in the second. So the apparent 4-fold increase was entirely an artifact of bad math.
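The arithmetic here is simple enough to sketch. In the snippet below, the patient counts and site-months are invented for illustration (the abstract reports only the adjusted rates of roughly 0.60 and 0.56 patients/site/month); the point is that normalizing by site exposure erases the apparent 4-fold jump:

```python
# Sketch: why raw enrollment counts mislead without adjusting for site exposure.
# The patient counts and site-months below are hypothetical; only the adjusted
# rates (~0.60 and ~0.56 patients/site/month) come from the abstract.

def accrual_rate(patients_enrolled, site_months):
    """Enrollment normalized to patients per site per month."""
    return patients_enrolled / site_months

# Period 1: few site-months open, so few minority patients enrolled in total.
period1 = accrual_rate(patients_enrolled=12, site_months=20.0)

# Period 2: many more site-months, so the raw count looks almost 4x higher...
period2 = accrual_rate(patients_enrolled=45, site_months=80.0)

# ...but the normalized rates are nearly identical.
print(round(period1, 2), round(period2, 2))  # 0.6 0.56
```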

This would be horribly embarrassing were it not for the fact that bad math seems to be endemic in clinical trial enrollment. Failing to adjust for start-up time and the number of sites is so routine that actually doing the adjustment is apparently grounds for a presentation.

The bottom line


What we need now is to rigorously (and prospectively) compare and measure accrual interventions. We have lots of candidate ideas, and there is no need for more retrospective studies, working groups, or opinion polls to speculate on which ones will work best.  Where possible, accrual interventions should themselves be randomized to minimize confounding variables which prevent accurate assessment. Data needs to be uniformly and completely collected. In other words, the standards that we already use for clinical trials need to be applied to the enrollment measures we use to engage patients to participate in those trials.
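As a minimal sketch of what randomizing accrual interventions could look like in practice – with site names and intervention arms invented purely for illustration – the site, rather than the patient, becomes the unit of randomization:

```python
import random

# Sketch: randomizing an accrual intervention at the site level, so its effect
# on enrollment rate can be estimated without confounding. Site names and
# intervention arms here are invented for illustration.

def randomize_sites(sites, interventions, seed=42):
    """Assign each site to an intervention arm, balanced via shuffling."""
    rng = random.Random(seed)
    shuffled = list(sites)
    rng.shuffle(shuffled)
    return {site: interventions[i % len(interventions)]
            for i, site in enumerate(shuffled)}

arms = randomize_sites(
    sites=[f"site_{n:02d}" for n in range(12)],
    interventions=["patient-navigator", "standard-outreach"],
)
print(sum(arm == "patient-navigator" for arm in arms.values()))  # 6 sites per arm
```

Enrollment rates per arm could then be compared directly, with the usual trial machinery (pre-registered endpoints, uniform data collection) applied to the intervention itself.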

This is not an optional consideration. It is an ethical obligation we have to cancer patients: we need to assure that we are doing all we can to maximize the rate at which we generate new evidence and test new therapies.

[Image credit: Logarithmic turtle accrual rates courtesy of Flickr user joleson.]





Can FDA's New Transparency Survive Avandia?

PDUFA V commitments signal a strong tolerance for open debate in the face of uncertainty.

I can admit to a rather powerful lack of enthusiasm when reading about interpersonal squabbles. It’s even worse in the scientific world: when I read about debates getting mired in personal attacks I tend to simply stop reading and move on to something else.

However, the really interesting part of this week’s meeting of an FDA joint Advisory Committee to discuss the controversial diabetes drug Avandia – at least in the sense of likely long-term impact – is not the scientific question under discussion, but the surfacing and handling of the raging interpersonal battle going on right now inside the Division of Cardiovascular and Renal Products. So I'll have to swallow my distaste and follow along with the drama.

[Image caption: Two words that make us mistrust Duke: Anil Potti, Christian Laettner]

Not that the scientific question at hand – does Avandia pose significant heart risks? – isn't interesting. It is. But if there’s one thing that everyone seems to agree on, it’s that we don’t have good data on the topic. Despite the re-adjudication of RECORD, no one trusts its design (and, ironically, the one trial with a design to rigorously answer the question was halted after intense pressure, despite an AdComm recommendation that it continue).  And no one seems particularly enthused about changing the current status of Avandia: in all likelihood it will continue to be permitted to be marketed under heavy restrictions. Rather than changing the future of diabetes, I suspect the committee will be content to let us slog along the same mucky trail.

The really interesting question, that will potentially impact CDER for years to come, is how it can function with frothing, open dissent among its staffers. As has been widely reported, FDA reviewer Tom Marciniak has written a rather wild and vitriolic assessment of the RECORD trial, excoriating most everyone involved. In a particularly stunning passage, Marciniak appears to claim that the entire output of anyone working at Duke University cannot be trusted because of the fraud committed by Duke cancer researcher Anil Potti:
I would have thought that the two words “Anil Potti” are sufficient for convincing anyone that Duke University is a poor choice for a contractor whose task it is to confirm the integrity of scientific research. 
(One wonders how far Marciniak is willing to take his guilt-by-association theme. Are the words “Cheng Yi Liang” sufficient to convince us that all FDA employees, including Marciniak, are poor choices for deciding matters relating to publicly traded companies? Should I not comment on government activities because I’m a resident of Illinois (my two words: “Rod Blagojevich”)?)

Rather than censoring or reprimanding Marciniak, his supervisors have taken the extraordinary step of letting him publicly air his criticisms, and then they have in turn publicly criticized his methods and approach.

I have been unable to think of a similar situation at any regulatory agency. The tolerance for dissent being displayed by FDA is, I believe, completely unprecedented.

And that’s the cliffhanger for me: can the FDA’s commitment to transparency extend so far as to accommodate public disagreements about its own approval decisions? Can it do so even when the disagreements take an extremely nasty and inappropriate tone?

  • Rather than considering that open debate is a good thing, will journalists jump on the drama and portray agency leadership as weak and indecisive?
  • Will the usual suspects in Congress be able to exploit this disagreement for their own political gain? How many House subcommittees will be summoning Janet Woodcock in the coming weeks?

I think what Bob Temple and Norman Stockbridge are doing is a tremendous experiment in open government. If they can pull it off, it could force other agencies to radically rethink how they go about crafting and implementing regulations. However, I also worry that it is politically simply not a viable approach, and that the agency will ultimately be seriously hurt by attacks from the media and legislators.

Where is this coming from?

As part of its recent PDUFA V commitment, the FDA put out a fascinating draft document, Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision-Making. It didn't get a lot of attention when first published back in February (few FDA documents do). However, it lays out a rather bold vision for how the FDA can acknowledge the existence of uncertainty in its evaluation of new drugs. Its proposed structure even envisions an open and honest accounting of divergent interpretations of data:
[Image caption: When they're frothing at the mouth, even Atticus doesn't let them publish a review]
A framework for benefit-risk decision-making that summarizes the relevant facts, uncertainties, and key areas of judgment, and clearly explains how these factors influence a regulatory decision, can greatly inform and clarify the regulatory discussion. Such a framework can provide transparency regarding the basis of conflicting recommendations made by different parties using the same information.
(Emphasis mine.)

Of course, the structured framework here is designed to reflect rational disagreement. Marciniak’s scattershot insults are in many ways a terrible first case for trying out a new level of transparency.

The draft framework notes that safety issues, like Avandia, are some of the major areas of uncertainty in the regulatory process. Contrast this vision of coolly and systematically addressing uncertainties with the sad reality of Marciniak’s attack:
In contrast to the prospective and highly planned studies of effectiveness, safety findings emerge from a wide range of sources, including spontaneous adverse event reports, epidemiology studies, meta-analyses of controlled trials, or in some cases from randomized, controlled trials. However, even controlled trials, where the evidence of an effect is generally most persuasive, can sometimes provide contradictory and inconsistent findings on safety as the analyses are in many cases not planned and often reflect multiple testing. A systematic approach that specifies the sources of evidence, the strength of each piece of evidence, and draws conclusions that explain how the uncertainty weighed on the decision, can lead to more explicit communication of regulatory decisions. We anticipate that this work will continue beyond FY 2013.
I hope that work will continue beyond 2013. Thoughtful, open discussions of real uncertainties are one of the most worthwhile goals FDA can aspire to, even if it means having to learn how to do so without letting the Marciniaks of the world scuttle the whole endeavor.

[Update June 6: Further bolstering the idea that the AdCom is just as much about FDA's ability to transparently manage differences of expert opinion in the face of uncertain data, CDER Director Janet Woodcock posted this note on the FDA's blog. She's pretty explicit about the bigger picture:
There have been, and continue to be, differences of opinion and scientific disputes, which is not uncommon within the agency, stemming from varied conclusions about the existing data, not only with Avandia, but with other FDA-regulated products. 
At FDA, we actively encourage and welcome robust scientific debate on the complex matters we deal with — as such a transparent approach ensures the scientific input we need, enriches the discussions, and enhances our decision-making.
I agree, and hope she can pull it off.]





Counterfeit Drugs in Clinical Trials?

[Image caption: Counterfeits flooding the market? Really?]

This morning I ran across a bit of a coffee-spitter: in the middle of an otherwise opaquely underinformative press release from TransCelerate BioPharma about the launch of their "Comparator Network" – which will perhaps streamline member companies' ability to obtain drugs from each other for clinical trials using active comparator arms – the CEO of the consortium, Dalvir Gill, drops a rather remarkable quote:

"Locating and accessing these comparators at the right time, in the right quantities and with the accompanying drug stability and regulatory information we need, doesn't always happen efficiently. This is further complicated by infiltration of the commercial drug supply chain by counterfeit drugs.  With the activation of our Comparator Network the participating TransCelerate companies will be able to source these comparator drugs directly from each other, be able to secure supply when they need it in the quantities they need, have access to drug data and totally mitigate the risk of counterfeit drugs in that clinical trial."

[Emphasis added.]

I have to admit to being a little floored by the idea that there is any sort of risk, in industry-run clinical trials, of counterfeit medication "infiltration".

Does Gill know something that the rest of us don't? Or is this just an awkward slap at perceived competition – innuendo against the companies that currently manage clinical trial comparator drug supply? Or an attempt at depicting the trials of non-Transcelerate members as risky and prone to fraud?

Either way, it could use some explaining. Thinking I might have missed something, I did do a quick literature search to see if I could come across any references to counterfeits in trials. Google Scholar and PubMed produced no useful results, but Wikipedia helpfully noted in its entry on counterfeit medications:

Counterfeit drugs have even been known to have been involved in clinical drug trials.[citation needed]


And on that point, I think we can agree: Citation needed. I hope the folks at Transcelerate will oblige.





Can a Form Letter from FDA "Blow Your Mind"?

Adam Feuerstein appears to be a generally astute observer of the biotech scene. As a finance writer, he's accosted daily with egregiously hyped claims from small drug companies and their investors, and I think he tends to do an excellent job of spotting cases where breathless excitement is unaccompanied by substantive information.


However, Feuerstein's healthy skepticism seems to have abandoned him last year in the case of a biotech called Sarepta Therapeutics, who released some highly promising - but also incredibly limited - data on their treatment for Duchenne muscular dystrophy. After a disappointing interaction with the FDA, Sarepta's stock dropped, and Feuerstein appeared to realize that he'd lost some objectivity on the topic.


With the new year, however, comes new optimism, and Feuerstein seems to be back to squinting hard at tea leaves - this time in the case of a form letter from the FDA.


He claims that the contents of the letter will "blow your mind". To him, the key passage is:


We understand that you feel that eteplirsen is highly effective, and may be confused by what you have read or heard about FDA's actions on eteplirsen. Unfortunately, the information reported in the press or discussed in blogs does not necessarily reflect FDA's position. FDA has reached no conclusions about the possibility of using accelerated approval for any new drug for the treatment of Duchenne muscular dystrophy, and for eteplirsen in particular.


Feuerstein appears to think that the fact that FDA "has reached no conclusions" may mean that it may be "changing its mind". To which he adds: "Wow!"
[Image caption: Adam Feuerstein: This time, too much froth, not enough coffee?]


I'm not sure why he thinks that. As far as I can tell, the FDA will never reach a conclusion like this before it has gone through the actual review process. After all, if FDA already knows the answer before the full review, what would the point of the review even be? It would seem a tremendous waste of agency resources. Not to mention how non-level the playing field would be if some companies were given early yes/no decisions while others had to go through a full review.


It seems fair to ask: is this a substantive change by FDA review teams, or would it be their standard response to any speculation about whether and how they would approve or reject a new drug submission? Can Feuerstein point to other cases where FDA has given a definitive yes or no on an application before the application was ever filed? I suspect not, but am open to seeing examples.


A more plausible theory for this letter is that the FDA is attempting a bit of damage control. It is not permitted to share anything specific it said or wrote to Sarepta about the drug, and has come under some serious criticism for “rejecting” Sarepta’s Accelerated Approval submission. The agency has been sensitive to the DMD community, even going so far as to have Janet Woodcock and Bob Temple meet with DMD parents and advocates last February. Sarepta has effectively positioned FDA as the reason for its delay in approval, but no letters have actually been published, so the conversation has been a bit one-sided. This letter appears to be an attempt at balancing perspectives a bit, although the FDA is still hamstrung by its restriction on relating any specific communications.

Ultimately, this is a form letter that contains no new information: FDA has reached no conclusions because FDA is not permitted to reach conclusions until it has completed a fair and thorough review, which won't happen until the drug is actually submitted for approval.

We talk about "transparency" in terms of releasing clinical trial data, but to me there is a great case to be made for increased regulatory transparency. Routine publication of most FDA correspondence and meeting results (including such things as Complete Response letters, which explain FDA's thinking when it rejects new applications) would go a long way towards improving public understanding of the drug review and approval process.





These Words Have (Temporarily) Relocated

Near the end of last year, I had the bright idea of starting a second blog, Placebo Lead-In, to capture a lot of smaller items that I found interesting but wasn't going to work up into a full-blown, 1000 word post.

According to Murphy’s Law, or the Law of Unintended Consequences, or the Law of Biting Off More Than You Can Chew, or some such similar iron rule of the universe, what happened next should have been predictable.

First, my team at CAHG Trials launched a new blog, First Patient In. FPI is dedicated to an open discussion of patient recruitment ideas, and I’m extremely proud of what we've published so far.

Next, I was invited to be a guest blogger for the upcoming Partnerships in Clinical Trials Conference.

Suddenly, I've gone from 1 blog to 4. And while my writing output appears to have increased, it definitely hasn't quadrupled. So this blog has been quiet for a bit too long as a result.

The good news is that the situation is temporary - Partnerships will actually happen at the end of this month. (If you’re going: drop me a line and let’s meet. If you’re not: you really should come and join us!) My contributions to FPI will settle into a monthly post, as I have a fascinating and clever team to handle most of the content.

In case you've missed it, then, here is a brief summary of my posts elsewhere over the past 2 months.

First Patient In


Partnerships in Clinical Trials



Please take a look, and I will see you back here soon.

[Photo credit: detour sign via Flickr user crossley]





Establishing efficacy - without humans?

The decade following passage of FDAAA has been one of easing standards for drug approvals in the US, most notably with the advent of “breakthrough” designation created by FDASIA in 2012 and the 21st Century Cures Act in 2016.

Although, as of this writing, there is no nominee for FDA Commissioner, it appears to be safe to say that the current administration intends to accelerate the pace of deregulation, mostly through further lowering of approval requirements. In fact, some of the leading contenders for the position are on record as supporting a return to pre-Kefauver-Harris days, when drug efficacy was not even considered for approval.

[Image caption: Build a better mouse model, and pharma will beat a path to your door - no laws needed.]

In this context, it is at least refreshing to read a proposal to increase efficacy standards. This comes from two bioethicists at McGill University, who make the somewhat-startling case for a higher degree of efficacy evaluation before a drug begins any testing in humans.
We contend that a lack of emphasis on evidence for the efficacy of drug candidates is all too common in decisions about whether an experimental medicine can be tested in humans. We call for infrastructure, resources and better methods to rigorously evaluate the clinical promise of new interventions before testing them on humans for the first time.
The authors propose some sort of centralized clearinghouse to evaluate efficacy more rigorously. It is unclear what they envision as this new multispecialty review body’s standard for green-lighting a drug to enter human testing. Instead, they propose three questions:
  • What is the likelihood that the drug will prove clinically useful?
  • Assume the drug works in humans. What is the likelihood of observing the preclinical results?
  • Assume the drug does not work in humans. What is the likelihood of observing the preclinical results?
These seem like reasonable questions, I suppose – and are likely questions that are already being asked of preclinical data. They certainly do not rise to the level of providing a clear standard for regulatory approval, though perhaps it’s a reasonable place to start.
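Read together, the three questions amount to an informal Bayesian update: the first supplies a prior, and the second and third supply a likelihood ratio. A toy calculation (every number below is invented; the comment itself supplies none) shows how even supportive preclinical data may move the needle only modestly:

```python
# Sketch: the three questions read as an informal Bayesian update. Every number
# below is invented for illustration; the comment itself supplies none.

def posterior_works(prior, p_data_if_works, p_data_if_not):
    """P(drug works | preclinical results), via Bayes' rule."""
    numerator = p_data_if_works * prior
    return numerator / (numerator + p_data_if_not * (1.0 - prior))

# Q1: prior probability the drug will prove clinically useful (say, 10%).
# Q2: chance of these preclinical results if the drug works (say, 80%).
# Q3: chance of the same results if the drug does not work (say, 30%).
p = posterior_works(prior=0.10, p_data_if_works=0.80, p_data_if_not=0.30)
print(round(p, 3))  # 0.229: supportive data, yet failure remains the way to bet
```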

The most obvious counterargument here is one that the authors curiously don’t pick up on at all: if we had the ability to accurately (or even semiaccurately) predict efficacy preclinically, pharma sponsors would already be doing it. The comment notes: “More-thorough assessments of clinical potential before trials begin could lower failure rates and drug-development costs.” And it’s hard not to agree: every pharmaceutical company would love to have even an incrementally-better sense of whether their early pipeline drugs will be shown to work as hoped.

The authors note
Commercial interests cannot be trusted to ensure that human trials are launched only when the case for clinical potential is robust. We believe that many FIH studies are launched on the basis of flimsy, underscrutinized evidence.
However, they do not produce any evidence that industry is in any way deliberately underperforming their preclinical work, merely that preclinical efficacy is often difficult to reproduce and is poorly correlated with drug performance in humans.

Pharmaceutical companies have many times more candidate compounds than they can possibly afford to put into clinical trials. Figuring out how to lower failure rates – or at least the total cost of failure - is a prominent industry obsession, and efficacy remains the largest source of late-stage trial failure. This quest to “fail faster” has resulted in larger and more expensive phase 2 trials, and even to increased efficacy testing in some phase 1 trials. And we do this not because of regulatory pressure, but because of hopes that these efforts will save overall costs. So it seems beyond probable that companies would immediately invest more in preclinical efficacy testing, if such testing could be shown to have any real predictive power. But generally speaking, it does not.

As a general rule, we don’t need regulations that are firmly aligned with market incentives; we need regulations if and when we think those incentives might run counter to the general good. In this case, there are already incredibly strong market incentives to improve preclinical assessments. Where companies have already attempted this with limited success, it would seem quixotic to think that regulatory fiat will accomplish more.

(One further point. The authors try to link the need for preclinical efficacy testing to the 2016 Bial tragedy. This seems incredibly tenuous: the authors speculate that perhaps trial participants would not have been harmed and killed if Bial had been required to produce more evidence of BIA 10-2474’s clinical efficacy before embarking on their phase 1 trials. But that would have been entirely coincidental in this case: even if the drug had shown more evidence of therapeutic promise, the tragedy still would have happened, because it had nothing at all to do with the drug’s efficacy.

This is to some extent a minor nitpick, since the argument in favor of earlier efficacy testing does not depend on a link to Bial. However, I bring it up because a) the authors dedicate the first four paragraphs of their comment to the link, and b) there appears to be a minor trend of using the death and injuries of that trial to justify an array of otherwise-unrelated initiatives. This seems like a trend we should discourage.)

[Update 2/23: I posted this last night, not realizing that only a few hours earlier, John LaMattina had published on this same article. His take is similar to mine, in that he is suspicious of the idea that pharmaceutical companies would knowingly push ineffective drugs up their pipeline.]

Kimmelman, J., & Federico, C. (2017). Consider drug efficacy before first-in-human trials. Nature, 542(7639), 25–27. DOI: 10.1038/542025a





A human bird flu case is thought to be found in Canada for the first time

A person has tested positive in British Columbia, Canadian health officials said, though the results must be sent to another lab for confirmation.





What does a 2nd Trump term mean for the Affordable Care Act?

President-elect Donald Trump tried unsuccessfully to get rid of the Affordable Care Act during his first term. What action will he take this time around?





More young people are surviving cancer. Then they face a life altered by it

More people are getting cancer in their 20s, 30s, and 40s, and surviving, thanks to rapid advancement in care. Many will have decades of life ahead of them, which means they face greater and more complex challenges in survivorship. Lourdes Monje is navigating these waters at age 29.





Patrick Dempsey aims to raise awareness of cancer disparities and encourage screening

NPR's Leila Fadel talks with actor Patrick Dempsey about his efforts to raise money for cancer treatment and prevention.





Noise Cancellation for Your Brain



Elemind, a 5-year-old startup based in Cambridge, Mass., today unveiled a US $349 wearable for neuromodulation, the company’s first product. According to cofounder and CEO Meredith Perry, the technology tracks the oscillation of brain waves using electroencephalography (EEG) sensors that detect the electrical activity of the brain and then influence those oscillations using bursts of sound delivered via bone conduction.

Elemind’s first application for this wearable aims to suppress alpha waves to help induce sleep. There are other wearables on the market that monitor brain waves and, through biofeedback, encourage users to actively modify their alpha patterns. Elemind’s headband appears to be the first device to use sound to directly influence the brain waves of a passive user.

In a clinical trial, says Perry [no relation to author], 76 percent of subjects fell asleep more quickly. Those who did see a difference averaged 48 percent less time to progress from awake to asleep. The results were similar to those of comparable trials of pharmaceutical sleep aids, Perry indicated.

“For me,” Perry said, “it cuts through my rumination, quiets my thinking. It’s like noise cancellation for the brain.”

I briefly tested Elemind’s headband in May. I found it comfortable, with a thick cushioned band that sits across the forehead connected to a stretchy elastic loop to keep it in place. In the band are multiple EEG electrodes, a processor, a three-axis accelerometer, a rechargeable lithium-polymer battery, and custom electronics that gather the brain’s electrical signals, estimate their phase, and generate pink noise through a bone-conduction speaker. The whole thing weighs about 60 grams—about as much as a small kiwi fruit.

My test conditions were far from optimal for sleep: early afternoon, a fairly bright conference room, a beanbag chair as bed, and a vent blowing. And my test lasted just 4 minutes. I can say that I didn’t find the little bursts of pink noise (white noise without the higher frequencies) unpleasant. And since I often wear an eye mask, feeling fabric on my face wasn’t disturbing. It wasn’t the time or place to try for sound sleep, but I—and the others in the room—noted that after 2 minutes I was yawning like crazy.

How Elemind tweaks brain waves

What was going on in my brain? Briefly, different brain states are associated with different frequencies of waves. Someone who is relaxed with eyes closed but not asleep produces alpha waves at around 10 hertz. As they drift off to sleep, the alpha waves are supplanted by theta waves, at around 5 Hz. Eventually, the delta waves of deep sleep show up at around 1 Hz.

Ryan Neely, Elemind’s vice president of science and research, explains: “As soon as you put the headband on, the EEG system starts running. It uses straightforward signal processing with bandpass filtering to isolate the activity in the 8- to 12-Hz frequency range—the alpha band.”

“Then,” Neely continues, “our algorithm looks at the filtered signal to identify the phase of each oscillation and determines when to generate bursts of pink noise.”
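A minimal sketch of that pipeline, assuming scipy and numpy (the sampling rate, filter order, and trough tolerance below are illustrative choices, not Elemind’s published parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # assumed EEG sampling rate in Hz (illustrative, not a device spec)

def alpha_band_phase(eeg, fs=FS):
    """Isolate the 8-12 Hz alpha band with a bandpass filter,
    then estimate the instantaneous phase of each sample."""
    b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
    alpha = filtfilt(b, a, eeg)       # zero-phase alpha-band signal
    phase = np.angle(hilbert(alpha))  # radians in (-pi, pi]; +/-pi = trough
    return alpha, phase

# Synthetic trace: a 10 Hz "alpha" rhythm buried in noise
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
alpha, phase = alpha_band_phase(eeg)

# Candidate moments to time a pink-noise burst: alpha-wave troughs
troughs = np.flatnonzero(np.abs(np.abs(phase) - np.pi) < 0.2)
```

The Hilbert transform assigns every sample an instantaneous phase, so downstream logic can react the moment the alpha oscillation nears a chosen phase.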

To help a user fall asleep more quickly [top], bursts of pink noise are timed to generate a brain response that is out of phase with alpha waves and so suppresses them. To enhance deep sleep [bottom], the pink noise is timed to generate a brain response that is in phase with delta waves. Source: Elemind

These auditory stimuli, he explains, create ripples in the waves coming from the brain. Elemind’s system tries to align these ripples with a particular phase in the wave. Because there is a gap between the stimulus and the evoked response, Elemind tested its system on 21 people and calculated the average delay, taking that into account when determining when to trigger a sound.
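That delay compensation amounts to simple phase arithmetic. A sketch under assumed numbers (the 30-millisecond evoked-response delay is a placeholder, not Elemind’s measured average):

```python
import math

def time_to_next_trough(phase_now, freq_hz, latency_s):
    """Seconds to wait before triggering a sound burst so that the evoked
    response, arriving latency_s later, lands on the oscillation's next
    trough (phase = pi)."""
    period = 1.0 / freq_hz
    dphi = (math.pi - phase_now) % (2 * math.pi)  # phase left until trough
    wait = dphi / (2 * math.pi) * period - latency_s
    while wait < 0:  # can't make this trough; aim for the next one
        wait += period
    return wait

# 10 Hz alpha (100 ms period), currently at a peak (phase 0),
# with an assumed 30 ms evoked-response delay:
print(round(time_to_next_trough(0.0, 10.0, 0.030) * 1000))  # -> 20 (ms)
```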

To induce sleep, Elemind’s headband targets the trough in the alpha wave, the point at which the brain is most excitable, Neely says.

“You can think of the alpha rhythm as a gate for communication between different areas of the brain,” he says. “By interfering with that communication, that coordination between different brain areas, you can disrupt patterns, like the ruminations that keep you awake.”

With these alpha waves suppressed, Neely says, the slower oscillations, like the theta waves of light sleep, take over.

Elemind doesn’t plan to stop there. The company plans to add an algorithm that addresses delta waves, the low-frequency 0.5- to 2-Hz waves characteristic of deep sleep. Here, Elemind’s technology will attempt to amplify this pattern with the intent of improving sleep quality.

Is this safe? Yes, Neely says, because auditory stimulation is self-limiting. “Your brain waves have a natural space they can occupy,” he explains, “and this stimulation just moves it within that natural space, unlike deep-brain stimulation, which can move the brain activity outside natural parameters.”

Going beyond sleep to sedation, memory, and mental health

Applications may eventually go beyond inducing and enhancing sleep. Researchers at the University of Washington and McGill University have completed a clinical study to determine if Elemind’s technology can be used to increase the pain threshold of subjects undergoing sedation. The results are being prepared for peer review.

Elemind is also working with a team involving researchers at McGill and the Leuven Brain Institute to determine if the technology can enhance memory consolidation in deep sleep and perhaps have some usefulness for people with mild cognitive impairment and other memory disorders.

Neely would love to see more applications investigated in the future.

“Inverse alpha stimulation [enhancing instead of suppressing the signal] could increase arousal,” he says. “That’s something I’d love to look into. And looking into mental-health treatment would be interesting, because phase coupling between the different brain regions appears to be an important factor in depression and anxiety disorders.”

Perry, who previously founded the wireless power startup UBeam, cofounded Elemind with four university professors with expertise in neuroscience, optogenetics, biomedical engineering, and artificial intelligence. The company has $12 million in funding to date and currently has 13 employees.

Preorders at $349 start today for beta units, and Elemind expects to start general sales later this year. The company will offer customers an optional membership at $7 to $13 monthly that will allow cloud storage of sleep data and access to new apps as they are released.





Is AI Search a Medical Misinformation Disaster?



Last month when Google introduced its new AI search tool, called AI Overviews, the company seemed confident that it had tested the tool sufficiently, noting in the announcement that “people have already used AI Overviews billions of times through our experiment in Search Labs.” The tool doesn’t just return links to Web pages, as in a typical Google search, but returns an answer that it has generated based on various sources, which it links to below the answer. But immediately after the launch users began posting examples of extremely wrong answers, including a pizza recipe that included glue and the interesting fact that a dog has played in the NBA.


While the pizza recipe is unlikely to convince anyone to squeeze on the Elmer’s, not all of AI Overview’s extremely wrong answers are so obvious—and some have the potential to be quite harmful. Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory and has a new book out about the online propagandists who “turn lies into reality.” She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke to her about whether AI search is likely to bring an onslaught of erroneous medical advice to unwary users.

I know you’ve been tracking disinformation on the Web for many years. Do you expect the introduction of AI-augmented search tools like Google’s AI Overviews to make the situation worse or better?

Renée DiResta: It’s a really interesting question. There are a couple of policies that Google has had in place for a long time that appear to be in tension with what’s coming out of AI-generated search. That’s made me feel like part of this is Google trying to keep up with where the market has gone. There’s been an incredible acceleration in the release of generative AI tools, and we are seeing Big Tech incumbents trying to make sure that they stay competitive. I think that’s one of the things that’s happening here.

We have long known that hallucinations are a thing that happens with large language models. That’s not new. It’s the deployment of them in a search capacity that I think has been rushed and ill-considered because people expect search engines to give them authoritative information. That’s the expectation you have on search, whereas you might not have that expectation on social media.

There are plenty of examples of comically poor results from AI search, things like how many rocks we should eat per day [a response that was drawn from an Onion article]. But I’m wondering if we should be worried about more serious medical misinformation. I came across one blog post about Google’s AI Overviews responses about stem-cell treatments. The problem there seemed to be that the AI search tool was sourcing its answers from disreputable clinics that were offering unproven treatments. Have you seen other examples of that kind of thing?

DiResta: I have. It’s returning information synthesized from the data that it’s trained on. The problem is that it does not seem to be adhering to the same standards that have long gone into how Google thinks about returning search results for health information. So what I mean by that is Google has, for upwards of 10 years at this point, had a search policy called Your Money or Your Life. Are you familiar with that?

I don’t think so.

DiResta: Your Money or Your Life acknowledges that for queries related to finance and health, Google has a responsibility to hold search results to a very high standard of care, and it’s paramount to get the information correct. People are coming to Google with sensitive questions and they’re looking for information to make materially impactful decisions about their lives. They’re not there for entertainment when they’re asking a question about how to respond to a new cancer diagnosis, for example, or what sort of retirement plan they should be subscribing to. So you don’t want content farms and random Reddit posts and garbage to be the results that are returned. You want to have reputable search results.

That framework of Your Money or Your Life has informed Google’s work on these high-stakes topics for quite some time. And that’s why I think it’s disturbing for people to see the AI-generated search results regurgitating clearly wrong health information from low-quality sites that perhaps happened to be in the training data.

So it seems like AI overviews is not following that same policy—or that’s what it appears like from the outside?

DiResta: That’s how it appears from the outside. I don’t know how they’re thinking about it internally. But those screenshots you’re seeing—a lot of these instances are being traced back to an isolated social media post or a clinic that’s disreputable but exists—are out there on the Internet. It’s not simply making things up. But it’s also not returning what we would consider to be a high-quality result in formulating its response.

I saw that Google responded to some of the problems with a blog post saying that it is aware of these poor results and it’s trying to make improvements. And I can read you the one bullet point that addressed health. It said, “For topics like news and health, we already have strong guardrails in place. In the case of health, we launched additional triggering refinements to enhance our quality protections.” Do you know what that means?

DiResta: That blog post is an explanation that [AI Overviews] isn’t simply hallucinating—the fact that it’s pointing to URLs is supposed to be a guardrail because that enables the user to go and follow the result to its source. This is a good thing. They should be including those sources for transparency and so that outsiders can review them. However, it is also a fair bit of onus to put on the audience, given the trust that Google has built up over time by returning high-quality results in its health information search rankings.

I know one topic that you’ve tracked over the years has been disinformation about vaccine safety. Have you seen any evidence of that kind of disinformation making its way into AI search?

DiResta: I haven’t, though I imagine outside research teams are now testing results to see what appears. Vaccines have been so much a focus of the conversation around health misinformation for quite some time that I imagine Google has had people looking specifically at that topic in internal reviews, whereas some of these other topics might be less in the forefront of the minds of the quality teams that are tasked with checking if there are bad results being returned.

What do you think Google’s next moves should be to prevent medical misinformation in AI search?

DiResta: Google has a perfectly good policy to pursue. Your Money or Your Life is a solid ethical guideline to incorporate into this manifestation of the future of search. So it’s not that I think there’s a new and novel ethical grounding that needs to happen. I think it’s more ensuring that the ethical grounding that exists remains foundational to the new AI search tools.





Superconducting Wire Sets New Current Capacity Record



UPDATE 31 OCTOBER 2024: No. 1 no longer. The would-have-been groundbreaking study published in Nature Communications by Amit Goyal et al. claiming the world’s highest-performing high-temperature superconducting wires yet has been retracted by the authors.

The journal’s editorial statement that now accompanies the paper says that after publication, an error in the calculation of the reported performance was identified. All of the study’s authors agreed with the retraction.

The researchers were first alerted to the issue by Evgeny Talantsev at the Mikheev Institute of Metal Physics in Ekaterinburg, Russia, and Jeffery Tallon at the Victoria University of Wellington in New Zealand. In a 2015 study, the two researchers had suggested upper limits for thin-film superconductors, and Tallon notes follow-up papers showed these limits held for more than 100 known superconductors. “The Goyal paper claimed current densities 2.5 times higher, so it was immediately obvious to us that there was a problem here,” he says.

Upon request, Goyal and his colleagues “very kindly agreed to release their raw data and did so quickly,” Tallon says. He and Talantsev discovered a mistake in the conversion of magnetization units.

“Most people who had been in the game for a long time would be fully conversant with the units conversion because the instruments all deliver magnetic data in [centimeter-gram-second] gaussian units, so they always have to be converted to [the International System of Units],” Tallon says. “It has always been a little tricky, but students are asked to take great care and check their numbers against other reports to see if they agree.”

In a statement, Goyal notes he and his colleagues “intend to continue to push the field forward” by continuing to explore ways to enhance wire performance using nanostructural modifications. —Charles Q. Choi

Original article from 17 August 2024 follows:

Superconductors have for decades spurred dreams of extraordinary technological breakthroughs, but many practical applications for them have remained out of reach. Now a new study reveals what may be the world’s highest-performing high-temperature superconducting wires yet, ones that carry 50 percent more current than the previous record holder. Scientists add that this advance was achieved without adding cost or complexity to how superconducting wires are currently made.

Superconductors conduct electricity with zero resistance. Classic superconductors work only at super-cold temperatures below 30 kelvins. In contrast, high-temperature superconductors can operate at temperatures above 77 K, which means they can be cooled to superconductivity using comparatively inexpensive and less burdensome cryogenics built around liquid nitrogen coolant.

Regular electrical conductors all resist electron flow to some degree, resulting in wasted energy. The fact that superconductors conduct electricity without dissipating energy has long led to dreams of significantly more efficient power grids. In addition, the way rivers of electric current course through them means superconductors can serve as powerful electromagnets, for applications such as maglev trains, better MRI scanners for medicine, doubling the amount of power generated by wind turbines, and nuclear fusion power plants.

“Today, companies around the world are fabricating kilometer-long, high-temperature superconductor wires,” says Amit Goyal, SUNY Distinguished Professor and SUNY Empire Innovation Professor at the University at Buffalo in New York.

However, many large-scale applications for superconductors may stay fantasies until researchers can find a way to fabricate high-temperature superconducting wires in a more cost-effective manner.

In the new research, scientists have created wires that have set new records for the amount of current they can carry at temperatures ranging from 5 K to 77 K. Moreover, fabrication of the new wires requires processes no more complex or costly than those currently used to make high-temperature superconducting wires.

“The performance we have reported in 0.2-micron-thick wires is similar to wires almost 10 times thicker,” Goyal says.

At 4.2 K, the new wires carried 190 million amps per square centimeter without any externally applied magnetic field. This is some 50 percent better than results reported in 2022 and a full 100 percent better than ones detailed in 2021, Goyal and his colleagues note. At 20 K and under an externally applied magnetic field of 20 tesla—the kind of conditions envisioned for fusion applications—the new wires can carry about 9.3 million amps per square centimeter, roughly 5 times as much as present-day commercial high-temperature superconductor wires, they add.
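The comparisons in those figures can be sanity-checked with back-of-the-envelope arithmetic; the 2021, 2022, and commercial-wire values below are implied by the stated percentages, not quoted from those sources:

```python
# Reported self-field critical current density at 4.2 K (amps per cm^2)
new_wire = 190e6

# Figures implied by the stated improvements:
implied_2022 = new_wire / 1.5  # "50 percent better" -> ~127 million A/cm^2
implied_2021 = new_wire / 2.0  # "100 percent better" -> 95 million A/cm^2

# At 20 K under 20 T: ~9.3 MA/cm^2, "roughly 5 times" commercial wire
implied_commercial = 9.3e6 / 5  # ~1.9 million A/cm^2

print(f"{implied_2022:.3g} {implied_2021:.3g} {implied_commercial:.3g}")
```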

Another factor key to the success of commercial high-temperature superconductor wires is pinning force—the ability to keep magnetic vortices pinned in place within the superconductors, where they could otherwise interfere with electron flow. (So in that sense higher pinning-force values are better here—more conducive to the range of applications expected for such high-capacity, high-temperature superconductors.) The new wires showed record-setting pinning forces of more than 6.4 trillion newtons per cubic meter at 4.3 K under a 7-tesla magnetic field. This is more than twice as much as results previously reported in 2022.

The new wires are based on rare-earth barium copper oxide (REBCO). The wires use nanometer-sized columns of insulating, non-superconducting barium zirconate at nanometer-scale spacings within the superconductor that can help pin down magnetic vortices, allowing for higher supercurrents.

The researchers made these gains after a few years spent optimizing deposition processes, Goyal says. “We feel that high-temperature superconductor wire performance can still be significantly improved,” he adds. “We have several paths to get to better performance and will continue to explore these routes.”

Based on these results, high-temperature superconductor wire manufacturers “will hopefully further optimize their deposition conditions to improve the performance of their wires,” Goyal says. “Some companies may be able to do this in a short time.”

The hope is that superconductor companies will be able to significantly improve performance without too many changes to present-day manufacturing processes. “If high-temperature superconductor wire manufacturers can even just double the performance of commercial high-temperature superconductor wires while keeping capital equipment costs the same, it could make a transformative impact to the large-scale applications of superconductors,” Goyal says.

The scientists detailed their findings on 7 August in the journal Nature Communications.

This story was updated on 19 August 2024 to correct Amit Goyal’s title and affiliation.





Cat's Eye Camera Can See Through Camouflage



Did that rock move, or is it a squirrel crossing the road? Tracking objects that look a lot like their surroundings is a big problem for many autonomous vision systems. AI algorithms can solve this camouflage problem, but they take time and computing power. A new camera designed by researchers in South Korea provides a faster solution. The camera takes inspiration from the eyes of a cat, using two modifications that let it distinguish objects from their background, even at night.

“In the future … a variety of intelligent robots will require the development of vision systems that are best suited for their specific visual tasks,” says Young Min Song, a professor of electrical engineering and computer science at Gwangju Institute of Science and Technology and one of the camera’s designers. Song’s recent research has been focused on using the “perfectly adapted” eyes of animals to enhance camera hardware, allowing for specialized cameras for different jobs. For example, fish eyes have wider fields of view as a consequence of their curved retinas. Cats may be common and easy to overlook, he says, but their eyes actually offer a lot of inspiration.

This particular camera copied two adaptations from cats’ eyes: their vertical pupils and a reflective structure behind their retinas. Combined, these allowed the camera to be 10 percent more accurate at distinguishing camouflaged objects from their backgrounds and 52 percent more efficient at absorbing incoming light.

Using a vertical pupil to narrow focus

While conventional cameras can clearly see the foreground and background of an image, the slitted pupils of a cat focus directly on a target, preventing it from blending in with its surroundings. Kim et al./Science Advances

In conventional camera systems, when there is adequate light, the aperture—the camera’s version of a pupil—is small and circular. This structure allows for a large depth of field (the distance between the closest and farthest objects in focus), clearly seeing both the foreground and the background. By contrast, cat eyes narrow to a vertical pupil during the day. This shifts the focus to a target, distinguishing it more clearly from the background.
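The aperture tradeoff described above can be illustrated with the standard thin-lens depth-of-field approximation; the focal length, f-numbers, and circle of confusion below are generic illustrative values, not specs of the cat-eye camera:

```python
def depth_of_field(f_mm, n, s_mm, c_mm=0.03):
    """Approximate total depth of field for focal length f_mm,
    f-number n, subject distance s_mm, circle of confusion c_mm."""
    h = f_mm**2 / (n * c_mm) + f_mm  # hyperfocal distance
    near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
    far = s_mm * (h - f_mm) / (h - s_mm) if s_mm < h else float("inf")
    return far - near

# A wider aperture (smaller f-number) gives a shallower depth of field,
# isolating the subject from the background:
print(depth_of_field(50, 1.8, 2000) < depth_of_field(50, 8, 2000))  # -> True
```

In a slit pupil the effective aperture differs by axis: along the slit’s long dimension the aperture stays large, giving the shallow depth of field that defocuses the background.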

The researchers 3D printed a vertical slit to use as an aperture for their camera. They tested the vertical slit using seven computer vision algorithms designed to track moving objects. The vertical slit increased contrast between a target object and its background, even if they were visually similar, and it beat the conventional camera on five of the seven tests. In the two tests where it performed worse, the accuracies of the two cameras were within 10 percent of each other.

Using a reflector to gather additional light

Cats can see more clearly at night than conventional cameras due to reflectors in their eyes that bring extra light to their retinas. Kim et al./Science Advances

Cat eyes have an in-built reflector, called a tapetum lucidum, which sits behind the retina. It reflects light that passes through the retina back at it, so it can process both the incoming light and reflected light, giving felines superior night vision. You can see this biological adaptation yourself by looking at a cat’s eyes at night: they will glow.

The researchers created an artificial version of this biological structure by placing a silver reflector under each photodiode in the camera. Photodiodes without a reflector generated current only when more than 1.39 watts per square meter of light fell on them, while photodiodes with a reflector activated at just 0.007 watts per square meter. That means a photodiode with a reflector could generate an image with about 1/200th the light.
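The “about 1/200th” figure follows directly from the two activation thresholds quoted above:

```python
# Activation thresholds, in watts per square meter
threshold_bare = 1.39          # photodiode without a reflector
threshold_with_mirror = 0.007  # photodiode with a silver reflector

ratio = threshold_bare / threshold_with_mirror
print(round(ratio))  # -> 199, i.e. roughly 1/200th as much light needed
```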

Each photodiode was placed above a reflector and joined by metal electrodes to create a curved image sensor. Kim et al./Science Advances

To decrease visual aberrations (imperfections in the way the lens of the camera focuses light), Song and his team opted to create a curved image sensor, like the back of the human eye. In such a setup, a standard image sensor chip won’t work, because it’s rigid and flat. Instead, curved sensors often rely on many individual photodiodes arranged on a curved substrate. A common problem with such sensors is that they require ultrathin silicon photodiodes, which inherently absorb less light than a standard imager’s pixels. But the reflectors behind each photodiode in the artificial cat’s eye compensated for this, enabling the researchers to create a curved imager without sacrificing light absorption.

Together, vertical slits and reflectors led to a camera that could see more clearly in the dark and isn’t fooled by camouflage. “Applying these two characteristics to autonomous vehicles or intelligent robots could naturally improve their ability to see objects more clearly at night and to identify specific targets more accurately,” says Song. He foresees this camera being used for self-driving cars or drones in complex urban environments.

Song’s lab is continuing to work on using biological solutions to solve artificial vision problems. Currently, they are developing devices that mimic how brains process images, hoping to one day combine them with their biologically-inspired cameras. The goal, says Song, is to “mimic the neural systems of nature.”

Song and his colleagues’ work was published this week in the journal Science Advances.

This article appears in the November 2024 print issue.





Stretchy Wearables Can Now Heal Themselves



If you’ve ever tried to get a bandage to stick to your elbow, you understand the difficulty in creating wearable devices that attach securely to the human body. Add digital electronic circuitry, and the problem becomes more complicated. Now include the need for the device to fix breaks and damage automatically—and let’s make it biodegradable while we’re at it—and many researchers would throw up their hands in surrender.

Fortunately, an international team led by researchers at Korea University Graduate School of Converging Science and Technology (KU-KIST) persevered, and has developed conductor materials that it claims are stretchable, self-healing, and biocompatible. Their project was described this month in the journal Science Advances.

The biodegradable conductor offers a new approach to patient monitoring and delivering treatments directly to the tissues and organs where they are needed. For example, a smart patch made of these materials could measure motion, temperature, and other biological data. The material could also be used to create sensor patches that can be implanted inside the body, and even mounted on the surface of internal organs. The biocompatible materials can be designed to degrade after a period of time, eliminating the need for an invasive procedure to remove the sensor later.

“This new technology is a glimpse at the future of remote healthcare,” says Robert Rose, CEO of Rose Strategic Partners, LLC. “Remote patient monitoring is an industry still in its early stages, but already we are seeing the promise of what is not only possible, but close on the horizon. Imagine a device implanted at a surgical site to monitor and report your internal healing progress. If it is damaged, the device can heal itself, and when the job is done, it simply dissolves. It sounds like science fiction, but it’s now science fact.”

Self-healing elastics

After being cut, a ribbonlike film was able to heal itself in about 1 minute. Suk-Won Hwang

The system relies on two different layers of flexible material, both self-healing: one is for conduction and the other is an elastomer layer that serves as a substrate to support the sensors and circuitry needed to collect data. The conductor layer is based on a substance known by the acronym PEDOT:PSS, short for poly(3,4-ethylenedioxythiophene) polystyrene sulfonate. It’s a conductive polymer widely used in flexible displays and touch panels, as well as wearable devices. The research team used additives, including polyethylene glycol and glycol, to increase the polymer’s conductivity and its ability to automatically repair damage such as cuts or tears.

In order to conform to curved tissues and survive typical body motion, the substrate layer must be extremely flexible. The researchers based it on elastomers that can match the shape of curved tissues, such as skin or individual organs.

These two layers stick to each other, thanks to chemical bonds that can connect the polymer chains of the plastic films in each layer. Combined, these materials create a system that is flexible and stretchable. In testing, the researchers showed that the materials could survive stretching up to 500 percent.

The self-healing function arises from the material’s ability to reconnect to itself when cut or otherwise damaged, based on a chemical process called disulfide metathesis. In short, polymer molecules containing pairs of linked sulfur atoms, called disulfides, can re-form themselves after being severed: in so-called disulfide-disulfide shuffling reactions, disulfide bonds break and then re-form, not necessarily between the original partners. According to the KU-KIST researchers, after being cut, their material recovered conductivity in its circuits within about two minutes without any intervention. The material was also tested for bending and twisting, and for its ability to function both in air and under water.

This approach offers many advantages over other flexible electronics designs. For example, silver nanowires and carbon nanotubes have been used as the basis for stretchable devices, but they can be brittle and lack the self-healing properties of the KU-KIST materials. Other materials such as liquid metals can self-heal, but they are typically difficult to handle and integrate into wearable circuitry.

As a demonstration, the team created a multifunction patch, approximately 4.5 square centimeters in area, that included humidity, temperature, and pressure sensors. In spite of being cut in four separate locations, it was able to heal itself and continue to provide sensor readings.

Implant tested in a rat

To take the demonstration a step further, the researchers created a 1.8-square-centimeter device that was attached to a rat’s bladder. The device was designed to wrap around the bladder and then adhere to itself, so no adhesives or sutures were required to attach the sensor to the organ. The team chose the bladder for their experiments because, under normal conditions, its size can change by 300 percent.

The device incorporated both electrodes and pressure sensors, which were able to detect changes in the bladder pressure. The electrodes could detect bladder voiding, through electromyography signals, as well as stimulate the bladder to induce urination. As with the initial demonstration, intentional damage to the device’s circuitry healed on its own, without intervention.

Because the materials are biocompatible and biodegradable, devices fabricated with them can be worn on the skin as well as implanted within the body. Biodegradability also means implants would not need a second surgical procedure to remove them: they could be left in place after serving their purpose and be absorbed by the body.

According to Suk-Won Hwang, assistant professor at KU-KIST, a few hurdles remain on the path to commercialization. “We need to test the biocompatibility of some of the materials used in the conductor and substrate layers. While scalable production appears to be feasible, the high cost of disulfide derivatives might make the technology too expensive, aside from some special applications,” he says. “Biocompatibility testing and material synthesis optimization will take one to two years, at least.”





Electrical Stitches Speed Wound Healing in Rats



Surgical stitches that generate electricity can help wounds heal faster in rats, a new study from China finds.

In the body, electricity helps the heart beat, causes muscles to contract, and enables the body to communicate with the brain. Now scientists are increasingly using electricity to promote healing with so-called electroceuticals. These electrotherapies often seek to mimic the electrical signals the body naturally uses to help new cells migrate to wounds to support the healing process.

In the new study, researchers focused on sutures, which are used to close wounds and surgical incisions. Although medical devices have evolved rapidly over the years, sutures remain limited in capability, says Zhouquan Sun, a doctoral candidate at Donghua University in Shanghai. “This observation led us to explore integrating advanced therapeutics into sutures,” Sun says.

Prior work sought to enhance sutures by adding drugs or growth factors to the stitches. However, most of these drugs either had insignificant effects on healing or triggered side effects such as allergic reactions or nausea. Growth factors in sutures often degraded before they could have any effect, or failed to activate entirely.

The research team behind the new sutures has developed fibers for electronics for nearly 10 years, for applications such as sensors. “This is our first attempt to apply fiber electronics in the biomedical field,” says Chengyi Hou, a professor of materials science and engineering at Donghua University.

Making Electrical Sutures Work

The new sutures are roughly 500 microns wide, or about five times the width of the average human hair. Like typical sutures, the new stitches are biodegradable, avoiding the need for doctors to remove them, a process that can cause further damage to a wound.

Each suture is made of a magnesium filament core wrapped in nanofibers of poly(lactic-co-glycolic acid), or PLGA, a commercially available, inexpensive, biodegradable polymer used in sutures. The suture also includes an outer sheath made of polycaprolactone (PCL), a biodegradable polyester and another common suture material.

Previously, electrotherapy devices were often bulky and expensive, and required wires connected to an external battery. The new stitches are instead powered by the triboelectric effect, the most common cause of static electricity. When two different materials repeatedly touch and then separate—in the case of the new suture, its core and sheath—the surface of one material can steal electrons from the surface of the other. This is why rubbing feet on a carpet or running a comb through hair can build up electric charge.
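As a rough back-of-the-envelope illustration (not taken from the study), the open-circuit voltage of a contact-separation triboelectric generator is commonly modeled with the parallel-plate relation V = σx/ε₀. The charge density and gap values below are assumptions chosen only to show that volt-scale outputs, like those the sutures produced, are plausible:

```python
# Toy estimate of the open-circuit voltage of a contact-separation
# triboelectric generator, using the standard parallel-plate model
# V_oc = sigma * x / epsilon_0. The charge density and gap values
# below are illustrative assumptions, not measurements from the study.

EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def triboelectric_voc(sigma: float, gap: float) -> float:
    """Open-circuit voltage (V) for a transferred surface charge
    density sigma (C/m^2) and layer separation gap (m)."""
    return sigma * gap / EPSILON_0

# Assumed values: ~0.2 uC/m^2 transferred charge, 100-micron separation.
v = triboelectric_voc(sigma=2e-7, gap=1e-4)
print(f"{v:.1f} V")  # a couple of volts, the same order as the sutures' output
```

Even modest charge densities over micron-scale gaps yield volt-scale open-circuit voltages, which is why small body motions suffice to drive the stitches.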

A common problem with sutures is that daily movements may cause strain that reduces their efficacy. The new stitches instead take advantage of these motions to generate electricity that helps wounds heal.

The main obstacle the researchers had to surmount was developing a suture that was both thin and strong enough to serve in medicine. Over the course of nearly two years, they tinkered with the molecular weights of the polymers they used and refined their fiber spinning technology to reduce their suture’s diameter while maintaining strength, Sun says.

In lab experiments on rats, the sutures generated about 2.3 volts during normal exercise. The scientists found the new sutures could speed up wound healing by 50 percent over the course of 10 days compared to conventional sutures. They also significantly lowered bacteria levels even without the use of daily wound disinfectants, suggesting they could reduce the risk of postoperative infections.

“Future research may delve deeper into the molecular mechanisms of how electrical stimulation facilitates wound healing,” says Hui Wang, a chief physician at Shanghai Sixth People’s Hospital.

Further tests are needed in clinical settings to assess how effective these sutures are in humans. If such experiments prove successful, “this bioabsorbable electrically stimulating suture could change how we treat injuries in the future,” Hou says.

The scientists detailed their findings online 8 October in the journal Nature Communications.





Crop Parasites Can Be Deterred by “Electric Fences”



Imagine you’re a baby cocoa plant, just unfurling your first tentative roots into the fertile, welcoming soil.

Somewhere nearby, a predator stirs. It has no ears to hear you, no eyes to see you. But it knows where you are, thanks in part to the weak electric field emitted by your roots.

It is microscopic, but it’s not alone. By the thousands, the creatures converge, slithering through the waterlogged soil, propelled by their flagella. If they reach you, they will use fungal-like hyphae to penetrate and devour you from the inside. They’re getting closer. You’re a plant. You have no legs. There’s no escape.

But just before they fall upon you, they hesitate. They seem confused. Then, en masse, they swarm off in a different direction, lured by a more attractive electric field. You are safe. And they will soon be dead.

If Eleonora Moratto and Giovanni Sena get their way, this is the future of crop pathogen control.

Many variables are involved in the global food crisis, but among the worst are the pests that devastate food crops, ruining up to 40 percent of their yield before they can be harvested. One of these—the little protist in the example above, an oomycete formally known as Phytophthora palmivora—has a US $1 billion appetite for economic staples like cocoa, palm, and rubber.

There is currently no chemical defense that can vanquish these creatures without poisoning the rest of the (often beneficial) organisms living in the soil. So Moratto, Sena, and their colleagues at Sena’s group at Imperial College London settled on a non-traditional approach: They exploited P. palmivora’s electric sense, which can be spoofed.

All plant roots that have been measured to date generate external ion flux, which translates into a very weak electric field. Decades of evidence suggest that this signal is an important target for predators’ navigation systems. However, it remains a matter of some debate how much their predators rely on plants’ electrical signatures to locate them, as opposed to chemical or mechanical information. Last year, Moratto and Sena’s group found that P. palmivora spores are attracted to the positive electrode of a cell generating current densities of 1 ampere per square meter. “The spores followed the electric field,” says Sena, suggesting that a similar mechanism helps them find natural bioelectric fields emitted by roots in the soil.

That got the researchers wondering: Might such an artificial electric field override the protists’ other sensory inputs, and scramble their compasses as they tried to use plant roots’ much weaker electrical output?

To test the idea, the researchers developed two ways to protect plant roots using a constant vertical electric field. They cultivated two common snacks for P. palmivora—a flowering plant related to cabbage and mustard, and a legume often used as livestock feed—in tubes in a hydroponic solution.

Two electric-field configurations were tested: a “global” vertical field [left] and a field generated by two small nearby electrodes. The global field proved to be slightly more effective. Eleonora Moratto

In the first assay, the researchers sandwiched the plant roots between rows of electrodes above and below, which completely engulfed them in a “global” vertical field. For the second set, the field was generated using two small electrodes a short distance away from the plant, creating current densities on the order of 10 A/m2. Then they unleashed the protists.

Compared with the control group, both methods successfully diverted a significant portion of the predators away from the plant roots. They swarmed the positive electrode, where—since zoospores can’t survive for longer than about 2 to 3 hours without a host—they presumably starved to death. Or worse. Neil Gow, whose research presented some of the first evidence for zoospore electrosensing, has other theories about their fate. “Applied electrical fields generate toxic products and steep pH gradients near and around the electrodes due to the electrolysis of water,” he says. “The tropism towards the electrode might be followed by killing or immobilization due to the induced pH gradients.”

Not only did the technique prevent infestation, but some evidence indicates that it may also mitigate existing infections. The researchers published their results in August in Scientific Reports.

The global electric field was marginally more successful than the local. However, it would be harder to translate from lab conditions into a (literal) field trial in soil. The local electric field setup would be easy to replicate: “All you have to do is stick the little plug into the soil next to the crop you want to protect,” says Sena.

Moratto and Sena say this is a proof of concept that demonstrates a basis for a new, pesticide-free way to protect food crops. (Sena likens the technique to the decoys used by fighter jets to draw away incoming missiles by mimicking the signals of the original target.) They are now looking for funding to expand the project. The first step is testing the local setup in soil; the next is to test the approach on Phytophthora infestans, a meaner, scarier cousin of P. palmivora.

P. infestans attacks a more varied diet of crops—you may be familiar with its work during the Irish potato famine. The close genetic similarity between the two species makes P. infestans another promising candidate for electrical pest control. This investigation, however, may require more funding, because P. infestans research can be undertaken only under more stringent laboratory biosecurity protocols.

The work at Imperial ties into the broader—and somewhat charged—debate around electrostatic ecology; that is, the extent to which creatures including ticks make use of heretofore poorly understood electrical mechanisms to orient themselves and in other ways enhance their survival. “Most people still aren’t aware that naturally occurring electricity can play an ecological role,” says Sam England, a behavioral ecologist with Berlin’s Natural History Museum. “So I suspect that once these electrical phenomena become more well known and understood, they will inspire a greater number of practical applications like this one.”





For this Stanford Engineer, Frugal Invention Is a Calling



Manu Prakash spoke with IEEE Spectrum shortly after returning to Stanford University from a month aboard a research vessel off the coast of California, where he was testing tools to monitor oceanic carbon sequestration. The associate professor conducts fieldwork around the world to better understand the problems he’s working on, as well as the communities that will be using his inventions.

This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

Prakash develops imaging instruments and diagnostic tools, often for use in global health and environmental sciences. His devices typically cost radically less than conventional equipment—he aims for reductions of two or more orders of magnitude. Whether he’s working on pocketable microscopes, mosquito or plankton monitors, or an autonomous malaria diagnostic platform, Prakash always includes cost and access as key aspects of his engineering. He calls this philosophy “frugal science.”

Why should we think about science frugally?

Manu Prakash: To me, when we are trying to ask and solve problems and puzzles, it becomes important: In whose hands are we putting these solutions? A frugal approach to solving the problem is the difference between 1 percent of the population or billions of people having access to that solution.

Lack of access creates these kinds of barriers in people’s minds, where they think they can or cannot approach a kind of problem. It’s important that we as scientists or just citizens of this world create an environment that feels that anybody has a chance to make important inventions and discoveries if they put their heart to it. The entrance to all that is dependent on tools, but those tools are just inaccessible.

How did you first encounter the idea of “frugal science”?

Prakash: I grew up in India and lived with very little access to things. And I got my Ph.D. at MIT. I was thinking about this stark difference in worlds that I had seen and lived in, so when I started my lab, it was almost a commitment to [asking]: What does it mean when we make access one of the critical dimensions of exploration? So, I think a lot of the work I do is primarily driven by curiosity, but access brings another layer of intellectual curiosity.

How do you identify a problem that might benefit from frugal science?

Prakash: Frankly, it’s hard to find a problem that would not benefit from access. The question to ask is “Where are the neglected problems that we as a society have failed to tackle?” We do a lot of work in diagnostics. A lot [of our solutions] beat the conventional methods that are neither cost effective nor any good. It’s not about cutting corners; it’s about deeply understanding the problem—better solutions at a fraction of the cost. It does require invention. For that order of magnitude change, you really have to start fresh.

Where does your involvement with an invention end?

Prakash: Inventions are part of our soul. Your involvement never ends. I just designed the 415th version of Foldscope [a low-cost “origami” microscope]. People only know it as version 3. We created Foldscope a long time ago; then I realized that nobody was going to provide access to it. So we went back and invented the manufacturing process for Foldscope to scale it. We made the first 100,000 Foldscopes in the lab, which led to millions of Foldscopes being deployed.

So it’s continuous. If people are scared of this, they should never invent anything [laughs], because once you invent something, it’s a lifelong project. You don’t put it aside; the project doesn’t put you aside. You can try to, but that’s not really possible if your heart is in it. You always see problems. Nothing is ever perfect. That can be ever consuming. It’s hard. I don’t want to minimize this process in any way or form.





Scary Stories: Establishing a Field Amid Skepticism



In the spirit of the Halloween season, IEEE Spectrum presents a pair of stories that—although grounded in scientific truth rather than the macabre—were no less harrowing for those who lived them. In today’s installment, Robert Langer had to push back against his field’s conventional wisdom to pioneer a drug-delivery mechanism vital to modern medicine.

Nicknamed the Edison of Medicine, Robert Langer is one of the world’s most-cited researchers, with over 1,600 published papers, 1,400 patents, and a top-dog role as one of MIT’s nine prestigious Institute Professors. Langer pioneered the now-ubiquitous drug delivery systems used in modern cancer treatments and vaccines, indirectly saving countless lives throughout his 50-year career.

But, much like Edison and other inventors, Langer’s big ideas were initially met with skepticism from the scientific establishment.

He came up in the 1970s as a chemical engineering postdoc working in the lab of Dr. Judah Folkman, a pediatric surgeon at the Boston Children’s Hospital. Langer was tasked with solving what many believed was an impossible problem—isolating angiogenesis inhibitors to halt cancer growth. Folkman’s vision of stopping tumors from forming their own self-sustaining blood vessels was compelling enough, but few believed it possible.

Langer encountered both practical and social challenges before his first breakthrough. One day, a lab technician accidentally spilled six months’ worth of samples onto the floor, forcing him to repeat the painstaking process of dialyzing extracts. Those months of additional work steered Langer’s development of novel microspheres that could deliver large molecules of medicine directly to tumors.

In the 1970s, Langer developed these tiny microspheres to release large molecules through solid materials, a groundbreaking proof-of-concept for drug delivery. Robert Langer

Langer then submitted the discovery to prestigious journals and was invited to speak at a conference in Michigan in 1976. He practiced the 20-minute presentation for weeks, hoping for positive feedback from respected materials scientists. But when he stepped off the podium, a group approached him and said bluntly, “We don’t believe anything you just said.” They insisted that macromolecules were simply too large to pass through solid materials, and his choice of organic solvents would destroy many inputs. Conventional wisdom said so.

Nature published Langer’s paper three months later, demonstrating for the first time that non-inflammatory polymers could enable the sustained release of proteins and other macromolecules. The same year, Science published his isolation mechanism to restrict tumor growth.

Langer and Folkman’s research paved the way for modern drug delivery. MIT and Boston Children’s Hospital

Even with impressive publications, Langer still struggled to secure funding for his work in controlling macromolecule delivery, isolating the first angiogenesis inhibitors, and testing their behavior. His first two grant proposals were rejected on the same day, a devastating blow for a young academic. The reviewers doubted his experience as “just an engineer” who knew nothing about cancer or biology. One colleague tried to cheer him up, saying, “It’s probably good those grants were rejected early in your career. Since you’re not supporting any graduate students, you don’t have to let anyone go.” Langer thought the colleague was probably right, but the rejections still stung.

His patent applications, filed alongside Folkman at the Boston Children’s Hospital, were rejected five years in a row. After all, it’s difficult to prove you’ve got something good if you’re the only one doing it. Langer remembers feeling disappointed but not crushed entirely. Eventually, other scientists cited his findings and expanded upon them, giving Langer and Folkman the validation needed for intellectual property development. As of this writing, the pair’s two studies from 1976 have been cited nearly 2,000 times.

As the head of MIT’s Langer Lab, he often shares these same stories of rejection with early-career students and researchers. He leads a team of over 100 undergrads, grad students, postdoctoral fellows, and visiting scientists, all finding new ways to deliver genetically engineered proteins, DNA, and RNA, among other research areas. Langer’s reputation is further bolstered by the many successful companies he co-founded or advised, like mRNA leader Moderna, which rose to prominence after developing its widely used COVID-19 vaccine.

Langer sometimes thinks back to those early days—the shattered samples, the cold rejections, and the criticism from senior scientists. He maintains that “Conventional wisdom isn’t always correct, and it’s important to never give up—(almost) regardless of what others say.”





What My Daughter’s Harrowing Alaska Airlines Flight Taught Me About Healthcare

As a leader who has committed much of his career to improving healthcare — an industry that holds millions of people’s lives in its hands — I took from this terrifying incident a new guiding principle. Healthcare needs to pursue a zero-failure rate.

The post What My Daughter’s Harrowing Alaska Airlines Flight Taught Me About Healthcare appeared first on MedCity News.





Acadia Pharma Sells Voucher for Speedier FDA Drug Review for $150M

Acadia Pharmaceuticals did not disclose the buyer of the priority review voucher. The biotech received the voucher last year alongside the regulatory decision that made its drug Daybue the first FDA-approved treatment for the rare disease Rett syndrome.

The post Acadia Pharma Sells Voucher for Speedier FDA Drug Review for $150M appeared first on MedCity News.





4 Areas Within Mental Health Care that Give Executives Hope

Mental health experts are hopeful about the de-stigmatization of behavioral health, the promise of AI and other areas, they shared at a recent conference.

The post 4 Areas Within Mental Health Care that Give Executives Hope appeared first on MedCity News.





AI is Revolutionizing Healthcare, But Are We Ready for the Ethical Challenges? 

Navigating the regulatory and ethical requirements of different medical data providers across many different countries, as well as safeguarding patient privacy, is a mammoth task that requires extra resources and expertise.  

The post AI is Revolutionizing Healthcare, But Are We Ready for the Ethical Challenges?  appeared first on MedCity News.





CVS Health Exec: Payers Need to Stop Making Behavioral Health Providers Jump Through Hoops In Order to Participate in Value-Based Care

Value-based care contracting is especially difficult for behavioral health providers, Taft Parsons III, chief psychiatric officer at CVS Health/Aetna, pointed out during a conference this week.

The post CVS Health Exec: Payers Need to Stop Making Behavioral Health Providers Jump Through Hoops In Order to Participate in Value-Based Care appeared first on MedCity News.





Through Early Discussions About Elder Care, Doctors Can Empower Seniors to Age in Place

The vast majority of older adults want to age at home. To support that goal, doctors should encourage them to consider their care options — long before they need assistance.

The post Through Early Discussions About Elder Care, Doctors Can Empower Seniors to Age in Place appeared first on MedCity News.





The Startup Economy is Turbulent. Here’s How Founders Can Recognize and Avoid Common Pitfalls

While startups in highly regulated industries like healthcare and finance are almost certain to face heightened scrutiny, there are controllable factors that can offset these challenges.

The post The Startup Economy is Turbulent. Here’s How Founders Can Recognize and Avoid Common Pitfalls appeared first on MedCity News.





Driving Genetic Testing Adoption and Improved Patient Care through Health Data Intelligence

By fostering collaboration and seamless data integration into healthcare systems, the industry is laying the groundwork for a future in which “personalized medicine” is so commonplace within clinical practice that we will just start calling it “medicine.”

The post Driving Genetic Testing Adoption and Improved Patient Care through Health Data Intelligence appeared first on MedCity News.





‘Serial Killing’ Cell Therapy From Autolus Lands FDA Approval in Blood Cancer

Autolus Therapeutics’ Aucatzyl is now FDA approved for treating advanced cases of B-cell precursor acute lymphoblastic leukemia. While it goes after the same target as Gilead Sciences’ Tecartus, Autolus engineered its CAR T-therapy with properties that could improve safety, efficacy, and durability.

The post ‘Serial Killing’ Cell Therapy From Autolus Lands FDA Approval in Blood Cancer appeared first on MedCity News.





Inside Providence’s Health Equity & Medicaid Strategy

Whitney Haggerson — vice president of health equity and Medicaid at Providence — discussed the significance of her role, as well as how her health system is working to give all employees, regardless of title, the skills needed to help reduce health inequities.

The post Inside Providence’s Health Equity & Medicaid Strategy appeared first on MedCity News.





How Can Healthcare Organizations Earn Trust with Marginalized Communities?

Access to care isn’t enough. Healthcare organizations need to build trust in order to reach underserved communities, experts said on a recent panel.

The post How Can Healthcare Organizations Earn Trust with Marginalized Communities? appeared first on MedCity News.





Closing Staffing Gaps in Healthcare by Utilizing Diverse Pipelines of Contingent Talent

By adopting a contingent workforce model and investing in the right data tools to power better informed decision-making and talent strategy, healthcare organizations can begin to address staffing challenges and turn their talent goals into reality. 

The post Closing Staffing Gaps in Healthcare by Utilizing Diverse Pipelines of Contingent Talent appeared first on MedCity News.





How One Massachusetts Maternal Mental Health Program Scaled Across the Country

During a recent panel, experts discussed the Massachusetts Child Psychiatry Access Program (MCPAP) for Moms and how it achieved scale.

The post How One Massachusetts Maternal Mental Health Program Scaled Across the Country appeared first on MedCity News.





Unlocking the Future of Radioligand Therapy: From Discovery to Delivering at Scale

As radiopharmaceuticals enter a new phase, industry leaders must rethink external services and internal capabilities to master the complexities of delivering advanced therapies.

The post Unlocking the Future of Radioligand Therapy: From Discovery to Delivering at Scale appeared first on MedCity News.





Neurogene Gene Therapy Shows Signs of Efficacy in Small Study, But an Adverse Event Spooks Investors

Neurogene’s Rett syndrome gene therapy has preliminary data supporting safety and efficacy of the one-time treatment. But a late-breaking report of a serious complication in a patient who received the high dose sent shares of the biotech downward.

The post Neurogene Gene Therapy Shows Signs of Efficacy in Small Study, But an Adverse Event Spooks Investors appeared first on MedCity News.





Fort Health Secures $5.5M to Expand Access to Integrated Pediatric Mental Health Care

Fort Health’s $5.5 million in funding was led by Twelve Below and Vanterra and included participation from Redesign Health, Blue Venture Fund and True Wealth Ventures.

The post Fort Health Secures $5.5M to Expand Access to Integrated Pediatric Mental Health Care appeared first on MedCity News.






Medications for Opioid Use Disorder Improve Patient Outcomes

In 2018, opioid overdoses in the United States caused one death every 11 minutes, resulting in nearly 47,000 fatalities. The most effective treatments for opioid use disorder (OUD) are three medications approved by the Food and Drug Administration (FDA): methadone, buprenorphine, and naltrexone.





Relaxed Federal Regulations Amid Pandemic Can Help Jails Better Treat Opioid Use Disorder

Few correctional facilities in the United States have treatment programs for individuals with opioid use disorder (OUD), despite clear evidence that certain medications reduce the risk of overdose and death. Even in facilities where treatment is available, the COVID-19 pandemic has complicated efforts to provide such care.





Clinical Lab Tests Need Stronger FDA Oversight to Improve Patient Safety

In vitro diagnostics (IVDs) play an indispensable role in modern medicine. Health care providers routinely rely on these tests—which analyze samples such as blood or saliva—to help diagnose conditions and guide potentially life-altering treatment decisions. In 2017, for example, clinicians ordered blood tests during about 45% of emergency room visits in the United States, according to the Centers...





Extended Medicaid Coverage Would Help Postpartum Patients With Treatment for Opioid Use Disorder

Between 1999 and 2014, opioid use disorder (OUD) among pregnant women more than quadrupled, risking the health of the women—before and after giving birth—and their infants. As states grapple with COVID-19’s exacerbation of the opioid crisis, several are taking innovative steps to address the needs of high-risk groups, including low-income, postpartum patients with OUD.





Tools to Boost Beneficial Bacteria Can Help Poultry Farms Fight Salmonella

Chicken products cause an estimated 1 in 7 of the nation’s human Salmonella illnesses each year, partly because the pathogen can easily contaminate the environments where birds are raised. To reduce the risk that contaminated meat will reach consumers, poultry companies need measures that control the bacterium on farms where chickens are bred and raised.





Primary Care Providers Can Help Steer People to Opioid Addiction Treatment

The United States is grappling with two severe health crises: the COVID-19 pandemic and an opioid epidemic that appears to be worsening as more people deal with stress and isolation as they face increased barriers to medical care. Preliminary numbers for 2020 show that overdose deaths were outpacing the record-setting number of more than 71,000 fatalities in 2019.





Standard Technology Presents Opportunities for Medical Record Data Extraction

Technology has revolutionized the way people live their lives. Individuals can use smartphones to access their bank account, shop from almost any store, and connect with friends and family around the globe. In fact, these personal devices have tethered communities together during the coronavirus pandemic, allowing many people to maintain much of their lives remotely.