health and food

Jerry Matczak

Jerry Matczak passed away suddenly last Thursday at the much-too-young age of 54.

I can say, without exaggeration, that Jerry embodied pretty much everything I aspire to be in my professional life. The MedCityNews headline called him a “social media guru”, but in reality he was temperamentally the exact opposite of a “guru”:

He was constantly curious; it seemed that every conversation I had with him was composed mainly of questions. Many of us try to be “listen first, talk second” types, but Jerry was a “listen first, ask questions, listen some more, then talk” type.

He also never stopped trying to figure out how to improve whatever he was working on. He participated in a lot of pilot projects, which means he was a part of a lot of projects that didn’t meet their objectives – but I never witnessed Jerry being the least bit negative or frustrated. Every project was just another opportunity to learn more.

Mostly, though, Jerry was remarkable in his ability to connect with patients, even patients who were deeply distrustful of his employer and industry. If nothing else, I hope you read the words of two such patients, coming from very different places, with remarkably similar reactions to Jerry:


Jerry, thank you for your service and your example. I carry it with me.





health and food

Establishing efficacy - without humans?

The decade following passage of FDAAA has been one of easing standards for drug approvals in the US, most notably with the advent of “breakthrough” designation created by FDASIA in 2012 and the 21st Century Cures Act in 2016.

Although, as of this writing, there is no nominee for FDA Commissioner, it appears safe to say that the current administration intends to accelerate the pace of deregulation, mostly through further lowering of approval requirements. In fact, some of the leading contenders for the position are on record as supporting a return to pre-Kefauver-Harris days, when drug efficacy was not even considered for approval.

Build a better mouse model, and pharma will beat a path to your door – no laws needed.

In this context, it is at least refreshing to read a proposal to increase efficacy standards. This comes from two bioethicists at McGill University, who make the somewhat-startling case for a higher degree of efficacy evaluation before a drug begins any testing in humans.
We contend that a lack of emphasis on evidence for the efficacy of drug candidates is all too common in decisions about whether an experimental medicine can be tested in humans. We call for infrastructure, resources and better methods to rigorously evaluate the clinical promise of new interventions before testing them on humans for the first time.
The authors propose some sort of centralized clearinghouse to evaluate efficacy more rigorously. It is unclear what they envision as this new multispecialty review body’s standards for green-lighting a drug to enter human testing; instead, they propose three questions:
  • What is the likelihood that the drug will prove clinically useful?
  • Assume the drug works in humans. What is the likelihood of observing the preclinical results?
  • Assume the drug does not work in humans. What is the likelihood of observing the preclinical results?
These seem like reasonable questions, I suppose – and are likely questions that are already being asked of preclinical data. They certainly do not rise to the level of providing a clear standard for regulatory approval, though perhaps it’s a reasonable place to start.
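The second and third questions are, in effect, asking for the two halves of a likelihood ratio, and it may help to see the arithmetic spelled out. Here is a minimal sketch of that Bayesian reading, with entirely hypothetical numbers; nothing below comes from the paper itself:

```python
# Hypothetical Bayesian reading of the authors' three questions.
# All numbers are invented for illustration.

prior_works = 0.10       # Q1: prior probability the drug proves clinically useful
p_data_if_works = 0.60   # Q2: P(preclinical results | drug works in humans)
p_data_if_fails = 0.30   # Q3: P(preclinical results | drug does not work)

likelihood_ratio = p_data_if_works / p_data_if_fails  # 2.0

# Bayes' rule: posterior probability the drug works, given the data.
posterior = (p_data_if_works * prior_works) / (
    p_data_if_works * prior_works + p_data_if_fails * (1 - prior_works)
)

print(f"likelihood ratio: {likelihood_ratio:.1f}")
print(f"P(works | preclinical data): {posterior:.2f}")  # ~0.18
```

Note that even a twofold likelihood ratio moves a 10 percent prior only to about 18 percent – which is one way to see why these questions, reasonable as they are, fall well short of a usable approval standard.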

The most obvious counterargument here is one that the authors curiously don’t pick up on at all: if we had the ability to accurately (or even semiaccurately) predict efficacy preclinically, pharma sponsors would already be doing it. The comment notes: “More-thorough assessments of clinical potential before trials begin could lower failure rates and drug-development costs.” And it’s hard not to agree: every pharmaceutical company would love to have even an incrementally-better sense of whether their early pipeline drugs will be shown to work as hoped.

The authors note:
Commercial interests cannot be trusted to ensure that human trials are launched only when the case for clinical potential is robust. We believe that many FIH studies are launched on the basis of flimsy, underscrutinized evidence.
However, they do not produce any evidence that industry is in any way deliberately underperforming their preclinical work, merely that preclinical efficacy is often difficult to reproduce and is poorly correlated with drug performance in humans.

Pharmaceutical companies have many times more candidate compounds than they can possibly afford to put into clinical trials. Figuring out how to lower failure rates – or at least the total cost of failure – is a prominent industry obsession, and efficacy remains the largest source of late-stage trial failure. This quest to “fail faster” has resulted in larger and more expensive phase 2 trials, and even in increased efficacy testing in some phase 1 trials. And we do this not because of regulatory pressure, but because of hopes that these efforts will save overall costs. So it seems beyond probable that companies would immediately invest more in preclinical efficacy testing, if such testing could be shown to have any real predictive power. But generally speaking, it does not.

As a general rule, we don’t need regulations that are firmly aligned with market incentives; we need regulations if and when we think those incentives might run counter to the general good. In this case, there are already incredibly strong market incentives to improve preclinical assessments. Where companies have already attempted this with only limited success, it seems quixotic to think that regulatory fiat will accomplish more.

(One further point. The authors try to link the need for preclinical efficacy testing to the 2016 Bial tragedy. This seems incredibly tenuous: the authors speculate that perhaps trial participants would not have been harmed and killed if Bial had been required to produce more evidence of BIA 10-2474’s clinical efficacy before embarking on their phase 1 trials. But that would have been entirely coincidental in this case: even if the drug had shown more evidence of therapeutic promise, the tragedy still would have happened, because it had nothing at all to do with the drug’s efficacy.

This is to some extent a minor nitpick, since the argument in favor of earlier efficacy testing does not depend on a link to Bial. However, I bring it up because a) the authors dedicate the first four paragraphs of their comment to the link, and b) there appears to be a minor trend of using the death and injuries of that trial to justify an array of otherwise-unrelated initiatives. This seems like a trend we should discourage.)

[Update 2/23: I posted this last night, not realizing that only a few hours earlier, John LaMattina had published on this same article. His take is similar to mine, in that he is suspicious of the idea that pharmaceutical companies would knowingly push ineffective drugs up their pipeline.]

Kimmelman, J., & Federico, C. (2017). Consider drug efficacy before first-in-human trials. Nature, 542(7639), 25–27. DOI: 10.1038/542025a




health and food

The Streetlight Effect and 505(b)(2) approvals

It is a surprisingly common peril among analysts: we don’t have the data to answer the question we’re interested in, so we answer a related question where we do have data. Unfortunately, the new answer turns out to shed no light on the original interesting question.

This is sometimes referred to as the Streetlight Effect – a phenomenon aptly illustrated by Mutt and Jeff over half a century ago:


This is the situation that the Tufts Center for the Study of Drug Development seems to have gotten itself into in its latest “Impact Report”. It’s worth walking through how an interesting question ends up producing an uninteresting answer.

So, here’s an interesting question:
My company owns a drug that may be approvable through FDA’s 505(b)(2) pathway. What is the estimated time and cost difference between pursuing 505(b)(2) approval and conventional approval?
That’s “interesting”, I suppose I should add, for a certain subset of folks working in drug development and commercialization. It’s only interesting to that peculiar niche, but for those people I suspect it’s extremely interesting – because it is a real situation that a drug company may find itself in, and there are concrete consequences to the decision.

Unfortunately, this is also a really difficult question to answer. As phrased, you'd almost need a randomized trial to answer it. Let’s create a version which is less interesting but easier to answer:
What are the overall development time and cost differences between drugs seeking approval via 505(b)(2) and conventional pathways?
This is much easier to answer, as pharmaceutical companies could look back on development times and costs of all their compounds, and directly compare the different types. It is, however, a much less useful question. Many new drugs are simply not eligible for 505(b)(2) approval. If those drugs are substantially different in any way (riskier, more novel, etc.), then they will change the comparison in highly non-useful ways. In fact, in 2014, only 1 drug classified as a New Molecular Entity (NME) went through 505(b)(2) approval, versus 32 that went through conventional approval. And there are many other qualities that set 505(b)(2) drugs apart.

Extreme qualitative differences of 505(b)(2) drugs. Source: Thomson Reuters analysis via RAPS

So we’re likely to get a lot of confounding factors in our comparison, and it’s unclear how the answer would (or should) guide us if we were truly trying to decide which route to take for a particular new drug. It might help us if we were trying to evaluate a large-scale shift to prioritizing 505(b)(2) eligible drugs, however.

Unfortunately, even this question is apparently too difficult to answer. Instead, the Tufts CSDD chose to ask and answer yet another variant:
What is the difference in time that it takes the FDA for its internal review process between 505(b)(2) and conventionally-approved drugs?
This question has the supreme virtue of being answerable. In fact, I believe that all of the data you’d need is contained within the approval letter that FDA publishes for each new approved drug.

But at the same time, it isn’t a particularly interesting question anymore. The promise of the 505(b)(2) pathway is that it should reduce total development time and cost, but on both those dimensions, the report appears to fall flat.
  • Cost: This analysis says nothing about reduced costs – those savings would mostly come in the form of fewer clinical trials, and this focuses entirely on the FDA review process.
  • Time: FDA review and approval is only a fraction of a drug’s journey from patent to market. In fact, it often takes up less than 10% of the time from initial IND to approval. So any differences in approval times will likely easily be overshadowed by differences in time spent in development. 
But even more fundamentally, the problem here is that this study gives the appearance of providing an answer to our original question, but in fact is entirely uninformative in this regard. The accompanying press release states:
The 505(b)(2) approval pathway for new drug applications in the United States, aimed at avoiding unnecessary duplication of studies performed on a previously approved drug, has not led to shorter approval times.
This is more than a bit misleading. The 505(b)(2) statute does not in any way address approval timelines – that’s not its intent. So showing that it hasn’t led to shorter approval times is less of an insight than it is a natural consequence of the law as written.

Most importantly, showing that 505(b)(2) drugs had a longer average approval time than conventionally-approved drugs in no way should be interpreted as adding any evidence to the idea that those drugs were slowed down by the 505(b)(2) process itself. Because 505(b)(2) drugs are qualitatively different from other new molecules, this study can’t claim that they would have been developed faster had their owners initially chosen to go the route of conventional approval. In fact, such a decision might have resulted in both increased time in trials and increased approval time.

This study simply is not designed to provide an answer to the truly interesting underlying question.

[Disclosure: the above review is based entirely on a CSDD press release and summary page. The actual report costs $125, which is well in excess of this blog’s expense limit. It is entirely possible that the report itself contains more-informative insights, and I’ll happily update this post if that should come to my attention.]




health and food

Retention metrics, simplified

[Originally posted on First Patient In]

In my experience, most clinical trials do not suffer from significant retention issues. This is a testament to the collaborative good will of most patients who consent to participate, and to the patient-first attitude of most research coordinators.

However, in many trials – especially those that last more than a year – the question of whether there is a retention issue will come up at some point while the trial’s still going. This is often associated with a jump in early terminations, which can occur as the first cohort of enrollees has been in the trial for a while.

It’s a good question to ask midstream: are we on course to have as many patients fully complete the trial as we’d originally anticipated?

However, the way we go about answering the question is often flawed and confusing. Here’s an example: a sponsor came to us with what they thought was a higher rate of early terminations than expected. The main problem? They weren't actually sure.

Here’s their data. Can you tell?

Original retention graph.
If you can, please let me know how! While this chart is remarkably ... full of numbers, it provides no actual insight into when patients are dropping out, and no way that I can tell to project eventual total retention.

In addition, measuring the “retention rate” as a simple ratio of active to terminated patients will not provide an accurate benchmark until the trial is almost over. Here’s why: patients tend to drop out only after they’ve been in a trial for some time, so as long as you’re enrolling new patients, your retention rate will be artificially high. When enrollment ends, your retention rate will appear to drop rapidly – but this is only because of the artificial lift you had earlier.

In fact, that was exactly the problem the sponsor had: when enrollment ended, the retention rate started dropping. It’s good to be concerned, but it’s also important to know how to answer the question.
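To see why the naive ratio misleads, consider a toy simulation (all numbers hypothetical): patients enroll at a steady rate for a year, and every patient faces the same constant 2 percent monthly chance of dropping out.

```python
import random

random.seed(1)

ENROLL_MONTHS = 12     # steady enrollment over the first year
FOLLOWUP_MONTHS = 24   # each patient is followed for two years
MONTHLY_DROP = 0.02    # constant per-month probability of early termination
N_PER_MONTH = 50

# For each patient, record (enrollment month, calendar month of dropout or None).
patients = []
for start in range(ENROLL_MONTHS):
    for _ in range(N_PER_MONTH):
        drop_at = None
        for m in range(FOLLOWUP_MONTHS):
            if random.random() < MONTHLY_DROP:
                drop_at = start + m
                break
        patients.append((start, drop_at))

# Naive "retention rate": active patients as a share of everyone enrolled so far.
for month in range(0, ENROLL_MONTHS + FOLLOWUP_MONTHS + 1, 6):
    enrolled = [p for p in patients if p[0] <= month]
    dropped = [p for p in enrolled if p[1] is not None and p[1] <= month]
    print(f"month {month:2d}: naive retention = "
          f"{1 - len(dropped) / len(enrolled):.0%}")
```

The dropout hazard never changes, yet the naive rate starts out in the high 90s and then sinks steadily for two years after enrollment closes, toward the true completion rate of roughly 60 percent – exactly the pattern that alarmed this sponsor.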

Fortunately, there is a very simple way to get a clear answer in most cases – one that’s probably already in use by your biostats team around the corner: the Kaplan-Meier “survival” curve.

Here is the same study data, but patient retention is simply depicted as a K-M graph. The key difference is that instead of calendar dates, we used the relative measure of time in the trial for each patient. That way we can easily spot where the trends are.
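If you want to reproduce this kind of plot, it takes only a few lines with the lifelines library in Python (a common survival-analysis package). The CSV layout below is an assumption about your own export, not any standard:

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical export: one row per patient, with total days on study and a
# flag set to 1 if the patient terminated early (patients still active or
# completed are right-censored with 0).
df = pd.read_csv("retention.csv")  # columns: patient_id, days_on_study, dropped

kmf = KaplanMeierFitter()
kmf.fit(durations=df["days_on_study"], event_observed=df["dropped"],
        label="retention")

ax = kmf.plot_survival_function()  # the K-M curve: P(still enrolled) vs. day
ax.set_xlabel("Days since enrollment")
ax.set_ylabel("Proportion retained")

# Projected retention at the one-year visit:
print(kmf.predict(365))
```

Because time is measured per patient rather than by calendar date, every enrollee contributes to the curve no matter when they joined the study.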


In this case, we were able to establish quickly that patient drop-outs were increasing at a relatively small constant rate, with a higher percentage of drops coinciding with the one-year study visit. Most importantly, we were able to very accurately predict the eventual number of patients who would complete the trial. And it only took one graph!







health and food

REMOTE Redux: DTP trials are still hard

Maybe those pesky sites are good for something after all. 

It's been six years since Pfizer boldly announced the launch of its “clinical trial in a box”. The REMOTE trial was designed to be entirely online, and involved no research sites: study information and consent were delivered via the web, and medications and diaries were shipped directly to patients' homes.

Despite the initial fanfare, within a month REMOTE's registration on ClinicalTrials.gov was quietly reduced from 600 to 283. The smaller trial ended not with a bang but a whimper, having randomized only 18 patients in over a year of recruiting.

Still, the allure of direct-to-patient clinical trials remains strong, due to a confluence of two factors. First, there is a frenzy of interest in running "patient centric clinical trials". Sponsors are scrambling to show they are doing something – anything – to demonstrate they have shifted to a patient-centered mindset. We cannot seem to agree on what this means (as a great illustration, a recent article in Forbes on "How Patients Are Changing Clinical Trials" contained no specific examples of actual trials that had been changed by patients), but running a trial that directly engages patients wherever they are seems like it could work.

The less-openly-discussed other factor leading to interest in these DIY trials is sponsors' continuing willingness to heap almost all of the blame for slow-moving studies onto their research sites. If it’s all the sites’ fault – the reasoning goes – then cutting them out of the process should result in trials that are both faster and cheaper. (There are reasons to be skeptical about this, as I have discussed in the past, but the desire to drop all those pesky sites is palpable.)

However, while a few proof-of-concept studies have been done, there really doesn't seem to have been another attempt at a full-blown direct-to-patient clinical trial. Other pilots have been more successful, but had fairly lightweight protocols. For all its problems, REMOTE was a seriously ambitious project that attempted to package a full-blown interventional clinical trial, not an observational study.

In this context, it's great to see published results of the TAPIR Trial in vasculitis, which as far as I can tell is the first real attempt to run a DIY trial of a similar magnitude to REMOTE.

TAPIR was actually two parallel trials, identical in every respect except for their sites: one trial used a traditional group of 8 sites, while the other was virtual and recruited patients from anywhere in the country. So this was a real-time, head-to-head assessment of site performance.

And the results after a full two years of active enrollment?

  • Traditional sites: 49 enrolled
  • Patient centric: 10 enrolled
Even though six years have passed, and online/mobile communications are even more ubiquitous, we still see the exact same struggle to enroll patients.

Maybe it’s time to stop blaming the sites? To be fair, they didn’t exactly set the world on fire – and I’m guessing the total cost of activating the 8 sites significantly exceeded the costs of setting up the virtual recruitment and patient logistics. But still, the site-less, “patient centric” approach once again came up astonishingly short.


Krischer, J., Cronholm, P.F., Burroughs, C., McAlear, C.A., Borchin, R., Easley, E., Davis, T., Kullman, J., Carette, S., Khalidi, N., Koening, C., Langford, C.A., Monach, P., Moreland, L., Pagnoux, C., Specks, U., Sreih, A.G., Ytterberg, S., Merkel, P.A., & Vasculitis Clinical Research Consortium (2017). Experience With Direct-to-Patient Recruitment for Enrollment Into a Clinical Trial in a Rare Disease: A Web-Based Study. Journal of Medical Internet Research, 19(2). PMID: 28246067




health and food

For good sleep and good health, regulate your exposure to light

Your daily light exposure impacts your health. A new study finds that too much light at night and not enough natural light during the day can be harmful. This story first aired on Morning Edition on Nov. 4, 2024.




health and food

Hospitals face months of IV fluid shortages after Helene damages N.C. factory

Hospitals have been forced to innovate with new ways of hydrating patients and giving them medications, after a key factory that produces IV fluid bags flooded during Hurricane Helene. (This story first aired on Morning Edition on Nov. 7, 2024.)




health and food

A human bird flu case is thought to be found in Canada for the first time

A person has tested positive in British Columbia, Canadian health officials said, though the results must be sent to another lab for confirmation.




health and food

What does a 2nd Trump term mean for the Affordable Care Act?

President-elect Donald Trump tried unsuccessfully to get rid of the Affordable Care Act during his first term. What action will he take this time around?




health and food

More young people are surviving cancer. Then they face a life altered by it

More people are getting cancer in their 20s, 30s, and 40s, and surviving, thanks to rapid advancement in care. Many will have decades of life ahead of them, which means they face greater and more complex challenges in survivorship. Lourdes Monje is navigating these waters at age 29.




health and food

Patrick Dempsey aims to raise awareness of cancer disparities and encourage screening

NPR's Leila Fadel talks with actor Patrick Dempsey about his efforts to raise money for cancer treatment and prevention.




health and food

Remarkably resilient refugees: A teen on his own, a woman who was raped

Sudan's civil war has displaced 10 million citizens. Here are profiles of two young people from the most vulnerable groups: an unaccompanied minor caring for twin brothers, a woman who was raped.




health and food

Online yoga classes prove helpful for back pain in new study

Participants reported relief from chronic low back pain and a reduced need for pain-relief medications.




health and food

With Trump coming into power, the NIH is in the crosshairs

The National Institutes of Health, the crown jewel of biomedical research in the U.S., could face big changes under the new Trump administration, some fueled by pandemic-era criticisms of the agency.




health and food

Chronic itch is miserable. Scientists are just scratching the surface

Journalist Annie Lowrey has a rare disease that causes a near-constant itch that doesn't respond to most treatments. She likens the itchiness to a car alarm: "You can't stop thinking about it."




health and food

Tiny Biosensor Unlocks the Secrets of Sweat



Sweat: We all do it. It plays an essential role in controlling body temperature by cooling the skin through evaporation. But it can also carry salts and other molecules out of the body in the process. In medieval Europe, people would lick babies; if the skin was salty, they knew that serious illness was likely. (We now know that salty skin can be an indicator for cystic fibrosis.)

Scientists continue to study how the materials in sweat can reveal details about an individual’s health, but often they must rely on gathering samples from subjects during strenuous exercise in order to get samples that are sufficiently large for analysis.

Now researchers in China have developed a wearable sensor system that can collect and process small amounts of sweat while providing continuous detection. They have named the design a “skin-interfaced intelligent graphene nanoelectronic” patch, or SIGN for short. The researchers, who described their work in a paper published in Advanced Functional Materials, did not respond to IEEE Spectrum’s interview requests.

The SIGN sensor patch relies on three separate components to accomplish its task. First, the sweat must be transported from the skin into microfluidic chambers. Next, a special membrane removes impurities from the fluid. Finally, this liquid is delivered to a bioreceptor that can be tuned to detect different metabolites.

The transport system relies on a combination of hydrophilic (water-attracting) and hydrophobic (water-repelling) materials. This system can move aqueous solutions along microchannels, even against gravity. This makes it possible to transport small samples with precision, regardless of the device’s orientation.

The fluid is transported to a Janus membrane, where impurities are blocked. This means that the sample that reaches the sensor is more likely to produce accurate results.

Finally, the purified sweat arrives at a flexible biosensor. This graphene sensor is activated by enzymes designed to detect the desired biomarker. The result is a transistor that can accurately measure the amount of the biomarker in the sample.

At its center, the system has a membrane that removes impurities from sweat and a biosensor that detects biomarkers. Harbin Institute of Technology/Shenyang Aerospace University

One interesting feature of the SIGN patch is that it can provide continuous measurements. The researchers tested the device through multiple cycles of samples with known concentrations of a target biomarker, and it was about as accurate after five cycles as it was after just one. This result suggests that it could be worn over an extended period without having to be replaced.

Continuous measurements can provide useful longitudinal data. However, Tess Skyrme, a senior technology analyst at the research firm IDTechEx, points out that continuous devices can have very different sampling rates. “Overall, the right balance of efficient, comfortable, and granular data collection is necessary to disrupt the market,” she says, noting that devices also need to optimize “battery life, calibration, and data accuracy.”

The researchers have focused on lactate—a metabolite that can be used to assess a person’s levels of exercise and fatigue—as the initial biomarker to be detected. This function is of particular interest to athletes, but it can also be used to monitor the health status of workers in jobs that require strenuous physical activity, especially in hazardous or extreme working conditions.

Not all experts are convinced that biomarkers in sweat can provide accurate health data. Jason Heikenfeld, director of the Novel Device Lab at the University of Cincinnati, has pivoted his research on wearable biosensing from sweat to the interstitial fluid between blood vessels and cells. “Sweat glucose and lactate are way inferior to measures that can be made in interstitial fluid with devices like glucose monitors,” he tells Spectrum.

The researchers also developed a package to house the sensor. It’s designed to minimize power consumption, using a low-power microcontroller, and it includes a Bluetooth communications chip to transmit data wirelessly from the SIGN patch. The initial design provides for 2 hours of continuous use without charging, or up to 20 hours in standby mode.




health and food

Sea Turtle Ears Inspire a New Heart Monitor Design



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Sea turtles are remarkable creatures for a number of reasons, including the way they hear underwater—not through openings in the form of ears, but by detecting vibrations directly through the skin covering their auditory system. Inspired by this ability to detect sound through skin, researchers in China have created a heart-monitoring system, which initial tests in humans suggest may be viable for monitoring heartbeats.

A key way in which doctors monitor heart health involves “listening” to the heartbeat, either using a stethoscope or more sophisticated technology, like echocardiograms. However, these approaches require a visit to a specialist, and so researchers have been keen to develop alternative, lower cost solutions that people can use at home, which could also allow for more frequent testing and monitoring.

Junbin Zang, a lecturer at the North University of China, and his colleagues specialize in creating heart-monitoring technologies. Their interest was piqued when they learned about the inner workings of the sea turtle’s auditory system, which is able to detect low-frequency signals, especially in the 300- to 400-hertz range.

“Heart sounds are also low-frequency signals, so the low-frequency characteristics of the sea turtle’s ear have provided us with great inspiration,” explains Zang.

At a glance, it looks like turtles don’t have ears. Their auditory system instead lies under a layer of skin and fat, through which it picks up vibrations. As with humans, a small bone in the ear vibrates as sounds hit it, and as it oscillates, those pulses are converted to electrical signals that are sent to the brain for processing and interpretation.


But sea turtles have a unique, slender T-shaped conduit that encapsulates their ear bones, restricting the movement of the similarly T-shaped ear bones to only vibrate in a perpendicular manner. This design provides their auditory system with high sensitivity to vibrations.

Zang and his colleagues set out to create a heart-monitoring system with similar features. They created a T-shaped heart-sound sensor that imitates the ear bones of sea turtles using a tiny MEMS cantilever beam sensor. As sound hits the sensor, the vibrations cause deformations in its beam, and the resulting changes in resistance are translated into electrical signals.

The researchers first tested the sensor’s ability to detect sound in lab tests, and then tested the sensor’s ability to monitor heartbeats in two human volunteers in their early 20s. The results, described in a study published 1 April in IEEE Sensors Journal, show that the sensor can effectively detect the two phases of a heartbeat.

“The sensor exhibits excellent vibration characteristics,” Zang says, noting that it has a higher vibration sensitivity compared to other accelerometers on the market.

However, the sensor currently picks up a significant amount of background noise, which Zang says his team plans to address in future work. Ultimately, they are interested in integrating this novel bioinspired sensor into devices they have previously created—including portable handheld and wearable versions, and a relatively larger version for use in hospitals—for the simultaneous detection of electrocardiogram and phonocardiogram signals.

This article appears in the July 2024 print issue as “Sea Turtles Inspire Heart-Monitor Design.”




health and food

MRI Sheds Its Shielding and Superconducting Magnets



Magnetic resonance imaging (MRI) has revolutionized healthcare by providing radiation-free, non-invasive 3-D medical images. However, MRI scanners often consume 25 kilowatts or more to power magnets producing magnetic fields of 1.5 tesla or more. These requirements typically limit scanners’ use to specialized centers and departments in hospitals.

A University of Hong Kong team has now unveiled a low-power, highly simplified, full-body MRI device. With the help of artificial intelligence, the new scanner only requires a compact 0.05 T magnet and can run off a standard wall power outlet, requiring only 1,800 watts during operation. The researchers say their new AI-enabled machine can produce clear, detailed images on par with those from high-power MRI scanners currently used in clinics, and may one day help greatly improve access to MRI worldwide.

To generate images, MRI applies a magnetic field to align the poles of the body’s protons in the same direction. An MRI scanner then probes the body with radio waves, knocking the protons askew. When the radio waves turn off, the protons return to their original alignment, transmitting radio signals as they do so. MRI scanners receive these signals, converting them into images.
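As a point of reference, the radio frequency at which protons respond scales linearly with field strength (the Larmor relation, f = γB, with γ ≈ 42.58 MHz per tesla for protons), which is why a 0.05 T scanner must work with far weaker, lower-frequency signals than a 1.5 T machine. A quick back-of-the-envelope check:

```python
GAMMA_MHZ_PER_T = 42.58  # proton gyromagnetic ratio over 2*pi, in MHz/tesla

# ULF prototype, standard clinical scanner, and next-generation 7 T research MRI
for b_field in (0.05, 1.5, 7.0):
    print(f"{b_field:5.2f} T -> proton Larmor frequency "
          f"~ {GAMMA_MHZ_PER_T * b_field:6.2f} MHz")

# 0.05 T ->   2.13 MHz
# 1.50 T ->  63.87 MHz
# 7.00 T -> 298.06 MHz
```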

More than 150 million MRI scans are conducted worldwide annually, according to the Organization for Economic Cooperation and Development. However, despite five decades of development, clinical MRI procedures remain out of reach for more than two-thirds of the world’s population, especially in low- and middle-income countries. For instance, whereas the United States has 40 scanners per million inhabitants, in 2016 there were only 84 MRI units serving West Africa’s population of more than 370 million.

This disparity largely stems from the high costs and specialized settings required for standard MRI scanners. They use powerful superconducting magnets that require a lot of space, power, and specialized infrastructure. They also need rooms shielded from radio interference, further adding to hardware costs, restricting their mobility, and hampering their availability in other medical settings.

Scientists around the globe have already been exploring low-cost MRI scanners that operate at ultra-low-field (ULF) strengths of less than 0.1 T. These devices may consume much less power and prove potentially portable enough for bedside use. Indeed, as the Hong Kong team notes, MRI development initially focused on low fields of about 0.05 T, until the introduction of the first whole-body 1.5 T superconducting scanner by General Electric in 1983.

The new MRI scanner (top left) is smaller than conventional scanners, and does away with bulky RF shielding and superconducting magnets. The new scanner’s imaging resolution is on par with conventional scanners (bottom). Ed X. Wu/The University of Hong Kong

Current ULF MRI scanners often rely on AI to help reconstruct images from what signals they gather using relatively weak magnetic fields. However, until now, these devices were limited to solely imaging the brain, extremities, or single organs, Udunna Anazodo, an assistant professor of neurology and neurosurgery at McGill University in Montreal who did not take part in the work, notes in a review of the new study.

The Hong Kong team have now developed a whole-body ULF MRI scanner in which patients are placed between two permanent neodymium iron boron magnet plates—one above the body and the other below. Although these permanent magnets are far weaker than superconducting magnets, they are low-cost, readily available, and don’t need to be cooled to superconducting temperatures with liquid helium. In addition, the amount of energy ULF MRI scanners deposit into the body is roughly one-thousandth that from conventional scanners, making heat generation during imaging much less of a concern, Anazodo notes in her review. ULF MRI is also much quieter than regular MRI, which may help with pediatric scanning, she adds.

The new machine consists of two units, each roughly the size of a hospital gurney. One unit houses the MRI device, while the other supports the patient’s body as it slides into the scanner.

To account for radio interference from both the outside environment and the ULF MRI’s own electronics, the scientists deployed 10 small sensor coils around the scanner and inside the electronics cabinet to help the machine detect potentially disruptive radio signals. They also employed deep learning AI methods to help reconstruct images even in the presence of strong noise. They say this eliminates the need for shielding against radio waves, making the new device far more portable than conventional MRI.

In tests on 30 healthy volunteers, the device captured detailed images of the brain, spine, abdomen, heart, lung, and extremities. Scanning each of these targets took eight minutes or less, at an image resolution of roughly 2 by 2 by 8 millimeters. In Anazodo’s review, she notes the new machine produced image quality comparable to that of conventional MRI scanners.

“It’s the beginning of a multidisciplinary endeavor to advance an entirely new class of simple, patient-centric and computing-powered point-of-care diagnostic imaging device,” says Ed Wu, a professor and chair of biomedical engineering at the University of Hong Kong.

The researchers used standard off-the-shelf electronics. All in all, they estimate hardware costs at about US $22,000. (According to imaging equipment company Block Imaging in Holt, Michigan, entry-level MRI scanners start at $225,000, and advanced premium machines can cost $500,000 or more.)

The prototype scanner’s magnet assembly is relatively heavy, weighing about 1,300 kilograms. (This is still lightweight compared to a typical clinical MRI scanner, which can weigh up to 17 tons, according to New York University’s Langone Health center.) The scientists note that optimizing the hardware could reduce the magnet assembly’s weight to about 600 kilograms, which would make the entire scanner mobile.

The researchers note their new device is not meant to replace conventional high-magnetic-field MRI. For instance, a 2023 study notes that next-generation MRI scanners using powerful 7 T magnets could yield a resolution of just 0.35 millimeters. Instead, ULF MRI can complement existing MRI by going to places that can’t host standard MRI devices, such as intensive care units and community clinics.

In an email, Anazodo adds that this new Hong Kong work is just one of a number of exciting ULF MRI scanners under development. For instance, she notes that Gordon Sarty at the University of Saskatchewan and his colleagues are developing a device that is potentially even lighter, cheaper, and more portable than the Hong Kong machine, which they are researching for use in whole-body imaging on the International Space Station.

Wu and his colleagues detailed their findings online 10 May in the journal Science.

This article appears in the July 2024 print issue as “Compact MRI Ditches Superconducting Magnets.”




health and food

“Snake-like” Probe Images Arteries from Within



Neurosurgeon Vitor Mendes Pereira has grown accustomed to treating brain aneurysms with only blurry images for guidance.

Equipped with a rough picture of the labyrinthine network of arteries in the brain, he does his best to insert mesh stents or coils of platinum wire—interventions intended to promote clotting and to seal off a bulging blood vessel.

The results are not always perfect. Without a precise window into the arterial architecture at the aneurysm site, Pereira says that he and other neurovascular specialists occasionally misplace these implants, leaving patients at a heightened risk of stroke, clotting, inflammation, and life-threatening ruptures. But a new fiber-optic imaging probe offers hope for improved outcomes.


According to Pereira’s early clinical experience, the technology—a tiny snake-like device that winds its way through the intricate maze of brain arteries and, using spirals of light, captures high-resolution images from the inside-out—provides an unprecedented level of structural detail that enhances the ability of clinicians to troubleshoot implant placement and better manage disease complications.

“We can see a lot more information that was not accessible before,” says Pereira, director of endovascular research and innovation at St. Michael’s Hospital in Toronto. “This is, for us, an incredible step forward.”

And not just for brain aneurysms. In a report published today in Science Translational Medicine, Pereira and his colleagues describe their first-in-human experience using the platform to guide treatment for 32 people with strokes, artery hardening, and various other conditions arising from aberrant blood vessels in the brain.

Whereas before, with technologies such as CT scans, MRIs, ultrasounds, and x-rays, clinicians had a satellite-like view of the brain’s vascular network, now they have a Google Street View-like perspective, complete with in-depth views of artery walls, plaques, immune cell aggregates, implanted device positions, and more.

“The amount of detail you could get you would never ever see with any other imaging modality,” says Adnan Siddiqui, a neurosurgeon at the University at Buffalo, who was not involved in the research. “This technology holds promise to be able to really transform the way we evaluate success or failure of our procedures, as well as to diagnose complications before they occur.”

A Decade of Innovation

The new fiber-optic probe is flexible enough to snake through the body’s arteries and provide previously unavailable information to surgeons.Pereira et al./Science Translational Medicine

The new imaging platform is the brainchild of Giovanni Ughi, a biomedical engineer at the University of Massachusetts’ Chan Medical School in Worcester. About a decade ago, he set out to adapt a technique called optical coherence tomography (OCT) for imaging inside the brain’s arteries.

OCT relies on the backscattering of near-infrared light to create cross-sectional images with micrometer-scale spatial resolution. Although OCT had long been used in clinical settings to generate pictures from the back of the eye and from inside the arteries that supply blood to the heart, the technology had proven difficult to adapt for brain applications owing to several technical challenges.

One major challenge is that the fiber-optic probes used in the technology are typically quite stiff, making them too rigid to twist and bend through the convoluted passageways of the brain’s vasculature. Additionally, the torque cables—traditionally used to rotate the OCT lens to image surrounding vessels and devices in three dimensions as the probe retracts—were too large to fit inside the catheters that are telescopically advanced into the brain’s arteries to address blockages or other vascular issues.

“We had to invent a new technology,” Ughi explains. “Our probe had to be very, very flexible, but also very, very small to be compatible with the clinical workflow.”

To achieve these design criteria, Ughi and his colleagues altered the properties of the glass at the heart of their fiber-optic cables, devised a new system of rotational control that does away with torque cables, miniaturized the imaging lens, and made a number of other engineering innovations.

The end result: a slender probe, about the size of a fine wire, that spins 250 times per second, snapping images as it glides back through the blood vessel. Researchers flush out blood cells with a tablespoon of liquid, then manually or automatically retract the probe, revealing a section of the artery about the length of a lip balm tube.


Clinical Confirmation

After initial testing in rabbits, dogs, pigs, and human cadavers, Ughi’s team sent the device to two clinical groups: Pereira’s in Toronto and Pedro Lylyk’s at the Sagrada Familia Clinic in Buenos Aires, Argentina. Across the two groups, neurosurgeons treated the 32 participants in the latest study, snaking the imaging probe through the patients’ groins or wrists and into their brains.

The procedure was safe and well-tolerated across different anatomies, underlying disease conditions, and the complexity of prior interventions. Moreover, the information provided frequently led to actionable insights—in one case, prompting clinicians to prescribe anti-platelet drugs when hidden clots were discovered; in another, aiding in the proper placement of stents that were not flush against the arterial wall.

“We were successful in every single case,” Ughi says. “So, this was a huge confirmation that the technology is ready to move forward.”


A startup called Spryte Medical aims to do just that. According to founder and CEO David Kolstad, the company is in discussions with regulatory authorities in Europe, Japan, and the United States to determine the steps necessary to bring the imaging probe to market.

At the same time, Spryte—with Ughi as senior director of advanced development and software engineering—is working on machine learning software to automate the image analysis process, thus simplifying diagnostics and treatment planning for clinicians.

Bolstered by the latest data, cerebrovascular specialists like Siddiqui now say they are chomping at the bit to get their hands on the imaging probe once it clears regulatory approval.

“I’m really impressed,” Siddiqui says. “This is a tool that many of us who do these procedures wish they had.”




health and food

Apps Put a Psychiatrist in Your Pocket



Nearly every day since she was a child, Alex Leow, a psychiatrist and computer scientist at the University of Illinois Chicago, has played the piano. Some days she plays well, and other days her tempo lags and her fingers hit the wrong keys. Over the years, she noticed a pattern: How well she plays depends on her mood. A bad mood or lack of sleep almost always leads to sluggish, mistake-prone music.

In 2015, Leow realized that a similar pattern might be true for typing. She wondered if she could help people with psychiatric conditions track their moods by collecting data about their typing style from their phones. She decided to turn her idea into an app.

After conducting a pilot study, in 2018 Leow launched BiAffect, a research app that aims to understand mood-related symptoms of bipolar disorder through keyboard dynamics and sensor data from users’ smartphones. Now in use by more than 2,700 people who have volunteered their data to the project, the app tracks typing speed and accuracy by swapping the phone’s onscreen keyboard with its own nearly identical one.

The software then generates feedback for users, such as a graph displaying hourly keyboard activity. Researchers get access to the donated data from users’ phones, which they use to develop and test machine learning algorithms that interpret data for clinical use. One of the things Leow’s team has observed: When people are manic—a state of being overly excited that accompanies bipolar disorder—they type “ferociously fast,” says Leow.

Compared to a healthy user [top], a person experiencing symptoms of bipolar disorder [middle] or depression [bottom] may use their phone more than usual and late at night. BiAffect measures phone usage and orientation to help track those symptoms. BiAffect

BiAffect is one of the few mental-health apps that take a passive approach to collecting data from a phone to make inferences about users’ mental states. (Leow suspects that fewer than a dozen are currently available to consumers.) These apps run in the background on smartphones, collecting different sets of data not only on typing but also on the user’s movements, screen time, call and text frequency, and GPS location to monitor social activity and sleep patterns. If an app detects an abrupt change in behavior, indicating a potentially hazardous shift in mental state, it could be set up to alert the user, a caretaker, or a physician.

Such apps can’t legally claim to treat or diagnose disease, at least in the United States. Nevertheless, many researchers and people with mental illness have been using them as tools to track signs of depression, schizophrenia, anxiety, and bipolar disorder. “There’s tremendous, immediate clinical value in helping people feel better today by integrating these signals into mental-health care,” says John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, in Boston. Globally, one in eight people live with a mental illness, including 40 million with bipolar disorder.

These apps differ from most of the more than 10,000 mental-health and mood apps available, which typically ask users to actively log how they’re feeling, help users connect to providers, or encourage mindfulness. The popular apps Daylio and Moodnotes, for example, require journaling or rating symptoms. This approach requires more of the user’s time and may make these apps less appealing for long-term use. A 2019 study found that among 22 mood-tracking apps, the median user-retention rate was just 6.1 percent at 30 days of use.


But despite years of research on passive mental-health apps, their success is far from guaranteed. App developers are trying to avoid the pitfalls of previous smartphone psychiatry startups, some of which oversold their capabilities before validating their technologies. For example, Mindstrong was an early startup with an app that tracked taps, swipes, and keystrokes to identify digital biomarkers of cognitive function. The company raised US $160 million in funding from investors, including $100 million in 2020 alone, and went bankrupt in February 2023.

Mindstrong may have folded because the company was operating on a different timeline from the research, according to an analysis by the health-care news website Stat. The slow, methodical pace of science did not match the startup’s need to return profits to its investors quickly, the report found. Mindstrong also struggled to figure out the marketplace and find enough customers willing to pay for the service. “We were first out of the blocks trying to figure this out,” says Thomas Insel, a psychiatrist who cofounded Mindstrong.

Now that the field has completed a “hype cycle,” Torous says, app developers are focused on conducting the research needed to prove their apps can actually help people. “We’re beginning to put the burden of proof more on those developers and startups, as well as academic teams,” he says. Passive mental-health apps need to prove they can reliably parse the data they’re collecting, while also addressing serious privacy concerns.

Passive sensing catches mood swings early

Mood Sensors

Seven metrics apps use to make inferences about your mood


Keyboard dynamics: Typing speed and accuracy can indicate a lot about a person’s mood. For example, people who are manic often type extremely fast.

Accelerometer: This sensor tracks how the user is oriented and moving. Lying in bed would suggest a different mood than going for a run.

Calls and texts: The frequency of text messages and phone conversations signifies a person’s social isolation or activity, which indicates a certain mood.

GPS location: Travel habits signal a person’s activity level and routine, which offer clues about mood. For example, a person experiencing depression may spend more time at home.

Mic and voice: Mood can affect how a person speaks. Microphone-based sensing tracks the rhythm and inflection of a person’s voice.

Sleep: Changes in sleep patterns signify a change in mood. Insomnia is a common symptom of bipolar disorder and can trigger or worsen mood disturbances.

Screen time: An increase in the amount of time a person spends on a phone can be a sign of depressive symptoms and can interfere with sleep.

A crucial component of managing psychiatric illness is tracking changes in mental states that can lead to more severe episodes of the disease. Bipolar disorder, for example, causes intense swings in mood, from extreme highs during periods of mania to extreme lows during periods of depression. Between 30 and 50 percent of people with bipolar disorder will attempt suicide at least once in their lives. Catching early signs of a mood swing can enable people to take countermeasures or seek help before things get bad.

But detecting those changes early is hard, especially for people with mental illness. Observations by other people, such as family members, can be subjective, and doctor and counselor sessions are too infrequent.

That’s where apps come in. Algorithms can be trained to spot subtle deviations from a person’s normal routine that might indicate a change in mood—an objective measure based on data, like a diabetic tracking blood sugar. “The ability to think objectively about my own thinking is really key,” says retired U.S. major general Gregg Martin, who has bipolar disorder and is an advisor for BiAffect.

The data from passive sensing apps could also be useful to doctors who want to see objective data on their patients in between office visits, or for people transitioning from inpatient to outpatient settings. These apps are “providing a service that doesn’t exist,” says Colin Depp, a clinical psychologist and professor at the University of California, San Diego. Providers can’t observe their patients around the clock, he says, but smartphone data can help close the gap.

Depp and his team have developed an app that uses GPS data and microphone-based sensing to determine the frequency of conversations and make inferences about a person’s social interactions and isolation. The app also tracks “location entropy,” a metric of how much a user moves around outside of routine locations. When someone is depressed and mostly stays home, location entropy decreases.
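Location entropy here is essentially a Shannon entropy over the share of time a person spends at each significant place. A minimal sketch of that calculation (the clustering of raw GPS points into named places is assumed to happen upstream; this is not CBT2go’s actual code):

```python
import math
from collections import Counter

def location_entropy(place_per_hour: list[str]) -> float:
    """Shannon entropy over the fraction of time spent at each place."""
    counts = Counter(place_per_hour)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Toy data: one place label per waking hour of a day.
homebound = ["home"] * 20 + ["pharmacy"]
varied = ["home"] * 10 + ["work"] * 6 + ["gym"] * 3 + ["cafe"] * 2

print(f"{location_entropy(homebound):.2f} bits")  # ~0.28: mostly at home
print(f"{location_entropy(varied):.2f} bits")     # ~1.75: more varied routine
```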

Depp’s team initially developed the app, called CBT2go, as a way to test the effectiveness of cognitive behavioral therapy in between therapy sessions. The app can now intervene in real time with people experiencing depressive or psychotic symptoms. This feature helps people identify when they feel lonely or agitated so they can apply coping skills they’ve learned in therapy. “When people walk out of the therapist’s office or log off, then they kind of forget all that,” Depp says.

Another passive mental-health-app developer, Ellipsis Health in San Francisco, uses software that takes voice samples collected during telehealth calls to gauge a person’s level of depression, anxiety, and stress symptoms. For each set of symptoms, deep-learning models analyze the person’s words, rhythms, and inflections to generate a score. The scores indicate the severity of the person’s mental distress, and are based on the same scales used in standard clinical evaluations, says Michael Aratow, cofounder and chief medical officer at Ellipsis.

Aratow says the software works for people of all demographics, without needing to first capture baseline measures of an individual’s voice and speech patterns. “We’ve trained the models in the most difficult use cases,” he says. The company offers its platform, including an app for collecting the voice data, through health-care providers, health systems, and employers; it’s not directly available to consumers.

In the case of BiAffect, the app can be downloaded for free by the public. Leow and her team are using the app as a research tool in clinical trials sponsored by the U.S. National Institutes of Health. These studies aim to validate whether the app can reliably monitor mood disorders, and determine whether it could also track suicide risk in menstruating women and cognition in people with multiple sclerosis.

BiAffect’s software tracks behaviors like hitting the backspace key frequently, which suggests more errors, and an increase in typing “@” symbols and hashtags, which suggest more social media use. The app combines this typing data with information from the phone’s accelerometer to determine how the user is oriented and moving—for example, whether the user is likely lying down in bed—which yields more clues about mood.
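BiAffect’s own models aren’t public, but the kinds of features just described – typing speed, backspace frequency, special-character use – are straightforward to compute from timestamped key events. A hypothetical sketch of that feature extraction (not BiAffect’s actual algorithm):

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    timestamp_ms: int  # when the key was pressed
    key: str           # the character, or "<backspace>"

def typing_features(events: list[KeyEvent]) -> dict[str, float]:
    """Toy session-level features in the spirit of keyboard-dynamics research."""
    if len(events) < 2:
        return {}
    # Milliseconds between consecutive key presses: a proxy for typing speed.
    intervals = [b.timestamp_ms - a.timestamp_ms
                 for a, b in zip(events, events[1:])]
    n = len(events)
    backspaces = sum(e.key == "<backspace>" for e in events)
    social = sum(e.key in ("@", "#") for e in events)
    return {
        "mean_interkey_ms": sum(intervals) / len(intervals),
        "backspace_ratio": backspaces / n,  # a rough error-rate proxy
        "social_char_ratio": social / n,    # @s and hashtags: social-media use
    }
```

Per-session features like these, aggregated over days and weeks, are what the machine learning models then try to map onto mood states.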

Ellipsis Health analyzes audio captured during telehealth visits to assign scores for depression, anxiety, and stress. Ellipsis Health

The makers of BiAffect and Ellipsis Health don’t claim their apps can treat or diagnose disease. If app developers want to make those claims and sell their product in the United States, they would first have to get regulatory approval from the U.S. Food and Drug Administration. Getting that approval requires rigorous and large-scale clinical trials that most app makers don’t have the resources to conduct.

Digital-health software depends on quality clinical data

The sensing techniques upon which passive apps rely—measuring typing dynamics, movement, voice acoustics, and the like—are well established. But the algorithms used to analyze the data collected by the sensors are still being honed and validated. That process will require considerably more high-quality research among real patient populations.


For example, clinical studies that include control or placebo groups are crucial and have been lacking in the past. Without control groups, companies can say their technology is effective “compared to nothing,” says Torous at Beth Israel.

Torous and his team aim to build software that is backed by this kind of quality evidence. With participants’ consent, their app, called mindLAMP, passively collects data from their screen time and their phone’s GPS and accelerometer for research use. It’s also customizable for different diseases, including schizophrenia and bipolar disorder. “It’s a great starting point. But to bring it into the medical context, there’s a lot of important steps that we’re now in the middle of,” says Torous. Those steps include conducting clinical trials with control groups and testing the technology in different patient populations, he says.

How the data is collected can make a big difference in the quality of the research. For example, the rate of sampling—how often a data point is collected—matters and must be calibrated for the behavior being studied. What’s more, data pulled from real-world environments tends to be “dirty,” with inaccuracies collected by faulty sensors or inconsistencies in how phone sensors initially process data. It takes more work to make sense of this data, says Casey Bennett, an assistant professor and chair of health informatics at DePaul University, in Chicago, who uses BiAffect data in his research.

One approach to addressing errors is to integrate multiple sources of data to fill in the gaps—like combining accelerometer and typing data. In another approach, the BiAffect team is working to correlate real-world information with cleaner lab data collected in a controlled environment where researchers can more easily tell when errors are introduced.

Who participates in the studies matters too. If participants are limited to a particular geographic area or demographic, it’s unclear whether the results can be applied to the broader population. For example, a night-shift worker will have different activity patterns from those with nine-to-five jobs, and a city dweller may have a different lifestyle from residents of rural areas.

After the research is done, app developers must figure out a way to integrate their products into real-world medical contexts. One looming question is when and how to intervene when a change in mood is detected. These apps should always be used in concert with a professional and not as a replacement for one, says Torous. Otherwise, the app’s assessments could be dangerous and distressing to users, he says.

When mood tracking feels like surveillance

No matter how well these passive mood-tracking apps work, gaining trust from potential users may be the biggest stumbling block. Mood tracking could easily feel like surveillance. That’s particularly true for people with bipolar or psychotic disorders, where paranoia is part of the illness.

Keris Myrick, a mental-health advocate, says she finds passive mental-health apps “both cool and creepy.” Myrick, who is vice president of partnerships and innovation at the mental-health-advocacy organization Inseparable, has used a range of apps to support her mental health as a person with schizophrenia. But when she tested one passive sensing app, she opted to use a dummy phone. “I didn’t feel safe with an app company having access to all of that information on my personal phone,” Myrick says. While she was curious to see if her subjective experience matched the app’s objective measurements, the creepiness factor prevented her from using the app enough to find out.


Beyond users’ perception, maintaining true digital privacy is crucial. “Digital footprints are pretty sticky these days,” says Katie Shilton, an associate professor at the University of Maryland focused on social-data science. It’s important to be transparent about who has access to personal information and what they can do with it, she says.

“Once a diagnosis is established, once you are labeled as something, that can affect algorithms in other places in your life,” Shilton says. She cites the misuse of personal data in the Cambridge Analytica scandal, in which the consulting firm collected information from Facebook to target political advertising. Without strong privacy policies, companies producing mental-health apps could similarly sell user data—and they may be particularly motivated to do so if an app is free to use.

Conversations about regulating mental-health apps have been ongoing for over a decade, but a Wild West–style lack of regulation persists in the United States, says Bennett of DePaul University. For example, there aren’t yet protections in place to keep insurance companies or employers from penalizing users based on data collected. “If there aren’t legal protections, somebody is going to take this technology and use it for nefarious purposes,” he says.

Some of these concerns may be mitigated by confining all the analysis to a user’s phone, rather than collecting data in a central repository. But decisions about privacy policies and data structures are still up to individual app developers.

Leow and the BiAffect team are currently working on a new internal version of their app that incorporates natural-language processing and generative AI extensions to analyze users’ speech. The team is considering commercializing this new version in the future, but only following extensive work with industry partners to ensure strict privacy safeguards are in place. “I really see this as something that people could eventually use,” Leow says. But she acknowledges that researchers’ goals don’t always align with the desires of the people who might use these tools. “It is so important to think about what the users actually want.”

This article appears in the July 2024 print issue as “The Shrink in Your Pocket.”




health and food

Noninvasive Spinal Stimulation Gets a (Current) Boost



In 2010, Melanie Reid fell off a horse and was paralyzed below the shoulders.

“You think, ‘I am where I am; nothing’s going to change,’ ” she said, but many years after her accident, she participated in a medical trial of a new, noninvasive rehabilitative device that can deliver more electrical stimulation than similar devices without harming the user. For Reid, use of the device has led to small improvements in her ability to use her hands, and meaningful changes to her daily life.

“Everyone thinks with spinal injury all you want to do is be able to walk again, but if you’re a tetraplegic or quadriplegic, what matters most is working hands,” said Reid, a columnist for The Times, as part of a press briefing. “There’s no miracles in spinal injury, but tiny gains can be life-changing.”

For the study, Reid used a new noninvasive therapeutic device produced by Onward Medical. The device, ARC-EX (“EX” indicating “external”), uses electrodes placed along the spine near the site of injury—in the case of quadriplegia, the neck—to promote nerve activity and growth during physical-therapy exercises. The goal is to increase not only motor function while the device is attached and operating, but also the long-term effectiveness of rehabilitation drills. A study focused on arm and hand abilities in patients with quadriplegia was published 20 May in Nature Medicine.

Researchers have been investigating electrical stimulation as a treatment for spinal cord injury for roughly 40 years, but “one of the innovations in this system is using a very high-frequency waveform,” said coauthor Chet Moritz, a neurotechnologist at the University of Washington. The ARC-EX uses a 10-kilohertz carrier frequency overlay, which researchers think may numb the skin beneath the electrode, allowing patients to tolerate five times as much amperage as from similar exploratory devices. For Reid, this manifested as a noticeable “buzz,” which felt strange, but not painful.

The study included 60 participants across 14 sites around the world. Each participant undertook two months of standard physical therapy, followed by two months of therapy combined with the ARC-EX. Although aspects of treatment such as electrode placement were fairly standardized, the current amplitude was personalized to each patient, and sometimes to individual exercises, said Moritz.

The ARC-EX uses a 10-kilohertz current to provide stronger stimulation for people with spinal cord injuries.

Over 70 percent of patients showed an increase in at least one measure of both strength and function with ARC-EX therapy compared with standard therapy alone. These changes also meant that 87 percent of study participants noted some improvement in quality of life in a follow-up questionnaire. No major safety concerns tied to the device or rehabilitation process were reported.

Onward will seek approval from the U.S. Food and Drug Administration for the device by the end of 2024, said study coauthor Grégoire Courtine, a neuroscientist and cofounder of Onward Medical. Onward is also working on an implantable spinal stimulator called ARC-IM; other prosthetic approaches, such as robotic exoskeletons, are being investigated elsewhere. ARC-EX was presented as a potentially important cost-accessible, noninvasive treatment option, especially during the critical window for recovery in the year or so after a spinal cord injury. However, the price to insurers or patients of a commercial version is still subject to negotiation.

The World Health Organization says there are over 15 million people with spinal cord injuries. Moritz estimates that around 90 percent of patients, even many with no movement in their hands, could benefit from the new therapy.

Dimitry Sayenko, who studies spinal cord injury recovery at Houston Methodist and was not involved in the study, praised the relatively large sample size and clear concern for patient safety. But he stresses that the mechanisms underlying spinal stimulation are not well understood. “So far it’s literally plug and play,” said Sayenko. “We don’t understand what’s happening under the electrodes for sure—we can only indirectly assume or speculate.”

The new study supports the idea that noninvasive spinal cord stimulation can provide some benefit to some people but was not designed to help predict who will benefit, precisely how people will benefit, or how to optimize care. The study authors acknowledged the limited scope and need for further research, which might help turn currently “tiny gains” into what Sayenko calls “larger, more dramatic, robust effects.”




health and food

Bionic Eye Gets a New Lease on Life



The future of an innovative retinal implant and dozens of its users just got brighter, after Science, a bioelectronics startup run by Neuralink’s cofounder, Max Hodak, acquired Pixium’s technology at the last minute.

Pixium Vision, whose Prima system to tackle vision loss is implanted in 47 people across Europe and the United States, was in danger of disappearing completely until Science stepped in to buy the French company’s assets in April, for an undisclosed amount.

Pixium has been developing Prima for a decade, building on work by Daniel Palanker, a professor of ophthalmology at Stanford University. The 2-by-2-millimeter square implant is surgically implanted under the retina, where it turns infrared data from camera-equipped glasses into pulses of electricity. These replace signals generated by photoreceptor rods and cones, which are damaged in people suffering from age-related macular degeneration (AMD).

Early feasibility studies in the E.U. and the United States suggested Prima was safe and potentially effective, but Pixium ran out of money last November before the final results of a larger, multiyear pivotal trial in Europe.


With the financial and legal clock ticking down, the trial data finally arrived in March this year. “And the results from that were just pretty stunning,” says Max Hodak, Science’s founder and CEO, in his first interview since the acquisition.

Although neither Pixium nor Science has yet released the full dataset, Hodak shared with IEEE Spectrum videos of three people using Prima, each of them previously unable to read or recognize faces due to AMD. The videos show them slowly but fluently reading a hardback book, filling in a crossword puzzle, and playing cards.

“This is legit ‘form vision’ that I don’t think any device has ever done,” says Hodak. Form vision is the ability to recognize visual elements as parts of a larger object. “It’s this type of data that convinced us. And from there we were like, this should get to patients.”

As well as buying the Prima technology, Hodak says that Science will hire the majority of Pixium’s 35 engineering and regulatory staff, in a push to get the technology approved in Europe as quickly as possible.

The Prima implant receives visual data and is powered by near-infrared signals beamed from special spectacles. Pixium

Another priority is supporting existing Prima patients, says Lloyd Diamond, Pixium’s outgoing CEO. “It’s very important to us to avoid another debacle like Argus II,” he says, referring to another retinal implant whose manufacturer went out of business in 2022, leaving users literally in the dark.

Diamond is excited to be working with Science, which is based in Silicon Valley with a chip foundry in North Carolina. “They have a very deep workforce in software development, in electronic development, and in biologic research,” he says. “And there are probably only a few foundries in the world that could manufacture an implant such as ours. Being able to internalize part of that process is a very big advantage.”

Hodak hopes that a first-generation Prima product could quickly be upgraded with a wide-angle camera and the latest electronics. “We think that there’s one straight shrink, where we’ll move to smaller pixels and get higher visual acuity,” he says. “After that, we’ll probably move to a 3D electrode design, where we’ll be able to get closer to single-cell resolution.” That could deliver even sharper artificial vision.

In parallel, Science will continue Pixium’s discussions with the FDA in the United States about advancing a clinical trial there.

The success of Prima is critical, says Hodak, who started Science in 2021 after leaving Neuralink, a brain-computer interface company he cofounded with Elon Musk. “Elon can do whatever he wants for as long as he wants, but we need something that can finance future development,” he says. “Prima is big enough in terms of impact to patients and society that it is capable of helping us finance the rest of our ambitions.”

These include a next-generation Prima device, which Hodak says he is already talking about with Palanker, and a second visual prosthesis, currently called the Science Eye. This will tackle retinitis pigmentosa, a condition affecting peripheral vision—the same condition targeted by Second Sight’s ill-fated Argus II device.

“The Argus II just didn’t work that well,” says Hodak. “In the end, it was a pure bridge to nowhere.” Like the Argus II and Prima, the Science Eye relies on camera glasses and an implant, but with the addition of optogenetic therapy. This uses a genetically engineered virus to deliver a gene to specific optic nerve cells in the retina, making them light-sensitive at a particular wavelength. A tiny implanted display with a resolution sharper than an iPhone screen then enables fine control over the newly sensitized cells.

That system is still undergoing animal trials, but Hodak is almost ready to pull the trigger on its first human clinical studies, likely in Australia and New Zealand.

“In the long term, I think precision optogenetics will be more powerful than Prima’s electrical stimulation,” he says. “But we’re agnostic about which approach works to restore vision.”

One thing he does believe vehemently, unlike Musk, is that the retina is the best place to put an implant. Neuralink and Cortigent (the successor company of Second Sight) are both working on prosthetics that target the brain’s visual cortex.

“There’s a lot that you can do in cortex, but vision is not one of them,” says Hodak. He thinks the visual cortex is too complex, too distributed, and too difficult to access surgically to be useful.

“As long as the optic nerve is intact, the retina is the ideal place to think about restoring vision to the brain,” he says. “This is all a question of effect size. If someone has been in darkness for a decade, with no light, no perception, and you can give them any type of visual stimulus, they’re going to be into it. The Pixium patients can intuitively read, and that was really what convinced us that this was worth picking up and pursuing.”




health and food

Noise Cancellation for Your Brain



Elemind, a 5-year-old startup based in Cambridge, Mass., today unveiled a US $349 wearable for neuromodulation, the company’s first product. According to cofounder and CEO Meredith Perry, the technology tracks the oscillation of brain waves using electroencephalography (EEG) sensors that detect the electrical activity of the brain and then influence those oscillations using bursts of sound delivered via bone conduction.

Elemind’s first application for this wearable aims to suppress alpha waves to help induce sleep. There are other wearables on the market that monitor brain waves and, through biofeedback, encourage users to actively modify their alpha patterns. Elemind’s headband appears to be the first device to use sound to directly influence the brain waves of a passive user.

In a clinical trial, says Perry [no relation to author], 76 percent of subjects fell asleep more quickly. Those who did see a difference averaged 48 percent less time to progress from awake to asleep. The results were similar to those of comparable trials of pharmaceutical sleep aids, Perry indicated.

“For me,” Perry said, “it cuts through my rumination, quiets my thinking. It’s like noise cancellation for the brain.”

I briefly tested Elemind’s headband in May. I found it comfortable, with a thick cushioned band that sits across the forehead connected to a stretchy elastic loop to keep it in place. In the band are multiple EEG electrodes, a processor, a three-axis accelerometer, a rechargeable lithium-polymer battery, and custom electronics that gather the brain’s electrical signals, estimate their phase, and generate pink noise through a bone-conduction speaker. The whole thing weighs about 60 grams—about as much as a small kiwi fruit.

My test conditions were far from optimal for sleep: early afternoon, a fairly bright conference room, a beanbag chair as bed, and a vent blowing. And my test lasted just 4 minutes. I can say that I didn’t find the little bursts of pink noise (white noise without the higher frequencies) unpleasant. And since I often wear an eye mask, feeling fabric on my face wasn’t disturbing. It wasn’t the time or place to try for sound sleep, but I—and the others in the room—noted that after 2 minutes I was yawning like crazy.

How Elemind tweaks brain waves

What was going on in my brain? Briefly, different brain states are associated with different frequencies of waves. Someone who is relaxed with eyes closed but not asleep produces alpha waves at around 10 hertz. As they drift off to sleep, the alpha waves are supplanted by theta waves, at around 5 Hz. Eventually, the delta waves of deep sleep show up at around 1 Hz.

Ryan Neely, Elemind’s vice president of science and research, explains: “As soon as you put the headband on,” he says, “the EEG system starts running. It uses straightforward signal processing with bandpass filtering to isolate the activity in the 8- to 12-Hz frequency range—the alpha band.”

“Then,” Neely continues, “our algorithm looks at the filtered signal to identify the phase of each oscillation and determines when to generate bursts of pink noise.”

To help a user fall asleep more quickly [top], bursts of pink noise are timed to generate a brain response that is out of phase with alpha waves and so suppresses them. To enhance deep sleep [bottom], the pink noise is timed to generate a brain response that is in phase with delta waves. Source: Elemind

These auditory stimuli, he explains, create ripples in the waves coming from the brain. Elemind’s system tries to align these ripples with a particular phase in the wave. Because there is a gap between the stimulus and the evoked response, Elemind tested its system on 21 people and calculated the average delay, taking that into account when determining when to trigger a sound.

To induce sleep, Elemind’s headband targets the trough in the alpha wave, the point at which the brain is most excitable, Neely says.
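
A simplified offline version of that processing chain can be sketched in a few lines. The sampling rate, filter order, and delay value below are assumptions for illustration; Elemind's real system runs in real time and predicts upcoming troughs rather than detecting them after the fact.

```python
# Simplified offline sketch: isolate the alpha band, estimate instantaneous
# phase, and compute when pink-noise bursts would fire. FS and
# RESPONSE_DELAY_S are assumed values, not Elemind's parameters.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250.0               # EEG sampling rate in Hz (assumed)
RESPONSE_DELAY_S = 0.05  # calibrated stimulus-to-response lag (made-up value)

t = np.arange(0, 5, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # toy 10-Hz alpha rhythm

b, a = butter(4, [8 / (FS / 2), 12 / (FS / 2)], btype="band")  # 8-12 Hz bandpass
alpha = filtfilt(b, a, eeg)

phase = np.angle(hilbert(alpha))  # instantaneous phase, wrapped to [-pi, pi]

# For the analytic-signal convention x = A*cos(phase), troughs sit at
# phase = pi, which is exactly where the wrapped phase jumps down to -pi.
trough_idx = np.where(np.diff(phase) < -np.pi)[0]

# A real-time system would predict the *next* trough and fire the burst
# RESPONSE_DELAY_S early so the evoked ripple lands on the trough.
alpha_period_s = 0.1
trigger_times = t[trough_idx] + alpha_period_s - RESPONSE_DELAY_S
print(trigger_times[:5])
```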

“You can think of the alpha rhythm as a gate for communication between different areas of the brain,” he says. “By interfering with that communication, that coordination between different brain areas, you can disrupt patterns, like the ruminations that keep you awake.”

With these alpha waves suppressed, Neely says, the slower oscillations, like the theta waves of light sleep, take over.

Elemind doesn’t plan to stop there. The company plans to add an algorithm that addresses delta waves, the low-frequency 0.5- to 2-Hz waves characteristic of deep sleep. Here, Elemind’s technology will attempt to amplify this pattern with the intent of improving sleep quality.

Is this safe? Yes, Neely says, because auditory stimulation is self-limiting. “Your brain waves have a natural space they can occupy,” he explains, “and this stimulation just moved it within that natural space, unlike deep-brain stimulation, which can move the brain activity outside natural parameters.”

Going beyond sleep to sedation, memory, and mental health

Applications may eventually go beyond inducing and enhancing sleep. Researchers at the University of Washington and McGill University have completed a clinical study to determine if Elemind’s technology can be used to increase the pain threshold of subjects undergoing sedation. The results are being prepared for peer review.

Elemind is also working with a team involving researchers at McGill and the Leuven Brain Institute to determine if the technology can enhance memory consolidation in deep sleep and perhaps have some usefulness for people with mild cognitive impairment and other memory disorders.

Neely would love to see more applications investigated in the future.

“Inverse alpha stimulation [enhancing instead of suppressing the signal] could increase arousal,” he says. “That’s something I’d love to look into. And looking into mental-health treatment would be interesting, because phase coupling between the different brain regions appears to be an important factor in depression and anxiety disorders.”

Perry, who previously founded the wireless power startup UBeam, cofounded Elemind with four university professors with expertise in neuroscience, optogenetics, biomedical engineering, and artificial intelligence. The company has $12 million in funding to date and currently has 13 employees.

Preorders at $349 start today for beta units, and Elemind expects to start general sales later this year. The company will offer customers an optional membership at $7 to $13 monthly that will allow cloud storage of sleep data and access to new apps as they are released.




health and food

Microneedle Glucose Sensors Keep Monitoring Skin-Deep



For people with diabetes, glucose monitors are a valuable tool to monitor their blood sugar. The current generation of these biosensors detects glucose levels with thin, metallic filaments inserted in subcutaneous tissue, the deepest layer of the skin, where most body fat is stored.

Medical technology company Biolinq is developing a new type of glucose sensor that doesn’t go deeper than the dermis, the middle layer of skin that sits above the subcutaneous tissue. The company’s “intradermal” biosensors take advantage of metabolic activity in shallower layers of skin, using an array of electrochemical microsensors to measure glucose—and other chemicals in the body—just beneath the skin’s surface.

Biolinq concluded a pivotal clinical trial earlier this month, according to CEO Rich Yang, and the company plans to submit the device to the U.S. Food and Drug Administration for approval at the end of the year. In April, Biolinq received US $58 million in funding to support the completion of its clinical trials and subsequent submission to the FDA.

Biolinq’s glucose sensor is “the world’s first intradermal sensor that is completely autonomous,” Yang says. While other glucose monitors require a smartphone or other reader to collect and display the data, Biolinq’s includes an LED display to show when the user’s glucose is within a healthy range (indicated by a blue light) or above that range (yellow light). “We’re providing real-time feedback for people who otherwise could not see or feel their symptoms,” Yang says. (In addition to this real-time feedback, the user can also load long-term data onto a smartphone by placing it next to the sensor, like Abbott’s FreeStyle Libre, another glucose monitor.)

More than 2,000 microsensor components are etched onto each 200-millimeter silicon wafer used to manufacture the biosensors. Biolinq

Biolinq’s hope is that its approach could lead to sustainable changes in behavior on the part of the individual using the sensor. The device is intentionally placed on the upper forearm to be in plain sight, so users can receive immediate feedback without manually checking a reader. “If you drink a glass of orange juice or soda, you’ll see this go from blue to yellow,” Yang explains. That could help users better understand how their actions—such as drinking a sugary beverage—change their blood sugar and take steps to reduce that effect.

Biolinq’s device consists of an array of microneedles etched onto a silicon wafer using semiconductor manufacturing. (Other glucose sensors’ filaments are inserted with an introducer needle.) Each chip has a small 2-millimeter by 2-millimeter footprint and contains seven independent microneedles, which are coated with membranes through a process similar to electroplating in jewelry making. One challenge the industry has faced is ensuring that microsensors do not break at this small scale. The key engineering insight Biolinq introduced, Yang says, was using semiconductor manufacturing to build the biosensors. Importantly, he says, silicon “is harder than titanium and steel at this scale.”

Miniaturization allows for sensing closer to the surface of the skin, where there is a high level of metabolic activity. That makes the shallow depth ideal for monitoring glucose, as well as other important biomarkers, Yang says. Due to this versatility, combined with the use of a sensor array, the device in development can also monitor lactate, an important indicator of muscle fatigue. With the addition of a third data point, ketones (which are produced when the body burns fat), Biolinq aims to “essentially have a metabolic panel on one chip,” Yang says.

Using an array of sensors also creates redundancy, improving the reliability of the device if one sensor fails or becomes less accurate. Glucose monitors tend to drift over the course of wear, but with multiple sensors, Yang says that drift can be better managed.
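
A toy example shows why redundancy helps (the numbers and the median rule are illustrative assumptions, not Biolinq's algorithm):

```python
# Toy illustration of redundancy (assumed numbers, not Biolinq's algorithm):
# one of seven microneedle channels has drifted high, but a median across
# channels barely moves, while that lone sensor alone would mislead.
import statistics

channels_mg_dl = [104, 101, 99, 103, 102, 100, 161]   # seventh channel has drifted
print(statistics.median(channels_mg_dl))   # 102: robust fused glucose estimate
print(channels_mg_dl[-1])                  # 161: what the drifting sensor reports
```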

One downside to the autonomous display is the drain on battery life, Yang says. The battery life limits the biosensor’s wear time to 5 days in the first-generation device. Biolinq aims to extend that to 10 days of continuous wear in its second generation, which is currently in development, by using a custom chip optimized for low power consumption rather than off-the-shelf components.

The company has collected nearly 1 million hours of human performance data, along with comparators including commercial glucose monitors and venous blood samples, Yang says. Biolinq aims to gain FDA approval first for use in people with type 2 diabetes not using insulin and later expand to other medical indications.

This article appears in the August 2024 print issue as “Glucose Monitor Takes Page From Chipmaking.”




health and food

Is AI Search a Medical Misinformation Disaster?



Last month when Google introduced its new AI search tool, called AI Overviews, the company seemed confident that it had tested the tool sufficiently, noting in the announcement that “people have already used AI Overviews billions of times through our experiment in Search Labs.” The tool doesn’t just return links to Web pages, as in a typical Google search, but returns an answer that it has generated based on various sources, which it links to below the answer. But immediately after the launch users began posting examples of extremely wrong answers, including a pizza recipe that included glue and the interesting fact that a dog has played in the NBA.


While the pizza recipe is unlikely to convince anyone to squeeze on the Elmer’s, not all of AI Overview’s extremely wrong answers are so obvious—and some have the potential to be quite harmful. Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory and has a new book out about the online propagandists who “turn lies into reality.” She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke to her about whether AI search is likely to bring an onslaught of erroneous medical advice to unwary users.

I know you’ve been tracking disinformation on the Web for many years. Do you expect the introduction of AI-augmented search tools like Google’s AI Overviews to make the situation worse or better?

Renée DiResta: It’s a really interesting question. There are a couple of policies that Google has had in place for a long time that appear to be in tension with what’s coming out of AI-generated search. That’s made me feel like part of this is Google trying to keep up with where the market has gone. There’s been an incredible acceleration in the release of generative AI tools, and we are seeing Big Tech incumbents trying to make sure that they stay competitive. I think that’s one of the things that’s happening here.

We have long known that hallucinations are a thing that happens with large language models. That’s not new. It’s the deployment of them in a search capacity that I think has been rushed and ill-considered because people expect search engines to give them authoritative information. That’s the expectation you have on search, whereas you might not have that expectation on social media.

There are plenty of examples of comically poor results from AI search, things like how many rocks we should eat per day [a response drawn from an Onion article]. But I’m wondering if we should be worried about more serious medical misinformation. I came across one blog post about Google’s AI Overviews responses about stem-cell treatments. The problem there seemed to be that the AI search tool was sourcing its answers from disreputable clinics that were offering unproven treatments. Have you seen other examples of that kind of thing?

DiResta: I have. It’s returning information synthesized from the data that it’s trained on. The problem is that it does not seem to be adhering to the same standards that have long gone into how Google thinks about returning search results for health information. So what I mean by that is Google has, for upwards of 10 years at this point, had a search policy called Your Money or Your Life. Are you familiar with that?

I don’t think so.

DiResta: Your Money or Your Life acknowledges that for queries related to finance and health, Google has a responsibility to hold search results to a very high standard of care, and it’s paramount to get the information correct. People are coming to Google with sensitive questions and they’re looking for information to make materially impactful decisions about their lives. They’re not there for entertainment when they’re asking a question about how to respond to a new cancer diagnosis, for example, or what sort of retirement plan they should be subscribing to. So you don’t want content farms and random Reddit posts and garbage to be the results that are returned. You want to have reputable search results.

That framework of Your Money or Your Life has informed Google’s work on these high-stakes topics for quite some time. And that’s why I think it’s disturbing for people to see the AI-generated search results regurgitating clearly wrong health information from low-quality sites that perhaps happened to be in the training data.

So it seems like AI Overviews is not following that same policy—or that’s what it appears like from the outside?

DiResta: That’s how it appears from the outside. I don’t know how they’re thinking about it internally. But those screenshots you’re seeing—a lot of these instances are being traced back to an isolated social media post or a clinic that’s disreputable but exists—are out there on the Internet. It’s not simply making things up. But it’s also not returning what we would consider to be a high-quality result in formulating its response.

I saw that Google responded to some of the problems with a blog post saying that it is aware of these poor results and it’s trying to make improvements. And I can read you the one bullet point that addressed health. It said, “For topics like news and health, we already have strong guardrails in place. In the case of health, we launched additional triggering refinements to enhance our quality protections.” Do you know what that means?

DiResta: That blog post is an explanation that [AI Overviews] isn’t simply hallucinating—the fact that it’s pointing to URLs is supposed to be a guardrail because that enables the user to go and follow the result to its source. This is a good thing. They should be including those sources for transparency and so that outsiders can review them. However, it is also a fair bit of onus to put on the audience, given the trust that Google has built up over time by returning high-quality results in its health information search rankings.

I know one topic that you’ve tracked over the years has been disinformation about vaccine safety. Have you seen any evidence of that kind of disinformation making its way into AI search?

DiResta: I haven’t, though I imagine outside research teams are now testing results to see what appears. Vaccines have been so much a focus of the conversation around health misinformation for quite some time, I imagine that Google has had people looking specifically at that topic in internal reviews, whereas some of these other topics might be less in the forefront of the minds of the quality teams that are tasked with checking if there are bad results being returned.

What do you think Google’s next moves should be to prevent medical misinformation in AI search?

DiResta: Google has a perfectly good policy to pursue. Your Money or Your Life is a solid ethical guideline to incorporate into this manifestation of the future of search. So it’s not that I think there’s a new and novel ethical grounding that needs to happen. I think it’s more ensuring that the ethical grounding that exists remains foundational to the new AI search tools.




health and food

Bath Engineers Bet on Dirt for Micropower



A thimbleful of soil can contain a universe of microorganisms, up to 10 billion by some estimates. Now a group of researchers in Bath, United Kingdom, are building prototype technologies that harvest electrons exhaled by some micro-species.

The idea is to power low-power sensors and switches, and perhaps help farmers digitally optimize crop yields to meet increasing demand and increasingly stressful growing conditions. There could be other tasks, too, that might make use of a plant-and-forget, low-yield power source—such as monitoring canals for illegal waste dumping.

The research started small, based out of the University of Bath, with field-testing in a Brazilian primary school classroom and a green pond near it—just before the onset of the pandemic.

“We had no idea what the surroundings would be. We just packed the equipment we needed and went,” says Jakub Dziegielowski, a chemical engineering Ph.D. student at the University of Bath, in the United Kingdom. “And the pond was right by the school—it was definitely polluted, very green, with living creatures in it, and definitely not something I’d feel comfortable drinking from. So it got the job done.”

The experiments they ran with kids from the school and Brazilian researchers that summer of 2019 were aimed at powering a water purifier. The system worked. It wasn’t very efficient, however, compared with, say, a solar panel.

So work has moved on in the Bath labs: in the coming weeks, Dziegielowski will both turn 29 and graduate with his doctorate. And he, along with two University of Bath advisors and colleagues, recently launched a spinoff company—it’s called Bactery—to perfect a prototype for a network of soil microbial fuel cells for use in agriculture.

A microbial fuel cell is a kind of power plant that converts chemical energy stored in organic molecules into electrical energy, using microbes as a catalyst. It’s more often used to refer to liquid-based systems, Dziegielowski says. Organics from wastewater serve as the energy source, and the liquid stream mixes past the electrodes.

A soil microbial fuel cell, however, has one of its electrodes—the anode, which absorbs electrons—in the dirt. The other electrode, the cathode, is exposed to air. Batteries work because ions move through an electrolyte between electrodes to complete a circuit. In this case, the soil itself acts as the electrolyte—as well as the source of both the catalytic microbes and the fuel.

The Bath, U.K.-based startup Bactery has developed a set of fuel cells powered by microbes in the soil—with, in the prototype pictured here, graphite mats as electrodes. University of Bath

Fields full of Watts

In a primary school in the fishing village of Icapuí on Brazil’s semi-arid northeastern coast, the group made use of basic components: graphite felt mats acting as electrodes, and nylon pegs to maintain spacing and alignment between them. (Bactery is now developing new kinds of casing.)

By setting up the cells in a parallel matrix, the Icapuí setup could generate 38 milliwatts per square meter. In work since, the Bath group’s been able to reach 200 milliwatts per square meter.
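
Those figures are easy to put in perspective with a back-of-the-envelope calculation (the 50-milliwatt sensor-node budget below is an assumed example, not a Bactery specification):

```python
# Power densities from the article, with an assumed 50-milliwatt sensor node.
icapui_density_w_m2 = 0.038    # Icapui field setup
current_density_w_m2 = 0.200   # later lab results

print(f"{current_density_w_m2 / icapui_density_w_m2:.1f}x improvement")    # ~5.3x
sensor_budget_w = 0.050        # hypothetical soil sensor plus radio
print(f"{sensor_budget_w / current_density_w_m2:.2f} m^2 of cells needed") # 0.25 m^2
```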

Electroactive bacteria—also called exoelectrogens or electricigens—take in soluble iron or acids or sugar and exhale electrons. Dozens of species of microbes can do this, including bacteria belonging to genera such as Geobacter and Shewanella.

But 200 milliwatts per square meter is not a lot of juice: enough to charge a mobile phone, maybe, or keep an LED nightlight going—or, perhaps, serve as a power source for sensors or irrigation switches. “As in so many things, it comes down to the economics,” says Bruce Logan, an environmental engineer at Penn State who wrote a 2007 book, Microbial Fuel Cells.

A decade ago Palo Alto engineers launched the MudWatt, a self-contained kit that could light a small LED. It’s mostly marketed as a school science project. But even now, some 760 million people do not have reliable access to electricity. “In remote areas, soil microbial fuel cells with higher conversion and power management efficiencies would fare better than batteries,” says Sheela Berchmans, a retired chief scientist of the Central Electrochemical Research Institute in Tamil Nadu, India.

Korneel Rabaey, professor in the department of biotechnology at the University of Ghent, in Belgium, says electrochemical micro-power sources—a category that now includes the Bactery battery—are gaining buzz in resource recovery, for uses such as extracting pollutants from wastewater with electricity as a byproduct. “You can think of many applications that don’t require a lot of power,” he says, “but where sensors are important.”




health and food

The Best Bionic Leg Yet



For the first time, a small group of patients with amputations below the knee were able to control the movements of their prosthetic legs through neural signals—rather than relying on programmed cycles for all or part of a motion—and resume walking with a natural gait. The achievement required a specialized amputation surgery combined with a non-invasive surface electrode connection to a robotic prosthetic lower leg. A study describing the technologies was published today in the journal Nature Medicine.

“What happens then is quite miraculous. The patients that have this neural interface are able to walk at normal speeds; and up and down steps and slopes; and maneuver obstacles really without thinking about it. It’s natural. It’s involuntary,” said co-author Hugh Herr, who develops bionic prosthetics at the MIT Media Lab. “Even though their limb is made of titanium and silicone—all these various electromechanical components—the limb feels natural and it moves naturally, even without conscious thought.”

The approach relies on surgery at the amputation site to create what the researchers call an agonist-antagonist myoneural interface, or AMI. The procedure involves connecting pairs of muscles (in the case of below-the-knee amputation, two pairs), as well as the introduction of proprietary synthetic elements.

The interface creates a two-way connection between body and machine. Muscle-sensing electrodes send signals to a small computer in the prosthetic limb that interprets them as angles and forces for joints at the ankle and ball of the foot. It also sends information back about the position of the artificial leg, restoring a sense of where the limb is in space, also known as proprioception.
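
The article does not spell out the study's actual controller, but the core idea of decoding an agonist-antagonist pair can be sketched as a toy model; the linear mapping and gain values below are invented simplifications, not the authors' method.

```python
# Toy agonist-antagonist decoder (invented gains and linear mapping; the
# study's actual controller is not public). Two rectified, smoothed EMG
# envelopes, assumed normalized to 0..1, drive one joint: their difference
# sets the commanded ankle angle, their sum sets joint stiffness.

def ankle_command(emg_dorsiflexor, emg_plantarflexor,
                  angle_gain_deg=30.0, stiffness_gain=5.0):
    angle_deg = angle_gain_deg * (emg_dorsiflexor - emg_plantarflexor)
    stiffness = stiffness_gain * (emg_dorsiflexor + emg_plantarflexor)
    return angle_deg, stiffness

print(ankle_command(0.6, 0.2))   # dorsiflexor dominates: a toes-up command
```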


“The particular mode of control is far beyond what anybody else has come up with,” said Daniel Ferris, a neuromechanical engineer at the University of Florida; Ferris was not involved in the study, but has worked on neural interfaces for controlling lower limb prostheses. “It’s a really novel idea that they’ve built on over the last eight years that’s showing really positive outcomes for better bionic lower legs.” The latest publication is notable for a larger participant pool than previous studies, with seven treatment patients and seven control patients with amputations and typical prosthetic legs.

To test the bionic legs, patients were asked to walk on level ground at different speeds; up and down slopes and stairs; and to maneuver around obstacles. The AMI users had a more natural gait, more closely resembling movement by someone using a natural limb. More naturalistic motion can improve freedom of movement, particularly over challenging terrain, but in other studies researchers have also noted reduced energetic costs, reduced stress on the body, and even social benefits for some amputees.

Co-author Hyungeun Song, a postdoctoral researcher at MIT, says the group was surprised by the efficiency of the bionic setup. The prosthetic interface sent just 18 percent of the typical amount of information that’s sent from a limb to the spine, yet it was enough to allow patients to walk with what was considered a normal gait.

Next Steps for the Bionic Leg

AMI amputations have now become the standard at Brigham and Women’s Hospital in Massachusetts, where co-author Matthew Carty works. And because of patient benefits in terms of pain and ease of using even passive (or non-robotic) prosthetics, this technique—or something similar—could spread well beyond the current research setting. To date, roughly 60 people worldwide have received AMI surgery above or below either an elbow or knee.

In principle, Herr said, someone with a previously amputated limb, such as himself, could undergo AMI rehabilitation, and he is strongly considering the procedure. More than 2 million Americans are currently living with a lost limb, according to the Amputee Coalition, and nearly 200,000 lower legs are amputated each year in the United States.

On the robotics side, there are already commercial leg prosthetics that could be made compatible with the neural interface. The area in greatest need of development is the connection between amputation site and prosthesis. Herr says commercialization of that interface might be around five years away.

Herr says his long-term goal is neural integration and embodiment: the sense that a prosthetic is part of the body, rather than a tool. The new study “is a critical step forward—pun intended.”




health and food

Origami Helps Implant Sensors in Bio-Printed Tissue



In the United States alone, more than 100,000 people currently need a lifesaving organ transplant. Instead of waiting for donors, one way to solve this crisis in the future is to assemble replacement organs with bio-printing—3D printing that uses inks containing living cells. Scientists in Israel have found that origami techniques could help fold sensors into bio-printed materials to help determine whether they are behaving safely and properly.

Although bio-printing something as complex as a human organ is still a distant possibility, there are a host of near-term applications for the technique. For example, in drug research, scientists can bio-print living, three-dimensional tissues with which to examine the effects of various compounds.

Ideally, researchers would like to embed sensors within bio-printed items to keep track of how well they are behaving. However, the three-dimensional nature of bio-printed objects makes it difficult to lodge sensors within them in a way that can monitor every part of the structures.


Now scientists have developed a 3D platform inspired by origami that can help embed sensors in bio-printed objects in precise locations. “It will, hopefully in the future, allow us to monitor and assess 3D biostructures before we would like to transplant them,” says Ben Maoz, a professor of biomedical engineering at Tel Aviv University in Israel.

The new platform is a silicone rubber device that can fold around a bio-printed structure. The prototype holds a commercial array of 3D electrodes to capture electrical signals. It also possesses other electrodes that can measure electrical resistance, which can reveal how permeable cells are to various medications. A custom 3D software model can tailor the design of the origami and all the electrodes so that the sensors can be placed in specific locations in the bio-printed object.

The scientists tested their device on bio-printed clumps of brain cells. The research team also grew a layer of cells onto the origami that mimicked the blood-brain barrier, a cell layer that protects the brain from undesirable substances that the body’s blood might be carrying. By folding this combination of origami and cells onto the bio-printed structures, Maoz and his colleagues were able to monitor neural activity within the brain cells and see how their synthetic blood-brain barrier might interfere with medications intended to treat brain diseases.

Maoz says the new device can incorporate many types of sensors beyond electrodes, such as temperature or acidity sensors. It can also incorporate flowing liquid to supply oxygen and nutrients to cells, the researchers note.

Currently, this device “will mainly be used for research and not for clinical use,” Maoz says. Still, it could “significantly contribute to drug development—assessing drugs that are relevant to the brain.”

The researchers say they can use their origami device with any type of 3D tissue. For example, Maoz says they can use it on bio-printed structures made from patient cells “to help with personalized medicine and drug development.”

The origami platform could also help embed devices that can modify bio-printed objects. For instance, many artificially grown tissues function better if they are placed under the kinds of physical stresses they might normally experience within the body, and the origami platform could integrate gadgets that can exert such mechanical forces on bio-printed structures. “This can assist in accelerating tissue maturation, which might be relevant to clinical applications,” Maoz says.

The scientists detailed their findings in the 26 June issue of Advanced Science.




health and food

Next-Gen Brain Implant Uses a Graphene Chip



A Barcelona-based startup called Inbrain Neuroelectronics has produced a novel brain implant made of graphene and is gearing up for its first in-human test this summer.

The technology is a type of brain-computer interface. BCIs have garnered interest because they record signals from the brain and transmit them to a computer for analysis. They have been used for medical diagnostics, as communication devices for people who can’t speak, and to control external equipment, including robotic limbs. But Inbrain intends to transform its BCI technology into a therapeutic tool for patients with neurological issues such as Parkinson’s disease.

Because Inbrain’s chip is made of graphene, the neural interface has some interesting properties, including the ability to both record from and stimulate the brain. That bidirectionality comes from addressing a key problem with the metallic chips typically used in BCI technology: Faradaic reactions. Faradaic reactions are a particular type of electrochemical process that occurs between a metal electrode and an electrolyte solution. As it happens, neural tissue is largely composed of aqueous electrolytes. Over time, these Faradaic reactions reduce the effectiveness of the metallic chips.

That’s why Inbrain replaced the metals typically used in such chips with graphene, a material with great electrical conductivity. “Metals have Faraday reactions that actually make all the electrons interact with each other, degrading their effectiveness...for transmitting signals back to the brain,” said Carolina Aguilar, CEO and cofounder of Inbrain.

Because graphene is essentially carbon and not a metal, Aguilar says the chip can inject 200 times as much charge without creating a Faradaic reaction. As a result, the material is stable over the millions of pulses of stimulation required of a therapeutic tool. While Inbrain is not yet testing the chip for brain stimulation, the company expects to reach that goal in due time.

The graphene-based chip is produced on a wafer using traditional semiconductor technology, according to Aguilar. At clean-room facilities, Inbrain fabricates a 10-micrometer-thick chip. The chip consists of what Aguilar terms “graphene dots” (not to be confused with graphene quantum dots) that range in size from 25 to 300 micrometers. “This micrometer scale allows us to get that unique resolution on the decoding of the signals from the brain, and also provides us with the micrometric stimulation or modulation of the brain,” added Aguilar.

Testing the Graphene-Based BCI

The first test of the platform in a human patient will soon be performed at the University of Manchester, in England, where it will serve as an interface during the resection of a brain tumor. When resecting a tumor, surgeons must ensure that they don’t damage areas like the brain’s language centers so the patient isn’t impaired after the surgery. “The chip is positioned during the tumor resection so that it can read, at a very high resolution, the signals that tell the surgeon where there is a tumor and where there is not a tumor,” says Aguilar. That should enable the surgeons to extract the tumor with micrometric precision while preserving functional areas like speech and cognition.

Aguilar added, “We have taken this approach for our first human test because it is a very reliable and quick path to prove the safety of graphene, but also demonstrate the potential of what it can do in comparison to metal technology that is used today.”

Aguilar stresses that the Inbrain team has already tested the graphene-based chip’s biocompatibility. “We have been working for the last three years in biocompatibility through various safety studies in large animals,” said Aguilar. “So now we can have these green lights to prove an additional level of safety with humans.”

While this test of the chip at Manchester is aimed at aiding in brain tumor surgery, the same technology could eventually be used to help Parkinson’s patients. Toward this aim, Inbrain’s system was granted Breakthrough Device Designation last September from the U.S. Food & Drug Administration as an adjunctive therapy for treating Parkinson’s disease. “For Parkinson’s treatment, we have been working on different preclinical studies that have shown reasonable proof of superiority versus current commercial technology in the [reduction] of Parkinson’s disease symptoms,” said Aguilar.

For treating Parkinson’s, Inbrain’s chip connects with the nigrostriatal pathway in the brain that is critical for movement. The chip will first decode the intention message from the brain that triggers a step or the lifting of the arm—something that a typical BCI can do. But Inbrain’s chip, with its micrometric precision, can also decode pathological biomarkers related to Parkinson’s symptoms, such as tremor, rigidity, and freezing of gait.
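
In signal-processing terms, decoding a biomarker like resting tremor often amounts to estimating power in a characteristic frequency band (roughly 4 to 6 hertz for Parkinsonian tremor). The sketch below is a generic band-power computation on synthetic data, not Inbrain's decoder.

```python
# Generic band-power estimate of a 4-6 Hz tremor signature on synthetic
# data (illustrative only; not Inbrain's decoder).
import numpy as np
from scipy.signal import welch

FS = 1000.0                                 # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / FS)
recording = 0.5 * np.sin(2 * np.pi * 5 * t) + np.random.randn(t.size)  # toy signal

freqs, psd = welch(recording, fs=FS, nperseg=2048)
band = (freqs >= 4) & (freqs <= 6)
tremor_power = np.trapz(psd[band], freqs[band])   # integrate PSD over the band
print(f"4-6 Hz band power: {tremor_power:.3f}")
```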

By determining these biomarkers with great precision, Inbrain’s technology can determine how well a patient’s current drug regimen is working. In this first iteration of the Inbrain chip, it doesn’t treat the symptoms of Parkinson’s directly, but instead makes it possible to better target and reduce the amount of drugs that are used in treatment.

“Parkinson’s patients take huge amounts of drugs that have to be changed over time just to keep up with the growing resistance patients develop to the power of the drug,” said Aguilar. “We can reduce it at least 50 percent and hopefully in the future more as our devices become precise.”




health and food

Biocompatible Mic Could Lead to Better Cochlear Implants



Cochlear implants—the neural prosthetic cousins of standard hearing aids—can be a tremendous boon for people with profound hearing loss. But many would-be users are turned off by the device’s cumbersome external hardware, which must be worn to process signals passing through the implant. So researchers have been working to make a cochlear implant that sits entirely inside the ear, to restore speech and sound perception without the lifestyle restrictions imposed by current devices.

A new biocompatible microphone offers a bridge to such fully internal cochlear implants. About the size of a grain of rice, the microphone is made from a flexible piezoelectric material that directly measures the sound-induced motion of the eardrum. The tiny microphone’s sensitivity matches that of today’s best external hearing aids.

Cochlear implants create a novel pathway for sounds to reach the brain. An external microphone and processor, worn behind the ear or on the scalp, collect and translate incoming sounds into electrical signals, which get transmitted to an electrode that’s surgically implanted in the cochlea, deep within the inner ear. There, the electrical signals directly stimulate the auditory nerve, sending information to the brain to interpret as sound.

But, says Hideko Heidi Nakajima, an associate professor of otolaryngology at Harvard Medical School and Massachusetts Eye and Ear, “people don’t like the external hardware.” They can’t wear it while sleeping, or while swimming or doing many other forms of exercise, and so many potential candidates forgo the device altogether. What’s more, incoming sound goes directly into the microphone and bypasses the outer ear, which would otherwise perform the key functions of amplifying sound and filtering noise. “Now the big idea is instead to get everything—processor, battery, microphone—inside the ear,” says Nakajima. But even in clinical trials of fully internal designs, the microphone’s sensitivity—or lack thereof—has remained a roadblock.

Nakajima, along with colleagues from MIT, Harvard, and Columbia University, fabricated a cantilever microphone that senses the motion of a bone attached behind the eardrum called the umbo. Sound entering the ear canal causes the umbo to vibrate unidirectionally, with a displacement 10 times as great as other nearby bones. The tip of the “UmboMic” touches the umbo, and the umbo’s movements flex the material and produce an electrical charge through the piezoelectric effect. These electrical signals can then be processed and transmitted to the auditory nerve. “We’re using what nature gave us, which is the outer ear,” says Nakajima.

Why a cochlear implant needs low-noise, low-power electronics

Making a biocompatible microphone that can detect the eardrum’s minuscule movements isn’t easy, however. Jeff Lang, a professor of electrical engineering at MIT who jointly led the work, points out that only certain materials are tolerated by the human body. Another challenge is shielding the device from internal electronics to reduce noise. And then there’s long-term reliability. “We’d like an implant to last for decades,” says Lang.

In tests of the implantable microphone prototype, a laser beam measures the umbo’s motion, which gets transferred to the sensor tip. JEFF LANG & HEIDI NAKAJIMA

The researchers settled on a triangular design for the 3-by-3-millimeter sensor made from two layers of polyvinylidene fluoride (PVDF), a biocompatible piezoelectric polymer, sandwiched between layers of flexible, electrode-patterned polymer. When the cantilever tip bends, one PVDF layer produces a positive charge and the other produces a negative charge—taking the difference between the two cancels much of the noise. The triangular shape provides the most uniform stress distribution within the bending cantilever, maximizing the displacement it can undergo before it breaks. “The sensor can detect sounds below a quiet whisper,” says Lang.
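
A toy calculation shows why the two oppositely poled layers matter (synthetic numbers, not the UmboMic's actual processing): bending produces equal-and-opposite charge on the two layers, while electrical interference appears on both with the same sign, so subtracting one from the other doubles the signal and cancels the common-mode noise.

```python
# Synthetic demonstration of differential piezoelectric sensing
# (not the UmboMic's actual processing).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.01, 1000)
signal = 1e-3 * np.sin(2 * np.pi * 1000 * t)        # 1-kHz umbo vibration, arbitrary units
interference = 5e-3 * rng.standard_normal(t.size)   # common-mode electrical noise

layer_a = +signal + interference                    # positively poled PVDF layer
layer_b = -signal + interference                    # negatively poled PVDF layer
differential = layer_a - layer_b                    # interference cancels exactly here

print(np.allclose(differential, 2 * signal))        # True
```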

Emma Wawrzynek, a graduate student at MIT, says that working with PVDF is tricky because it loses its piezoelectric properties at high temperatures, and most fabrication techniques involve heating the sample. “That’s a challenge especially for encapsulation,” which involves encasing the device in a protective layer so it can remain safely in the body, she says. The group had success by gradually depositing titanium and gold onto the PVDF while using a heat sink to cool it. That approach created a shielding layer that protects the charge-sensing electrodes from electromagnetic interference.

The other tool for improving a microphone’s performance is, of course, amplifying the signal. “On the electronics side, a low-noise amp is not necessarily a huge challenge to build if you’re willing to spend extra power,” says Lang. But, according to MIT graduate student John Zhang, cochlear implant manufacturers try to limit power for the entire device to 5 milliwatts, and just 1 mW for the microphone. “The trade-off between noise and power is hard to hit,” Zhang says. He and fellow student Aaron Yeiser developed a custom low-noise, low-power charge amplifier that outperformed commercially available options.

“Our goal was to perform better than or at least equal the performance of high-end capacitative external microphones,” says Nakajima. For leading external hearing-aid microphones, that means sensitivity down to a sound pressure level of 30 decibels—the equivalent of a whisper. In tests of the UmboMic on human cadavers, the researchers implanted the microphone and amplifier near the umbo, input sound through the ear canal, and measured what got sensed. Their device reached 30 decibels over the frequency range from 100 hertz to 6 kilohertz, which is the standard for cochlear implants and hearing aids and covers the frequencies of human speech. “But adding the outer ear’s filtering effects means we’re doing better [than traditional hearing aids], down to 10 dB, especially in speech frequencies,” says Nakajima.

Plenty of testing lies ahead, at the bench and on sheep before an eventual human trial. But if their UmboMic passes muster, the team hopes that it will help more than 1 million people worldwide go about their lives with a new sense of sound.

The work was published on 27 June in the Journal of Micromechanics and Microengineering.




health and food

Startups Launch Life-Saving Tech for the Opioid Crisis



Tech startups are stepping up to meet the needs of 60 million people worldwide who use opioids, representing about 1 percent of the world’s adult population. In the United States, deaths involving synthetic opioids have risen 1,040 percent from 2013 to 2019. The COVID-19 pandemic and continued prevalence of fentanyl have since worsened the toll, with an estimated 81,083 fatal overdoses in 2023 alone.

Innovations include biometric monitoring systems that help doctors determine proper medication dosages, nerve stimulators that relieve withdrawal symptoms, wearable and ingestible systems that watch for signs of an overdose, and autonomous drug delivery systems that could prevent overdose deaths.

Helping Patients Get the Dosage They Need

For decades, opioid blockers and other medications that suppress cravings have been the primary treatment tool for opioid addiction. However, despite its clinical dominance, this approach remains underutilized. In the United States, only about 22 percent of the 2.5 million adults with opioid use disorder receive medication-assisted therapy such as methadone, Suboxone, and similar drugs.

Determining patients’ ideal dosage during the early stages of treatment is crucial for keeping them in recovery programs. The shift from heroin to potent synthetic opioids, like fentanyl, has complicated this process, as the typical recommended medication doses can be too low for those with a high fentanyl tolerance.

A North Carolina-based startup is developing a predictive algorithm to help clinicians tailor these protocols and track real-time progress with biometric data. OpiAID, which is currently working with 1,000 patients across three clinical sites, recently launched a research pilot with virtual treatment provider Bicycle Health. Patients taking Suboxone will wear a Samsung Galaxy Watch6 to measure their heart rate, body movements, and skin temperature. OpiAID CEO David Reeser says clinicians can derive unique stress indications from this data, particularly during withdrawal. (He declined to share specifics on how the algorithm works.)

“Identifying stress biometrically plays a role in how resilient someone will be,” Reeser adds. “For instance, poor heart rate variability during sleep could indicate that a patient may be more susceptible that day. In the presence of measurable amounts of withdrawal, the potential for relapse on illicit medications may be more likely.”

Nerve Stimulators Provide Opioid Withdrawal Relief

While OpiAID’s software solution relies on monitoring patients, electrical nerve stimulation devices take direct action. These behind-the-ear wearables distribute electrodes at nerve endings around the ear and send electrical pulses to block pain signals and relieve withdrawal symptoms like anxiety and nausea.

The U.S. Food and Drug Administration (FDA) has cleared several nerve stimulator devices, such as DyAnsys’ Drug Relief, which periodically administers low-level electrical pulses to the ear’s cranial nerves. Others include Spark Biomedical’s Sparrow system and NET Recovery’s NETNeuro device.

Masimo’s behind-the-ear Bridge device costs US $595 for treatment providers.Masimo

Similarly, Masimo’s Bridge relieves withdrawal symptoms by stimulating the brain and spinal cord via electrodes. The device is intended to help patients initiating, transitioning into, or tapering off medication-assisted treatment. In a clinical trial, Bridge reduced symptom severity by 85 percent in the first hour and 97 percent by the fifth day. A Masimo spokesperson said the company’s typical customers are treatment providers and correctional facilities, though it’s also seeing interest from emergency room physicians.

Devices Monitor Blood Oxygen to Prevent Overdose Deaths

In 2023, the FDA cleared Masimo’s Opioid Halo device to monitor blood oxygen levels and alert emergency contacts if it detects opioid-induced respiratory depression, the leading cause of overdose deaths. The product includes a pulse oximeter cable and disposable sensors connected to a mobile app.

Opioid Halo utilizes Masimo’s signal extraction technology, first developed in the 1990s, which improves upon conventional oxygen monitoring techniques by filtering out artifacts caused by blood movement. Masimo employs four signal-processing engines to distinguish the true signal from noise that can lead to false alarms; for example, they distinguish between arterial blood and low-oxygen venous blood.

Masimo’s Opioid Halo system is available over-the-counter without a prescription. Masimo

Opioid Halo is available over-the-counter for US $250. A spokesperson says sales have continued to show promise as more healthcare providers recommend it to high-risk patients.

An Ingestible Sensor to Watch Over Patients

Last year, in a first-in-human clinical study, doctors used an ingestible sensor to monitor vital signs from patients’ stomachs. Researchers analyzed the breathing patterns and heart rates of 10 sleep study patients at West Virginia University. Some participants had episodes of central sleep apnea, which can be a proxy for opioid-induced respiratory depression. The capsule transmitted this data wirelessly to external equipment linked to the cloud.

Celero’s Rescue-Rx capsule would reside in a user’s stomach for one week.Benjamin Pless/Celero Systems

“To our knowledge, this is the first time anyone has demonstrated the ability to accurately monitor human cardiac and respiratory signals from an ingestible device,” says Benjamin Pless, one of the study’s co-authors. “This was done using very low-power circuitry including a radio, microprocessor, and accelerometer along with software for distinguishing various physiological signals.”

Pless and colleagues from MIT and Harvard Medical School started Celero Systems to commercialize a modified version of that capsule, one that will also release an opioid antagonist after detecting respiratory depression. Pless, Celero’s CEO, says the team has successfully demonstrated the delivery of nalmefene, an opioid antagonist similar to Narcan, to rapidly reverse overdoses.

Celero’s next step is integrating the vitals-monitoring feature for human trials. The company’s final device, Rescue-Rx, is intended to stay in the stomach for one week before passing naturally. Pless says Rescue-Rx’s ingestible format will make the therapy cheaper and more accessible than wearable autoinjectors or implants.

Celero’s capsule can detect vital signs from within the stomach. www.youtube.com

Autonomous Delivery of Overdose Medication

Rescue-Rx isn’t the only autonomous drug-delivery project under development. A recent IEEE Transactions on Biomedical Circuits and Systems paper introduced a wrist-worn near-infrared spectroscopy sensor to detect low blood oxygen levels related to an overdose.

Purdue University biomedical engineering professor Hugh Lee and graduate student Juan Mesa, who both co-authored the study, say that while additional human experiments are necessary, the findings represent a valuable tool in counteracting the epidemic. “Our wearable device consistently detected low-oxygenation events, triggered alarms, and activated the circuitry designed to release the antidote through the implantable capsule,” they wrote in an email.

Lee and Purdue colleagues founded Rescue Biomedical to commercialize the A2D2 system, which includes a wristband and an implanted naloxone capsule that releases the drug if oxygen levels drop below 90 percent. Next, the team will evaluate the closed-loop system in mice.

This story was updated on 27 August 2024 to correct the name of Masimo’s Opioid Halo device.



  • Blood oxygen monitoring
  • Electrical nerve stimulation
  • Opioid addiction treatment
  • Opioids
  • Biometrics

health and food

Superconducting Wire Sets New Current Capacity Record



UPDATE 31 OCTOBER 2024: No. 1 no longer. The would-have-been groundbreaking study published in Nature Communications by Amit Goyal et al. claiming the world’s highest-performing high-temperature superconducting wires yet has been retracted by the authors.

The journal’s editorial statement that now accompanies the paper says that after publication, an error in the calculation of the reported performance was identified. All of the study’s authors agreed with the retraction.

The researchers were first alerted to the issue by Evgeny Talantsev at the Mikheev Institute of Metal Physics in Ekaterinburg, Russia, and Jeffery Tallon at the Victoria University of Wellington in New Zealand. In a 2015 study, the two researchers had suggested upper limits for thin-film superconductors, and Tallon notes follow-up papers showed these limits held for more than 100 known superconductors. “The Goyal paper claimed current densities 2.5 times higher, so it was immediately obvious to us that there was a problem here,” he says.

Upon request, Goyal and his colleagues “very kindly agreed to release their raw data and did so quickly,” Tallon says. He and Talantsev discovered a mistake in the conversion of magnetization units.

“Most people who had been in the game for a long time would be fully conversant with the units conversion because the instruments all deliver magnetic data in [centimeter-gram-second] gaussian units, so they always have to be converted to [the International System of Units],” Tallon says. “It has always been a little tricky, but students are asked to take great care and check their numbers against other reports to see if they agree.”

In a statement, Goyal notes he and his colleagues “intend to continue to push the field forward” by continuing to explore ways to enhance wire performance using nanostructural modifications. —Charles Q. Choi

Original article from 17 August, 2024 follows:

Superconductors have for decades spurred dreams of extraordinary technological breakthroughs, but many practical applications for them have remained out of reach. Now a new study reveals what may be the world’s highest-performing high-temperature superconducting wires yet, ones that carry 50 percent as much current as the previous record-holder. Scientists add this advance was achieved without increased costs or complexity to how superconducting wires are currently made.

Superconductors conduct electricity with zero resistance. Classic superconductors work only at super-cold temperatures below 30 degrees Kelvin. In contrast, high-temperature superconductors can operate at temperatures above 77 K, which means they can be cooled to superconductivity using comparatively inexpensive and less burdensome cryogenics built around liquid nitrogen coolant.

Regular electrical conductors all resist electron flow to some degree, resulting in wasted energy. The fact that superconductors conduct electricity without dissipating energy has long lead to dreams of significantly more efficient power grids. In addition, the way in which rivers of electric currents course through them means superconductors can serve as powerful electromagnets, for applications such as maglev trains, better MRI scanners for medicine, doubling the amount of power generated from wind turbines, and nuclear fusion power plants.

“Today, companies around the world are fabricating kilometer-long, high-temperature superconductor wires,” says Amit Goyal, SUNY Distinguished Professor and SUNY Empire Innovation Professor at the University of Buffalo in New York.

However, many large-scale applications for superconductors may stay fantasies until researchers can find a way to fabricate high-temperature superconducting wires in a more cost-effective manner.

In the new research, scientists have created wires that have set new records for the amount of current they can carry at temperatures ranging from 5 K to 77 K. Moreover, fabrication of the new wires requires processes no more complex or costly than those currently used to make high-temperature superconducting wires.

“The performance we have reported in 0.2-micron-thick wires is similar to wires almost 10 times thicker,” Goyal says.

At 4.2 K, the new wires carried 190 million amps per square centimeter without any externally applied magnetic field. This is some 50 percent better than results reported in 2022 and a full 100 percent better than ones detailed in 2021, Goyal and his colleagues note. At 20 K and under an externally applied magnetic field of 20 tesla—the kind of conditions envisioned for fusion applications—the new wires may carry about 9.3 million amps per square centimeter, roughly 5 times greater than present-day commercial high-temperature superconductor wires, they add.

Another factor key to the success of commercial high-temperature superconductor wires is pinning force—the ability to keep magnetic vortices pinned in place within the superconductors that could otherwise interfere with electron flow. (So in that sense higher pinning force values are better here—more conducive to the range of applications expected for such high-capacity, high-temperature superconductors.) The new wires showed record-setting pinning forces of more than 6.4 trillion newtons at 4.3 K under a 7 tesla magnetic field. This is more than twice as much as results previously reported in 2022.

The new wires are based on rare-earth barium copper oxide (REBCO). The wires use nanometer-sized columns of insulating, non-superconducting barium zirconate at nanometer-scale spacings within the superconductor that can help pin down magnetic vortices, allowing for higher supercurrents.

The researchers made these gains after a few years spent optimizing deposition processes, Goyal says. “We feel that high-temperature superconductor wire performance can still be significantly improved,” he adds. “We have several paths to get to better performance and will continue to explore these routes.”

Based on these results, high-temperature superconductor wire manufacturers “will hopefully further optimize their deposition conditions to improve the performance of their wires,” Goyal says. “Some companies may be able to do this in a short time.”

The hope is that superconductor companies will be able to significantly improve performance without too many changes to present-day manufacturing processes. “If high-temperature superconductor wire manufacturers can even just double the performance of commercial high-temperature superconductor wires while keeping capital equipment costs the same, it could make a transformative impact to the large-scale applications of superconductors,” Goyal says.

The scientists detailed their findings on 7 August in the journal Nature Communications.

This story was updated on 19 August 2024 to correct Amit Goyal’s title and affiliation.




health and food

New Device Listens for Blood Pressure



Blood pressure is one of the critical vital signs for health, but standard practice can only capture a snapshot, using a pressure cuff to squeeze arteries. Continuous readings are available, but only by inserting a transducer directly into an artery via a needle and catheter. Thanks to researchers at Caltech, however, it may soon be possible to measure blood pressure continuously at just about any part of the body.

In a paper published in July in PNAS Nexus, the researchers describe their resonance sonomanometry (RSM) approach to reading blood pressure. This new technology uses ultrasound to measure the dimensions of artery walls. It also uses sound waves to find resonant frequencies that can reveal the pressure within those walls via arterial wall tension. This information is sufficient to calculate the absolute pressure within the artery at any moment, without the need for calibration.

This last factor is important, as other non-invasive approaches only provide relative changes in blood pressure. They require periodic calibration using readings from a traditional pressure cuff. The RSM technology eliminates the need for calibration, making continuous readings more reliable.

How resonance sonomanometry works

The researchers’ RSM system uses an ultrasound transducer to measure the dimensions of the artery. It also transmits sound waves at different frequencies. The vibrations cause the arterial walls to move in and out in response, creating a distinct pattern of motion. When the resonant frequency is transmitted, the top and bottom of the artery will move in and out in unison.

This resonant frequency can be used to determine the tension of the artery walls. The tension in the walls is directly correlated with the fluid pressure of the blood within the artery. As a result, the blood pressure can be calculated at any instant based on the dimensions of the artery and its resonant frequency.

The researchers have validated this approach with both mockups and human subjects. They first tested the technology on an arterial model that used a thin-walled rubber tubing and a syringe to vary the pressure. They tested this mockup using multiple pressures and tubing of different diameters.

The researchers then took measurements with human subjects at their carotid arteries (located in the neck), using a standard pressure cuff to take intermittent measurements. The RSM technology was successful, and subsequently was also demonstrated on axillary (shoulder), brachial (arm), and femoral (leg) arteries. The readings were so clear that the researchers mention that they might even be able to detect blood pressure changes related to respiration and its impact on thoracic pressure.

Unlike traditional pressure cuff approaches, RSM provides data during the entire heartbeat cycle, and not just the systolic and diastolic extremes (In other words, the two numbers you receive during a traditional blood pressure measurement). And the fact that RSM works with different-sized arteries means that it should be applicable across different body sizes and types. Using ultrasound also eliminates possible complications such as skin coloration that can affect light-based devices.

The researchers tested their ultrasound-based blood pressure approach on subjects’ carotid arteries.Esperto Medical

“I’m a big fan of continuous monitoring; a yearly blood pressure reading in the doctor’s office is insufficient for decision making,” says Nick van Terheyden, M.D., the digital health leader with Iodine Software, a company providing machine learning technologies to improve healthcare insights. “A new approach based on good old rules of math and physics is an exciting development.”

The Caltech researchers have created a spinoff company, Esperto Medical, to develop a commercial product using RSM technology. The company has created a transducer module that is smaller than a deck of cards, making it practical to incorporate into a wearable armband. They hope to miniaturize the hardware to the point that it could be incorporated into a wrist-worn device. According to Raymond Jimenez, Esperto Medical’s chief technology officer, “this technology poses the potential to unlock accurate, calibration-free [blood pressure measurements] everywhere—in the clinic, at the gym, and even at home.”

It appears that there’s a significant market for such a product. “92 percent of consumers who intend to buy a wearable device are willing to pay extra for a health-related feature, and blood pressure ranks first among such features,” says Elizabeth Parks, the president of Internet of Things consulting firm Parks Associates.

In the future, rather than relying on arm-squeezing blood pressure cuffs, smart watches may be able to directly monitor blood pressure throughout the day, just as they already do for heart rate and other vital signs.




health and food

Cat's Eye Camera Can See Through Camouflage



Did that rock move, or is it a squirrel crossing the road? Tracking objects that look a lot like their surroundings is a big problem for many autonomous vision systems. AI algorithms can solve this camouflage problem, but they take time and computing power. A new camera designed by researchers in South Korea provides a faster solution. The camera takes inspiration from the eyes of a cat, using two modifications that let it distinguish objects from their background, even at night.

“In the future … a variety of intelligent robots will require the development of vision systems that are best suited for their specific visual tasks,” says Young Min Song, a professor of electrical engineering and computer science at Gwangju Institute of Science and Technology and one of the camera’s designers. Song’s recent research has been focused on using the “perfectly adapted” eyes of animals to enhance camera hardware, allowing for specialized cameras for different jobs. For example, fish eyes have wider fields of view as a consequence of their curved retinas. Cats may be common and easy to overlook, he says, but their eyes actually offer a lot of inspiration.

This particular camera copied two adaptations from cats’ eyes: their vertical pupils and a reflective structure behind their retinas. Combined, these allowed the camera to be 10 percent more accurate at distinguishing camouflaged objects from their backgrounds and 52 percent more efficient at absorbing incoming light.

Using a vertical pupil to narrow focus

While conventional cameras can clearly see the foreground and background of an image, the slitted pupils of a cat focus directly on a target, preventing it from blending in with its surroundings. Kim et al./Science Advances

In conventional camera systems, when there is adequate light, the aperture—the camera’s version of a pupil—is small and circular. This structure allows for a large depth of field (the distance between the closest and farthest objects in focus), clearly seeing both the foreground and the background. By contrast, cat eyes narrow to a vertical pupil during the day. This shifts the focus to a target, distinguishing it more clearly from the background.

The researchers 3D printed a vertical slit to use as an aperture for their camera. They tested the vertical slit using seven computer vision algorithms designed to track moving objects. The vertical slit increased contrast between a target object and its background, even if they were visually similar. It beat the conventional camera on five of the seven tests. For the two tests it performed worse than the conventional camera, the accuracies of the two cameras were within 10 percent of each other.

Using a reflector to gather additional light

Cats can see more clearly at night than conventional cameras due to reflectors in their eyes that bring extra light to their retinas.Kim et al./Science Advances

Cat eyes have an in-built reflector, called a tapetum lucidum, which sits behind the retina. It reflects light that passes through the retina back at it, so it can process both the incoming light and reflected light, giving felines superior night vision. You can see this biological adaptation yourself by looking at a cat’s eyes at night: they will glow.

The researchers created an artificial version of this biological structure by placing a silver reflector under each photodiode in the camera. Photodiodes without a reflector generated current when more than 1.39 watts per square meter of light fell on them, while photodiodes with a reflector activated with 0.007 W/m2 of light. That means the photodiode could generate an image with about 1/200th the light.

Each photodiode was placed above a reflector and joined by metal electrodes to create a curved image sensor.Kim et al./Science Advances

To decrease visual aberrations (imperfections in the way the lens of the camera focuses light), Song and his team opted to create a curved image sensor, like the back of the human eye. In such a setup, a standard image sensor chip won’t work, because it’s rigid and flat. Instead it often relies on many individual photodiodes arranged on a curved substrate. A common problem with such curved sensors is that they require ultrathin silicon photodiodes, which inherently absorb less light than a standard imager’s pixels. But reflectors behind each photodiode in the artificial cat’s eye compensated for this, enabling the researchers to create a curved imager without sacrificing light absorption.

Together, vertical slits and reflectors led to a camera that could see more clearly in the dark and isn’t fooled by camouflage. “Applying these two characteristics to autonomous vehicles or intelligent robots could naturally improve their ability to see objects more clearly at night and to identify specific targets more accurately,” says Song. He foresees this camera being used for self-driving cars or drones in complex urban environments.

Song’s lab is continuing to work on using biological solutions to solve artificial vision problems. Currently, they are developing devices that mimic how brains process images, hoping to one day combine them with their biologically-inspired cameras. The goal, says Song, is to “mimic the neural systems of nature.”

Song and his colleague’s work was published this week in the journal Science Advances.

This article appears in the November 2024 print issue.




health and food

Stretchy Wearables Can Now Heal Themselves



If you’ve ever tried to get a bandage to stick to your elbow, you understand the difficulty in creating wearable devices that attach securely to the human body. Add digital electronic circuitry, and the problem becomes more complicated. Now include the need for the device to fix breaks and damage automatically—and let’s make it biodegradable while we’re at it—and many researchers would throw up their hands in surrender.

Fortunately, an international team led by researchers at Korea University Graduate School of Converging Science and Technology (KU-KIST) persevered, and has developed conductor materials that it claims are stretchable, self-healing, and biocompatible. Their project was described this month in the journal Science Advances.

The biodegradable conductor offers a new approach to patient monitoring and delivering treatments directly to the tissues and organs where they are needed. For example, a smart patch made of these materials could measure motion, temperature, and other biological data. The material could also be used to create sensor patches that can be implanted inside the body, and even mounted on the surface of internal organs. The biocompatible materials can be designed to degrade after a period of time, eliminating the need for an invasive procedure to remove the sensor later.

“This new technology is a glimpse at the future of remote healthcare,” says Robert Rose, CEO of Rose Strategic Partners, LLC. “Remote patient monitoring is an industry still in its early stages, but already we are seeing the promise of what is not only possible, but close on the horizon. Imagine a device implanted at a surgical site to monitor and report your internal healing progress. If it is damaged, the device can heal itself, and when the job is done, it simply dissolves. It sounds like science fiction, but it’s now science fact.”

Self-healing elastics

After being cut a ribbonlike film was able to heal itself in about 1 minute.Suk-Won Hwang

The system relies on two different layers of flexible material, both self-healing: one is for conduction and the other is an elastomer layer that serves as a substrate to support the sensors and circuitry needed to collect data. The conductor layer is based on a substance known by the acronym PEDOT:PSS, which is short for Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate. It’s a conductive polymer widely used in making flexible displays and touch panels, as well as wearable devices. To increase the polymer’s conductivity and self-healing properties, the research team used additives including polyethylene glycol and glycol, which helped increase conductivity as well as the material’s ability to automatically repair damage such as cuts or tears.

In order to conform to curved tissues and survive typical body motion, the substrate layer must be extremely flexible. The researchers based it on elastomers that can match the shape of curved tissues, such as skin or individual organs.

These two layers stick to each other, thanks to chemical bonds that can connect the polymer chains of the plastic films in each layer. Combined, these materials create a system that is flexible and stretchable. In testing, the researchers showed that the materials could survive stretching up to 500 percent.

The self-healing function arises from the material’s ability to reconnect to itself when cut or otherwise damaged. This self-healing feature is based on a chemical process called disulfide metathesis. In short, polymer molecules containing pairs of linked sulfur atoms, called disulfides, have the ability to reform themselves after being severed. The phenomenon arises from a chemical process called disulfide-disulfide shuffling reactions, in which disulfide bonds in the molecule break and then reform, not necessarily between the original partners. According to the KU-KIST researchers, after being cut, their material was able to recover conductivity in its circuits within about two minutes without any intervention. The material was also tested for bending, twisting, and its ability to function both in air and under water.

This approach offers many advantages over other flexible electronics designs. For example, silver nanowires and carbon nanotubes have been used as the basis for stretchable devices, but they can be brittle and lack the self-healing properties of the KU-KIST materials. Other materials such as liquid metals can self-heal, but they are typically difficult to handle and integrate into wearable circuitry.

As a demonstration, the team created a multifunction sensor that included humidity, temperature, and pressure sensors that was approximately 4.5 square centimeters. In spite of being cut in four separate locations, it was able to heal itself and continue to provide sensor readings.

Implant tested in a rat

To take the demonstration a step further, the researchers created a 1.8-cm2 device that was attached to a rat’s bladder. The device was designed to wrap around the bladder and then adhere to itself, so no adhesives or sutures were required to attach the sensor onto the bladder. The team chose the bladder for their experiments because, under normal conditions, its size can change by 300 percent.

The device incorporated both electrodes and pressure sensors, which were able to detect changes in the bladder pressure. The electrodes could detect bladder voiding, through electromyography signals, as well as stimulate the bladder to induce urination. As with the initial demonstration, intentional damage to the device’s circuitry healed on its own, without intervention.

The biocompatible and biodegradable nature of the materials is important because it means that devices fabricated with them can be worn on the skin, as well as implanted within the body. The fact that the materials are biodegradable means that implants would not need a second surgical procedure to remove them. They could be left in place after serving their purpose, and they would be absorbed by the body.

According to Suk-Won Hwang, assistant professor at KU-KIST, a few hurdles remain on the path to commercialization. “We need to test the biocompatibility of some of the materials used in the conductor and substrate layers. While scalable production appears to be feasible, the high cost of disulfide derivatives might make the technology too expensive, aside from some special applications,” he says. “Biocompatibility testing and material synthesis optimization will take one to two years, at least.”




health and food

A Bendy RISC-V Processor



For the first time, scientists have created a flexible programmable chip that is not made of silicon. The new ultralow-power 32-bit microprocessor from U.K.-based Pragmatic Semiconductor and its colleagues can operate while bent, and can run machine learning workloads. The microchip’s open-source RISC-V architecture suggests it might cost less than a dollar, putting it in a position to power wearable healthcare electronics, smart package labels, and other inexpensive items, its inventors add.

For example, “we can develop an ECG patch that has flexible electrodes attached to the chest and a flexible microprocessor connected to flexible electrodes to classify arrhythmia conditions by processing the ECG data from a patient,” says Emre Ozer, senior director of processor development at Pragmatic, a flexible chip manufacturer in Cambridge, England. Detecting normal heart rhythms versus an arrhythmia “is a machine learning task that can run in software in the flexible microprocessor,” he says.

Flexible electronics have the potential for any application requiring interactions with soft materials, such as devices worn on or implanted within the body. Those applications could include on-skin computers, soft robotics, and brain-machine interfaces. But, conventional electronics are made of rigid materials such as silicon.

Open-source, Flexible, and Fast Enough

Pragmatic sought to create a flexible microchip that cost significantly less to make than a silicon processor. The new device, named Flex-RV, is a 32-bit microprocessor based on the metal-oxide semiconductor indium gallium zinc oxide (IGZO).

Attempts to create flexible devices from silicon require special packaging for the brittle microchips to protect them from the mechanical stresses of bending and stretching. In contrast, pliable thin-film transistors made from IGZO can be made directly at low temperatures onto flexible plastics, leading to lower costs.

The new microchip is based on the RISC-V instruction set. (RISC stands for reduced instruction set computer.) First introduced in 2010, RISC-V aims to enable smaller, lower-power, better-performing processors by slimming down the core set of instructions they can execute.

“Our end goal is to democratize computing by developing a license-free microprocessor,” Ozer says.

RISC-V’s is both free and open-source, letting chip designer dodge the costly licensing fees associated with proprietary architectures such as x86 and Arm. In addition, proprietary architectures offer limited opportunities to customize them, as adding new instructions is generally restricted. In contrast, RISC-V encourages such changes.

A bent Flex-RV microprocessor runs a program to print ‘Hello World’. Pragmatic Semiconductor

“We chose the Serv designed by Olof Kindgren... as the open source 32-bit RISC-V CPU when we designed Flex-RV,” Ozer says. “Serv is the smallest RISC-V processor in the open-source community.”

Other processors have been built using flexible semiconductors, such as Pragmatic’s 32-bit PlasticARM and an ultracheap microcontroller designed by engineers in Illinois. Unlike these earlier devices, Flex-RV is programmable and can run compiled programs written in high-level languages such as C. In addition, the open-source nature of RISC-V also let the researchers equip Flex-RV with a programmable machine learning hardware accelerator, enabling artificial intelligence applications.

Each Flex-RV microprocessor has a 17.5 square millimeter core and roughly 12,600 logic gates. The research team found Flex-RV could run as fast as 60 kilohertz while consuming less than 6 milliwatts of power.

All previous flexible non-silicon microprocessors were tested solely on the wafers they were made on. In contrast, Flex-RV was tested on flexible printed circuit boards, which let the researchers see how well it operated when flexed. The Pragmatic team found that Flex-RV could still execute programs correctly when bent to a curve with a radius of 3 millimeters. Performance varied between a 4.3 percent slowdown to a 2.3 percent speedup depending on the way it was bent. “Further research is needed to understand how bending conditions such as direction, orientation and angle impact performance at macro and micro scales,” Ozer says.

Silicon microchips can run at gigahertz speeds, much faster than Flex-RV, but that shouldn’t be a problem, according to Ozer. “Many sensors—for example, temperature, pressure, odor, humidity, pH, and so on—in the flexible electronics world typically operate very slowly at hertz or kilohertz regimes,” he says. “These sensors are used in smart packaging, labels and wearable healthcare electronics, which are the emerging applications for which flexible microprocessors will be useful. Running the microprocessor at 60 kHz would be more than enough to meet the requirements of these applications.”

Ozer and his team suggest each Flex-RV might cost less than a dollar. Although Ozer did not want to say how much less than a dollar it might cost, he says they are confident such low costs are possible “thanks to low-cost flexible chip fabrication technology by Pragmatic and a license-free RISC-V technology.”

The scientists detailed their findings online 25 September in the journal Nature.




health and food

Neuralink’s Blindsight Device Is Likely to Disappoint



Neuralink’s visual prosthesis Blindsight has been designated a breakthrough device by the U.S. Food and Drug Administration, which potentially sets the technology on a fast track to approval.

In confirming the news, an FDA spokesperson emphasized that the designation does not mean that Blindsight is yet considered safe or effective. Technologies in the program have potential to improve the current standard of care and are novel compared to what’s available on the market, but the devices still have to go through full clinical trials before seeking FDA approval.

Still, the announcement is a sign that Neuralink is moving closer to testing Blindsight in human patients. The company is recruiting people with vision loss for studies in the United States, Canada, and the United Kingdom.

Visual prostheses work by capturing visual information with a video camera, typically attached to glasses or a headset. Then a processor converts the data to an electrical signal that can be relayed to the nervous system. Retinal implants have been a common approach, with electrodes feeding the signal to nerves in the retina, at the back of the eye, from where it travels on to the brain. But Blindsight uses a brain implant to send the signal directly to neurons in the visual cortex.

In recent years, other companies developing artificial vision prosthetics have reached clinical research trials or beyond, only to struggle financially, leaving patients without support. Some of these technologies live on with new backing: Second Sight’s Orion cortical implant project is now in a clinical trial with Cortigent, and Pixium Vision’s Prima system is now owned by Science, with ex-Neuralink founder Max Hodak at the helm. No company has yet commercialized a visual prosthetic that uses a brain implant.

Elon Musk’s Claims About Blindsight

Very little information about Blindsight is publicly available. As of this writing, there is no official Blindsight page on the Neuralink website, and Neuralink did not respond to requests for comment. It’s also unclear how exactly Blindsight relates to a brain-computer interface that Neuralink has already implanted in two people with paralysis, who use their devices to control computer cursors.

Experts who spoke with IEEE Spectrum felt that, if judged against the strong claims made by Neuralink’s billionaire co-founder Elon Musk, Blindsight will almost certainly disappoint. However, some were still open to the possibility that Neuralink could successfully bring a device to market that can help people with vision loss, albeit with less dramatic effects on their sense of sight. While Musk’s personal fortune could help Blindsight weather difficulties that would end other projects, experts did not feel it was a guarantee of success.

After Neuralink announced on X (formerly Twitter) that Blindsight had received the breakthrough device designation, Musk wrote:

The Blindsight device from Neuralink will enable even those who have lost both eyes and their optic nerve to see.

Provided the visual cortex is intact, it will even enable those who have been blind from birth to see for the first time.

To set expectations correctly, the vision will be at first be [sic] low resolution, like Atari graphics, but eventually it has the potential be [sic] better than natural vision and enable you to see in infrared, ultraviolet or even radar wavelengths, like Geordi La Forge.

Musk included a picture of La Forge, a character from the science-fiction franchise Star Trek who wears a vision-enhancing visor.

Experts Puncture the Blindsight Hype

“[Musk] will build the best cortical implant we can build with current technology. It will not produce anything like normal vision. [Yet] it might produce vision that can transform the lives of blind people,” said Ione Fine, a computational neuroscientist at the University of Washington, who has written about the potential limitations of cortical implants, given the complexity of the human visual system. Fine previously worked for the company Second Sight.

A successful visual prosthetic might more realistically be thought of as assistive technology than a cure for blindness. “At best, we’re talking about something that’s augmentative to a cane and a guide dog; not something that replaces a cane and a guide dog,” said Philip Troyk, a biomedical engineer at the Illinois Institute of Technology.

Restoring natural vision is beyond the reach of today’s technology. But among Musks recent claims, Troyk says that a form of infrared sensing is plausible and has already been tested with one of his patients, who used it for help locating people within a room. That patient has a 400-electrode device implanted in the visual cortex as part of a collaborative research effort called the Intracortical Visual Prosthesis Project (ICVP). By comparison, Blindsight may have more than 1,000 electrodes, if it’s a similar device to Neuralink’s brain-computer interface.

Experts say they’d like more information about Neuralink’s visual prosthetic. “I’m leery about the fact that they are very superficial in their description of the devices,” said Gislin Dagnelie, a vision scientist at Johns Hopkins University who has been involved in multiple clinical trials for vision prosthetics, including a Second Sight retinal implant, and who is currently collaborating on the ICVP. “There’s no clear evaluation or pre-clinical work that has been published,” says Dagnelie. “It’s all based on: ‘Trust us, we’re Neuralink.’”

In the short term, too much hype could mislead clinical trial participants. It could also degrade interest in small but meaningful advancements in visual prosthetics. “Some of the [Neuralink] technology is exciting, and has potential,” said Troyk. “The way the messaging is being done detracts from that, potentially.”




health and food

Electrical Stitches Speed Wound Healing in Rats



Surgical stitches that generate electricity can help wounds heal faster in rats, a new study from China finds.

In the body, electricity helps the heart beat, causes muscles to contract, and enables the body to communicate with the brain. Now scientists are increasingly using electricity to promote healing with so-called electroceuticals. These electrotherapies often seek to mimic the electrical signals the body naturally uses to help new cells migrate to wounds to support the healing process.

In the new study, researchers focused on sutures, which are used to close wounds and surgical incisions. Despite the way in which medical devices have evolved rapidly over the years, sutures are generally limited in capability, says Zhouquan Sun, a doctoral candidate at Donghua University in Shanghai. “This observation led us to explore integrating advanced therapeutics into sutures,” Sun says.

Prior work sought to enhance sutures by adding drugs or growth factors to the stitches. However, most of these drugs either had insignificant effects on healing, or triggered side-effects such as allergic reactions or nausea. Growth factors in sutures often degraded before they could have any effect, or failed to activate entirely.

The research team that created the new sutures previously developed fibers for electronics for nearly 10 years for applications such as sensors. “This is our first attempt to apply fiber electronics in the biomedical field,” says Chengyi Hou, a professor of materials science and engineering at Donghua University.

Making Electrical Sutures Work

The new sutures are roughly 500 microns wide, or about five times the width of the average human hair. Like typical sutures, the new stitches are biodegradable, avoiding the need for doctors to remove the stitches and potentially cause more damage to a wound.

Each suture is made of a magnesium filament core wrapped in poly(lactic-co-glycolic) acid (PLGA) nanofibers, a commercially available, inexpensive, biodegradable polymer used in sutures. The suture also includes an outer sheath made of polycaprolactone (PCL), a biodegradable polyester and another common suture material.

Previously, electrotherapy devices were often bulky and expensive, and required wires connected to an external battery. The new stitches are instead powered by the triboelectric effect, the most common cause of static electricity. When two different materials repeatedly touch and then separate—in the case of the new suture, its core and sheath—the surface of one material can steal electrons from the surface of the other. This is why rubbing feet on a carpet or a running a comb through hair can build up electric charge.

A common problem sutures face is how daily movements may cause strain that reduce their efficacy. The new stitches take advantage of these motions to help generate electricity that helps wounds heal.

The main obstacle the researchers had to surmount was developing a suture that was both thin and strong enough to serve in medicine. Over the course of nearly two years, they tinkered with the molecular weights of the polymers they used and refined their fiber spinning technology to reduce their suture’s diameter while maintaining strength, Sun says.

In lab experiments on rats, the sutures generated about 2.3 volts during normal exercise. The scientists found the new sutures could speed up wound healing by 50 percent over the course of 10 days compared to conventional sutures. They also significantly lowered bacteria levels even without the use of daily wound disinfectants, suggesting they could reduce the risk of post-operation infections.

“Future research may delve deeper into the molecular mechanisms of how electrical stimulation facilitated would healing,” says Hui Wang, a chief physician at Shanghai Sixth People’s Hospital.

Further tests are needed in clinical settings to assess how effective these sutures are in humans. If such experiments prove successful, “this bioabsorbable electrically stimulating suture could change how we treat injuries in the future,” Hou says.

The scientists detailed their findings online 8 October in the journal Nature Communications.




health and food

Bluetooth Microscope Reveals the Inner Workings of Mice



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Any imaging technique that allows scientists to observe the inner workings of a living organism, in real-time, provides a wealth of information compared to experiments in a test tube. While there are many such imaging approaches in existence, they require test subjects—in this case rodents—to be tethered to the monitoring device. This limits the ability of animals under study to roam freely during experiments.

Researchers have recently designed a new microscope with a unique feature: It’s capable of transmitting real-time imaging from inside live mice via Bluetooth to a nearby phone or laptop. Once the device has been further miniaturized, the wireless connection will allow mice and other test subject animals to roam freely, making it easier to observe them in a more natural state.

“To the best of our knowledge, this is the first Bluetooth wireless microscope,” says Arvind Pathak, a professor at the Johns Hopkins University School of Medicine.

Through a series of experiments, Pathak and his colleagues demonstrate how the novel wireless microscope, called BLEscope, offers continuous monitoring of blood vessels and tumors in the brains of mice. The results are described in a study published 24 September in IEEE Transactions on Biomedical Engineering.

Microscopes have helped shed light on many biological mysteries, but the devices typically require that cells be removed from an organism and studied in a test tube. Any opportunity to study the biological process as it naturally occurs in the in the body (“in vivo”) tends to offer more useful and thorough information.

Several different miniature microscopes designed for in vivo experiments in animals exist. However, Pathak notes that these often require high power consumption or a wire to be tethered to the device to transmit the data—or both—which may restrict an animal’s natural movements and behavior.

“To overcome these hurdles, [Johns Hopkins University Ph.D. candidate] Subhrajit Das and our team designed an imaging system that operates with ultra-low power consumption—below 50 milliwatts—while enabling wireless data transmission and continuous, functional imaging at spatial resolutions of 5 to 10 micrometers in [rodents],” says Pathak.

The researchers created BLEscope using an off-the-shelf, low-power image sensor and microcontroller, which are integrated on a printed circuit board. Importantly, it has two LED lights of different colors—green and blue—that help create contrast during imaging.

“The BLE protocol enabled wireless control of the BLEscope, which then captures and transmits images wirelessly to a laptop or phone,” Pathak explains. “Its low power consumption and portability make it ideal for remote, real-time imaging.”

Pathak and his colleagues tested BLEscope in live mice through two experiments. In the first scenario, they added a fluorescent marker into the blood of mice and used BLEscope to characterize blood flow within the animals’ brains in real-time. In the second experiment, the researchers altered the oxygen and carbon dioxide ratios of the air being breathed in by mice with brain tumors, and were able to observe blood vessel changes in the fluorescently marked tumors.

“The BLEscope’s key strength is its ability to wirelessly conduct high-resolution, multi-contrast imaging for up to 1.5 hours, without the need for a tethered power supply,” Pathak says.

However, Pathak points out that the current prototype is limited by its size and weight. BLEscope will need to be further miniaturized, so that it doesn’t interfere with animals’ abilities to roam freely during experiments.

“We’re planning to miniaturize the necessary electronic components onto a flexible light-weight printed circuit board, which would reduce weight and footprint of the BLEscope to make it suitable for use on freely moving animals,” says Pathak.

This story was updated on 14 October 2024, to correct a statement about the size of the BLEscope.




health and food

This Eyewear Offers a Buckshot Method to Monitor Health



Emteq Labs wants eyewear to be the next frontier of wearable health technology.

The Brighton, England-based company introduced today its emotion-sensing eyewear, Sense. The glasses contain nine optical sensors distributed across the rims that detect subtle changes in facial expression with more than 93 percent accuracy when paired with Emteq’s current software. “If your face moves, we can capture it,” says Steen Strand, whose appointment as Emteq’s new CEO was also announced today. With that detailed data, “you can really start to decode all kinds of things.” The continuous data could help people uncover patterns in their behavior and mood, similar to an activity or sleep tracker.

Emteq is now aiming to take its tech out of laboratory settings with real-world applications. The company is currently producing a small number of Sense glasses, and they’ll be available to commercial partners in December.

The announcement comes just weeks after Meta and Snap each unveiled augmented reality glasses that remain in development. These glasses are “far from ready,” says Strand, who led the augmented reality eyewear division while working at Snap from 2018 to 2022. “In the meantime, we can serve up lightweight eyewear that we believe can deliver some really cool health benefits.”

Fly Vision Vectors

While current augmented reality (AR) headsets have large battery packs to power the devices, glasses require a lightweight design. “Every little bit of power, every bit of weight, becomes critically important,” says Strand. The current version of Sense weighs 62 grams, slightly heavier than the Ray-Ban Meta smart glasses, which weigh in at about 50 grams.

Because of the weight constraints, Emteq couldn’t use the power-hungry cameras typically used in headsets. With cameras, motion is detected by looking at how pixels change between consecutive images. The method is effective, but captures a lot of redundant information and uses more power. The eyewear’s engineers instead opted for optical sensors that efficiently capture vectors when points on the face move due to the underlying muscles. These sensors were inspired by the efficiency of fly vision. “Flies are incredibly efficient at measuring motion,” says Emteq founder and CSO Charles Nduka. “That’s why you can’t swat the bloody things. They have a very high sample rate internally.”

Sense glasses can capture data as often as 6,000 times per second. The vector-based approach also adds a third dimension to a typical camera’s 2D view of pixels in a single plane.

These sensors look for activation of facial muscles, and the area around the eyes is an ideal spot. While it’s easy to suppress or force a smile, the upper half of our face tends to have more involuntary responses, explains Nduka, who also works as a plastic surgeon in the United Kingdom. However, the glasses can also collect information about the mouth by monitoring the cheek muscles that control jaw movements, conveniently located near the lower rim of a pair of glasses. The data collected is then transmitted from the glasses to pass through Emteq’s algorithms in order to translate the vector data into usable information.

In addition to interpreting facial expressions, Sense can be used to track food intake, an application discovered by accident when one of Emteq’s developers wore the glasses while eating breakfast. By monitoring jaw movement, the glasses detect when a user chews and how quickly they eat. Meanwhile, a downward-facing camera photographs the food, and a large language model identifies what’s in the photo, effectively making food logging a passive activity. Currently, Emteq uses an instance of OpenAI’s GPT-4 for this step, but the company plans to build its own model in the future. Other applications, including monitoring physical activity and posture, are also in development.
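As a rough illustration of the chew-detection idea, the sketch below counts prominent peaks in a one-dimensional jaw-motion trace to estimate chewing rate. The signal model, sampling rate, and thresholds are all assumptions for illustration; Emteq’s actual pipeline is proprietary and almost certainly more sophisticated.

# Minimal chew-detection sketch: count prominent peaks in a jaw-motion
# signal. All numbers here are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def chews_per_minute(jaw_signal: np.ndarray, fs: float) -> float:
    """Estimate chewing rate by counting prominent peaks in the trace."""
    # Chewing is roughly 0.5-2.5 Hz, so require a minimum gap between peaks.
    min_gap = int(fs / 2.5)
    peaks, _ = find_peaks(jaw_signal, prominence=0.2, distance=min_gap)
    duration_min = len(jaw_signal) / fs / 60.0
    return len(peaks) / duration_min

# Synthetic test: ~1.5 chews per second for 10 seconds, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
trace = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
print(f"{chews_per_minute(trace, fs):.0f} chews/min")  # ~90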

One Platform, Many Uses

Nduka believes Emteq’s glasses represent a “fundamental technology,” similar to how the accelerometer is used for a host of applications in smartphones, including managing screen orientation, tracking activity, and even revealing infrastructure damage.

Similarly, Emteq has chosen to develop the technology as a general facial data platform for a range of uses. “If we went deep on just one, it means that all the other opportunities that can be helped—especially some of those rarer use cases—they’d all be delayed,” says Nduka. For example, Nduka is passionate about developing a tool to help those with facial paralysis. But a specialized device for those patients would have high unit costs and be unaffordable for the target user. Allowing more companies to use Emteq’s intellectual property and algorithms will bring down cost.

In this buckshot approach, the general target for Sense’s potential use cases is health applications. “If you look at the history of wearables, health has been the primary driver,” says Strand. The same may be true for eyewear, and he says there’s potential for diet and emotional data to be “the next pillar of health” after sleep and physical activity.

How the data is delivered is still to be determined. In some applications, it could provide real-time feedback—for instance, vibrating to remind the user to slow down while eating. Or the data could go only to health professionals, collecting a week’s worth of at-home measurements for patients with mental health conditions, which Nduka notes largely lack objective measures. (As a medical device for treating diagnosed conditions, Sense would have to go through a more intensive regulatory process.) While some users are hungry for more data, others may require a “much more gentle, qualitative approach,” says Strand. Emteq plans to work with expert providers to package information appropriately for users.
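The eating-pace example suggests a simple control rule. Here is one hypothetical way to implement it, with hysteresis so the glasses don’t buzz on every noisy reading; the class name and thresholds are invented for illustration and are not Emteq’s.

# Hypothetical pacing-feedback rule with hysteresis. Thresholds are
# illustrative, not Emteq's.
class PacingCoach:
    def __init__(self, high: float = 80.0, low: float = 65.0):
        self.high = high        # chews/min that switches the nudge on
        self.low = low          # chews/min that switches it back off
        self.alerting = False

    def update(self, chews_per_min: float) -> bool:
        """Return True while the wearer should be nudged to slow down."""
        if self.alerting:
            self.alerting = chews_per_min > self.low
        else:
            self.alerting = chews_per_min > self.high
        return self.alerting

coach = PacingCoach()
print(coach.update(92))  # True: pace too fast, start nudging
print(coach.update(70))  # True: still above the "off" threshold
print(coach.update(60))  # False: pace has settled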

Interpreting the data must be done with care, says Vivian Genaro Motti, an associate professor at George Mason University who leads the Human-Centric Design Lab. What expressions mean can vary with cultural and demographic factors, and “we need to take into account that people sometimes respond to emotions in different ways,” Motti says. With little regulation of wearable devices, she says, it’s also important to ensure privacy and protect user data. Motti raises these concerns precisely because she sees promise in the device: “If this is widespread, it’s important that we think carefully about the implications.”

Privacy is also a concern for Edward Sazonov, a professor of electrical and computer engineering at the University of Alabama who developed a similar dietary-tracking device in his lab. The camera mounted on Emteq’s glasses could pose issues, both for the privacy of bystanders and for the user’s own personal information. Many people eat in front of their computer or cell phone, so sensitive data may be in view.

For technology like Sense to be adopted, Sazonov says questions about usability and privacy concerns must first be answered. “Eyewear-based technology has potential for a great future—if we get it right.”




health and food

Dean Kamen Says Inventing Is Easy, but Innovating Is Hard



This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

Over the past 20 years, technological advances have enabled inventors to go from strength to strength. And yet, according to the legendary inventor Dean Kamen, innovation has stalled. Kamen made a name for himself with inventions including the first portable insulin pump for diabetics, an advanced wheelchair that can climb steps, and the Segway mobility device. Here, he talks about his plan for enabling innovators.

How has inventing changed since you started in the 1990s?

Dean Kamen: Kids all over the world can now be inventing in the world of synthetic biology the way we played with Tinkertoys and Erector Sets and Lego. I used to put pins and smelly formaldehyde in frogs in high school. Today in high school, kids will do experiments that would have won you the Nobel Prize in Medicine 40 years ago. But none of those kids are likely in any short time to be on the market with a pharmaceutical that will have global impact. Today, while invention is getting easier and easier, I think there are some aspects of innovation that have gotten much more difficult.

Can you explain the difference?

Kamen: Most people think those two words mean the same thing. Invention is coming up with an idea or a thing or a process that has never been done that way before. [Thanks to] more access to technology and 3D printers and simulation programs and virtual ways to make things, the threshold to be able to create something new and different has dramatically lowered.

Historically, inventions were only the starting point to get to innovation. And I’ll define an innovation as something that reached a scale where it impacted a piece of the world, or transformed it: the wheel, steam, electricity, Internet. Getting an invention to the scale it needs to be to become an innovation has gotten easier—if it’s software. But if it’s sophisticated technology that requires mechanical or physical structure in a very competitive world? It’s getting harder and harder to do due to competition, due to global regulatory environments.

[For example,] in proteomics [the study of proteins] and genomics and biomedical engineering, the invention part is, believe it or not, getting a little easier because we know so much, because there are development platforms now to do it. But getting a biotech product cleared by the Food and Drug Administration is getting more expensive and time consuming, and the risks involved are making the investment community much more likely to invest in the next version of Angry Birds than curing cancer.

A lot of ink has been spilled about how AI is changing inventing. Why hasn’t that helped?

Kamen: AI is an incredibly valuable tool, as long as the value you’re looking for is the ability to collect massive amounts of data and process it effectively. That’s very different from what a lot of people believe, which is that AI is inventing and creating new and different ideas from whole cloth.

How are you using AI to help with innovation?

Kamen: Every medical school has incredibly brilliant professors and grad students with petri dishes. “Look, I can make nephrons. We can grow people a new kidney. They won’t need dialysis.” But they only have petri dishes full of the stuff. And the scale they need is hundreds and hundreds of liters.

I started a not-for-profit called ARMI—the Advanced Regenerative Manufacturing Institute—to help make it practical to manufacture human cells, tissues, and organs. We are using artificial intelligence to speed up our development processes and eliminate going down frustratingly long and expensive [dead-end] paths. We figure out how to bring tissue manufacturing to scale. We build the bioreactors, sensor technologies, robotics, and controls. We’re going to put them together and create an industry that can manufacture hundreds of thousands of replacement kidneys, livers, pancreases, lungs, blood, bone, you name it.

So ARMI’s purpose is to help would-be innovators?

Kamen: We are not going to make a product. We’re not even going to make a whole company. We’re going to create baseline core technologies that will enable all sorts of products and companies to emerge to create an entire new industry. It will be an innovation in health care that will lower costs because cures are much cheaper than chronic treatments. We have to break down the barriers so that these fantastic inventions can become global innovations.

This article appears in the November 2024 print issue as “The Inventor’s Inventor.”




health and food

Crop Parasites Can Be Deterred by “Electric Fences”



Imagine you’re a baby cocoa plant, just unfurling your first tentative roots into the fertile, welcoming soil.

Somewhere nearby, a predator stirs. It has no ears to hear you, no eyes to see you. But it knows where you are, thanks in part to the weak electric field emitted by your roots.

It is microscopic, but it’s not alone. By the thousands, the creatures converge, slithering through the waterlogged soil, propelled by their flagella. If they reach you, they will use fungal-like hyphae to penetrate and devour you from the inside. They’re getting closer. You’re a plant. You have no legs. There’s no escape.

But just before they fall upon you, they hesitate. They seem confused. Then, en masse, they swarm off in a different direction, lured by a more attractive electric field. You are safe. And they will soon be dead.

If Eleonora Moratto and Giovanni Sena get their way, this is the future of crop pathogen control.

Many variables are involved in the global food crisis, but among the worst are the pests that devastate food crops, ruining up to 40 percent of their yield before they can be harvested. One of these—the little protist in the example above, an oomycete formally known as Phytophthora palmivora—has a US $1 billion appetite for economic staples like cocoa, palm, and rubber.

There is currently no chemical defense that can vanquish these creatures without poisoning the rest of the (often beneficial) organisms living in the soil. So Moratto, Sena, and their colleagues at Imperial College London settled on a nontraditional approach: They exploited P. palmivora’s electric sense, which can be spoofed.

All plant roots that have been measured to date generate an external ion flux, which translates into a very weak electric field. Decades of evidence suggest that this signal is an important target for predators’ navigation systems, though how heavily predators rely on plants’ electrical signatures, as opposed to chemical or mechanical cues, remains a matter of debate. Last year, Moratto and Sena’s group found that P. palmivora spores are attracted to the positive electrode of a cell generating current densities of 1 ampere per square meter. “The spores followed the electric field,” says Sena, suggesting that a similar mechanism helps them find the natural bioelectric fields emitted by roots in the soil.

That got the researchers wondering: Might such an artificial electric field override the protists’ other sensory inputs, and scramble their compasses as they tried to use plant roots’ much weaker electrical output?

To test the idea, the researchers developed two ways to protect plant roots using a constant vertical electric field. They cultivated two common snacks for P. palmivora—a flowering plant related to cabbage and mustard, and a legume often used as a livestock feed plant—in tubes in a hydroponic solution.

Two electric-field configurations were tested: a “global” vertical field [left] and a field generated by two small nearby electrodes. The global field proved to be slightly more effective. Eleonora Moratto

In the first assay, the researchers sandwiched the plant roots between rows of electrodes above and below, which completely engulfed them in a “global” vertical field. In the second, the field was generated by two small electrodes a short distance from the plant, creating current densities on the order of 10 A/m². Then they unleashed the protists.
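For a rough sense of scale, treat the hydroponic medium as a simple ohmic conductor, so the current density J and the field strength E are related by J = σE. Assuming a typical nutrient-solution conductivity of about 0.2 siemens per meter (a value we assume for illustration; the study may use a different medium), the local-electrode setup implies

\[
E = \frac{J}{\sigma} \approx \frac{10\ \mathrm{A/m^2}}{0.2\ \mathrm{S/m}} = 50\ \mathrm{V/m},
\]

and the 1 A/m² used in last year’s spore experiments corresponds to roughly 5 V/m. Modest as these fields are, they are evidently strong enough to drown out a root’s far weaker electrical signature.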

Compared with the control group, both methods diverted a significant portion of the predators away from the plant roots. They swarmed the positive electrode, where—since zoospores can’t survive for longer than about 2 to 3 hours without a host—they presumably starved to death. Or worse. Neil Gow, whose research presented some of the first evidence for zoospore electrosensing, has other theories about their fate. “Applied electrical fields generate toxic products and steep pH gradients near and around the electrodes due to the electrolysis of water,” he says. “The tropism towards the electrode might be followed by killing or immobilization due to the induced pH gradients.”

Not only did the technique prevent infestation, but some evidence indicates that it may also mitigate existing infections. The researchers published their results in August in Scientific Reports.

The global electric field was marginally more successful than the local one, but it would be harder to translate from lab conditions into a (literal) field trial in soil. The local setup would be easy to replicate: “All you have to do is stick the little plug into the soil next to the crop you want to protect,” says Sena.

Moratto and Sena say this is a proof of concept that demonstrates a basis for a new, pesticide-free way to protect food crops. (Sena likens the technique to the decoys used by fighter jets to draw away incoming missiles by mimicking the signals of the original target.) They are now looking for funding to expand the project. The first step is testing the local setup in soil; the next is to test the approach on Phytophthora infestans, a meaner, scarier cousin of P. palmivora.

P. infestans attacks a more varied diet of crops—you may be familiar with its work during the Irish potato famine. Its close genetic similarity to P. palmivora makes it another promising candidate for electrical pest control. This next investigation, however, may require more funding, because P. infestans research can be undertaken only under more stringent laboratory biosecurity protocols.

The work at Imperial ties into the broader—and somewhat charged—debate around electrostatic ecology; that is, the extent to which creatures including ticks make use of heretofore poorly understood electrical mechanisms to orient themselves and in other ways enhance their survival. “Most people still aren’t aware that naturally occurring electricity can play an ecological role,” says Sam England, a behavioral ecologist with Berlin’s Natural History Museum. “So I suspect that once these electrical phenomena become more well known and understood, they will inspire a greater number of practical applications like this one.”




health and food

Gandhi Inspired a New Kind of Engineering



This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

The teachings of Mahatma Gandhi were arguably India’s greatest contribution to the 20th century. Raghunath Anant Mashelkar has borrowed some of that wisdom to devise a frugal new form of innovation he calls “Gandhian engineering.” Coming from humble beginnings, Mashelkar is driven to ensure that the benefits of science and technology are shared more equally. He sums up his philosophy with the epigram “more from less for more.” This engineer has led India’s preeminent R&D organization, the Council of Scientific and Industrial Research, and he has advised successive governments.

What was the inspiration for Gandhian engineering?

Raghunath Anant Mashelkar: There are two quotes of Gandhi’s that were influential. The first was, “The world has enough for everyone’s need, but not enough for everyone’s greed.” He was saying that when resources are exhaustible, you should get more from less. He also said the benefits of science must reach all, even the poor. If you put them together, it becomes “more from less for more.”

My own life experience inspired me, too. I was born to a very poor family, and my father died when I was six. My mother was illiterate and brought me to Mumbai in search of a job. Two meals a day was a challenge, and I walked barefoot until I was 12 and studied under streetlights. So it also came from my personal experience of suffering because of a lack of resources.

How does Gandhian engineering differ from existing models of innovation?

Mashelkar: Conventional engineering is market or curiosity driven, but Gandhian engineering is application and impact driven. We look at the end user and what we want to achieve for the betterment of humanity.

Most engineering is about getting more from more. Take an iPhone: They keep creating better models and charging higher prices. For the poor it is less from less: Conventional engineering looks at removing features as the only way to reduce costs.

In Gandhian engineering, the idea is not to create affordable [second-rate] products, but to make high technology work for the poor. So we reinvent the product from the ground up. While the standard approach aims for premium price and high margins, Gandhian engineering will always look at affordable price, but high volumes.

The Jaipur foot is a light, durable, and affordable prosthetic. Gurinder Osan/AP

What is your favorite example of Gandhian engineering?

Mashelkar: My favorite is the Jaipur foot. Normally, a sophisticated prosthetic foot costs a few thousand dollars, but the Jaipur foot does it for [US] $20. And it’s very good technology; there is a video of a person wearing a Jaipur foot climbing a tree, and you can see the flexibility is like a normal foot. Then he runs one kilometer in 4 minutes, 30 seconds.

What is required for Gandhian engineering to become more widespread?

Mashelkar: In our young people, we see innovation and we see passion, but compassion is the key. We also need more soft funding [grants or zero-interest loans], because venture capital companies often turn out to be “vulture capital” in a way, because they want immediate returns.

We need a shift in the mindset of businesses—they can make money not just from premium products for those at the top of the pyramid, but also products with affordable excellence designed for large numbers of people.

This article appears in the November 2024 print issue as “The Gandhi Inspired Inventor.”




health and food

For this Stanford Engineer, Frugal Invention Is a Calling



Manu Prakash spoke with IEEE Spectrum shortly after returning to Stanford University from a month aboard a research vessel off the coast of California, where he was testing tools to monitor oceanic carbon sequestration. The associate professor conducts fieldwork around the world to better understand the problems he’s working on, as well as the communities that will be using his inventions.

This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

Prakash develops imaging instruments and diagnostic tools, often for use in global health and environmental sciences. His devices typically cost radically less than conventional equipment—he aims for reductions of two or more orders of magnitude. Whether he’s working on pocketable microscopes, mosquito or plankton monitors, or an autonomous malaria diagnostic platform, Prakash always includes cost and access as key aspects of his engineering. He calls this philosophy “frugal science.”

Why should we think about science frugally?

Manu Prakash: To me, when we are trying to ask and solve problems and puzzles, it becomes important: In whose hands are we putting these solutions? A frugal approach to solving the problem is the difference between 1 percent of the population or billions of people having access to that solution.

Lack of access creates barriers in people’s minds, where they think they can or cannot approach a kind of problem. It’s important that we as scientists, or just as citizens of this world, create an environment in which anybody feels they have a chance to make important inventions and discoveries if they put their heart into it. The entrance to all of that depends on tools, but those tools are inaccessible.

How did you first encounter the idea of “frugal science”?

Prakash: I grew up in India and lived with very little access to things. And I got my Ph.D. at MIT. I was thinking about this stark difference in worlds that I had seen and lived in, so when I started my lab, it was almost a commitment to [asking]: What does it mean when we make access one of the critical dimensions of exploration? So, I think a lot of the work I do is primarily driven by curiosity, but access brings another layer of intellectual curiosity.

How do you identify a problem that might benefit from frugal science?

Prakash: Frankly, it’s hard to find a problem that would not benefit from access. The question to ask is “Where are the neglected problems that we as a society have failed to tackle?” We do a lot of work in diagnostics. A lot [of our solutions] beat the conventional methods that are neither cost effective nor any good. It’s not about cutting corners; it’s about deeply understanding the problem—better solutions at a fraction of the cost. It does require invention. For that order of magnitude change, you really have to start fresh.

Where does your involvement with an invention end?

Prakash: Inventions are part of our soul. Your involvement never ends. I just designed the 415th version of Foldscope [a low-cost “origami” microscope]. People only know it as version 3. We created Foldscope a long time ago; then I realized that nobody was going to provide access to it. So we went back and invented the manufacturing process for Foldscope to scale it. We made the first 100,000 Foldscopes in the lab, which led to millions of Foldscopes being deployed.

So it’s continuous. If people are scared of this, they should never invent anything [laughs], because once you invent something, it’s a lifelong project. You don’t put it aside; the project doesn’t put you aside. You can try to, but that’s not really possible if your heart is in it. You always see problems. Nothing is ever perfect. That can be ever consuming. It’s hard. I don’t want to minimize this process in any way or form.




health and food

Scary Stories: Establishing a Field Amid Skepticism



In the spirit of the Halloween season, IEEE Spectrum presents a pair of stories that—although grounded in scientific truth rather than the macabre—were no less harrowing for those who lived them. In today’s installment, Robert Langer had to push back against his field’s conventional wisdom to pioneer a drug-delivery mechanism vital to modern medicine.

Nicknamed the Edison of Medicine, Robert Langer is one of the world’s most-cited researchers, with over 1,600 published papers, 1,400 patents, and a top-dog role as one of MIT’s nine prestigious Institute Professors. Langer pioneered the now-ubiquitous drug delivery systems used in modern cancer treatments and vaccines, indirectly saving countless lives throughout his 50-year career.

But, much like Edison and other inventors, Langer’s big ideas were initially met with skepticism from the scientific establishment.

He came up in the 1970s as a chemical engineering postdoc working in the lab of Dr. Judah Folkman, a pediatric surgeon at the Boston Children’s Hospital. Langer was tasked with solving what many believed was an impossible problem—isolating angiogenesis inhibitors to halt cancer growth. Folkman’s vision of stopping tumors from forming their own self-sustaining blood vessels was compelling enough, but few believed it possible.

Langer encountered both practical and social challenges before his first breakthrough. One day, a lab technician accidentally spilled six months’ worth of samples onto the floor, forcing him to repeat the painstaking process of dialyzing extracts. Those months of additional work steered Langer’s development of novel microspheres that could deliver large molecules of medicine directly to tumors.

In the 1970s, Langer developed these tiny microspheres to release large molecules through solid materials, a groundbreaking proof of concept for drug delivery. Robert Langer

Langer then submitted the discovery to prestigious journals and was invited to speak at a conference in Michigan in 1976. He practiced the 20-minute presentation for weeks, hoping for positive feedback from respected materials scientists. But when he stepped off the podium, a group approached him and said bluntly, “We don’t believe anything you just said.” They insisted that macromolecules were simply too large to pass through solid materials, and that his choice of organic solvents would destroy many of the fragile molecules involved. Conventional wisdom said so.

Nature published Langer’s paper three months later, demonstrating for the first time that non-inflammatory polymers could enable the sustained release of proteins and other macromolecules. The same year, Science published his work on isolating a factor that inhibits tumor growth.

Langer and Folkman’s research paved the way for modern drug delivery. MIT and Boston Children’s Hospital

Even with impressive publications, Langer still struggled to secure funding for his work on controlling macromolecule delivery, isolating the first angiogenesis inhibitors, and testing their behavior. His first two grant proposals were rejected on the same day, a devastating blow for a young academic. The reviewers dismissed him as “just an engineer” who knew nothing about cancer or biology. One colleague tried to cheer him up, saying, “It’s probably good those grants were rejected early in your career. Since you’re not supporting any graduate students, you don’t have to let anyone go.” Langer thought the colleague was probably right, but the rejections still stung.

His patent applications, filed alongside Folkman at Boston Children’s Hospital, were rejected five years in a row. After all, it’s difficult to prove you’ve got something good if you’re the only one doing it. Langer remembers feeling disappointed but not entirely crushed. Eventually, other scientists cited his findings and expanded upon them, giving Langer and Folkman the validation needed for intellectual property development. As of this writing, the pair’s two studies from 1976 have been cited nearly 2,000 times.

As the head of MIT’s Langer Lab, he often shares these same stories of rejection with early-career students and researchers. He leads a team of over 100 undergrads, grad students, postdoctoral fellows, and visiting scientists, all finding new ways to deliver genetically engineered proteins, DNA, and RNA, among other research areas. Langer’s reputation is further bolstered by the many successful companies he co-founded or advised, like mRNA leader Moderna, which rose to prominence after developing its widely used COVID-19 vaccine.

Langer sometimes thinks back to those early days—the shattered samples, the cold rejections, and the criticism from senior scientists. He maintains that “Conventional wisdom isn’t always correct, and it’s important to never give up—(almost) regardless of what others say.”




health and food

Biden Administration Should Prioritize Fight Against Superbugs

The Pew Charitable Trusts joined dozens of research, health care, and nonprofit stakeholders in urging President-elect Joe Biden to prioritize and strengthen the national response to antibiotic resistance.




health and food

Medications for Opioid Use Disorder Improve Patient Outcomes

In 2018, opioid overdoses in the United States caused one death every 11 minutes, resulting in nearly 47,000 fatalities. The most effective treatments for opioid use disorder (OUD) are three medications approved by the Food and Drug Administration (FDA): methadone, buprenorphine, and naltrexone.




health and food

Researchers Explore How the Human Body Senses Temperature

As winter arrives and daylight hours decrease, it gets easier to hit the snooze button and stay in bed. It turns out that there’s a scientific reason behind this phenomenon that helps to explain why people struggle to adjust their internal clocks—also known as circadian rhythm or sleep-wake cycle—when the weather turns colder.




health and food

Relaxed Federal Regulations Amid Pandemic Can Help Jails Better Treat Opioid Use Disorder

Few correctional facilities in the United States have treatment programs for individuals with opioid use disorder (OUD), despite clear evidence that certain medications reduce the risk of overdose and death. Even in facilities where treatment is available, the COVID-19 pandemic has complicated efforts to provide such care.




health and food

Pew Urges Federal Government to Prioritize Better Exchange of Health Data

The Pew Charitable Trusts sent comments Jan. 4 to the Office of the National Coordinator for Health Information Technology (ONC) and the Centers for Medicare & Medicaid Services (CMS) urging them to support the easy exchange of individuals’ health records through a pair of regulations.