Is the Clinical Application of CXCR4 Imaging in the Diagnosis and Management of Primary Aldosteronism Really Happening?




Arrhythmias in Nongranulomatous Myocarditis: Is There a Role for PET?




Precautions to Consider in the Analysis of Prognostic and Predictive Indices

Understanding the differences between prognostic and predictive indices is imperative for advances in medical research. We have developed a new prognostic measure, and here we identify its strengths, limitations, and potential applications in clinical practice.




Best Patient Care Practices for Administering PSMA-Targeted Radiopharmaceutical Therapy

Optimal patient management protocols for metastatic castration-resistant prostate cancer (mCRPC) are poorly defined and further complicated by new therapy approvals, such as radiopharmaceuticals. The prostate-specific membrane antigen (PSMA)–targeted agent 177Lu vipivotide tetraxetan ([177Lu]Lu-PSMA-617), approved after the phase III VISION study, presents physicians with additional aspects of patient management, including specific adverse event (AE) monitoring and management, as well as radiation safety. Drawing on our experience as VISION study investigators, here we provide guidance on best practices for delivering PSMA-targeted radiopharmaceutical therapy (RPT) to patients with mCRPC. After a comprehensive review of published evidence and guidelines on RPT management in prostate cancer, we identified educational gaps in managing the radiation safety and AEs associated with [177Lu]Lu-PSMA-617. Our results showed that providing sufficient education on AEs (e.g., fatigue and dry mouth) and radiation safety principles is key to effective delivery and management of patient expectations. Patient counseling by health care professionals, across disciplines, is a cornerstone of optimal patient management during PSMA-targeted RPT. Multidisciplinary collaboration is crucial, and physicians must adhere to radiation safety protocols and counsel patients on radiation safety considerations. Treatment with [177Lu]Lu-PSMA-617 is generally well tolerated; however, additional interventions may be required, such as dosing modification, medications, or transfusions. Urinary incontinence can be challenging in the context of radiation safety. Multidisciplinary collaboration between medical oncologists and nuclear medicine teams ensures that patients are monitored and managed safely and efficiently. In clinical practice, the benefit-to-risk ratio should always be evaluated on a case-by-case basis.




U.S. Imaging Costs: Michal Horny Talks with Ken Herrmann and Johannes Czernin About the Changing Contribution of Medical Imaging to Health Care Costs




The Costs to Our Patients




This Month in JNM




Healthcare avoidance during the early stages of the COVID-19 pandemic and all-cause mortality: a longitudinal community-based study

Background: During the COVID-19 pandemic, global trends of reduced healthcare-seeking behaviour were observed. This raises concerns about the consequences of healthcare avoidance for population health.
Aim: To determine the association between healthcare avoidance during the early stages of the COVID-19 pandemic and all-cause mortality.
Design and setting: A 32-month follow-up within the population-based Rotterdam Study, after sending a COVID-19 questionnaire at the onset of the pandemic in April 2020 to all community-dwelling participants (n = 6241/8732, response rate 71.5%).
Method: Cox proportional hazards models assessed the risk of all-cause mortality among respondents who avoided health care because of the COVID-19 pandemic. Mortality status was collected through municipality registries and medical records.
Results: Of 5656 respondents, one-fifth avoided health care because of the COVID-19 pandemic (n = 1143). Compared with non-avoiders, those who avoided health care more often reported symptoms of depression (n = 357, 31.2% versus n = 554, 12.3%) and anxiety (n = 340, 29.7% versus n = 549, 12.2%), and more often rated their health as poor to fair (n = 336, 29.4% versus n = 457, 10.1%). Those who avoided health care had an increased adjusted risk of all-cause mortality (hazard ratio [HR] 1.30, 95% confidence interval [CI] = 1.01 to 1.67), which remained nearly identical after adjustment for history of any non-communicable disease (HR 1.20, 95% CI = 0.93 to 1.54). However, this association attenuated after additional adjustment for mental and physical self-perceived health factors (HR 0.93, 95% CI = 0.71 to 1.20).
Conclusion: This study found an increased risk of all-cause mortality among individuals who avoided health care during COVID-19. These individuals were characterised by poor mental and physical self-perceived health. Therefore, interventions should be targeted at these vulnerable individuals to safeguard their access to primary and specialist care, to limit health disparities during and beyond healthcare crises.
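For readers curious how this kind of covariate-adjusted survival analysis is typically set up, the sketch below fits a Cox proportional hazards model with the open-source lifelines package. It is illustrative only, not the study's code: the column names, covariates, and data are all invented and simulated.

```python
# Illustrative sketch only (not the Rotterdam Study code): a covariate-adjusted
# Cox proportional hazards model of all-cause mortality, using lifelines.
# All variable names and data below are invented.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "avoided_care": rng.integers(0, 2, n),   # exposure of interest
    "age":          rng.integers(50, 90, n),
    "male":         rng.integers(0, 2, n),
})

# Simulated follow-up: exponential survival times, censored at 2.7 years
hazard = 0.05 * np.exp(0.3 * df["avoided_care"] + 0.03 * (df["age"] - 70))
time_to_death = rng.exponential(1 / hazard)
df["followup_years"] = np.minimum(time_to_death, 2.7)
df["died"] = (time_to_death <= 2.7).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()  # the exp(coef) column gives adjusted hazard ratios
```

The exponentiated coefficients correspond to the adjusted hazard ratios (with 95% confidence intervals) of the kind reported in the abstract above.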




Geographic inequalities in need and provision of social prescribing link workers: a retrospective study in primary care

Background: Long-term health conditions are major challenges for care systems. Social prescribing link workers have been introduced via primary care networks (PCNs) across England since 2019 to address the wider determinants of health by connecting individuals to activities, groups, or services within their local community.
Aim: To assess whether the rollout of social prescribing link workers was in areas with the highest need.
Design and setting: A retrospective study of social prescribing link workers in England from 2019 to 2023.
Method: Workforce, population, survey, and area-level data at the PCN level from April 2020 to October 2023 were combined. Population need before the rollout of link workers was measured using reported lack of support from local services in the 2019 General Practice Patient Survey. To assess whether rollout reflected need, linear regression was used to relate provision of link workers (measured by full-time equivalent [FTE] per 10 000 patients) in each quarter to population need for support.
Results: Populations in urban, more deprived areas and with higher proportions of people from minority ethnic groups had the highest reported lack of support. Geographically, these were in the North West and London. Initially, there was no association between need and provision; then, from July 2022, the association became negative and significant. By October 2023, a 10-percentage-point higher need for support was associated with a 0.035 (95% confidence interval = −0.634 to −0.066) lower FTE per 10 000 patients.
Conclusion: Rollout of link workers has not been sufficiently targeted at areas with the highest need. Future deployments should be targeted at those areas.




Support for primary care prescribing for adult ADHD in England: national survey

Background: Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder, for which there are effective pharmacological treatments that improve symptoms and reduce complications. Guidelines published by the National Institute for Health and Care Excellence recommend that primary care practitioners prescribe medication for adult ADHD under shared-care agreements with Adult Mental Health Services (AMHS). However, provision remains uneven, with some practitioners reporting a lack of support.
Aim: This study aimed to describe elements of support, and their availability/use, in primary care prescribing for adult ADHD medication in England to improve access for this underserved population and inform service improvement.
Design and setting: Cross-sectional surveys were used to elicit data from commissioners, health professionals (HPs), and people with lived experience of ADHD (LE) across England about elements supporting pharmacological treatment of ADHD in primary care.
Method: Three interlinked cross-sectional surveys were used to ask every integrated care board in England (commissioners), along with convenience samples of HPs and LEs, about prescribing rates, AMHS availability, wait times, and shared-care agreement protocols/policies for the pharmacological treatment of ADHD in primary care. Descriptive analyses, percentages, and confidence intervals were used to summarise responses by stakeholder group. Variations in reported provision and practice were explored and displayed visually using mapping software.
Results: Data from 782 responders (42 commissioners, 331 HPs, 409 LEs) revealed differences in reported provision by stakeholder group, including for prescribing (95% of HPs versus 64% of LEs). In all, >40% of responders reported extended AMHS wait times of ≥2 years. There was some variability by NHS region – for example, London had the lowest reported extended wait time (25%), while East of England had the highest (55%).
Conclusion: Elements supporting appropriate shared-care prescribing of ADHD medication via primary care are not universally available in England. Coordinated approaches are needed to address these gaps.




Weight trends among adults with diabetes or hypertension during the COVID-19 pandemic: an observational study using OpenSAFELY

Background: COVID-19 pandemic restrictions may have influenced behaviours related to weight.
Aim: To describe patterns of weight change among adults living in England with type 2 diabetes (T2D) and/or hypertension during the pandemic.
Design and setting: An observational cohort study using the routinely collected health data of approximately 40% of adults living in England, accessed through the OpenSAFELY service inside TPP.
Method: Clinical and sociodemographic characteristics associated with rapid weight gain (>0.5 kg/m2/year) were investigated using multivariable logistic regression.
Results: Data were extracted on adults with T2D (n = 1 231 455, 43.9% female, and 76.0% White British) or hypertension (n = 3 558 405, 49.7% female, and 84.3% White British). Adults with T2D lost weight overall (median δ = −0.1 kg/m2/year [interquartile range {IQR} −0.7–0.4]). However, rapid weight gain was common (20.7%) and associated with the following: sex (male versus female: adjusted odds ratio [aOR] 0.78 [95% confidence interval {CI} = 0.77 to 0.79]); age (older age reduced odds, for example, aged 60–69 years versus 18–29 years: aOR 0.66 [95% CI = 0.61 to 0.71]); deprivation (least deprived Index of Multiple Deprivation [IMD] quintile versus most deprived IMD quintile: aOR 0.87 [95% CI = 0.85 to 0.89]); White ethnicity (Black versus White: aOR 0.95 [95% CI = 0.92 to 0.98]); mental health conditions (for example, depression: aOR 1.13 [95% CI = 1.12 to 1.15]); and diabetes treatment (non-insulin treatment versus no pharmacological treatment: aOR 0.68 [95% CI = 0.67 to 0.69]). Adults with hypertension maintained stable weight overall (median δ = 0.0 kg/m2/year [IQR −0.6–0.5]); however, rapid weight gain was common (24.7%) and associated with similar characteristics as in T2D.
Conclusion: Among adults living in England with T2D and/or hypertension, rapid pandemic weight gain was more common among females, younger adults, those living in more deprived areas, and those with mental health conditions.
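As a rough illustration of the kind of model described in the Method section (this is not the OpenSAFELY code), the sketch below fits a multivariable logistic regression and reports adjusted odds ratios with statsmodels. All variable names, coefficients, and data are simulated assumptions.

```python
# Illustrative sketch only: multivariable logistic regression for a binary
# "rapid weight gain" outcome, reported as adjusted odds ratios.
# Variable names and data are invented/simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "male":         rng.integers(0, 2, n),
    "age":          rng.integers(18, 90, n),
    "imd_quintile": rng.integers(1, 6, n),   # 1 = most deprived (assumed coding)
    "depression":   rng.integers(0, 2, n),
})

# Simulate an outcome so the example runs end to end
lin_pred = -1.2 - 0.25 * df["male"] - 0.02 * (df["age"] - 50) + 0.12 * df["depression"]
df["rapid_gain"] = (rng.random(n) < 1 / (1 + np.exp(-lin_pred))).astype(int)

model = smf.logit("rapid_gain ~ male + age + C(imd_quintile) + depression", data=df).fit()
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```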




Weight management with orlistat in type 2 diabetes: an electronic health records study

Background: Orlistat is recommended as an adjunct to diet and exercise for weight loss in the treatment of type 2 diabetes mellitus (T2DM).
Aim: To explore associations between patient characteristics and orlistat prescribing, and to determine associations of orlistat with weight loss in T2DM and prediabetes.
Design and setting: Cohort study using anonymised health records from a UK database of general practice.
Method: The UK Clinical Practice Research Datalink (CPRD) Aurum database was searched to compile a cohort of patients aged ≥18 years, first diagnosed with T2DM or prediabetes in 2016 or 2017. Once the data had been collated, multivariable logistic regression models were used to determine associations with starting orlistat and stopping it early (<12 weeks of prescriptions), and orlistat’s associations with weight loss in those who had not been prescribed second-line antidiabetic medications.
Results: Out of 100 552 patients with incident T2DM or prediabetes, 655 (0.8%) patients with T2DM and 128 (0.7%) patients with prediabetes were prescribed orlistat. Younger people, females, those in areas of deprivation, current smokers, those coprescribed metformin, and those recorded as having hypertension were statistically significantly more likely to be prescribed orlistat; higher baseline glycated haemoglobin levels were associated with early stopping. In comparison with patients not on orlistat, those who continued using it for ≥12 weeks were more likely to lose ≥5% weight (adjusted odds ratio [AOR] 1.69, 95% confidence interval [CI] = 1.07 to 2.67), but those who stopped orlistat early were less likely to lose ≥5% weight (AOR 0.56, 95% CI = 0.29 to 1.09).
Conclusion: Orlistat was significantly associated with weight loss in patients with T2DM and prediabetes when taken for at least 12 weeks; however, it was infrequently prescribed and often taken for <12 weeks. Orlistat may be a useful adjunct to lifestyle modifications for patients with T2DM and prediabetes, but barriers to continued use mean it may not be effective for everyone in managing weight loss.




Information needs for GPs on type 2 diabetes in Western countries: a systematic review

Background: Most people with type 2 diabetes receive treatment in primary care by GPs who are not specialised in diabetes. Thus, it is important to uncover the most essential information needs regarding type 2 diabetes in general practice.
Aim: To identify information needs related to type 2 diabetes for GPs.
Design and setting: Systematic review focused on literature relating to Western countries.
Method: MEDLINE, Embase, PsycInfo and CINAHL were searched from inception to January 2024. Two researchers conducted the selection process, and citation searches were performed to identify any relevant articles missed by the database search. Quality appraisal was conducted with the Mixed Methods Appraisal Tool. Meaning units were coded individually, grouped into categories, and then studies were summarised within the context of these categories using narrative synthesis. An evidence map was created to highlight research gaps.
Results: Thirty-nine included studies revealed eight main categories and 36 subcategories of information needs. Categories were organised into a comprehensive hierarchical model of information needs, suggesting ‘Knowledge of guidelines’ and ‘Reasons for referral’ as general information needs alongside more specific needs on ‘Medication’, ‘Management’, ‘Complications’, ‘Diagnosis’, ‘Risk factors’, and ‘Screening for diabetes’. The evidence map provides readers with the opportunity to explore the characteristics of the included studies in detail.
Conclusion: This systematic review provides GPs, policymakers, and researchers with a hierarchical model of information and educational needs for GPs, and an evidence map showing gaps in the current literature. Information needs about clinical guidelines and reasons for referral to specialised care overlapped with needs for more specific information.




Prescribing benzodiazepines in young adults with anxiety: a qualitative study of GP perspectives

Background: Incident benzodiazepine prescriptions in primary care for anxiety decreased between 2003 and 2018. However, from 2008, incident prescribing of benzodiazepines for anxiety increased among those aged 18–34 years. There are increasing concerns around prescribing of benzodiazepines. Further, although guidelines state benzodiazepines should only be prescribed short term, in 2017, 44% of incident prescriptions were prescribed for longer than the recommended duration of 2–4 weeks.
Aim: To understand when and why GPs prescribe benzodiazepines for anxiety in young adults.
Design and setting: A qualitative study was undertaken using in-depth interviews with 17 GPs from 10 general practices in South West England.
Method: Interviews were conducted by telephone or videocall. A topic guide was used to ensure consistency across interviews. Interviews were audio-recorded, transcribed verbatim, and data analysed using reflexive thematic analysis.
Results: GPs described caution in prescribing benzodiazepines for anxiety in young adults, but thought they had an important role in acute situations. GPs described caution in prescribing duration, but some thought longer-term prescriptions could be appropriate. In light of these views, some GPs questioned whether primary care needs to revisit how clinicians are using benzodiazepines. GPs perceived that some young adults requested benzodiazepines and suggested this might be because they wanted quick symptom relief. GPs noted that refusing to prescribe felt uncomfortable and that the number of young adults presenting to general practice, already dependent on benzodiazepines, had increased.
Conclusion: Patient-driven factors for prescribing benzodiazepines suggest there are current unmet treatment needs among young adults with anxiety. Given increases in prescribing in this age group, it may be timely to revisit the role of benzodiazepines in the management of people with anxiety in primary care.




GPs’ views of prescribing beta-blockers for people with anxiety disorders: a qualitative study

Background: Between 2003 and 2018, incident prescriptions of beta-blockers for anxiety increased substantially, particularly for young adults. National Institute for Health and Care Excellence guidance for anxiety does not recommend beta-blockers, probably due to a lack of evidence to support such use. Recent reports have highlighted the potential risks of beta-blockers.
Aim: To understand when and why GPs prescribe beta-blockers for people with anxiety.
Design and setting: In-depth interviews with 17 GPs in Bristol and the surrounding areas.
Method: Interviews were held by telephone or video call. A topic guide was used to ensure consistency across interviews. Interviews were audio-recorded, transcribed verbatim, and analysed thematically.
Results: Many GPs viewed beta-blockers as ‘low risk’, particularly for young adults. Some GPs viewed beta-blockers as an alternative to benzodiazepines, acting quickly and not leading to dependence. GPs reflected that some patients appeared to want an ‘immediate fix’ to their symptoms, which GPs thought beta-blockers could potentially offer. This is salient in light of substantial waiting lists for talking therapies and delays in antidepressants taking effect. GPs described how some patients seemed more willing to try beta-blockers than antidepressants, as patients did not perceive them as ‘mental health drugs’ and therefore viewed them as potentially more acceptable and less stigmatising. Further, GPs viewed beta-blockers as ‘patient-led’, with patients managing their own dose and frequency, without GP input.
Conclusion: Many GPs believe that beta-blockers have a role to play in the management of anxiety. Given recent increases in the prescribing of these drugs in primary care, there is a need to assess their safety and effectiveness as a treatment for people with anxiety disorders.




Collaborative discussions between GPs and pharmacists to optimise patient medication: a qualitative study within a UK primary care clinical trial

Background: There has been significant investment in pharmacists working in UK general practice to improve the effective and safe use of medicines. However, evidence of how to optimise collaboration between GPs and pharmacists in the context of polypharmacy (multiple medication) is lacking.
Aim: To explore GP and pharmacist views and experiences of in-person, interprofessional collaborative discussions (IPCDs) as part of a complex intervention to optimise medication use for patients with polypharmacy in general practice.
Design and setting: A mixed-method process evaluation embedded within the Improving Medicines use in People with Polypharmacy in Primary Care (IMPPP) trial conducted in Bristol and the West Midlands, between February 2021 and September 2023.
Method: Audio-recordings of IPCDs between GPs and pharmacists, along with individual semi-structured interviews to explore their reflections on these discussions, were used. All recordings were transcribed verbatim and analysed thematically.
Results: A total of 14 practices took part in the process evaluation from February 2022 to September 2023; 17 IPCD meetings were audio-recorded, discussing 30 patients (range 1–6 patients per meeting). In all, six GPs and 13 pharmacists were interviewed. The IPCD was highly valued by GPs and pharmacists who described benefits, including: strengthening their working relationship; gaining in confidence to manage more complex patients; and learning from each other. It was often challenging, however, to find time for the IPCDs.
Conclusion: The model of IPCD used in this study provided protected time for GPs and pharmacists to work together to deliver whole-patient care, with both professions finding this beneficial. Protected time for interprofessional liaison and collaboration, and structured interventions may facilitate improved patient care.




Cardiovascular disease – risk assessment and reduction: NICE 2023 update for GPs




CGRP therapy in primary care for migraine: prevention and acute medication




Propranolol in anxiety: poor evidence for efficacy and toxicity in overdose




Reclaiming holistic medicine




Books: The Story of a Heart




Books: The Political Economy of Health Care: Where the NHS Came From and Where it Could Lead




General practice should tackle healthcare inequalities but not health inequalities




Do I know you? (well enough to complete your multisource feedback …)




Seeing the timber and the trees




Yonder: Improving connections, AI in reflective practice, lung cancer diagnosis, and euthanasia aftercare




The Ealing domestic abuse initiative: a success story




‘The Fellowship is breaking, it has already begun’




Medicines prescribed elsewhere: don’t forget to record them in the GP record!




Are good GPs holistic?




Primary care health professionals’ approach to clinical coding: a qualitative interview study




Maternal postnatal care in general practice: steps forward




Conversations matter: improving the diagnosis experience for people with type 2 diabetes




Can’t prescribe, can’t dispense: the challenge of medicine shortages




Challenging the status quo: deprescribing antihypertensive medication in older adults in primary care




Unpicking influence and conflicts of interest in prescribing




Atomically Thin Materials Significantly Shrink Qubits



Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.

IBM has adopted a superconducting-qubit roadmap that targets a 1,121-qubit processor by 2023, which suggests that 1,000 qubits with today’s qubit form factor are feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger), at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to find a better path toward scalability.

Now researchers at MIT have been able to reduce the size of the qubits, and to do so in a way that reduces the interference between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.

“We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”

The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.

Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).

Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. [Photo: Nathan Fiske/MIT]

In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.

As a result, the intrinsic silicon substrate below the plates and, to a lesser degree, the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.
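A back-of-the-envelope parallel-plate estimate shows why a thin, low-defect dielectric such as the hBN stack shrinks the footprint so dramatically. The numbers below are assumptions for illustration, not figures from the MIT work: an hBN relative permittivity of about 3, a roughly 10-nanometer dielectric, and a 100-femtofarad target capacitance (a typical order of magnitude for a transmon shunt capacitor).

```python
# Back-of-the-envelope estimate (assumed values, not from the article):
# how small a parallel-plate capacitor with a thin hBN dielectric could be,
# versus a ~100 x 100 micrometer coplanar pad.
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r_hbn = 3.0       # assumed relative permittivity of hBN
d = 10e-9             # assumed dielectric thickness, ~10 nm of stacked monolayers
C_target = 100e-15    # assumed target capacitance, 100 fF

area = C_target * d / (eps0 * eps_r_hbn)   # A = C * d / (eps0 * eps_r)
side_um = (area ** 0.5) * 1e6
print(f"required plate area: {area * 1e12:.1f} um^2 "
      f"(a square about {side_um:.1f} um on a side, versus ~100 um coplanar pads)")
```

Under these assumed numbers the plate comes out at only a few micrometers on a side, which is consistent with the article's claim of a roughly hundredfold gain in qubit density.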

In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.

“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics.

On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.

While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.

“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”

This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.

“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.

Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.




How AI Will Change Chip Design



The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.

Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version.

But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform.

How is AI currently being used to design the next generation of chips?

Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.

Heather Gorr [Photo: MathWorks]

Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.

What are the benefits of using AI for chip design?

Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.
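To make the surrogate-model idea concrete, here is a minimal sketch, written in Python with scikit-learn rather than MathWorks tools: fit a cheap model to a handful of expensive simulation runs, then sweep or run a Monte Carlo over the surrogate instead of the simulator. The "simulation" is a toy function standing in for a physics-based model; everything here is an illustrative assumption, not part of the interview.

```python
# Minimal surrogate-model sketch (illustrative only): replace an expensive
# physics-based simulation with a cheap Gaussian-process surrogate, then
# do parameter sweeps / Monte Carlo on the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # stand-in for a slow physics-based model of one design parameter
    return np.sin(3 * x) + 0.5 * x

# A handful of expensive runs...
x_train = np.linspace(0, 2, 12).reshape(-1, 1)
y_train = expensive_simulation(x_train).ravel()

# ...fit a cheap surrogate...
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(x_train, y_train)

# ...then sweep over the surrogate instead of the simulator.
rng = np.random.default_rng(1)
x_mc = rng.uniform(0, 2, size=(10_000, 1))
y_mc = surrogate.predict(x_mc)
print("surrogate-based estimate of mean response:", y_mc.mean())
```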

So it’s like having a digital twin in a sense?

Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.

So, it’s going to be more efficient and, as you said, cheaper?

Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.

We’ve talked about the benefits. How about the drawbacks?

Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.

Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.

One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.

How can engineers use AI to better prepare and extract insights from hardware or sensor data?

Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.
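As a small, invented example of that kind of preparation (not taken from the interview), the sketch below resamples two sensors onto a common rate and then inspects the dominant frequency with an FFT, using NumPy and SciPy.

```python
# Illustrative sketch: synchronize two sensors sampled at different rates,
# then take a frequency-domain view of one signal. Signals are invented.
import numpy as np
from scipy import signal

fs_a, fs_b = 1000, 800                      # two sensors, different sample rates (Hz)
t_a = np.arange(0, 1, 1 / fs_a)
t_b = np.arange(0, 1, 1 / fs_b)
sensor_a = np.sin(2 * np.pi * 50 * t_a) + 0.1 * np.random.randn(t_a.size)
sensor_b = np.sin(2 * np.pi * 50 * t_b + 0.3)

# Synchronize: resample sensor B onto sensor A's rate and length
sensor_b_sync = signal.resample(sensor_b, sensor_a.size)

# Frequency-domain view: the dominant component should appear near 50 Hz
freqs = np.fft.rfftfreq(sensor_a.size, d=1 / fs_a)
spectrum = np.abs(np.fft.rfft(sensor_a))
print("dominant frequency (Hz):", freqs[spectrum.argmax()])
```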

One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.

What should engineers and designers consider when using AI for chip design?

Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.

How do you think AI will affect chip designers’ jobs?

Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.

How do you envision the future of AI and chip design?

Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.




Andrew Ng: Unbiggen AI



Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.


I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.


For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
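A toy version of the consistency check Ng describes might look like the following: group each example's labels across annotators and flag the ones where they disagree. The data and column names are made up for illustration; this is not Landing AI's tooling.

```python
# Toy label-consistency check (illustrative only): flag examples where
# annotators disagree so they can be reviewed and relabeled.
import pandas as pd

labels = pd.DataFrame({
    "image_id":  ["img1", "img1", "img2", "img2", "img3", "img3"],
    "annotator": ["a",    "b",    "a",    "b",    "a",    "b"],
    "label":     ["scratch", "scratch", "pit_mark", "dent", "dent", "dent"],
})

disagreement = labels.groupby("image_id")["label"].nunique()
inconsistent = disagreement[disagreement > 1].index.tolist()
print("images to re-review:", inconsistent)   # -> ['img2']
```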

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
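That kind of error analysis by slice can be as simple as comparing error rates across metadata groups, as in this invented sketch; the worst-performing slice is the one worth collecting more data for.

```python
# Slice-based error analysis (illustrative only; data is invented):
# compare error rates across metadata slices to target data collection.
import pandas as pd

results = pd.DataFrame({
    "background": ["car_noise", "car_noise", "quiet", "quiet", "cafe", "cafe"],
    "correct":    [0, 0, 1, 1, 1, 0],
})

error_by_slice = 1 - results.groupby("background")["correct"].mean()
print(error_by_slice.sort_values(ascending=False))
# Slices with the highest error (here 'car_noise') are candidates for
# additional, focused data collection.
```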


What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.


Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”





The Patent Battle That Won’t Quit



Just before this special issue on invention went to press, I got a message from IEEE senior member and patent attorney George Macdonald. Nearly two decades after I first reported on Corliss Orville “Cob” Burandt’s struggle with the U.S. Patent and Trademark Office, the 77-year-old inventor’s patent case was being revived.

From 1981 to 1990, Burandt had received a dozen U.S. patents for improvements to automotive engines, starting with his 1990 patent for variable valve-timing technology (U.S. Patent No. 4,961,406A). But he failed to convince any automakers to license his technology. What’s worse, he claims, some of the world’s major carmakers now use his inventions in their hybrid engines.

Shortly after reading my piece in 2005, Macdonald stepped forward to represent Burandt. By then, the inventor had already lost his patents because he hadn’t paid the US $40,000 in maintenance fees to keep them active.

Macdonald filed a petition to pay the maintenance fees late and another to revive a related child case. The maintenance fee petition was denied in 2006. While the petition to revive was still pending, Macdonald passed the maintenance fee baton to Hunton Andrews Kurth (HAK), which took the case pro bono. HAK attorneys argued that the USPTO should reinstate the 1990 parent patent.

The timing was crucial: If the parent patent was reinstated before 2008, Burandt would have had the opportunity to compel infringing corporations to pay him licensing fees. Unfortunately, for reasons that remain unclear, the patent office tried to paper Burandt’s legal team to death, Macdonald says. HAK could go no further in the maintenance-fee case after the U.S. Supreme Court declined to hear it in 2009.

Then, in 2010, the USPTO belatedly revived Burandt’s child continuation application. A continuation application lets an inventor add claims to their original patent application while maintaining the earlier filing date—1988 in this case.

However, this revival came with its own set of challenges. Macdonald was informed in 2011 that the patent examiner would issue the patent but later discovered that the application was placed into a then-secret program called the Sensitive Application Warning System (SAWS) instead. While touted as a way to quash applications for things like perpetual-motion machines, the SAWS process effectively slowed action on Burandt’s case.

After several more years of motions and rulings, Macdonald met IEEE Member Edward Pennington, who agreed to represent Burandt. Earlier this year, Pennington filed a complaint in the Eastern District of Virginia seeking the issuance of Burandt’s patent on the grounds that it was wrongfully denied.

As of this writing, Burandt still hasn’t seen a dime from his inventions. He subsists on his social security benefits. And while his case raises important questions about fairness, transparency, and the rights of individual inventors, Pennington says his client isn’t interested in becoming a poster boy for poor inventors.

“We’re not out to change policy at the patent office or to give Mr. Burandt a framed copy of the patent to say, ‘Look at me, I’m an inventor,’ ” says Pennington. “This is just to say, ‘Here’s a guy that would like to benefit from his idea.’ It just so happens that he’s pretty much in need. And even the slightest royalty would go a long ways for the guy.”




Multiband Antenna Simulation and Wireless KPI Extraction



In this upcoming webinar, explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems.

Overview

This webinar will explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems. Attendees will learn how to optimize antenna performance and analyze installed performance within wireless networks. The session will also demonstrate how this approach enables users to extract valuable wireless and network KPIs, providing a comprehensive toolset for enhancing antenna design, optimizing multiband communication, and improving overall network performance. Join us to discover how Ansys HFSS can transform your approach to wireless system design and network efficiency.

What Attendees will Learn

  • How to design interleaved multiband antenna systems using the latest capabilities in HFSS
  • How to extract network key performance indicators (KPIs)
  • How to run simulations and extract RF channels for dynamic environments

Who Should Attend

This webinar is valuable to anyone involved in antenna design, R&D, product development, and wireless networks.

Register now for this free webinar!




academic and careers

The Unlikely Inventor of the Automatic Rice Cooker



“Cover, bring to a boil, then reduce heat. Simmer for 20 minutes.” These directions seem simple enough, and yet I have messed up many, many pots of rice over the years. My sympathies to anyone who’s ever had to boil rice on a stovetop, cook it in a clay pot over a kerosene or charcoal burner, or prepare it in a cast-iron cauldron. All hail the 1955 invention of the automatic rice cooker!

How the automatic rice cooker was invented

It isn’t often that housewives get credit in the annals of invention, but in the story of the automatic rice cooker, a woman takes center stage. That happened only after the first attempts at electrifying rice cooking, starting in the 1920s, turned out to be utter failures. Matsushita, Mitsubishi, and Sony all experimented with variations of placing electric heating coils inside wooden tubs or aluminum pots, but none of these cookers automatically switched off when the rice was done. The human cook—almost always a wife or daughter—still had to pay attention to avoid burning the rice. These electric rice cookers didn’t save any real time or effort, and they sold poorly.

This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

But Shogo Yamada, the energetic development manager of the electric appliance division for Toshiba, became convinced that his company could do better. In post–World War II Japan, he was demonstrating and selling electric washing machines all over the country. When he took a break from his sales pitch and actually talked to women about their daily household labors, he discovered that cooking rice—not laundry—was their most challenging chore. Rice was a mainstay of the Japanese diet, and women had to prepare it up to three times a day. It took hours of work, starting with getting up by 5:00 am to fan the flames of a kamado, a traditional earthenware stove fueled by charcoal or wood on which the rice pot was heated. The inability to properly mind the flame could earn a woman the label of “failed housewife.”

In 1951, Yamada became the cheerleader of the rice cooker within Toshiba, which was understandably skittish given the past failures of other companies. To develop the product, he turned to Yoshitada Minami, the manager of a small family factory that produced electric water heaters for Toshiba. The water-heater business wasn’t great, and the factory was on the brink of bankruptcy.

How Sources Influence the Telling of History


As someone who does a lot of research online, I often come across websites that tell very interesting histories, but without any citations. It takes only a little bit of digging before I find entire passages copied and pasted from one site to another, and so I spend a tremendous amount of time trying to track down the original source. Accounts of popular consumer products, such as the rice cooker, are particularly prone to this problem. That’s not to say that popular accounts are necessarily wrong; plus they are often much more engaging than boring academic pieces. This is just me offering a note of caution because every story offers a different perspective depending on its sources.

For example, many popular blogs sing the praises of Fumiko Minami and her tireless contributions to the development of the rice maker. But in my research, I found no mention of Minami before Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” which itself was based on episode 42 of the Project X: Challengers documentary series that was produced by NHK and aired in 2002.

If instead I had relied solely on the description of the rice cooker’s early development provided by the Toshiba Science Museum (here’s an archived page from 2007), this month’s column would have offered a detailed technical description of how uncooked rice has a crystalline structure, but as it cooks, it becomes a gelatinized starch. The museum’s website notes that few engineers had ever considered the nature of cooking rice before the rice-cooker project, and it refers simply to the “project team” that discovered the process. There’s no mention of Fumiko.

Both stories are factually correct, but they emphasize different details. Sometimes it’s worth asking who is part of the “project team” because the answer might surprise you. —A.M.


Although Minami understood the basic technical principles for an electric rice cooker, he didn’t know or appreciate the finer details of preparing perfect rice. And so Minami turned to his wife, Fumiko.

Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.

But how would an automatic rice cooker know when the 20 minutes was up? A suggestion came from Toshiba engineers. A working model based on a double boiler (a pot within a pot for indirect heating) used evaporation to mark time. While the rice cooked in the inset pot, a bimetallic switch measured the temperature in the external pot. Boiling water would hold at a constant 100 °C, but once it had evaporated, the temperature would soar. When the internal temperature of the double boiler surpassed 100 °C, the switch would bend and cut the circuit. One cup of boiling water in the external pot took 20 minutes to evaporate. The same basic principle is still used in modern cookers.
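
For the curious, here is a minimal sketch of that cutoff logic in Python; the heating rates and the 101 °C trip point are made-up illustrative values, not Toshiba’s actual specifications:

```python
# Idealized sketch of the evaporation-timed cutoff: the bimetallic switch
# opens the circuit once the sensed temperature rises past the boiling plateau.
def simulate_cooker(water_minutes=20.0, trip_temp_c=101.0, dt=0.5):
    t, temp, heater_on = 0.0, 25.0, True
    while heater_on:
        t += dt
        if t < water_minutes:
            temp = min(100.0, temp + 15.0 * dt)   # heat up, then hold at boiling
        else:
            temp += 5.0 * dt                      # outer water gone; temperature soars
        if temp > trip_temp_c:
            heater_on = False                     # bimetallic strip bends, circuit opens
    return t

print(f"Heater switched off after about {simulate_cooker():.1f} minutes")
```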



Yamada wanted to ensure that the rice cooker worked in all climates, so Fumiko tested various prototypes in extreme conditions: on her rooftop in cold winters and scorching summers and near steamy bathrooms to mimic high humidity. When Fumiko became ill from testing outside, her children pitched in to help. None of the aluminum and glass prototypes, it turned out, could maintain their internal temperature in cold weather. The final design drew inspiration from the Hokkaidō region, Japan’s northernmost prefecture. Yamada had seen insulated cooking pots there, so the Minami family tried covering the rice cooker with a triple-layered iron exterior. It worked.

How Toshiba sold its automatic rice cooker

Toshiba’s automatic rice cooker went on sale on 10 December 1955, but initially, sales were slow. It didn’t help that the rice cooker was priced at 3,200 yen, about a third of the average Japanese monthly salary. It took some salesmanship to convince women they needed the new appliance. This was Yamada’s time to shine. He demonstrated using the rice cooker to prepare takikomi gohan, a rice dish seasoned with dashi, soy sauce, and a selection of meats and vegetables. When the dish was cooked in a traditional kamado, the soy sauce often burned, making the rather simple dish difficult to master. Women who saw Yamada’s demo were impressed with the ease offered by the rice cooker.

Another clever sales technique was to get electricity companies to serve as Toshiba distributors. At the time, Japan was facing a national power surplus stemming from the widespread replacement of carbon-filament lightbulbs with more efficient tungsten ones. The energy savings were so remarkable that operations at half of the country’s power plants had to be curtailed. But with utilities distributing Toshiba rice cookers, increased demand for electricity was baked in.

Within a year, Toshiba was selling more than 200,000 rice cookers a month. Many of them came from the Minamis’ factory, which was rescued from near-bankruptcy in the process.

How the automatic rice cooker conquered the world

From there, the story becomes an international one with complex localization issues. Japanese sushi rice is not the same as Thai sticky rice, which is not the same as Persian tahdig, Indian basmati, Italian risotto, or Spanish paella. You see where I’m going with this. Every culture that has a unique rice dish almost always uses its own regional rice with its own preparation preferences. And so countries wanted their own type of automatic electric rice cooker (although some rejected automation in favor of traditional cooking methods).

Yoshiko Nakano, a professor at the University of Hong Kong, wrote a book in 2009 about the localized/globalized nature of rice cookers. Where There Are Asians, There Are Rice Cookers traces the popularization of the rice cooker from Japan to China and then the world by way of Hong Kong. One of the key differences between the Japanese and Chinese rice cooker is that the latter has a glass lid, which Chinese cooks demanded so they could see when to add sausage. More innovation and diversification followed. Modern rice cookers have settings to give Iranians crispy rice at the bottom of the pot, one to let Thai customers cook noodles, one for perfect rice porridge, and one for steel-cut oats.



My friend Hyungsub Choi, in his 2022 article “Before Localization: The Story of the Electric Rice Cooker in South Korea,” pushes back a bit on Nakano’s argument that countries were insistent on tailoring cookers to their tastes. From 1965, when the first domestic rice cooker appeared in South Korea, to the early 1990s, Korean manufacturers engaged in “conscious copying,” Choi argues. That is, they didn’t bother with either innovation or adaptation. As a result, most Koreans had to put up with inferior domestic models. Even after the Korean government made it a national goal to build a better rice cooker, manufacturers failed to deliver one, perhaps because none of the engineers involved knew how to cook rice. It’s a good reminder that the history of technology is not always the story of innovation and progress.

Eventually, the Asian diaspora brought the rice cooker to all parts of the globe, including South Carolina, where I now live and which coincidentally has a long history of rice cultivation. I bought my first rice cooker on a whim, but not for its rice-cooking ability. I was intrigued by the yogurt-making function. Similar to rice, yogurt requires a constant temperature over a specific length of time. Although successful, my yogurt experiment was fleeting—store-bought was just too convenient. But the rice cooking blew my mind. Perfect rice. Every. Single. Time. I am never going back to overflowing pots of starchy water.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the November 2024 print issue as “The Automatic Rice Cooker’s Unlikely Inventor.”

References


Helen Macnaughtan’s 2012 book chapter, “Building up Steam as Consumers: Women, Rice Cookers and the Consumption of Everyday Household Goods in Japan,” was a great resource in understanding the development of the Toshiba ER-4. The chapter appeared in The Historical Consumer: Consumption and Everyday Life in Japan, 1850-2000, edited by Penelope Francks and Janet Hunter (Palgrave Macmillan).

Yoshiko Nakano’s book Where There Are Asians, There Are Rice Cookers (Hong Kong University Press, 2009) takes the story much further with her focus on the National (Panasonic) rice cooker and its adaptation and adoption around the world.

The Toshiba Science Museum, in Kawasaki, Japan, where we sourced our main image of the original ER-4, closed to the public in June. I do not know what the future holds for its collections, but luckily some of its Web pages have been archived to continue to help researchers like me.




academic and careers

Principles of PID Controllers



Thanks to their ability to adjust a system’s output quickly and accurately without requiring detailed knowledge of its dynamics, PID control loops are a powerful and widely used tool for maintaining a stable and predictable output in a variety of applications. In this paper, we review the fundamental principles and characteristics of these control systems, providing insight into their functioning, tuning strategies, advantages, and trade-offs.
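
As a minimal illustration of the idea, the short Python sketch below runs a discrete-time PID loop against a simple first-order plant; the gains, time constant, and setpoint are illustrative assumptions, not values from the paper:

```python
# Minimal discrete-time PID loop driving a first-order plant (illustrative values).
def run_pid(setpoint=1.0, kp=2.0, ki=1.0, kd=0.01, dt=0.01, tau=0.1, steps=2000):
    integral = 0.0
    prev_error = None
    y = 0.0  # measured plant output
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = 0.0 if prev_error is None else (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative  # controller output
        prev_error = error
        y += (u - y) / tau * dt  # first-order plant, integrated with Euler's method
    return y

print(f"Final output: {run_pid():.3f} (setpoint 1.0)")
```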

As a result of their integrated architecture, Zurich Instruments’ lock-in amplifiers allow users to make the most of all the advantages of digital PID control loops, so that their operation can be adapted to match the needs of different use cases.




academic and careers

Teens Gain Experience at IEEE’s TryEngineering Summer Institute



The future of engineering is bright, and it’s being shaped by the young minds at the TryEngineering Summer Institute (TESI), a program administered by IEEE Educational Activities. This year more than 300 students attended TESI to fuel their passion for engineering and prepare for higher education and careers. Sessions were held from 30 June through 2 August on the campuses of Rice University, the University of Pennsylvania, and the University of San Diego.

The program is an immersive experience designed for students ages 13 to 17. It offers hands-on projects, interactive workshops, field trips, and insights into the profession from practicing engineers. Participants get to stay on a college campus, providing them with a preview of university life.

Student turned instructor

One future innovator is Natalie Ghannad, who participated in the program as a student in 2022 and was a member of this year’s instructional team in Houston at Rice University. Ghannad is in her second year as an electrical engineering student at the University of San Francisco. University students join forces with science and engineering teachers at each TESI location to serve as instructors.

For many years, Ghannad wanted to follow in her mother’s footsteps and become a pediatric neurosurgeon. As a high school junior in Houston in 2022, however, she had a change of heart and decided to pursue engineering after participating in the TESI at Rice. She received a full scholarship from the IEEE Foundation TESI Scholarship Fund, supported by IEEE societies and councils.

“I really liked that it was hands-on,” Ghannad says. “From the get-go, we were introduced to 3D printers and laser cutters.”

The benefit of participating in the program, she says, was “having the opportunity to not just do the academic side of STEM but also to really get to play around, get your hands dirty, and figure out what you’re doing.”

“Looking back,” she adds, “there are so many parallels between what I’ve actually had to do as a college student, and having that knowledge from the Summer Institute has really been great.”

She was inspired to volunteer as a teaching assistant because, she says, “I know I definitely want to teach, have the opportunity to interact with kids, and also be part of the future of STEM.”

More than 90 students attended the program at Rice. They visited Space Center Houston, where former astronauts talked to them about the history of space exploration.

Participants also were treated to presentations by guest speakers including IEEE Senior Member Phil Bautista, the founder of Bull Creek Data, a consulting company that provides technical solutions; IEEE Senior Member Christopher Sanderson, chair of the IEEE Region 5 Houston Section; and James Burroughs, a standards manager for Siemens in Atlanta. Burroughs, who spoke at all three TESI events this year, provided insight on overcoming barriers to do the important work of an engineer.

Learning about transit systems and careers

The University of Pennsylvania, in Philadelphia, hosted the East Coast TESI event this year. Students were treated to a field trip to the Southeastern Pennsylvania Transportation Authority (SEPTA), one of the largest transit systems in the country. Engineers from AECOM, a global infrastructure consulting firm with offices in Philadelphia that worked closely with SEPTA on its most recent station renovation, collaborated with IEEE to host the trip.

The benefit of participating in the program was “having the opportunity to not just do the academic side of STEM but also to really get to play around, get your hands dirty, and figure out what you’re doing.” — Natalie Ghannad

Participants also heard from guest speakers including Api Appulingam, chief development officer of the Philadelphia International Airport, who told the students the inspiring story of her career.

Guest speakers from Google and Meta

Students who attended the TESI camp at the University of San Diego visited Qualcomm. Hosted by the IEEE Region 6 director, Senior Member Kathy Herring Hayashi, they learned about cutting-edge technology and toured the Qualcomm Museum.

Students also heard from guest speakers including IEEE Member Andrew Saad, an engineer at Google; Gautam Deryanni, a silicon validation engineer at Meta; Kathleen Kramer, 2025 IEEE president and a professor of electrical engineering at the University of San Diego; as well as Burroughs.

“I enjoyed the opportunity to meet new, like-minded people and enjoy fun activities in the city, as well as get a sense of the dorm and college life,” one participant said.

Hands-on projects

In addition to field trips and guest speakers, participants at each location worked on several hands-on projects highlighting the engineering design process. In the toxic popcorn challenge, the students designed a process to safely remove harmful kernels. Students tackling the bridge challenge designed and built a span out of balsa wood and glue, then tested its strength by gradually adding weight until it failed. The glider challenge gave participants the tools and knowledge to build and test their aircraft designs.

One participant applauded the hands-on activities, saying, “All of them gave me a lot of experience and helped me have a better idea of what engineering field I want to go in. I love that we got to participate in challenges and not just listen to lectures—which can be boring.”

The students also worked on a weeklong sparking solutions challenge. Small teams identified a societal problem, such as a lack of clean water or limited mobility for senior citizens, then designed a solution to address it. On the last day of camp, they pitched their prototypes to a team of IEEE members that judged the projects based on their originality and feasibility. Each student on the winning teams at each location was awarded a programmable Mech-5 robot.

Twenty-nine scholarships were awarded with funding from the IEEE Foundation. IEEE societies that donated to the cause were the IEEE Computational Intelligence Society, the IEEE Computer Society, the IEEE Electronics Packaging Society, the IEEE Industry Applications Society, the IEEE Oceanic Engineering Society, the IEEE Power & Energy Society, the IEEE Power Electronics Society, the IEEE Signal Processing Society, and the IEEE Solid-State Circuits Society.




academic and careers

The AI Boom Rests on Billions of Tonnes of Concrete



Along the country road that leads to ATL4, a giant data center going up east of Atlanta, dozens of parked cars and pickups lean precariously on the narrow dirt shoulders. The many out-of-state plates are typical of the phalanx of tradespeople who muster for these massive construction jobs. With tech giants, utilities, and governments budgeting upwards of US $1 trillion for capital expansion to join the global battle for AI dominance, data centers are the bunkers, factories, and skunkworks—and concrete and electricity are the fuel and ammunition.

To the casual observer, the data industry can seem incorporeal, its products conjured out of weightless bits. But as I stand beside the busy construction site for DataBank’s ATL4, what impresses me most is the gargantuan amount of material—mostly concrete—that gives shape to the goliath that will house, secure, power, and cool the hardware of AI. Big data is big concrete. And that poses a big problem.

This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

Concrete is not just a major ingredient in data centers and the power plants being built to energize them. As the world’s most widely manufactured material, concrete—and especially the cement within it—is also a major contributor to climate change, accounting for around 6 percent of global greenhouse gas emissions. Data centers use so much concrete that the construction boom is wrecking tech giants’ commitments to eliminate their carbon emissions. Even though Google, Meta, and Microsoft have touted goals to be carbon neutral or negative by 2030, and Amazon by 2040, the industry is now moving in the wrong direction.

Last year, Microsoft’s carbon emissions jumped by over 30 percent, primarily due to the materials in its new data centers. Google’s greenhouse emissions are up by nearly 50 percent over the past five years. As data centers proliferate worldwide, Morgan Stanley projects that data centers will release about 2.5 billion tonnes of CO2 each year by 2030—or about 40 percent of what the United States currently emits from all sources.

But even as innovations in AI and the big-data construction boom are boosting emissions for the tech industry’s hyperscalers, the reinvention of concrete could also play a big part in solving the problem. Over the last decade, there’s been a wave of innovation, some of it profit-driven, some of it from academic labs, aimed at fixing concrete’s carbon problem. Pilot plants are being fielded to capture CO2 from cement plants and sock it safely away. Other projects are cooking up climate-friendlier recipes for cements. And AI and other computational tools are illuminating ways to drastically cut carbon by using less cement in concrete and less concrete in data centers, power plants, and other structures.

Demand for green concrete is clearly growing. Amazon, Google, Meta, and Microsoft recently joined an initiative led by the Open Compute Project Foundation to accelerate testing and deployment of low-carbon concrete in data centers, for example. Supply is increasing, too—though it’s still minuscule compared to humanity’s enormous appetite for moldable rock. But if the green goals of big tech can jump-start innovation in low-carbon concrete and create a robust market for it as well, the boom in big data could eventually become a boon for the planet.

Hyperscaler Data Centers: So Much Concrete

At the construction site for ATL4, I’m met by Tony Qorri, the company’s big, friendly, straight-talking head of construction. He says that this giant building and four others DataBank has recently built or is planning in the Atlanta area will together add 133,000 square meters (1.44 million square feet) of floor space.

They all follow a universal template that Qorri developed to optimize the construction of the company’s ever-larger centers. At each site, trucks haul in more than a thousand prefabricated concrete pieces: wall panels, columns, and other structural elements. Workers quickly assemble the precision-measured parts. Hundreds of electricians swarm the building to wire it up in just a few days. Speed is crucial when construction delays can mean losing ground in the AI battle.

The ATL4 data center outside Atlanta is one of five being built by DataBank. Together they will add over 130,000 square meters of floor space. DataBank

That battle can be measured in new data centers and floor space. The United States is home to more than 5,000 data centers today, and the Department of Commerce forecasts that number to grow by around 450 a year through 2030. Worldwide, the number of data centers now exceeds 10,000, and analysts project another 26.5 million m2 of floor space over the next five years. Here in metro Atlanta, developers broke ground last year on projects that will triple the region’s data-center capacity. Microsoft, for instance, is planning a 186,000-m2 complex; big enough to house around 100,000 rack-mounted servers, it will consume 324 megawatts of electricity.

The velocity of the data-center boom means that no one is pausing to await greener cement. For now, the industry’s mantra is “Build, baby, build.”

“There’s no good substitute for concrete in these projects,” says Aaron Grubbs, a structural engineer at ATL4. The latest processors going on the racks are bigger, heavier, hotter, and far more power hungry than previous generations. As a result, “you add a lot of columns,” Grubbs says.

1,000 Companies Working on Green Concrete

Concrete may not seem an obvious star in the story of how electricity and electronics have permeated modern life. Other materials—copper and silicon, aluminum and lithium—get higher billing. But concrete provides the literal, indispensable foundation for the world’s electrical workings. It is the solid, stable, durable, fire-resistant stuff that makes power generation and distribution possible. It undergirds nearly all advanced manufacturing and telecommunications. What was true in the rapid build-out of the power industry a century ago remains true today for the data industry: Technological progress begets more growth—and more concrete. Although each generation of processor and memory squeezes more computing onto each chip, and advances in superconducting microcircuitry raise the tantalizing prospect of slashing the data center’s footprint, Qorri doesn’t think his buildings will shrink to the size of a shoebox anytime soon. “I’ve been through that kind of change before, and it seems the need for space just grows with it,” he says.

By weight, concrete is not a particularly carbon-intensive material. Creating a kilogram of steel, for instance, releases about 2.4 times as much CO2 as a kilogram of cement does. But the global construction industry consumes about 35 billion tonnes of concrete a year. That’s about 4 tonnes for every person on the planet and twice as much as all other building materials combined. It’s that massive scale—and the associated cost and sheer number of producers—that creates both a threat to the climate and inertia that resists change.

At its Edmonton, Alberta, plant [above], Heidelberg Materials is adding systems to capture carbon dioxide produced by the manufacture of Portland cement. Heidelberg Materials North America

Yet change is afoot. When I visited the innovation center operated by the Swiss materials giant Holcim, in Lyon, France, research executives told me about the database they’ve assembled of nearly 1,000 companies working to decarbonize cement and concrete. None yet has enough traction to measurably reduce global concrete emissions. But the innovators hope that the boom in data centers—and in associated infrastructure such as new nuclear reactors and offshore wind farms, where each turbine foundation can use up to 7,500 cubic meters of concrete—may finally push green cement and concrete beyond labs, startups, and pilot plants.

Why cement production emits so much carbon

Though the terms “cement” and “concrete” are often conflated, they are not the same thing. A popular analogy in the industry is that cement is the egg in the concrete cake. Here’s the basic recipe: Blend cement with larger amounts of sand and other aggregates. Then add water, to trigger a chemical reaction with the cement. Wait a while for the cement to form a matrix that pulls all the components together. Let sit as it cures into a rock-solid mass.

Portland cement, the key binder in most of the world’s concrete, was serendipitously invented in England by William Aspdin, while he was tinkering with earlier mortars that his father, Joseph, had patented in 1824. More than a century of science has revealed the essential chemistry of how cement works in concrete, but new findings are still leading to important innovations, as well as insights into how concrete absorbs atmospheric carbon as it ages.

As in the Aspdins’ day, the process to make Portland cement still begins with limestone, a sedimentary mineral made from crystalline forms of calcium carbonate. Most of the limestone quarried for cement originated hundreds of millions of years ago, when ocean creatures mineralized calcium and carbonate in seawater to make shells, bones, corals, and other hard bits.

Cement producers often build their large plants next to limestone quarries that can supply decades’ worth of stone. The stone is crushed and then heated in stages as it is combined with lesser amounts of other minerals that typically include calcium, silicon, aluminum, and iron. What emerges from the mixing and cooking are small, hard nodules called clinker. A bit more processing, grinding, and mixing turns those pellets into powdered Portland cement, which accounts for about 90 percent of the CO2 emitted by the production of conventional concrete [see infographic, “Roads to Cleaner Concrete”].

Karen Scrivener, shown in her lab at EPFL, has developed concrete recipes that reduce emissions by 30 to 40 percent. Stefan Wermuth/Bloomberg/Getty Images

Decarbonizing Portland cement is often called heavy industry’s “hard problem” because of two processes fundamental to its manufacture. The first process is combustion: To coax limestone’s chemical transformation into clinker, large heaters and kilns must sustain temperatures around 1,500 °C. Currently that means burning coal, coke, fuel oil, or natural gas, often along with waste plastics and tires. The exhaust from those fires generates 35 to 50 percent of the cement industry’s emissions. Most of the remaining emissions result from gaseous CO2 liberated by the chemical transformation of the calcium carbonate (CaCO3) into calcium oxide (CaO), a process called calcination. That gas also usually heads straight into the atmosphere.
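
A back-of-the-envelope stoichiometry check (my own arithmetic, with an assumed clinker composition, not a producer’s figures) shows why calcination alone is such a large source:

```python
# Back-of-the-envelope calcination chemistry: CaCO3 -> CaO + CO2.
M_CACO3 = 100.09  # g/mol, calcium carbonate
M_CAO = 56.08     # g/mol, calcium oxide
M_CO2 = 44.01     # g/mol, carbon dioxide

co2_per_t_limestone = M_CO2 / M_CACO3                        # ~0.44 t CO2 per tonne of CaCO3
cao_fraction_in_clinker = 0.65                                # rough assumption, not a measured figure
co2_per_t_clinker = cao_fraction_in_clinker / M_CAO * M_CO2   # ~0.51 t CO2 per tonne of clinker

print(f"~{co2_per_t_limestone:.2f} t CO2 per tonne of limestone calcined")
print(f"~{co2_per_t_clinker:.2f} t CO2 per tonne of clinker, before any fuel is burned")
```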

Concrete production, in contrast, is mainly a business of mixing cement powder with other ingredients and then delivering the slurry speedily to its destination before it sets. Most concrete in the United States is prepared to order at batch plants—souped-up materials depots where the ingredients are combined, dosed out from hoppers into special mixer trucks, and then driven to job sites. Because concrete grows too stiff to work after about 90 minutes, concrete production is highly local. There are more ready-mix batch plants in the United States than there are Burger King restaurants.

Batch plants can offer thousands of potential mixes, customized to fit the demands of different jobs. Concrete in a hundred-story building differs from that in a swimming pool. With flexibility to vary the quality of sand and the size of the stone—and to add a wide variety of chemicals—batch plants have more tricks for lowering carbon emissions than any cement plant does.

Cement plants that capture carbon

China accounts for more than half of the concrete produced and used in the world, but companies there are hard to track. Outside of China, the top three multinational cement producers—Holcim, Heidelberg Materials in Germany, and Cemex in Mexico—have launched pilot programs to snare CO2 emissions before they escape and then bury the waste deep underground. To do that, they’re taking carbon capture and storage (CCS) technology already used in the oil and gas industry and bolting it onto their cement plants.

These pilot programs will need to scale up without eating profits—something that eluded the coal industry when it tried CCS decades ago. Tough questions also remain about where exactly to store billions of tonnes of CO2 safely, year after year.

The appeal of CCS for cement producers is that they can continue using existing plants while still making progress toward carbon neutrality, which trade associations have committed to reach by 2050. But with well over 3,000 plants around the world, adding CCS to all of them would take enormous investment. Currently less than 1 percent of the global supply is low-emission cement. Accenture, a consultancy, estimates that outfitting the whole industry for carbon capture could cost up to $900 billion.

“The economics of carbon capture is a monster,” says Rick Chalaturnyk, a professor of geotechnical engineering at the University of Alberta, in Edmonton, Canada, who studies carbon capture in the petroleum and power industries. He sees incentives for the early movers on CCS, however. “If Heidelberg, for example, wins the race to the lowest carbon, it will be the first [cement] company able to supply those customers that demand low-carbon products”—customers such as hyperscalers.

Though cement companies seem unlikely to invest their own billions in CCS, generous government subsidies have enticed several to begin pilot projects. Heidelberg has announced plans to start capturing CO2 from its Edmonton operations in late 2026, transforming it into what the company claims would be “the world’s first full-scale net-zero cement plant.” Exhaust gas will run through stations that purify the CO2 and compress it into a liquid, which will then be transported to chemical plants to turn it into products or to depleted oil and gas reservoirs for injection underground, where hopefully it will stay put for an epoch or two.

Chalaturnyk says that the scale of the Edmonton plant, which aims to capture a million tonnes of CO2 a year, is big enough to give CCS technology a reasonable test. Proving the economics is another matter. Half the $1 billion cost for the Edmonton project is being paid by the governments of Canada and Alberta.

ROADS TO CLEANER CONCRETE


As the big-data construction boom boosts the tech industry’s emissions, the reinvention of concrete could play a major role in solving the problem.

• CONCRETE TODAY Most of the greenhouse emissions from concrete come from the production of Portland cement, which requires high heat and releases carbon dioxide (CO2) directly into the air.

• CONCRETE TOMORROW At each stage of cement and concrete production, advances in ingredients, energy supplies, and uses of concrete promise to reduce waste and pollution.

The U.S. Department of Energy has similarly offered Heidelberg up to $500 million to help cover the cost of attaching CCS to its Mitchell, Ind., plant and burying up to 2 million tonnes of CO2 per year below the plant. And the European Union has gone even bigger, allocating nearly €1.5 billion ($1.6 billion) from its Innovation Fund to support carbon capture at cement plants in seven of its member nations.

These tests are encouraging, but they are all happening in rich countries, where demand for concrete peaked decades ago. Even in China, concrete production has started to flatten. All the growth in global demand through 2040 is expected to come from less-affluent countries, where populations are still growing and quickly urbanizing. According to projections by the Rhodium Group, cement production in those regions is likely to rise from around 30 percent of the world’s supply today to 50 percent by 2050 and 80 percent before the end of the century.

So will rich-world CCS technology translate to the rest of the world? I asked Juan Esteban Calle Restrepo, the CEO of Cementos Argos, the leading cement producer in Colombia, about that when I sat down with him recently at his office in Medellín. He was frank. “Carbon capture may work for the U.S. or Europe, but countries like ours cannot afford that,” he said.

Better cement through chemistry

As long as cement plants run limestone through fossil-fueled kilns, they will generate excessive amounts of carbon dioxide. But there may be ways to ditch the limestone—and the kilns. Labs and startups have been finding replacements for limestone, such as calcined kaolin clay and fly ash, that don’t release CO2 when heated. Kaolin clays are abundant around the world and have been used for centuries in Chinese porcelain and more recently in cosmetics and paper. Fly ash—a messy, toxic by-product of coal-fired power plants—is cheap and still widely available, even as coal power dwindles in many regions.

At the Swiss Federal Institute of Technology Lausanne (EPFL), Karen Scrivener and colleagues developed cements that blend calcined kaolin clay and ground limestone with a small portion of clinker. Calcining clay can be done at temperatures low enough that electricity from renewable sources can do the job. Various studies have found that the blend, known as LC3, can reduce overall emissions by 30 to 40 percent compared to those of Portland cement.

LC3 is also cheaper to make than Portland cement and performs as well for nearly all common uses. As a result, calcined clay plants have popped up across Africa, Europe, and Latin America. In Colombia, Cementos Argos is already producing more than 2 million tonnes of the stuff annually. The World Economic Forum’s Centre for Energy and Materials counts LC3 among the best hopes for the decarbonization of concrete. Wide adoption by the cement industry, the centre reckons, “can help prevent up to 500 million tonnes of CO2 emissions by 2030.”

In a win-win for the environment, fly ash can also be used as a building block for low- and even zero-emission concrete, and the high heat of processing neutralizes many of the toxins it contains. Ancient Romans used volcanic ash to make slow-setting but durable concrete: The Pantheon, built nearly two millennia ago with ash-based cement, is still in great shape.

Coal fly ash is a cost-effective ingredient that has reactive properties similar to those of Roman cement and Portland cement. Many concrete plants already add fresh fly ash to their concrete mixes, replacing 15 to 35 percent of the cement. The ash improves the workability of the concrete, and though the resulting concrete is not as strong for the first few months, it grows stronger than regular concrete as it ages, like the Pantheon.

University labs have tested concretes made entirely with fly ash and found that some actually outperform the standard variety. More than 15 years ago, researchers at Montana State University used concrete made with 100 percent fly ash in the floors and walls of a credit union and a transportation research center. But performance depends greatly on the chemical makeup of the ash, which varies from one coal plant to the next, and on following a tricky recipe. The decommissioning of coal-fired plants has also been making fresh fly ash scarcer and more expensive.

At Sublime Systems’ pilot plant in Massachusetts, the company is using electrochemistry instead of heat to produce lime silicate cements that can replace Portland cement. Tony Luong

That has spurred new methods to treat and use fly ash that’s been buried in landfills or dumped into ponds. Such industrial burial grounds hold enough fly ash to make concrete for decades, even after every coal plant shuts down. Utah-based Eco Material Technologies is now producing cements that include both fresh and recovered fly ash as ingredients. The company claims it can replace up to 60 percent of the Portland cement in concrete—and that a new variety, suitable for 3D printing, can substitute entirely for Portland cement.

Hive 3D Builders, a Houston-based startup, has been feeding that low-emissions concrete into robots that are printing houses in several Texas developments. “We are 100 percent Portland cement–free,” says Timothy Lankau, Hive 3D’s CEO. “We want our homes to last 1,000 years.”

Sublime Systems, a startup spun out of MIT by battery scientists, uses electrochemistry rather than heat to make low-carbon cement from rocks that don’t contain carbon. Similar to a battery, Sublime’s process uses a voltage between an anode and a cathode to create a pH gradient that isolates silicates and reactive calcium, in the form of lime (CaO). The company mixes those ingredients together to make a cement with no fugitive carbon, no kilns or furnaces, and binding power comparable to that of Portland cement. With the help of $87 million from the U.S. Department of Energy, Sublime is building a plant in Holyoke, Mass., that will be powered almost entirely by hydroelectricity. Recently the company was tapped to provide concrete for a major offshore wind farm planned off the coast of Martha’s Vineyard.

Software takes on the hard problem of concrete

It is unlikely that any one innovation will allow the cement industry to hit its target of carbon neutrality before 2050. New technologies take time to mature, scale up, and become cost-competitive. In the meantime, says Philippe Block, a structural engineer at ETH Zurich, smart engineering can reduce carbon emissions through the leaner use of materials.

His research group has developed digital design tools that make clever use of geometry to maximize the strength of concrete structures while minimizing their mass. The team’s designs start with the soaring architectural elements of ancient temples, cathedrals, and mosques—in particular, vaults and arches—which they miniaturize and flatten and then 3D print or mold inside concrete floors and ceilings. The lightweight slabs, suitable for the upper stories of apartment and office buildings, use much less concrete and steel reinforcement and have a CO2 footprint that’s reduced by 80 percent.

There’s hidden magic in such lean design. In multistory buildings, much of the mass of concrete is needed just to hold the weight of the material above it. The carbon savings of Block’s lighter slabs thus compound, because the size, cost, and emissions of a building’s conventional-concrete elements are slashed.

Vaulted, a Swiss startup, uses digital design tools to minimize the concrete in floors and ceilings, cutting their CO2 footprint by 80 percent. Vaulted

In Dübendorf, Switzerland, a wildly shaped experimental building has floors, roofs, and ceilings created by Block’s structural system. Vaulted, a startup spun out of ETH, is engineering and fabricating the lighter floors of a 10-story office building under construction in Zug, Switzerland.

That country has also been a leader in smart ways to recycle and reuse concrete, rather than simply landfilling demolition rubble. This is easier said than done—concrete is tough stuff, riddled with rebar. But there’s an economic incentive: Raw materials such as sand and limestone are becoming scarcer and more costly. Some jurisdictions in Europe now require that new buildings be made from recycled and reused materials. The new addition of the Kunsthaus Zürich museum, a showcase of exquisite Modernist architecture, uses recycled material for all but 2 percent of its concrete.

As new policies goose demand for recycled materials and threaten to restrict future use of Portland cement across Europe, Holcim has begun building recycling plants that can reclaim cement clinker from old concrete. It recently turned the demolition rubble from some 1960s apartment buildings outside Paris into part of a 220-unit housing complex—touted as the first building made from 100 percent recycled concrete. The company says it plans to build concrete recycling centers in every major metro area in Europe and, by 2030, to include 30 percent recycled material in all of its cement.

Further innovations in low-carbon concrete are certain to come, particularly as the powers of machine learning are applied to the problem. Over the past decade, the number of research papers reporting on computational tools to explore the vast space of possible concrete mixes has grown exponentially. Much as AI is being used to accelerate drug discovery, the tools learn from huge databases of proven cement mixes and then apply their inferences to evaluate untested mixes.
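
To give a flavor of how such tools work, here is a toy sketch of data-driven mix screening; the data are synthetic and the model and coefficients are my own stand-ins for the far larger proprietary datasets and models the industry uses:

```python
# Toy version of data-driven mix screening: fit a model on known mixes,
# then rank untested candidates by predicted strength per unit of embodied CO2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Columns: cement, fly ash, water, aggregate (kg per m^3) -- synthetic stand-ins.
known_mixes = rng.uniform([150, 0, 120, 1600], [450, 200, 200, 2000], size=(200, 4))
# Synthetic 28-day strength "measurements" (MPa) for the known mixes.
strength = (0.12 * known_mixes[:, 0] + 0.05 * known_mixes[:, 1]
            - 0.15 * known_mixes[:, 2] + rng.normal(0, 2, 200) + 30)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(known_mixes, strength)

candidates = rng.uniform([150, 0, 120, 1600], [450, 200, 200, 2000], size=(1000, 4))
pred_strength = model.predict(candidates)
embodied_co2 = 0.9 * candidates[:, 0] + 0.03 * candidates[:, 1]  # rough kg CO2 per m^3
best = candidates[np.argmax(pred_strength / embodied_co2)]
print("Most CO2-efficient candidate mix (cement, fly ash, water, aggregate):", best.round(1))
```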

Researchers from the University of Illinois and Chicago-based Ozinga, one of the largest private concrete producers in the United States, recently worked with Meta to feed 1,030 known concrete mixes into an AI. The project yielded a novel mix that will be used for sections of a data-center complex in DeKalb, Ill. The AI-derived concrete has a carbon footprint 40 percent lower than the conventional concrete used on the rest of the site. Ryan Cialdella, Ozinga’s vice president of innovation, smiles as he notes the virtuous circle: AI systems that live in data centers can now help cut emissions from the concrete that houses them.

A sustainable foundation for the information age

Cheap, durable, and abundant yet unsustainable, concrete made with Portland cement has been one of modern technology’s Faustian bargains. The built world is on track to double in floor space by 2060, adding 230,000 km2, or more than half the area of California. Much of that will house the 2 billion more people we are likely to add to our numbers. As global transportation, telecom, energy, and computing networks grow, their new appendages will rest upon concrete. But if concrete doesn’t change, we will perversely be forced to produce even more concrete to protect ourselves from the coming climate chaos, with its rising seas, fires, and extreme weather.

The AI-driven boom in data centers is a strange bargain of its own. In the future, AI may help us live even more prosperously, or it may undermine our freedoms, civilities, employment opportunities, and environment. But solutions to the bad climate bargain that AI’s data centers foist on the planet are at hand, if there’s a will to deploy them. Hyperscalers and governments are among the few organizations with the clout to rapidly change what kinds of cement and concrete the world uses, and how those are made. With a pivot to sustainability, concrete’s unique scale makes it one of the few materials that could do most to protect the world’s natural systems. We can’t live without concrete—but with some ambitious reinvention, we can thrive with it.

This article was updated on 04 November 2024.




academic and careers

New Carrier Fluid Makes Hydrogen Way Easier to Transport



Imagine pulling up to a refueling station and filling your vehicle’s tank with liquid hydrogen, as safe and convenient to handle as gasoline or diesel, without the need for high-pressure tanks or cryogenic storage. This vision of a sustainable future could become a reality if a Calgary, Canada–based company, Ayrton Energy, can scale up its innovative method of hydrogen storage and distribution. Ayrton’s technology could make hydrogen a viable, one-to-one replacement for fossil fuels in existing infrastructure like pipelines, fuel tankers, rail cars, and trucks.

The company’s approach is to use liquid organic hydrogen carriers (LOHCs) to make it easier to transport and store hydrogen. The method chemically bonds hydrogen to carrier molecules, which absorb hydrogen molecules and make them more stable—kind of like hydrogenating cooking oil to produce margarine.

A researcher pours a sample of Ayrton’s LOHC fluid into a vial. Ayrton Energy

The approach would allow liquid hydrogen to be transported and stored in ambient conditions, rather than in the high-pressure, cryogenic tanks (to hold it at temperatures below -252 °C) currently required for keeping hydrogen in liquid form. It would also be a big improvement on gaseous hydrogen, which is highly volatile and difficult to keep contained.

Founded in 2021, Ayrton is one of several companies across the globe developing LOHCs, including Japan’s Chiyoda and Mitsubishi, Germany’s Covalion, and China’s Hynertech. But toxicity, energy density, and input energy issues have limited LOHCs as contenders for making liquid hydrogen feasible. Ayrton says its formulation eliminates these trade-offs.

Safe, Efficient Hydrogen Fuel for Vehicles

Conventional LOHC technologies used by most of the aforementioned companies rely on substances such as toluene, which forms methylcyclohexane when hydrogenated. These carriers pose safety risks due to their flammability and volatility. Hydrogenious LOHC Technologies in Erlangen, Germany, and other hydrogen fuel companies have shifted toward dibenzyltoluene, a more stable carrier that holds more hydrogen per unit volume than methylcyclohexane, though it requires higher temperatures (and thus more energy) to bind and release the hydrogen. Dibenzyltoluene hydrogenation occurs at between 3 and 10 megapascals (30 and 100 bar) and 200–300 °C, compared with 10 MPa (100 bar) and just under 200 °C for methylcyclohexane.

Ayrton’s proprietary oil-based hydrogen carrier not only captures and releases hydrogen with less input energy than is required for other LOHCs, but also stores more hydrogen than methylcyclohexane can—55 kilograms per cubic meter compared with methylcyclohexane’s 50 kg/m³. Dibenzyltoluene holds more hydrogen per unit volume (up to 65 kg/m³), but Ayrton’s approach to infusing the carrier with hydrogen atoms promises to cost less. Hydrogenation or dehydrogenation with Ayrton’s carrier fluid occurs at 0.1 megapascal (1 bar) and about 100 °C, says founder and CEO Natasha Kostenuk. And as with the other LOHCs, after hydrogenation it can be transported and stored at ambient temperatures and pressures.
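
To put those volumetric figures side by side, here is a rough comparison of my own; the hydrogen heating value of about 120 megajoules per kilogram is an added assumption, not a number from Ayrton:

```python
# Rough comparison of hydrogen stored per cubic meter of carrier, and the
# corresponding energy content (lower heating value of H2 ~= 120 MJ/kg, assumed).
H2_LHV_MJ_PER_KG = 120.0

carriers_kg_h2_per_m3 = {
    "methylcyclohexane": 50,
    "Ayrton carrier (claimed)": 55,
    "dibenzyltoluene": 65,
}

for name, kg_h2 in carriers_kg_h2_per_m3.items():
    energy_gj = kg_h2 * H2_LHV_MJ_PER_KG / 1000.0
    print(f"{name}: {kg_h2} kg H2/m^3, roughly {energy_gj:.1f} GJ of hydrogen per m^3")
```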

“Judges described [Ayrton’s approach] as a critical technology for the deployment of hydrogen at large scale.” —Katie Richardson, National Renewable Energy Lab

Ayrton’s LOHC fluid is as safe to handle as margarine, but it’s still a chemical, says Kostenuk. “I wouldn’t drink it. If you did, you wouldn’t feel very good. But it’s not lethal,” she says.

Kostenuk and fellow Ayrton cofounder Brandy Kinkead (who serves as the company’s chief technical officer) were originally trying to bring hydrogen generators to market to fill gaps in the electrical grid. “We were looking for fuel cells and hydrogen storage. Fuel cells were easy to find, but we couldn’t find a hydrogen storage method or medium that would be safe and easy to transport to fuel our vision of what we were trying to do with hydrogen generators,” Kostenuk says. During the search, they came across LOHC technology but weren’t satisfied with the trade-offs demanded by existing liquid hydrogen carriers. “We had the idea that we could do it better,” she says. The duo pivoted, adjusting their focus from hydrogen generators to hydrogen storage solutions.

“Everybody gets excited about hydrogen production and hydrogen end use, but they forget that you have to store and manage the hydrogen,” Kostenuk says. Incompatibility with current storage and distribution has been a barrier to adoption, she says. “We’re really excited about being able to reuse existing infrastructure that’s in place all over the world.” Ayrton’s hydrogenated liquid has fuel-cell-grade (99.999 percent) hydrogen purity, so there’s no advantage in using pure liquid hydrogen with its need for subzero temperatures, according to the company.

The main challenge the company faces is the set of issues that come along with any technology scaling up from pilot-stage production to commercial manufacturing, says Kostenuk. “A crucial part of that is aligning ourselves with the right manufacturing partners along the way,” she notes.

Asked about how Ayrton is dealing with some other challenges common to LOHCs, Kostenuk says Ayrton has managed to sidestep them. “We stayed away from materials that are expensive and hard to procure, which will help us avoid any supply chain issues,” she says. By performing the reactions at such low temperatures, Ayrton can get its carrier fluid to withstand 1,000 hydrogenation-dehydrogenation cycles before it no longer holds enough hydrogen to be useful. Conventional LOHCs are limited to a couple of hundred cycles before the high temperatures required for bonding and releasing the hydrogen break down the fluid and diminish its storage capacity, Kostenuk says.

Breakthrough in Hydrogen Storage Technology

In acknowledgement of what Ayrton’s nontoxic, oil-based carrier fluid could mean for the energy and transportation sectors, the U.S. National Renewable Energy Lab (NREL) at its annual Industry Growth Forum in May named Ayrton an “outstanding early-stage venture.” A selection committee of more than 180 climate tech and cleantech investors and industry experts chose Ayrton from a pool of more than 200 initial applicants, says Katie Richardson, group manager of NREL’s Innovation and Entrepreneurship Center, which organized the forum. The committee based its decision on the company’s innovation, market positioning, business model, team, next steps for funding, technology, capital use, and quality of pitch presentation. “Judges described Ayrton’s approach as a critical technology for the deployment of hydrogen at large scale,” Richardson says.

As a next step toward enabling hydrogen to push gasoline and diesel aside, “we’re talking with hydrogen producers who are right now offering their customers cryogenic and compressed hydrogen,” says Kostenuk. “If they offered LOHC, it would enable them to deliver across longer distances, in larger volumes, in a multimodal way.” The company is also talking to some industrial site owners who could use the hydrogenated LOHC for buffer storage to hold onto some of the energy they’re getting from clean, intermittent sources like solar and wind. Another natural fit, she says, is energy service providers that are looking for a reliable method of seasonal storage beyond what batteries can offer. The goal is to eventually scale up enough to become the go-to alternative (or perhaps the standard) fuel for cars, trucks, trains, and ships.




academic and careers

Honor a Loved One With an IEEE Foundation Memorial Fund



As the philanthropic partner of IEEE, the IEEE Foundation expands the organization’s charitable body of work by inspiring philanthropic engagement that ignites a donor’s innermost interests and values.

One way the Foundation does so is by partnering with IEEE units to create memorial funds, which pay tribute to members, family, friends, teachers, professors, students, and others. This type of giving honors someone special while also supporting future generations of engineers and celebrating innovation.

Below are three recently created memorial funds that not only have made an impact on their beneficiaries and perpetuated the legacy of the namesake but also have a deep meaning for those who launched them.

EPICS in IEEE Fischer Mertel Community of Projects

The EPICS in IEEE Fischer Mertel Community of Projects was established to support projects “designed to inspire multidisciplinary teams of engineering students to collaborate and engineer solutions to address local community needs.”

The fund was created by the children of Joe Fischer and Herb Mertel to honor their fathers’ passion for mentoring students. Longtime IEEE members, Fischer and Mertel were active with the IEEE Electromagnetic Compatibility Society. Fischer was the society’s 1972 president and served on its board of directors for six years. Mertel served on the society’s board from 1979 to 1983 and again from 1989 to 1993.

“The EPICS in IEEE Fischer Mertel Community of Projects was established to inspire and support outstanding engineering ideas and efforts that help communities worldwide,” says Tina Mertel, Herb’s daughter. “Joe Fischer and my father had a lifelong friendship and excelled as engineering leaders and founders of their respective companies [Fischer Custom Communications and EMACO]. I think that my father would have been proud to know that their friendship and work are being honored in this way.”

The nine projects supported thus far have the potential to impact more than 104,000 people because of the work and collaboration of 190 students worldwide. The projects funded are intended to represent at least two of the EPICS in IEEE’s focus categories: education and outreach; human services; environmental; and access and abilities.


IEEE AESS Michael C. Wicks Radar Student Travel Grant

The IEEE Michael C. Wicks Radar Student Travel Grant was established by IEEE Fellow Michael Wicks prior to his death in 2022. The grant provides travel support for graduate students who are the primary authors on a paper being presented at the annual IEEE Radar Conference. Wicks was an electronics engineer and a radar industry leader who was known for developing knowledge-based space-time adaptive processing. He believed in investing in the next generation, and he wanted to provide an opportunity for that to happen.

Ten graduate students have been awarded the Wicks grant to date. This year two students from Region 8 (Africa, Europe, Middle East) and two students from Region 10 (Asia and Pacific) were able to travel to Denver to attend the IEEE Radar Conference and present their research. The papers they presented are “Target Shape Reconstruction From Multi-Perspective Shadows in Drone-Borne SAR Systems” and “Design of Convolutional Neural Networks for Classification of Ships from ISAR Images.”

Life Fellow Fumio Koyama and IEEE Fellow Constance J. Chang-Hasnain proudly display their IEEE Nick Holonyak Jr. Medal for Semiconductor Optoelectronic Technologies at this year’s IEEE Honors Ceremony. They are accompanied by IEEE President-Elect Kathleen Kramer and IEEE President Tom Coughlin. Photo: Robb Cohen

IEEE Nick Holonyak Jr. Medal for Semiconductor Optoelectronic Technologies

The IEEE Nick Holonyak Jr. Medal for Semiconductor Optoelectronic Technologies was created with a memorial fund supported by some of Holonyak’s former graduate students to honor his work as a professor and mentor. Presented on behalf of the IEEE Board of Directors, the medal recognizes outstanding contributions to semiconductor optoelectronic devices and systems including high-energy-efficiency semiconductor devices and electronics.

Holonyak was a prolific inventor and longtime professor of electrical engineering and physics. In 1962, while working as a scientist at General Electric’s Advanced Semiconductor Laboratory in Syracuse, N.Y., he invented the first practical visible-spectrum LED and laser diode. His innovations are the basis of the devices now used in high-efficiency light bulbs and laser diodes. He left GE in 1963 to join the University of Illinois Urbana-Champaign as a professor of electrical engineering and physics at the invitation of John Bardeen, his Ph.D. advisor and a two-time Nobel Prize winner in physics. Holonyak retired from UIUC in 2013 but continued research collaborations at the university with young faculty members.

“In addition to his remarkable technical contributions, he was an excellent teacher and mentor to graduate students and young electrical engineers,” says Russell Dupuis, one of his doctoral students. “The impact of his innovations has improved the lives of most people on the earth, and this impact will only increase with time. It was my great honor to be one of his students and to help create this important IEEE medal to ensure that his work will be remembered in the future.”

The award was presented for the first time at this year’s IEEE Honors Ceremony, in Boston, to IEEE Fellow Constance Chang-Hasnain and Life Fellow Fumio Koyama for “pioneering contributions to vertical cavity surface-emitting laser (VCSEL) and VCSEL-based photonics for optical communications and sensing.”

Establishing a memorial fund through the IEEE Foundation is a gratifying way to recognize someone who has touched your life while also advancing technology for humanity. If you are interested in learning more about memorial and tribute funds, reach out to the IEEE Foundation team: donate@ieee.org.




academic and careers

Why the Art of Invention Is Always Being Reinvented



Every invention begins with a problem—and the creative act of seeing a problem where others might just see unchangeable reality. For one 5-year-old, the problem was simple: She liked to have her tummy rubbed as she fell asleep. But her mom, exhausted from working two jobs, often fell asleep herself while putting her daughter to bed. “So [the girl] invented a teddy bear that would rub her belly for her,” explains Stephanie Couch, executive director of the Lemelson MIT Program. Its mission is to nurture the next generation of inventors and entrepreneurs.

Anyone can learn to be an inventor, Couch says, given the right resources and encouragement. “Invention doesn’t come from some innate genius, it’s not something that only really special people get to do,” she says. Her program creates invention-themed curricula for U.S. classrooms, ranging from kindergarten to community college.

This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

We’re biased, but we hope that little girl grows up to be an engineer. By the time she comes of age, the act of invention may be something entirely new—reflecting the adoption of novel tools and the guiding forces of new social structures. Engineers, with their restless curiosity and determination to optimize the world around them, are continuously in the process of reinventing invention.

In this special issue, we bring you stories of people who are in the thick of that reinvention today. IEEE Spectrum is marking 60 years of publication this year, and we’re celebrating by highlighting both the creative act and the grindingly hard engineering work required to turn an idea into something world changing. In these pages, we take you behind the scenes of some awe-inspiring projects to reveal how technology is being made—and remade—in our time.

Inventors Are Everywhere

Invention has long been a democratic process. The economist B. Zorina Khan of Bowdoin College has noted that the U.S. Patent and Trademark Office has always endeavored to allow essentially anyone to try their hand at invention. From the beginning, the patent examiners didn’t care who the applicants were—anyone with a novel and useful idea who could pay the filing fee was officially an inventor.

This ethos continues today. It’s still possible for an individual to launch a tech startup from a garage or go on “Shark Tank” to score investors. The Swedish inventor Simone Giertz, for example, made a name for herself with YouTube videos showing off her hilariously bizarre contraptions, like an alarm clock with an arm that slapped her awake. The MIT innovation scholar Eric von Hippel has spotlighted today’s vital ecosystem of “user innovation,” in which inventors such as Giertz are motivated by their own needs and desires rather than ambitions of mass manufacturing.

But that route to invention gets you only so far, and the limits of what an individual can achieve have become starker over time. To tackle some of the biggest problems facing humanity today, inventors need a deep-pocketed government sponsor or corporate largess to muster the equipment and collective human brainpower required.

When we think about the challenges of scaling up, it’s helpful to remember Alexander Graham Bell and his collaborator Thomas Watson. “They invent this cool thing that allows them to talk between two rooms—so it’s a neat invention, but it’s basically a gadget,” says Eric Hintz, a historian of invention at the Smithsonian Institution. “To go from that to a transcontinental long-distance telephone system, they needed a lot more innovation on top of the original invention.” To scale their invention, Hintz says, Bell and his colleagues built the infrastructure that eventually evolved into Bell Labs, which became the standard-bearer for corporate R&D.

In this issue, we see engineers grappling with challenges of scale in modern problems. Consider the semiconductor technology supported by the U.S. CHIPS and Science Act, a policy initiative aimed at bolstering domestic chip production. Beyond funding manufacturing, it also provides US $11 billion for R&D, including three national centers where companies can test and pilot new technologies. As one startup tells the tale, this infrastructure will drastically speed up the lab-to-fab process.

And then there are atomic clocks, the epitome of precision timekeeping. When researchers decided to build a commercial version, they had to shift their perspective, taking a sprawling laboratory setup and reimagining it as a portable unit fit for mass production and the rigors of the real world. They had to stop optimizing for precision and instead choose the most robust laser, and the atom that would go along with it.

These technology efforts benefit from infrastructure, brainpower, and cutting-edge new tools. One tool that may become ubiquitous across industries is artificial intelligence—and it’s a tool that could further expand access to the invention arena.

What if you had a team of indefatigable assistants at your disposal, ready to scour the world’s technical literature for material that could spark an idea, or to iterate on a concept 100 times before breakfast? That’s the promise of today’s generative AI. The Swiss company Iprova is exploring whether its AI tools can automate “eureka” moments for its clients, corporations that are looking to beat their competitors to the next big idea. The serial entrepreneur Steve Blank similarly advises young startup founders to embrace AI’s potential to accelerate product development; he even imagines testing product ideas on digital twins of customers. Although it’s still early days, generative AI offers inventors tools that have never been available before.

Measuring an Invention’s Impact

If AI accelerates the discovery process, and many more patentable ideas come to light as a result, then what? As it is, more than a million patents are granted every year, and we struggle to identify the ones that will make a lasting impact. Bryan Kelly, an economist at the Yale School of Management, and his collaborators made an attempt to quantify the impact of patents by doing a technology-assisted deep dive into U.S. patent records dating back to 1840. Using natural language processing, they identified patents that introduced novel phrasing that was then repeated in subsequent patents—an indicator of radical breakthroughs. For example, Elias Howe Jr.’s 1846 patent for a sewing machine wasn’t closely related to anything that came before but quickly became the basis of future sewing-machine patents.
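To make the idea concrete, here is a minimal sketch of that kind of text-based novelty scoring, using off-the-shelf TF-IDF similarity as a stand-in for the study’s actual measures; the function name, the toy snippets, and the forward/backward ratio are illustrative assumptions, not Kelly and colleagues’ published pipeline.

```python
# Illustrative sketch: score a patent as a potential breakthrough when later
# patents echo its language (high forward similarity) while prior patents do
# not (low backward similarity). Not the authors' exact method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def breakthrough_score(patent_text, prior_texts, later_texts):
    """Return a forward/backward similarity ratio for one patent (toy example)."""
    corpus = prior_texts + [patent_text] + later_texts
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    idx = len(prior_texts)                      # row of the focal patent
    sims = cosine_similarity(tfidf[idx], tfidf).ravel()
    backward = sims[:idx].mean() if prior_texts else 0.0
    forward = sims[idx + 1:].mean() if later_texts else 0.0
    return forward / (backward + 1e-9)          # >1 suggests novel *and* influential

# Toy usage with made-up text snippets:
prior = ["hand stitching apparatus with needle and thread guide"]
later = ["sewing machine with lockstitch shuttle",
         "sewing machine feed mechanism and shuttle carrier"]
focal = "sewing machine forming a lockstitch with an eye-pointed needle and shuttle"
print(round(breakthrough_score(focal, prior, later), 2))
```

A patent that looks unlike the prior art but is echoed by later filings scores high; one that merely restates earlier language scores low.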

Another foundational patent was the one awarded to an English bricklayer in 1824 for the invention of Portland cement, which is still the key ingredient in most of the world’s concrete. As Ted C. Fishman describes in his fascinating inquiry into the state of concrete today, this seemingly stable industry is in upheaval because of its heavy carbon emissions. The AI boom is fueling a construction boom in data centers, and all those buildings require billions of tons of concrete. Fishman takes readers into labs and startups where researchers are experimenting with climate-friendly formulations of cement and concrete. Who knows which of those experiments will result in a patent that echoes down the ages?

Some engineers start their invention process by thinking about the impact they want to make on the world. The eminent Indian technologist Raghunath Anant Mashelkar, who has popularized the idea of “Gandhian engineering,” advises inventors to work backward from “what we want to achieve for the betterment of humanity,” and to create problem-solving technologies that are affordable, durable, and not only for the elite.

Durability matters: Invention isn’t just about creating something brand new. It’s also about coming up with clever ways to keep an existing thing going. Such is the case with the Hubble Space Telescope. Originally designed to last 15 years, it’s been in orbit for twice that long and has actually gotten better with age, because engineers designed the satellite to be fixable and upgradable in space.

For all the invention activity around the globe—the World Intellectual Property Organization says that 3.5 million applications for patents were filed in 2022—it may be harder to invent something useful than it used to be. Not because “everything that can be invented has been invented,” as in the apocryphal quote attributed to the unfortunate head of the U.S. patent office in 1889. Rather, because so much education and experience are required before an inventor can even understand all the dimensions of the door they’re trying to crack open, much less come up with a strategy for doing so. Ben Jones, an economist at Northwestern’s Kellogg School of Management, has shown that the average age of great technological innovators rose by about six years over the course of the 20th century. “Great innovation is less and less the provenance of the young,” Jones concluded.

Consider designing something as complex as a nuclear fusion reactor, as Tom Clynes describes in “An Off-the-Shelf Stellarator.” Fusion researchers have spent decades trying to crack the code of commercially viable fusion—it’s more akin to a calling than a career. If they succeed, they will unlock essentially limitless clean energy with no greenhouse gas emissions or meltdown danger. That’s the dream that the physicists in a lab in Princeton, N.J., are chasing. But before they even started, they first had to gain an intimate understanding of all the wrong ways to build a fusion reactor. Once the team was ready to proceed, what they created was an experimental reactor that accelerates the design-build-test cycle. With new AI tools and unprecedented computational power, they’re now searching for the best ways to create the magnetic fields that will confine the plasma within the reactor. Already, two startups have spun out of the Princeton lab, both seeking a path to commercial fusion.

The stellarator story and many other articles in this issue showcase how one innovation leads to the next, and how one invention can enable many more. The legendary Dean Kamen, best known for mechanical devices like the Segway and the prosthetic “Luke” arm, is now trying to push forward the squishy world of biological manufacturing. In an interview, Kamen explains how his nonprofit is working on the infrastructure—bioreactors, sensors, and controls—that will enable companies to explore the possibilities of growing replacement organs. You could say that he’s inventing the launchpad so others can invent the rockets.

Sometimes everyone in a research field knows where the breakthrough is needed, but that doesn’t make it any easier to achieve. Case in point: the quest for a household humanoid robot that can perform domestic chores, switching effortlessly from frying an egg to folding laundry. Roboticists need better learning software that will enable their bots to navigate the uncertainties of the real world, and they also need cheaper and lighter actuators. Major advances in these two areas would unleash a torrent of creativity and may finally bring robot butlers into our homes.

And maybe the future roboticists who make those breakthroughs will have cause to thank Marina Umaschi Bers, a technologist at Boston College who cocreated the ScratchJr programming language and the KIBO robotics kit to teach kids the basics of coding and robotics in entertaining ways. She sees engineering as a playground, a place for children to explore and create, to be goofy or grandiose. If today’s kindergartners learn to think of themselves as inventors, who knows what they’ll create tomorrow?




academic and careers

Video Friday: Trick or Treat, Atlas



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

Enjoy today’s videos!

We’re hoping to get more on this from Boston Dynamics, but if you haven’t seen it yet, here’s electric Atlas doing something productive (and autonomous!).

And why not do it in a hot dog costume for Halloween, too?

[ Boston Dynamics ]

Ooh, this is exciting! Aldebaran is getting ready to release a seventh generation of NAO!

[ Aldebaran ]

Okay I found this actually somewhat scary, but Happy Halloween from ANYbotics!

[ ANYbotics ]

Happy Halloween from Clearpath!

[ Clearpath Robotics Inc. ]

Another genuinely freaky Happy Halloween, from Boston Dynamics!

[ Boston Dynamics ]

This “urban opera” by Compagnie La Machine took place last weekend in Toulouse, featuring some truly enormous fantastical robots.

[ Compagnie La Machine ]

Thanks, Thomas!

Impressive dismount from Deep Robotics’ DR01.

[ Deep Robotics ]

Cobot juggling from Daniel Simu.

[ Daniel Simu ]

Adaptive-morphology multirotors exhibit superior versatility and task-specific performance compared to traditional multirotors owing to their functional morphological adaptability. However, a notable challenge lies in the contrasting requirements of locking each morphology for flight controllability and efficiency while permitting low-energy reconfiguration. A novel design approach is proposed for reconfigurable multirotors utilizing soft multistable composite laminate airframes.

[ Environmental Robotics Lab paper ]

This is a pitching demonstration by the new Torobo. The new Torobo is lighter than the previous version, enabling faster motions such as throwing a ball. The new model will be available in Japan in March 2025 and overseas from October 2025 onward.

[ Tokyo Robotics ]

I’m not sure what makes this “the world’s best robotic hand for manipulation research,” but it seems solid enough.

[ Robot Era ]

And now, picking a micro cat.

[ RoCogMan Lab ]

When Arvato’s Louisville, Ky., staff wanted a robotics system that could unload freight with greater speed and safety, Boston Dynamics’ Stretch robot stood out. Stretch is a first-of-its-kind mobile robot designed specifically to unload boxes from trailers and shipping containers, freeing up employees to focus on more meaningful tasks in the warehouse. Arvato acquired its first Stretch system this year, and the robot’s impact was immediate.

[ Boston Dynamics ]

NASA’s Perseverance Mars rover used its Mastcam-Z camera to capture the silhouette of Phobos, one of the two Martian moons, as it passed in front of the Sun on Sept. 30, 2024, the 1,285th Martian day, or sol, of the mission.

[ NASA ]

Students from Howard University, Morehouse College, and Berea College joined University of Michigan robotics students in online Robotics 102 courses for the fall ’23 and winter ’24 semesters. The class is part of the distributed teaching collaborative, a co-teaching initiative started in 2020 aimed at providing cutting-edge robotics courses to students who would not normally have access to them at their home universities.

[ University of Michigan Robotics ]

Discover the groundbreaking projects and cutting-edge technology at the Robotics and Automation Summer School (RASS) hosted by Los Alamos National Laboratory. In this exclusive behind-the-scenes video, students from top universities work on advanced robotics in disciplines such as AI, automation, machine learning, and autonomous systems.

[ Los Alamos National Laboratory ]

This week’s Carnegie Mellon University Robotics Institute Seminar is from Princeton University’s Anirudha Majumdar, on “Robots That Know When They Don’t Know.”

Foundation models from machine learning have enabled rapid advances in perception, planning, and natural language understanding for robots. However, current systems lack any rigorous assurances when required to generalize to novel scenarios. For example, perception systems can fail to identify or localize unfamiliar objects, and large language model (LLM)-based planners can hallucinate outputs that lead to unsafe outcomes when executed by robots. How can we rigorously quantify the uncertainty of machine learning components such that robots know when they don’t know and can act accordingly?

[ Carnegie Mellon University Robotics Institute ]
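For readers curious what “knowing when you don’t know” can look like in practice, below is a minimal sketch of split conformal prediction, one widely used calibration technique in this area. It is an illustration built on assumed names and toy data, not necessarily the method presented in the seminar.

```python
# Minimal sketch of split conformal prediction: calibrate a threshold on a
# held-out set so prediction sets cover the true label with probability
# roughly 1 - alpha, then flag inputs whose set has more than one option.
import numpy as np

def calibrate_threshold(cal_scores, cal_labels, alpha=0.1):
    """cal_scores: (n, k) softmax outputs on calibration data; returns cutoff."""
    n = len(cal_labels)
    # Nonconformity = 1 - score assigned to the true label.
    nonconf = 1.0 - cal_scores[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1 - alpha)) / n      # finite-sample quantile level
    return np.quantile(nonconf, min(q, 1.0))

def prediction_set(scores, threshold):
    """All labels whose nonconformity falls below the calibrated threshold."""
    return np.where(1.0 - scores <= threshold)[0]

# Toy usage: a 3-way object classifier with 200 calibration examples.
rng = np.random.default_rng(0)
cal_scores = rng.dirichlet([2, 1, 1], size=200)
cal_labels = rng.integers(0, 3, size=200)
tau = calibrate_threshold(cal_scores, cal_labels, alpha=0.1)

test_scores = np.array([0.5, 0.3, 0.2])
options = prediction_set(test_scores, tau)
if len(options) > 1:
    print("Ambiguous -- robot should ask for help:", options)
else:
    print("Confident prediction:", options)
```

If the calibrated prediction set contains more than one label, the perception module can flag the scene as ambiguous and defer to a human instead of committing to a guess.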