
CDC investigating 21 outbreaks

The Centers for Disease Control and Prevention typically coordinates between 17 and 36 investigations of foodborne illnesses involving multiple states each week.  A report is posted weekly, but does not include any information about where the outbreaks are occurring, what foods are involved, or how many patients have been identified.... Continue Reading





Montana officials investigating new outbreak of Salmonella infections at schools

Montana public health officials are investigating an outbreak of infections caused by Salmonella. The Cascade City-County Health Department in Great Falls is reporting that six students at Sacajawea and Valley View elementary schools have tested positive for the pathogen. A staff member at another school has also tested positive. There... Continue Reading





Large EU-wide Salmonella outbreak linked to tomatoes from Italy

A multi-country Salmonella outbreak in Europe linked to tomatoes from Italy has sickened more than 250 people. From January 2023 to November 2024, 266 confirmed cases of Salmonella Strathcona have been identified in 16 European countries and the United Kingdom. Croatia, Czech Republic, Denmark, Estonia, Finland, France, Ireland, Luxembourg, the... Continue Reading





AIMCo expansion, Alberta's investment focus were sources of tension before purge, sources say

Pension veterans say there was more going on behind the scenes than scrutiny of costs





Donald Trump and Elon Musk: Could U.S. election's odd couple unleash a small-government revolution?

The appointment of a political outsider like Musk could help Trump cut regulations and rein in government bureaucracy, even if the moves are unpopular





Posthaste: These are the best buyers' markets in Canadian real estate — for now

Listings outpace demand in Toronto and Vancouver





What a Trump presidency could mean for Canadian pocketbooks

Stock and bond markets are already reacting in anticipation of the changes





Posthaste: Why the great Trump stock rally might not be what it seems

Stocks have soared since Donald Trump was elected, but some say the run-up has more to do with bubbles than policy





What is going on at AIMCo? Find out more at Q&A Wednesday

The surprise firings at Alberta Investment Management raise many questions. We will try to answer them.





Labour minister moves to end port lockouts in Montreal and British Columbia

Dispute risks damage to Canada's reputation as reliable trade partner, says Steven Mackinnon





Over a dozen people rescued after wave throws boaters into Florida waters: authorities

Several people were rescued on Saturday after a wave damaged their vessel off the coast of Florida, sending some of the boaters into the water.




Trump picks former intel director John Ratcliffe to head the CIA

John Ratcliffe, who previously served as President-elect Trump's principal intelligence advisor, has now been picked by Trump to serve as director of the CIA.




Cowboys' Dak Prescott elects to have season-ending surgery to address injured hamstring, Jerry Jones says

The Dallas Cowboys quarterback got another opinion on his hamstring and decided that surgery would be the best way to address the injury.




Betsy DeVos joins Trump’s call to 'disband' the Department of Education and 're-empower' families

Former Education Secretary Betsy DeVos discusses what a second Trump term could mean for U.S. education on "The Story with Martha MacCallum."




Oregon man defaced synagogue with antisemitic graffiti multiple times: DOJ

A man from Eugene, Oregon, pleaded guilty to federal hate crimes on Tuesday after he spray-painted antisemitic graffiti on a synagogue in 2023 and 2024.




Trump nominates Pete Hegseth to serve as defense secretary

Former Fox News host Pete Hegseth has been selected by President-elect Trump to serve as his secretary of defense. Hegseth served in the U.S. Army.




Trump's picks so far: Here's who will be advising the new president

Since winning the election last week, President-elect Trump has begun evaluating and rolling out his Cabinet picks, with dozens of names jockeying for some two dozen positions.




SEAN HANNITY: America's massive bureaucracy will soon face a very heavy dose of reality again

Fox News host Sean Hannity says the "decentralization of power as our founders intended is very much on its way to DC."




Man arrested in NYC strangulation death of woman found outside Times Square hotel

Authorities arrested a man accused of strangling a woman outside a Times Square hotel who later died from her injuries, police said Tuesday.




Republican Gabe Evans wins Colorado's 8th Congressional District, beating incumbent Yadira Caraveo

The Associated Press has declared a winner in Colorado's 8th Congressional District, which has been one of the most closely watched races in the country.




Republican David Valadao wins re-election to US House in California's 22nd Congressional District

Incumbent Republican David Valadao is projected to emerge victorious in California's 22nd Congressional District. The highly contested race was considered to be a tossup.




Senator-elect Jim Justice's team clarifies report claiming famous pooch Babydog banned from Senate floor

Senator-elect Jim Justice's office has clarified reports that his famous pooch Babydog was banned from the Senate floor, saying Justice never intended to bring the dog onto the floor.




Mutiny threat sparks House GOP infighting ahead of Trump visit: 'Just more stupid'

House Republicans are once again at odds with one another after conservatives threatened to protest Speaker Johnson's bid to lead the conference again.




GREG GUTFELD: Trump's incoming 'border czar' doesn't care what people think of him

'Gutfeld!' panelists react to President-elect Trump's choice for 'border czar.'




Dolphins' Tyreek Hill floats latest theory about arrest near NFL stadium amid battle with wrist injury

In the first quarter of Monday's Dolphins-Rams game, ESPN reported that Tyreek Hill said a torn ligament in his wrist became worse after he was detained by police.




Apple in major software overhaul

Apple is making Siri smarter as part of updates that also promise improvements to Maps, Music and Photos.





Microsoft to acquire LinkedIn

Tech giant to pay $26.2 billion for social networking firm in surprise deal.





Andrew Ng: Unbiggen AI



Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI

I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, "Yes, we've been doing this for 20 years." This is the time to take the things that some individuals have been doing intuitively and make them a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
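The consistency tooling Ng describes can be sketched in a few lines. The following is a hypothetical illustration, not Landing AI's actual tooling: it collects every label assigned to each image and flags the images whose annotators disagree, so just that subset can be relabeled.

```python
from collections import defaultdict

def flag_inconsistent(annotations):
    """Given (image_id, label) pairs from multiple annotators, return
    the image IDs whose annotators disagree on the label."""
    labels_per_image = defaultdict(set)
    for image_id, label in annotations:
        labels_per_image[image_id].add(label)
    return sorted(img for img, labels in labels_per_image.items()
                  if len(labels) > 1)

# Invented example annotations: two annotators per image.
annotations = [
    ("img_01", "scratch"), ("img_01", "scratch"),
    ("img_02", "dent"),    ("img_02", "scratch"),  # annotators disagree
    ("img_03", "dent"),
]
print(flag_inconsistent(annotations))  # ['img_02']
```

In practice the flagged subset would be routed back to annotators with a clarified labeling guideline, rather than simply collecting more noisy data.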

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
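The slice-based error analysis behind that anecdote can be sketched as follows, with invented evaluation records: tag each example with metadata (here, the background-noise condition), compute per-slice error rates, and target data collection at the worst slice.

```python
from collections import defaultdict

def error_rate_by_slice(examples):
    """Compute the error rate for each metadata slice, so the weakest
    slice can be targeted for more data collection."""
    totals, errors = defaultdict(int), defaultdict(int)
    for slice_tag, is_error in examples:
        totals[slice_tag] += 1
        errors[slice_tag] += is_error
    return {tag: errors[tag] / totals[tag] for tag in totals}

# Hypothetical records: (noise condition, did the model err? 1/0)
examples = [("quiet", 0), ("quiet", 0), ("quiet", 1),
            ("car_noise", 1), ("car_noise", 1), ("car_noise", 0)]
rates = error_rate_by_slice(examples)
worst = max(rates, key=rates.get)
print(worst)  # 'car_noise' -> collect more data with car noise
```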


What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
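A drift monitor of the sort described can start very simply. This sketch uses made-up numbers for a monitored feature such as average image brightness, and flags a recent batch whose mean sits several standard deviations from the reference distribution:

```python
from statistics import mean, stdev

def drift_flag(reference, recent, z_threshold=3.0):
    """Flag significant drift: the recent batch mean deviates from the
    reference mean by more than z_threshold reference std deviations."""
    mu, sigma = mean(reference), stdev(reference)
    z = abs(mean(recent) - mu) / sigma
    return z > z_threshold

reference = [100, 102, 98, 101, 99, 100, 103, 97]  # historical feature values
stable    = [101, 99, 100]
shifted   = [140, 138, 142]   # e.g. lighting changed on the line
print(drift_flag(reference, stable))   # False
print(drift_flag(reference, shifted))  # True
```

A production system would monitor many features and trigger the relabel-retrain-redeploy loop described above, but the alerting core can be this small.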

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”






Multiband Antenna Simulation and Wireless KPI Extraction



In this upcoming webinar, explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems.

Overview

This webinar will explore how to leverage the state-of-the-art high-frequency simulation capabilities of Ansys HFSS to innovate and develop advanced multiband antenna systems. Attendees will learn how to optimize antenna performance and analyze installed performance within wireless networks. The session will also demonstrate how this approach enables users to extract valuable wireless and network KPIs, providing a comprehensive toolset for enhancing antenna design, optimizing multiband communication, and improving overall network performance. Join us to discover how Ansys HFSS can transform your approach to wireless system design and network efficiency.

What Attendees will Learn

  • How to design interleaved multiband antenna systems using the latest capabilities in HFSS
  • How to extract Network Key Performance Indicators
  • How to run and extract RF channels for dynamic environments

Who Should Attend

This webinar is valuable to anyone involved in antenna design, R&D, product design, and wireless networks.

Register now for this free webinar!





The AI Boom Rests on Billions of Tonnes of Concrete



Along the country road that leads to ATL4, a giant data center going up east of Atlanta, dozens of parked cars and pickups lean tenuously on the narrow dirt shoulders. The many out-of-state plates are typical of the phalanx of tradespeople who muster for these massive construction jobs. With tech giants, utilities, and governments budgeting upwards of US $1 trillion for capital expansion to join the global battle for AI dominance, data centers are the bunkers, factories, and skunkworks—and concrete and electricity are the fuel and ammunition.

To the casual observer, the data industry can seem incorporeal, its products conjured out of weightless bits. But as I stand beside the busy construction site for DataBank’s ATL4, what impresses me most is the gargantuan amount of material—mostly concrete—that gives shape to the goliath that will house, secure, power, and cool the hardware of AI. Big data is big concrete. And that poses a big problem.

This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

Concrete is not just a major ingredient in data centers and the power plants being built to energize them. As the world’s most widely manufactured material, concrete—and especially the cement within it—is also a major contributor to climate change, accounting for around 6 percent of global greenhouse gas emissions. Data centers use so much concrete that the construction boom is wrecking tech giants’ commitments to eliminate their carbon emissions. Even though Google, Meta, and Microsoft have touted goals to be carbon neutral or negative by 2030, and Amazon by 2040, the industry is now moving in the wrong direction.

Last year, Microsoft’s carbon emissions jumped by over 30 percent, primarily due to the materials in its new data centers. Google’s greenhouse emissions are up by nearly 50 percent over the past five years. As data centers proliferate worldwide, Morgan Stanley projects that data centers will release about 2.5 billion tonnes of CO2 each year by 2030—or about 40 percent of what the United States currently emits from all sources.

But even as innovations in AI and the big-data construction boom are boosting emissions for the tech industry's hyperscalers, the reinvention of concrete could also play a big part in solving the problem. Over the last decade, there's been a wave of innovation, some of it profit-driven, some of it from academic labs, aimed at fixing concrete's carbon problem. Pilot plants are being fielded to capture CO2 from cement plants and sock it safely away. Other projects are cooking up climate-friendlier recipes for cements. And AI and other computational tools are illuminating ways to drastically cut carbon by using less cement in concrete and less concrete in data centers, power plants, and other structures.

Demand for green concrete is clearly growing. Amazon, Google, Meta, and Microsoft recently joined an initiative led by the Open Compute Project Foundation to accelerate testing and deployment of low-carbon concrete in data centers, for example. Supply is increasing, too—though it’s still minuscule compared to humanity’s enormous appetite for moldable rock. But if the green goals of big tech can jump-start innovation in low-carbon concrete and create a robust market for it as well, the boom in big data could eventually become a boon for the planet.

Hyperscaler Data Centers: So Much Concrete

At the construction site for ATL4, I’m met by Tony Qorri, the company’s big, friendly, straight-talking head of construction. He says that this giant building and four others DataBank has recently built or is planning in the Atlanta area will together add 133,000 square meters (1.44 million square feet) of floor space.

They all follow a universal template that Qorri developed to optimize the construction of the company’s ever-larger centers. At each site, trucks haul in more than a thousand prefabricated concrete pieces: wall panels, columns, and other structural elements. Workers quickly assemble the precision-measured parts. Hundreds of electricians swarm the building to wire it up in just a few days. Speed is crucial when construction delays can mean losing ground in the AI battle.

The ATL4 data center outside Atlanta is one of five being built by DataBank. Together they will add over 130,000 square meters of floor space. DataBank

That battle can be measured in new data centers and floor space. The United States is home to more than 5,000 data centers today, and the Department of Commerce expects that number to grow by around 450 a year through 2030. Worldwide, the number of data centers now exceeds 10,000, and analysts project another 26.5 million m2 of floor space over the next five years. Here in metro Atlanta, developers broke ground last year on projects that will triple the region’s data-center capacity. Microsoft, for instance, is planning a 186,000-m2 complex; big enough to house around 100,000 rack-mounted servers, it will consume 324 megawatts of electricity.

The velocity of the data-center boom means that no one is pausing to await greener cement. For now, the industry’s mantra is “Build, baby, build.”

“There’s no good substitute for concrete in these projects,” says Aaron Grubbs, a structural engineer at ATL4. The latest processors going on the racks are bigger, heavier, hotter, and far more power hungry than previous generations. As a result, “you add a lot of columns,” Grubbs says.

1,000 Companies Working on Green Concrete

Concrete may not seem an obvious star in the story of how electricity and electronics have permeated modern life. Other materials—copper and silicon, aluminum and lithium—get higher billing. But concrete provides the literal, indispensable foundation for the world’s electrical workings. It is the solid, stable, durable, fire-resistant stuff that makes power generation and distribution possible. It undergirds nearly all advanced manufacturing and telecommunications. What was true in the rapid build-out of the power industry a century ago remains true today for the data industry: Technological progress begets more growth—and more concrete. Although each generation of processor and memory squeezes more computing onto each chip, and advances in superconducting microcircuitry raise the tantalizing prospect of slashing the data center’s footprint, Qorri doesn’t think his buildings will shrink to the size of a shoebox anytime soon. “I’ve been through that kind of change before, and it seems the need for space just grows with it,” he says.

By weight, concrete is not a particularly carbon-intensive material. Creating a kilogram of steel, for instance, releases about 2.4 times as much CO2 as a kilogram of cement does. But the global construction industry consumes about 35 billion tonnes of concrete a year. That’s about 4 tonnes for every person on the planet and twice as much as all other building materials combined. It’s that massive scale—and the associated cost and sheer number of producers—that creates both a threat to the climate and inertia that resists change.
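The per-capita figure above follows from simple arithmetic. Here is a back-of-the-envelope check; the world population of roughly 8 billion is an assumption of this sketch, not a figure from the article:

```python
# Back-of-the-envelope check of the article's concrete figures.
# Assumption (not from the article): world population ~8 billion.
concrete_tonnes_per_year = 35e9   # global annual concrete consumption
world_population = 8e9

per_capita = concrete_tonnes_per_year / world_population
print(f"Concrete per person: {per_capita:.1f} tonnes/year")
```

At roughly 4.4 tonnes per person per year, the result is consistent with the “about 4 tonnes” cited above.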

At its Edmonton, Alberta, plant [above], Heidelberg Materials is adding systems to capture carbon dioxide produced by the manufacture of Portland cement. Heidelberg Materials North America

Yet change is afoot. When I visited the innovation center operated by the Swiss materials giant Holcim, in Lyon, France, research executives told me about the database they’ve assembled of nearly 1,000 companies working to decarbonize cement and concrete. None yet has enough traction to measurably reduce global concrete emissions. But the innovators hope that the boom in data centers—and in associated infrastructure such as new nuclear reactors and offshore wind farms, where each turbine foundation can use up to 7,500 cubic meters of concrete—may finally push green cement and concrete beyond labs, startups, and pilot plants.

Why cement production emits so much carbon

Though the terms “cement” and “concrete” are often conflated, they are not the same thing. A popular analogy in the industry is that cement is the egg in the concrete cake. Here’s the basic recipe: Blend cement with larger amounts of sand and other aggregates. Then add water, to trigger a chemical reaction with the cement. Wait a while for the cement to form a matrix that pulls all the components together. Let sit as it cures into a rock-solid mass.

Portland cement, the key binder in most of the world’s concrete, was serendipitously invented in England by William Aspdin, while he was tinkering with earlier mortars that his father, Joseph, had patented in 1824. More than a century of science has revealed the essential chemistry of how cement works in concrete, but new findings are still leading to important innovations, as well as insights into how concrete absorbs atmospheric carbon as it ages.

As in the Aspdins’ day, the process to make Portland cement still begins with limestone, a sedimentary mineral made from crystalline forms of calcium carbonate. Most of the limestone quarried for cement originated hundreds of millions of years ago, when ocean creatures mineralized calcium and carbonate in seawater to make shells, bones, corals, and other hard bits.

Cement producers often build their large plants next to limestone quarries that can supply decades’ worth of stone. The stone is crushed and then heated in stages as it is combined with lesser amounts of other minerals that typically include calcium, silicon, aluminum, and iron. What emerges from the mixing and cooking are small, hard nodules called clinker. A bit more processing, grinding, and mixing turns those pellets into powdered Portland cement, which accounts for about 90 percent of the CO2 emitted by the production of conventional concrete [see infographic, “Roads to Cleaner Concrete”].

Karen Scrivener, shown in her lab at EPFL, has developed concrete recipes that reduce emissions by 30 to 40 percent. Stefan Wermuth/Bloomberg/Getty Images

Decarbonizing Portland cement is often called heavy industry’s “hard problem” because of two processes fundamental to its manufacture. The first process is combustion: To coax limestone’s chemical transformation into clinker, large heaters and kilns must sustain temperatures around 1,500 °C. Currently that means burning coal, coke, fuel oil, or natural gas, often along with waste plastics and tires. The exhaust from those fires generates 35 to 50 percent of the cement industry’s emissions. Most of the remaining emissions result from gaseous CO2 liberated by the chemical transformation of the calcium carbonate (CaCO3) into calcium oxide (CaO), a process called calcination. That gas also usually heads straight into the atmosphere.
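The scale of calcination’s emissions follows directly from stoichiometry. This sketch uses standard molar masses (not figures from the article) to estimate how much CO2 every tonne of limestone releases when converted to lime:

```python
# CaCO3 -> CaO + CO2: each tonne of limestone calcined releases a fixed
# fraction of its mass as CO2, set by the molar masses (g/mol).
M_CaCO3 = 100.09
M_CaO = 56.08
M_CO2 = 44.01

co2_fraction = M_CO2 / M_CaCO3                 # mass fraction leaving as CO2
assert abs((M_CaO + M_CO2) - M_CaCO3) < 0.01   # mass-balance sanity check
print(f"CO2 released per tonne of CaCO3: {co2_fraction:.2f} tonnes")
```

About 0.44 tonnes of CO2 per tonne of limestone, before any fuel is burned, which is why calcination emissions cannot be reduced without capture or different chemistry.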

Concrete production, in contrast, is mainly a business of mixing cement powder with other ingredients and then delivering the slurry speedily to its destination before it sets. Most concrete in the United States is prepared to order at batch plants—souped-up materials depots where the ingredients are combined, dosed out from hoppers into special mixer trucks, and then driven to job sites. Because concrete grows too stiff to work after about 90 minutes, concrete production is highly local. There are more ready-mix batch plants in the United States than there are Burger King restaurants.

Batch plants can offer thousands of potential mixes, customized to fit the demands of different jobs. Concrete in a hundred-story building differs from that in a swimming pool. With flexibility to vary the quality of sand and the size of the stone—and to add a wide variety of chemicals—batch plants have more tricks for lowering carbon emissions than any cement plant does.

Cement plants that capture carbon

China accounts for more than half of the concrete produced and used in the world, but companies there are hard to track. Outside of China, the top three multinational cement producers—Holcim, Heidelberg Materials in Germany, and Cemex in Mexico—have launched pilot programs to snare CO2 emissions before they escape and then bury the waste deep underground. To do that, they’re taking carbon capture and storage (CCS) technology already used in the oil and gas industry and bolting it onto their cement plants.

These pilot programs will need to scale up without eating profits—something that eluded the coal industry when it tried CCS decades ago. Tough questions also remain about where exactly to store billions of tonnes of CO2 safely, year after year.

The appeal of CCS for cement producers is that they can continue using existing plants while still making progress toward carbon neutrality, which trade associations have committed to reach by 2050. But with well over 3,000 plants around the world, adding CCS to all of them would take enormous investment. Currently less than 1 percent of the global supply is low-emission cement. Accenture, a consultancy, estimates that outfitting the whole industry for carbon capture could cost up to $900 billion.

“The economics of carbon capture is a monster,” says Rick Chalaturnyk, a professor of geotechnical engineering at the University of Alberta, in Edmonton, Canada, who studies carbon capture in the petroleum and power industries. He sees incentives for the early movers on CCS, however. “If Heidelberg, for example, wins the race to the lowest carbon, it will be the first [cement] company able to supply those customers that demand low-carbon products”—customers such as hyperscalers.

Though cement companies seem unlikely to invest their own billions in CCS, generous government subsidies have enticed several to begin pilot projects. Heidelberg has announced plans to start capturing CO2 from its Edmonton operations in late 2026, transforming it into what the company claims would be “the world’s first full-scale net-zero cement plant.” Exhaust gas will run through stations that purify the CO2 and compress it into a liquid, which will then be transported to chemical plants to turn it into products or to depleted oil and gas reservoirs for injection underground, where hopefully it will stay put for an epoch or two.

Chalaturnyk says that the scale of the Edmonton plant, which aims to capture a million tonnes of CO2 a year, is big enough to give CCS technology a reasonable test. Proving the economics is another matter. Half the $1 billion cost for the Edmonton project is being paid by the governments of Canada and Alberta.

ROADS TO CLEANER CONCRETE


As the big-data construction boom boosts the tech industry’s emissions, the reinvention of concrete could play a major role in solving the problem.

• CONCRETE TODAY Most of the greenhouse emissions from concrete come from the production of Portland cement, which requires high heat and releases carbon dioxide (CO2) directly into the air.

• CONCRETE TOMORROW At each stage of cement and concrete production, advances in ingredients, energy supplies, and uses of concrete promise to reduce waste and pollution.

The U.S. Department of Energy has similarly offered Heidelberg up to $500 million to help cover the cost of attaching CCS to its Mitchell, Ind., plant and burying up to 2 million tonnes of CO2 per year below the plant. And the European Union has gone even bigger, allocating nearly €1.5 billion ($1.6 billion) from its Innovation Fund to support carbon capture at cement plants in seven of its member nations.

These tests are encouraging, but they are all happening in rich countries, where demand for concrete peaked decades ago. Even in China, concrete production has started to flatten. All the growth in global demand through 2040 is expected to come from less-affluent countries, where populations are still growing and quickly urbanizing. According to projections by the Rhodium Group, cement production in those regions is likely to rise from around 30 percent of the world’s supply today to 50 percent by 2050 and 80 percent before the end of the century.

So will rich-world CCS technology translate to the rest of the world? I asked Juan Esteban Calle Restrepo, the CEO of Cementos Argos, the leading cement producer in Colombia, about that when I sat down with him recently at his office in Medellín. He was frank. “Carbon capture may work for the U.S. or Europe, but countries like ours cannot afford that,” he said.

Better cement through chemistry

As long as cement plants run limestone through fossil-fueled kilns, they will generate excessive amounts of carbon dioxide. But there may be ways to ditch the limestone—and the kilns. Labs and startups have been finding replacements for limestone, such as calcined kaolin clay and fly ash, that don’t release CO2 when heated. Kaolin clays are abundant around the world and have been used for centuries in Chinese porcelain and more recently in cosmetics and paper. Fly ash—a messy, toxic by-product of coal-fired power plants—is cheap and still widely available, even as coal power dwindles in many regions.

At the Swiss Federal Institute of Technology Lausanne (EPFL), Karen Scrivener and colleagues developed cements that blend calcined kaolin clay and ground limestone with a small portion of clinker. Calcining clay can be done at temperatures low enough that electricity from renewable sources can do the job. Various studies have found that the blend, known as LC3, can reduce overall emissions by 30 to 40 percent compared to those of Portland cement.

LC3 is also cheaper to make than Portland cement and performs as well for nearly all common uses. As a result, calcined clay plants have popped up across Africa, Europe, and Latin America. In Colombia, Cementos Argos is already producing more than 2 million tonnes of the stuff annually. The World Economic Forum’s Centre for Energy and Materials counts LC3 among the best hopes for the decarbonization of concrete. Wide adoption by the cement industry, the centre reckons, “can help prevent up to 500 million tonnes of CO2 emissions by 2030.”

In a win-win for the environment, fly ash can also be used as a building block for low- and even zero-emission concrete, and the high heat of processing neutralizes many of the toxins it contains. Ancient Romans used volcanic ash to make slow-setting but durable concrete: The Pantheon, built nearly two millennia ago with ash-based cement, is still in great shape.

Coal fly ash is a cost-effective ingredient that has reactive properties similar to those of Roman cement and Portland cement. Many concrete plants already add fresh fly ash to their concrete mixes, replacing 15 to 35 percent of the cement. The ash improves the workability of the concrete, and though the resulting concrete is not as strong for the first few months, it grows stronger than regular concrete as it ages, like the Pantheon.

University labs have tested concretes made entirely with fly ash and found that some actually outperform the standard variety. More than 15 years ago, researchers at Montana State University used concrete made with 100 percent fly ash in the floors and walls of a credit union and a transportation research center. But performance depends greatly on the chemical makeup of the ash, which varies from one coal plant to the next, and on following a tricky recipe. The decommissioning of coal-fired plants has also been making fresh fly ash scarcer and more expensive.

At Sublime Systems’ pilot plant in Massachusetts, the company is using electrochemistry instead of heat to produce lime silicate cements that can replace Portland cement. Tony Luong

That has spurred new methods to treat and use fly ash that’s been buried in landfills or dumped into ponds. Such industrial burial grounds hold enough fly ash to make concrete for decades, even after every coal plant shuts down. Utah-based Eco Material Technologies is now producing cements that include both fresh and recovered fly ash as ingredients. The company claims it can replace up to 60 percent of the Portland cement in concrete—and that a new variety, suitable for 3D printing, can substitute entirely for Portland cement.

Hive 3D Builders, a Houston-based startup, has been feeding that low-emissions concrete into robots that are printing houses in several Texas developments. “We are 100 percent Portland cement–free,” says Timothy Lankau, Hive 3D’s CEO. “We want our homes to last 1,000 years.”

Sublime Systems, a startup spun out of MIT by battery scientists, uses electrochemistry rather than heat to make low-carbon cement from rocks that don’t contain carbon. Similar to a battery, Sublime’s process uses a voltage between an anode and a cathode to create a pH gradient that isolates silicates and reactive calcium, in the form of lime (CaO). The company mixes those ingredients together to make a cement with no fugitive carbon, no kilns or furnaces, and binding power comparable to that of Portland cement. With the help of $87 million from the U.S. Department of Energy, Sublime is building a plant in Holyoke, Mass., that will be powered almost entirely by hydroelectricity. Recently the company was tapped to provide concrete for a major offshore wind farm planned off the coast of Martha’s Vineyard.

Software takes on the hard problem of concrete

It is unlikely that any one innovation will allow the cement industry to hit its target of carbon neutrality before 2050. New technologies take time to mature, scale up, and become cost-competitive. In the meantime, says Philippe Block, a structural engineer at ETH Zurich, smart engineering can reduce carbon emissions through the leaner use of materials.

His research group has developed digital design tools that make clever use of geometry to maximize the strength of concrete structures while minimizing their mass. The team’s designs start with the soaring architectural elements of ancient temples, cathedrals, and mosques—in particular, vaults and arches—which they miniaturize and flatten and then 3D print or mold inside concrete floors and ceilings. The lightweight slabs, suitable for the upper stories of apartment and office buildings, use much less concrete and steel reinforcement and have a CO2 footprint that’s reduced by 80 percent.

There’s hidden magic in such lean design. In multistory buildings, much of the mass of concrete is needed just to hold the weight of the material above it. The carbon savings of Block’s lighter slabs thus compound, because the size, cost, and emissions of a building’s conventional-concrete elements are slashed.

Vaulted, a Swiss startup, uses digital design tools to minimize the concrete in floors and ceilings, cutting their CO2 footprint by 80 percent. Vaulted

In Dübendorf, Switzerland, a wildly shaped experimental building has floors, roofs, and ceilings created by Block’s structural system. Vaulted, a startup spun out of ETH, is engineering and fabricating the lighter floors of a 10-story office building under construction in Zug, Switzerland.

That country has also been a leader in smart ways to recycle and reuse concrete, rather than simply landfilling demolition rubble. This is easier said than done—concrete is tough stuff, riddled with rebar. But there’s an economic incentive: Raw materials such as sand and limestone are becoming scarcer and more costly. Some jurisdictions in Europe now require that new buildings be made from recycled and reused materials. The new addition of the Kunsthaus Zürich museum, a showcase of exquisite Modernist architecture, uses recycled material for all but 2 percent of its concrete.

As new policies goose demand for recycled materials and threaten to restrict future use of Portland cement across Europe, Holcim has begun building recycling plants that can reclaim cement clinker from old concrete. It recently turned the demolition rubble from some 1960s apartment buildings outside Paris into part of a 220-unit housing complex—touted as the first building made from 100 percent recycled concrete. The company says it plans to build concrete recycling centers in every major metro area in Europe and, by 2030, to include 30 percent recycled material in all of its cement.

Further innovations in low-carbon concrete are certain to come, particularly as the powers of machine learning are applied to the problem. Over the past decade, the number of research papers reporting on computational tools to explore the vast space of possible concrete mixes has grown exponentially. Much as AI is being used to accelerate drug discovery, the tools learn from huge databases of proven cement mixes and then apply their inferences to evaluate untested mixes.

Researchers from the University of Illinois and Chicago-based Ozinga, one of the largest private concrete producers in the United States, recently worked with Meta to feed 1,030 known concrete mixes into an AI. The project yielded a novel mix that will be used for sections of a data-center complex in DeKalb, Ill. The AI-derived concrete has a carbon footprint 40 percent lower than the conventional concrete used on the rest of the site. Ryan Cialdella, Ozinga’s vice president of innovation, smiles as he notes the virtuous circle: AI systems that live in data centers can now help cut emissions from the concrete that houses them.
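The Ozinga–Meta workflow isn’t public in detail, but the general approach (fit a surrogate model on a database of known mixes, then screen candidate mixes for the lowest cement content that still meets a strength target) can be sketched on synthetic data. Everything below is hypothetical: the features, the toy strength relation, and the 30 MPa threshold are illustrative, not real mix-design values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "database": cement fraction, fly-ash fraction, water/binder ratio.
lo, hi = [0.10, 0.0, 0.35], [0.20, 0.15, 0.55]
X = rng.uniform(lo, hi, size=(200, 3))
# Toy ground truth: strength rises with cement and fly ash, falls with water.
strength = 200 * X[:, 0] + 80 * X[:, 1] - 40 * X[:, 2] + rng.normal(0, 1, 200)

# Fit a linear surrogate model (ordinary least squares with an intercept).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, strength, rcond=None)

# Screen candidates: lowest-cement mix predicted to reach 30 MPa.
candidates = rng.uniform(lo, hi, size=(1000, 3))
pred = np.column_stack([candidates, np.ones(len(candidates))]) @ coef
feasible = candidates[pred >= 30]
best = feasible[np.argmin(feasible[:, 0])]
print("Lowest-cement feasible mix (cement, fly ash, w/b):", best)
```

In practice the surrogate would be a nonlinear model trained on measured strength and embodied-carbon data, but the screening loop, predict cheaply, test only the promising mixes, is the same.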

A sustainable foundation for the information age

Cheap, durable, and abundant yet unsustainable, concrete made with Portland cement has been one of modern technology’s Faustian bargains. The built world is on track to double in floor space by 2060, adding 230,000 km2, or more than half the area of California. Much of that will house the 2 billion more people we are likely to add to our numbers. As global transportation, telecom, energy, and computing networks grow, their new appendages will rest upon concrete. But if concrete doesn’t change, we will perversely be forced to produce even more concrete to protect ourselves from the coming climate chaos, with its rising seas, fires, and extreme weather.

The AI-driven boom in data centers is a strange bargain of its own. In the future, AI may help us live even more prosperously, or it may undermine our freedoms, civilities, employment opportunities, and environment. But solutions to the bad climate bargain that AI’s data centers foist on the planet are at hand, if there’s a will to deploy them. Hyperscalers and governments are among the few organizations with the clout to rapidly change what kinds of cement and concrete the world uses, and how those are made. With a pivot to sustainability, concrete’s unique scale makes it one of the few materials that could do most to protect the world’s natural systems. We can’t live without concrete—but with some ambitious reinvention, we can thrive with it.

This article was updated on 04 November 2024.





Why the Art of Invention Is Always Being Reinvented



Every invention begins with a problem—and the creative act of seeing a problem where others might just see unchangeable reality. For one 5-year-old, the problem was simple: She liked to have her tummy rubbed as she fell asleep. But her mom, exhausted from working two jobs, often fell asleep herself while putting her daughter to bed. “So [the girl] invented a teddy bear that would rub her belly for her,” explains Stephanie Couch, executive director of the Lemelson MIT Program. Its mission is to nurture the next generation of inventors and entrepreneurs.

Anyone can learn to be an inventor, Couch says, given the right resources and encouragement. “Invention doesn’t come from some innate genius, it’s not something that only really special people get to do,” she says. Her program creates invention-themed curricula for U.S. classrooms, ranging from kindergarten to community college.

This article is part of our special report, “Reinventing Invention: Stories from Innovation’s Edge.”

We’re biased, but we hope that little girl grows up to be an engineer. By the time she comes of age, the act of invention may be something entirely new—reflecting the adoption of novel tools and the guiding forces of new social structures. Engineers, with their restless curiosity and determination to optimize the world around them, are continuously in the process of reinventing invention.

In this special issue, we bring you stories of people who are in the thick of that reinvention today. IEEE Spectrum is marking 60 years of publication this year, and we’re celebrating by highlighting both the creative act and the grindingly hard engineering work required to turn an idea into something world changing. In these pages, we take you behind the scenes of some awe-inspiring projects to reveal how technology is being made—and remade—in our time.

Inventors Are Everywhere

Invention has long been a democratic process. The economist B. Zorina Khan of Bowdoin College has noted that the U.S. Patent and Trademark Office has always endeavored to allow essentially anyone to try their hand at invention. From the beginning, the patent examiners didn’t care who the applicants were—anyone with a novel and useful idea who could pay the filing fee was officially an inventor.

This ethos continues today. It’s still possible for an individual to launch a tech startup from a garage or go on “Shark Tank” to score investors. The Swedish inventor Simone Giertz, for example, made a name for herself with YouTube videos showing off her hilariously bizarre contraptions, like an alarm clock with an arm that slapped her awake. The MIT innovation scholar Eric von Hippel has spotlighted today’s vital ecosystem of “user innovation,” in which inventors such as Giertz are motivated by their own needs and desires rather than ambitions of mass manufacturing.

But that route to invention gets you only so far, and the limits of what an individual can achieve have become starker over time. To tackle some of the biggest problems facing humanity today, inventors need a deep-pocketed government sponsor or corporate largess to muster the equipment and collective human brainpower required.

When we think about the challenges of scaling up, it’s helpful to remember Alexander Graham Bell and his collaborator Thomas Watson. “They invent this cool thing that allows them to talk between two rooms—so it’s a neat invention, but it’s basically a gadget,” says Eric Hintz, a historian of invention at the Smithsonian Institution. “To go from that to a transcontinental long-distance telephone system, they needed a lot more innovation on top of the original invention.” To scale their invention, Hintz says, Bell and his colleagues built the infrastructure that eventually evolved into Bell Labs, which became the standard-bearer for corporate R&D.

In this issue, we see engineers grappling with challenges of scale in modern problems. Consider the semiconductor technology supported by the U.S. CHIPS and Science Act, a policy initiative aimed at bolstering domestic chip production. Beyond funding manufacturing, it also provides US $11 billion for R&D, including three national centers where companies can test and pilot new technologies. As one startup tells the tale, this infrastructure will drastically speed up the lab-to-fab process.

And then there are atomic clocks, the epitome of precision timekeeping. When researchers decided to build a commercial version, they had to shift their perspective, taking a sprawling laboratory setup and reimagining it as a portable unit fit for mass production and the rigors of the real world. They had to stop optimizing for precision and instead choose the most robust laser, and the atom that would go along with it.

These technology efforts benefit from infrastructure, brainpower, and cutting-edge new tools. One tool that may become ubiquitous across industries is artificial intelligence—and it’s a tool that could further expand access to the invention arena.

What if you had a team of indefatigable assistants at your disposal, ready to scour the world’s technical literature for material that could spark an idea, or to iterate on a concept 100 times before breakfast? That’s the promise of today’s generative AI. The Swiss company Iprova is exploring whether its AI tools can automate “eureka” moments for its clients, corporations that are looking to beat their competitors to the next big idea. The serial entrepreneur Steve Blank similarly advises young startup founders to embrace AI’s potential to accelerate product development; he even imagines testing product ideas on digital twins of customers. Although it’s still early days, generative AI offers inventors tools that have never been available before.

Measuring an Invention’s Impact

If AI accelerates the discovery process, and many more patentable ideas come to light as a result, then what? As it is, more than a million patents are granted every year, and we struggle to identify the ones that will make a lasting impact. Bryan Kelly, an economist at the Yale School of Management, and his collaborators made an attempt to quantify the impact of patents by doing a technology-assisted deep dive into U.S. patent records dating back to 1840. Using natural language processing, they identified patents that introduced novel phrasing that was then repeated in subsequent patents—an indicator of radical breakthroughs. For example, Elias Howe Jr.’s 1846 patent for a sewing machine wasn’t closely related to anything that came before but quickly became the basis of future sewing-machine patents.
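Kelly and colleagues’ actual pipeline is far more sophisticated, but the core signal (a patent whose phrasing is unlike earlier patents yet echoed by later ones) can be illustrated with a toy bigram-overlap measure. The patent texts below are invented stand-ins, not real patent language:

```python
from collections import Counter

def bigrams(text):
    """Count adjacent word pairs in a lowercased text."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

def similarity(a, b):
    """Fraction of shared bigrams, normalized by the shorter text."""
    ca, cb = bigrams(a), bigrams(b)
    shared = sum(min(ca[g], cb[g]) for g in ca)
    return shared / max(1, min(sum(ca.values()), sum(cb.values())))

prior = ["a loom for weaving cloth with a shuttle",
         "an improved loom shuttle for weaving"]
later = ["a sewing machine with a lockstitch needle",
         "an improved sewing machine lockstitch mechanism"]
patent = "a sewing machine forming a lockstitch with a curved needle"

backward = max(similarity(patent, p) for p in prior)
forward = max(similarity(patent, p) for p in later)
print(f"backward similarity: {backward:.2f}, forward: {forward:.2f}")
```

Low backward similarity with high forward similarity is the signature Kelly’s team looks for: phrasing that breaks with the past and seeds the future.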

Another foundational patent was the one awarded to an English bricklayer in 1824 for the invention of Portland cement, which is still the key ingredient in most of the world’s concrete. As Ted C. Fishman describes in his fascinating inquiry into the state of concrete today, this seemingly stable industry is in upheaval because of its heavy carbon emissions. The AI boom is fueling a construction boom in data centers, and all those buildings require billions of tons of concrete. Fishman takes readers into labs and startups where researchers are experimenting with climate-friendly formulations of cement and concrete. Who knows which of those experiments will result in a patent that echoes down the ages?

Some engineers start their invention process by thinking about the impact they want to make on the world. The eminent Indian technologist Raghunath Anant Mashelkar, who has popularized the idea of “Gandhian engineering”, advises inventors to work backward from “what we want to achieve for the betterment of humanity,” and to create problem-solving technologies that are affordable, durable, and not only for the elite.

Durability matters: Invention isn’t just about creating something brand new. It’s also about coming up with clever ways to keep an existing thing going. Such is the case with the Hubble Space Telescope. Originally designed to last 15 years, it’s been in orbit for twice that long and has actually gotten better with age, because engineers designed the satellite to be fixable and upgradable in space.

For all the invention activity around the globe—the World Intellectual Property Organization says that 3.5 million applications for patents were filed in 2022—it may be harder to invent something useful than it used to be. Not because “everything that can be invented has been invented,” as in the apocryphal quote attributed to the unfortunate head of the U.S. patent office in 1889. Rather, because so much education and experience are required before an inventor can even understand all the dimensions of the door they’re trying to crack open, much less come up with a strategy for doing so. Ben Jones, an economist at Northwestern’s Kellogg School of Management, has shown that the average age of great technological innovators rose by about six years over the course of the 20th century. “Great innovation is less and less the provenance of the young,” Jones concluded.

Consider designing something as complex as a nuclear fusion reactor, as Tom Clynes describes in “An Off-the-Shelf Stellarator.” Fusion researchers have spent decades trying to crack the code of commercially viable fusion—it’s more akin to a calling than a career. If they succeed, they will unlock essentially limitless clean energy with no greenhouse gas emissions or meltdown danger. That’s the dream that the physicists in a lab in Princeton, N.J., are chasing. But before they even started, they first had to gain an intimate understanding of all the wrong ways to build a fusion reactor. Once the team was ready to proceed, what they created was an experimental reactor that accelerates the design-build-test cycle. With new AI tools and unprecedented computational power, they’re now searching for the best ways to create the magnetic fields that will confine the plasma within the reactor. Already, two startups have spun out of the Princeton lab, both seeking a path to commercial fusion.

The stellarator story and many other articles in this issue showcase how one innovation leads to the next, and how one invention can enable many more. The legendary Dean Kamen, best known for mechanical devices like the Segway and the prosthetic “Luke” arm, is now trying to push forward the squishy world of biological manufacturing. In an interview, Kamen explains how his nonprofit is working on the infrastructure—bioreactors, sensors, and controls—that will enable companies to explore the possibilities of growing replacement organs. You could say that he’s inventing the launchpad so others can invent the rockets.

Sometimes everyone in a research field knows where the breakthrough is needed, but that doesn’t make it any easier to achieve. Case in point: the quest for a household humanoid robot that can perform domestic chores, switching effortlessly from frying an egg to folding laundry. Roboticists need better learning software that will enable their bots to navigate the uncertainties of the real world, and they also need cheaper and lighter actuators. Major advances in these two areas would unleash a torrent of creativity and may finally bring robot butlers into our homes.

And maybe the future roboticists who make those breakthroughs will have cause to thank Marina Umaschi Bers, a technologist at Boston College who cocreated the ScratchJr programming language and the KIBO robotics kit to teach kids the basics of coding and robotics in entertaining ways. She sees engineering as a playground, a place for children to explore and create, to be goofy or grandiose. If today’s kindergartners learn to think of themselves as inventors, who knows what they’ll create tomorrow?





Video Friday: Trick or Treat, Atlas



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

Enjoy today’s videos!

We’re hoping to get more on this from Boston Dynamics, but if you haven’t seen it yet, here’s electric Atlas doing something productive (and autonomous!).

And why not do it in a hot dog costume for Halloween, too?

[ Boston Dynamics ]

Ooh, this is exciting! Aldebaran is getting ready to release a seventh generation of NAO!

[ Aldebaran ]

Okay I found this actually somewhat scary, but Happy Halloween from ANYbotics!

[ ANYbotics ]

Happy Halloween from Clearpath!

[ Clearpath Robotics Inc. ]

Another genuinely freaky Happy Halloween, from Boston Dynamics!

[ Boston Dynamics ]

This “urban opera” by Compagnie La Machine took place last weekend in Toulouse, featuring some truly enormous fantastical robots.

[ Compagnie La Machine ]

Thanks, Thomas!

Impressive dismount from Deep Robotics’ DR01.

[ Deep Robotics ]

Cobot juggling from Daniel Simu.

[ Daniel Simu ]

Adaptive-morphology multirotors exhibit superior versatility and task-specific performance compared to traditional multirotors owing to their functional morphological adaptability. However, a notable challenge lies in the contrasting requirements of locking each morphology for flight controllability and efficiency while permitting low-energy reconfiguration. A novel design approach is proposed for reconfigurable multirotors utilizing soft multistable composite laminate airframes.

[ Environmental Robotics Lab paper ]

This is a pitching demonstration of new Torobo. New Torobo is lighter than the older version, enabling faster motion such as throwing a ball. The new model will be available in Japan in March 2025 and overseas from October 2025 onward.

[ Tokyo Robotics ]

I’m not sure what makes this “the world’s best robotic hand for manipulation research,” but it seems solid enough.

[ Robot Era ]

And now, picking a micro cat.

[ RoCogMan Lab ]

When Arvato’s Louisville, Ky. staff wanted a robotics system that could unload freight with greater speed and safety, Boston Dynamics’ Stretch robot stood out. Stretch is a first of its kind mobile robot designed specifically to unload boxes from trailers and shipping containers, freeing up employees to focus on more meaningful tasks in the warehouse. Arvato acquired its first Stretch system this year and the robot’s impact was immediate.

[ Boston Dynamics ]

NASA’s Perseverance Mars rover used its Mastcam-Z camera to capture the silhouette of Phobos, one of the two Martian moons, as it passed in front of the Sun on Sept. 30, 2024, the 1,285th Martian day, or sol, of the mission.

[ NASA ]

Students from Howard University, Morehouse College, and Berea College joined University of Michigan robotics students in online Robotics 102 courses for the fall ‘23 and winter ‘24 semesters. The class is part of the distributed teaching collaborative, a co-teaching initiative started in 2020 that aims to provide cutting-edge robotics courses to students who would not normally have access to them at their home universities.

[ University of Michigan Robotics ]

Discover the groundbreaking projects and cutting-edge technology at the Robotics and Automation Summer School (RASS) hosted by Los Alamos National Laboratory. In this exclusive behind-the-scenes video, students from top universities work on advanced robotics in disciplines such as AI, automation, machine learning, and autonomous systems.

[ Los Alamos National Laboratory ]

This week’s Carnegie Mellon University Robotics Institute Seminar is from Princeton University’s Anirudha Majumdar, on “Robots That Know When They Don’t Know.”

Foundation models from machine learning have enabled rapid advances in perception, planning, and natural language understanding for robots. However, current systems lack any rigorous assurances when required to generalize to novel scenarios. For example, perception systems can fail to identify or localize unfamiliar objects, and large language model (LLM)-based planners can hallucinate outputs that lead to unsafe outcomes when executed by robots. How can we rigorously quantify the uncertainty of machine learning components such that robots know when they don’t know and can act accordingly?

[ Carnegie Mellon University Robotics Institute ]





Touchscreens Are Out, and Tactile Controls Are Back



Tactile controls are back in vogue. Apple added two new buttons to the iPhone 16, home appliances like stoves and washing machines are returning to knobs, and several car manufacturers are reintroducing buttons and dials to dashboards and steering wheels.

With this “re-buttonization,” as The Wall Street Journal describes it, demand for Rachel Plotnick’s expertise has grown. Plotnick, an associate professor of cinema and media studies at Indiana University in Bloomington, is the leading expert on buttons and how people interact with them. She studies the relationship between technology and society with a focus on everyday or overlooked technologies, and wrote the 2018 book Power Button: A History of Pleasure, Panic, and the Politics of Pushing (The MIT Press). Now, companies are reaching out to her to help improve their tactile controls.

You wrote a book a few years ago about the history of buttons. What inspired that book?

Rachel Plotnick: Around 2009, I noticed there was a lot of discourse in the news about the death of the button. This was a couple years after the first iPhone had come out, and a lot of people were saying that, as touchscreens were becoming more popular, eventually we weren’t going to have any more physical buttons to push. This started to happen across a range of devices like the Microsoft Kinect, and after films like Minority Report had come out in the early 2000s, everyone thought we were moving to this kind of gesture or speech interface. I was fascinated by this idea that an entire interface could die, and that led me down this big wormhole, to try to understand how we came to be a society that pushed buttons everywhere we went.

Rachel Plotnick studies the ways we use everyday technologies and how they shape our relationships with each other and the world.Rachel Plotnick

The more that I looked around, the more that I saw not only were we pressing digital buttons on social media and to order things from Amazon, but also to start our coffee makers and go up and down in elevators and operate our televisions. The pervasiveness of the button as a technology pitted against this idea of buttons disappearing seemed like such an interesting dichotomy to me. And so I wanted to understand an origin story, if I could come up with it, of where buttons came from.

What did you find in your research?

Plotnick: One of the biggest observations I made was that a lot of fears and fantasies around pushing buttons were the same 100 years ago as they are today. I expected to see this society that wildly transformed and used buttons in such a different way, but I saw these persistent anxieties over time about control and who gets to push the button, and also these pleasures around button pushing that we can use for advertising and to make technology simpler. That pendulum swing between fantasy and fear, pleasure and panic, and how those themes persisted over more than a century was what really interested me. I liked seeing the connections between the past and the present.

[Back to top]

We’ve experienced the rise of touchscreens, but now we might be seeing another shift—a renaissance in buttons and physical controls. What’s prompting the trend?

Plotnick: There was this kind of touchscreen mania, where all of a sudden everything became a touchscreen. Your car was a touchscreen, your refrigerator was a touchscreen. Over time, people became somewhat fatigued with that. That’s not to say touchscreens aren’t a really useful interface, I think they are. But on the other hand, people seem to have a hunger for physical buttons, both because you don’t always have to look at them—you can feel your way around for them when you don’t want to directly pay attention to them—but also because they offer a greater range of tactility and feedback.

If you look at gamers playing video games, they want to push a lot of buttons on those controls. And if you look at DJs and digital musicians, they have endless amounts of buttons and joysticks and dials to make music. There seems to be this kind of richness of the tactile experience that’s afforded by pushing buttons. They’re not perfect for every situation, but I think increasingly, we’re realizing the merit that the interface offers.

What else is motivating the re-buttoning of consumer devices?

Plotnick: Maybe screen fatigue. We spend all our days and nights on these devices, scrolling or constantly flipping through pages and videos, and there’s something tiring about that. The button may be a way to almost de-technologize our everyday existence, to a certain extent. That’s not to say buttons don’t work with screens very nicely—they’re often partners. But in a way, it’s taking away the priority of vision as a sense, and recognizing that a screen isn’t always the best way to interact with something.

When I’m driving, it’s actually unsafe for my car to be operated in that way. It’s hard to generalize and say, buttons are always easy and good, and touchscreens are difficult and bad, or vice versa. Buttons tend to offer you a really limited range of possibilities in terms of what you can do. Maybe that simplicity of limiting our field of choices offers more safety in certain situations.

It also seems like there’s an accessibility issue when prioritizing vision in device interfaces, right?

Plotnick: The blind community had to fight for years to make touchscreens more accessible. It’s always been funny to me that we call them touchscreens. We think about them as a touch modality, but a touchscreen prioritizes the visual. Over the last few years, we’re seeing Alexa and Siri and a lot of these other voice-activated systems that are making things a little bit more auditory as a way to deal with that. But the touchscreen is oriented around visuality.

It sounds like, in general, having multiple interface options is the best way to move forward—not that touchscreens are going to become completely passé, just like the button never actually died.

Plotnick: I think that’s accurate. We see paradigm shifts over time with technologies, but for the most part, we often recycle old ideas. It’s striking that if we look at the 1800s, people were sending messages via telegraph about what the future would look like if we all had this dashboard of buttons at our command where we could communicate with anyone and shop for anything. And that’s essentially what our smartphones became. We still have this dashboard menu approach. I think it means carefully considering what the right interface is for each situation.

[Back to top]

Several companies have reached out to you to learn from your expertise. What do they want to know?

Plotnick: I think there is a hunger out there from companies designing buttons or consumer technologies to try to understand the history of how we used to do things, how we might bring that to bear on the present, and what the future looks like with these interfaces. I’ve had a number of interesting discussions with companies, including one that manufactures push-button interfaces. I had a conversation with them about medical devices like CT machines and X-ray machines, trying to imagine the easiest way to push a button in that situation, to save people time and improve the patient encounter.

I’ve also talked to people about what will make someone use a defibrillator or not. Even though it’s really simple to go up to these automatic machines, if you see someone going into cardiac arrest in a mall or out on the street, a lot of people are terrified to actually push the button that would get this machine started. We had a really fascinating discussion about why someone wouldn’t push a button, and what would it take to get them to feel okay about doing that.

In all of these cases, these are design questions, but they’re also social and cultural questions. I like the idea that people who are in the humanities studying these things from a long-term perspective can also speak to engineers trying to build these devices.

So these companies also want to know about the history of buttons?

Plotnick: I’ve had some fascinating conversations around history. We all want to learn what mistakes not to make and what worked well in the past. There’s often this narrative of progress, that things are only getting better with technology over time. But if we look at these lessons, I think we can see that sometimes things were simpler or better in a past moment, and sometimes they were harder. Often with new technologies, we think we’re completely reinventing the wheel. But maybe these concepts existed a long time ago, and we haven’t paid attention to that. There’s a lot to be learned from the past.

[Back to top]





Wireless Signals That Predict Flash Floods



Like many innovators, Hagit Messer-Yaron had a life-changing idea while doing something mundane: Talking with a colleague over a cup of coffee. The IEEE Life Fellow, who in 2006 was head of Tel Aviv University’s Porter School of Environmental Studies, was at the school’s cafeteria with a meteorological researcher, who shared his struggles with finding high-resolution weather data for his climate models, which are used to forecast and track flash floods.

Predicting floods is crucial for quickly evacuating residents in affected areas and protecting homes and businesses against damage.

Hagit Messer-Yaron


Employer Tel Aviv University

Title Professor emerita

Member grade Life Fellow

Alma mater Tel Aviv University

Her colleague “said researchers in the field had limited measurements because the equipment meteorologists used to collect weather data—including radar satellites—is expensive to purchase and maintain, especially in developing countries,” Messer-Yaron says.

Because of that, she says, high-resolution data about temperature, air quality, wind speed, and precipitation levels is often inconsistent—which is a problem when trying to produce accurate models and predictions.

An expert in signal processing and cellular communication, Messer-Yaron came up with the idea of using existing wireless communication signals to collect weather data, as communication networks are spread across the globe.

In 2006 she and her research team developed algorithms that process and analyze data collected by communication networks to monitor rainfall. They measure the difference in amplitude of the signals transmitted and received by the systems to extract data needed to predict flash floods.

The method was first demonstrated in Israel. Messer-Yaron is working to integrate it into communication networks worldwide.

For her work, she received this year’s IEEE Medal for Environmental and Safety Technologies for “contributions to sensing of the environment using wireless communication networks.” The award is sponsored by Toyota.

“Receiving an IEEE medal, which is the highest-level award you can get within the organization, was really a surprise, and I was extremely happy to [receive] it,” she says. “I was proud that IEEE was able to evaluate and see the potential in our technology for public good and to reward it.”

A passion for teaching

Growing up in Israel, Messer-Yaron was interested in art, literature, and science. When it came time to choose a career, she found it difficult to decide, she says. Ultimately, she chose electrical engineering, figuring it would be easier to enjoy art and literature as hobbies.

After completing her mandatory service in the Israel Defense Forces in 1973, she began her undergraduate studies at Tel Aviv University, where she found her passion: Signal processing.

“Electrical engineering is a very broad topic,” she says. “As an undergrad, you learn all the parts that make up electrical engineering, including applied physics and applied mathematics. I really enjoyed applied mathematics and soon discovered signal processing. I found it quite amazing how, by using algorithms, you can direct signals to extract information.”

She graduated with a bachelor’s degree in EE in 1977 and continued her education there, earning master’s and doctoral degrees in 1979 and 1984. She moved to the United States for a postdoctoral position at Yale. There she worked with IEEE Life Fellow Peter Schultheiss, who was known for his research in using sensor array systems in underwater acoustics.

Inspired by Schultheiss’s passion for teaching, Messer-Yaron decided to pursue a career in academia. She was hired by Tel Aviv University as an electrical engineering professor in 1986. She was the first woman in Israel to become a full professor in the subject.

“Being a faculty member at a public university is the best job you can do. I didn’t make a lot of money, but at the end of each day, I looked back at what I did [with pride].”

For the next 14 years, she conducted research in statistical signal processing, time-delay estimation, and sensor array processing.

Her passion for teaching took her around the world as a visiting professor at Yale, the New Jersey Institute of Technology, the Institut Polytechnique de Paris, and other schools. She collaborated with colleagues from the universities on research projects.

In 1999 she was promoted to director of Tel Aviv University’s undergraduate electrical engineering program.

A year later, she was offered an opportunity she couldn’t refuse: Serving as chief scientist for the Israeli Ministry of Science, Culture, and Sports. She took a sabbatical from teaching and for the next three years oversaw the country’s science policy.

“I believe [working in the public sector] is part of our duty as faculty members, especially in public universities, because that makes you a public intellectual,” she says. “Working for the government gave me a broad view of many things that you don’t see as a professor, even in a large university.”

When she returned to the university in 2004, Messer-Yaron was appointed as the director of the new school of environmental studies. She oversaw the allocation of research funding and spoke with researchers individually to better understand their needs. After having coffee with one researcher, she realized there was a need to develop better weather-monitoring technology.

Hagit Messer-Yaron proudly displays her IEEE Medal for Environmental and Safety Technologies at this year’s IEEE Honors Ceremony. She is accompanied by IEEE President-Elect Kathleen Kramer and IEEE President Tom Coughlin.Robb Cohen

Using signal processing to monitor weather

Because the planet is warming, the risk of flash floods is steadily increasing. Warmer air holds more water—which leads to heavier-than-usual rainfall and results in more flooding, according to the U.S. Environmental Protection Agency.

Data about rainfall is typically collected by satellite radar and ground-based rain gauges. However, radar images don’t provide researchers with precise readings of what’s happening on the ground, according to an Ensia article. Rain gauges are accurate but provide data from small areas only.

So Messer-Yaron set her sights on developing technology that connects to cellular networks close to the ground to provide more accurate measurements, she says. Using existing infrastructure eliminates the need to build new weather radars and weather stations.

Communication systems automatically record the transmitted signal level and the received signal level, but rain can alter otherwise smooth wave patterns. By measuring the difference in the amplitude, meteorologists could extract the data necessary to track rainfall using the signal processing algorithms.
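The amplitude comparison can be illustrated with a toy calculation. Specific rain attenuation along a microwave link is commonly modeled as a power law, gamma = k * R**alpha, with frequency-dependent coefficients; the coefficient values, function, and parameter names below are illustrative assumptions for a sketch, not the team's actual algorithm:

```python
def rain_rate_from_attenuation(tx_dbm, rx_dbm, baseline_loss_db,
                               length_km, k=0.12, alpha=1.1):
    """Estimate rain rate (mm/h) from the drop in received signal level
    on a point-to-point microwave link (illustrative coefficients)."""
    # Attenuation attributable to rain: total path loss minus the
    # dry-weather baseline the network logs during clear conditions.
    rain_atten_db = (tx_dbm - rx_dbm) - baseline_loss_db
    if rain_atten_db <= 0:
        return 0.0  # no excess loss, no detectable rain
    # Specific attenuation in dB/km along the link.
    gamma = rain_atten_db / length_km
    # Invert the power law gamma = k * R**alpha for the rain rate R.
    return (gamma / k) ** (1.0 / alpha)
```

For example, a 5-kilometer link whose received level drops 5 dB below its dry-weather baseline would, with these assumed coefficients, imply a rain rate of roughly 7 mm/h. Real deployments must also handle wet-antenna effects and baseline drift, which is part of what makes the signal processing nontrivial.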

In 2005 Messer-Yaron and her group successfully tested the technology. The following year, their “Environmental Monitoring by Wireless Communication Networks” paper was published in Science.

The algorithm is being used in Israel in partnership with all three of the country’s major cellular service providers. Messer-Yaron acknowledges, however, that negotiating deals with cellular service companies in other countries has been difficult.

To expand the technology’s use worldwide, Messer-Yaron launched a research network through the European Cooperation in Science and Technology (COST): an opportunistic precipitation-sensing network known as OPENSENSE. The group connects researchers, meteorologists, and other experts around the world to collaborate on integrating the technology in members’ communities.

Monitoring the effects of climate change

Since developing the technology, Messer-Yaron has held a number of jobs including president of the Open University of Israel and vice chair of the country’s Council for Higher Education, which accredits academic institutions.

She is maintaining her link with Tel Aviv University today as a professor emerita.

“Being a faculty member at a public university is the best job you can do,” she says. “I didn’t make a lot of money, but at the end of each day, I looked back at what I did [with pride]. Because of the academic freedom and the autonomy I had, I was able to do many things in addition to teaching, including research.”

To continue her work in developing technology to monitor weather events, in 2016, she helped found ClimaCell, now Tomorrow.io, based in Boston. The startup aims to use wireless communication infrastructure and IoT devices to collect real-time weather data. Messer-Yaron served as its chief scientist until 2017.

She continues to update the original algorithms with her students, most recently with machine learning capabilities to extract data from physical measurements of the signal level in communication networks.

A global engineering community

When Messer-Yaron was an undergraduate student, she joined IEEE at the suggestion of one of her professors.

“I didn’t think much about the benefits of being a member until I became a graduate student,” she says. “I started attending conferences and publishing papers in IEEE journals, and the organization became my professional community.”

She is an active volunteer and a member of the IEEE Signal Processing Society. From 1994 to 2010 she served on the society’s Signal Processing Theory and Methods technical committee. She was associate editor of IEEE Signal Processing Letters and IEEE Transactions on Signal Processing. She is a member of the editorial boards of the IEEE Journal of Selected Topics in Signal Processing and IEEE Transactions on Signal Processing.

In the past 10 years, she’s been involved with other IEEE committees including the conduct review, ethics and member conduct, and global public policy bodies.

“I don’t see my career or my professional life without the IEEE,” she says.





Oceans Lock Away Carbon Slower Than Previously Thought



Research conducted at sea using a rotating gravity machine and microscope has found that the Earth’s oceans may not be absorbing as much carbon as researchers have long thought.

Oceans are believed to absorb roughly 26 percent of global carbon dioxide emissions by drawing down CO2 from the atmosphere and locking it away. In this system, CO2 enters the ocean, where phytoplankton and other organisms consume about 70 percent of it. When these organisms eventually die, their soft, small structures sink to the bottom of the ocean in what looks like an underwater snowfall.

This “marine snow” pulls carbon away from the surface of the ocean and sequesters it in the depths for millennia, which enables the surface waters to draw down more CO2 from the air. It’s one of Earth’s best natural carbon-removal systems. It’s so effective at keeping atmospheric CO2 levels in check that many research groups are trying to enhance the process with geoengineering techniques.

But the new study, published on 11 October in Science, found that the sinking particles don’t fall to the ocean floor as quickly as researchers thought. Using a custom gravity machine that simulated marine snow’s native environment, the study’s authors observed that the particles produce mucus tails that act like parachutes, putting the brakes on their descent—sometimes even bringing them to a standstill.

The physical drag leaves carbon lingering in the upper hydrosphere, rather than being safely sequestered in deeper waters. Living organisms can then consume the marine snow particles and respire their carbon back into the sea. Ultimately, this impedes the rate at which the ocean draws down and sequesters additional CO2 from the air.

The implications are grim: Scientists’ best estimates of how much CO2 the Earth’s oceans sequester could be way off. “We’re talking roughly hundreds of gigatonnes of discrepancy if you don’t include these marine snow tails,” says Manu Prakash, a bioengineer at Stanford University and one of the paper’s authors. The work was conducted by researchers at Stanford, Rutgers University in New Jersey, and Woods Hole Oceanographic Institution in Massachusetts.

Oceans Absorb Less CO2 Than Expected

Researchers for years have been developing numerical models to estimate marine carbon sequestration. Those models will need to be adjusted for the slower sinking speed of marine snow, Prakash says.

The findings also have implications for startups in the fledgling marine carbon geoengineering field. These companies use techniques such as ocean alkalinity enhancement to augment the ocean’s ability to sequester carbon. Their success depends, in part, on using numerical models to prove to investors and the public that their techniques work. But their estimates are only as good as the models they use, and the scientific community’s confidence in them.

“We’re talking roughly hundreds of gigatonnes of discrepancy if you don’t include these marine snow tails.” —Manu Prakash, Stanford University

The Stanford researchers made the discovery on an expedition off the coast of Maine. There, they collected marine samples by hanging traps from their boat 80 meters deep. After pulling up a sample, the researchers quickly analyzed the contents while still on board the ship using their wheel-shaped machine and microscope.

The researchers built a microscope with a spinning wheel that simulates marine snow falling through sea water over longer distances than would otherwise be practical.Prakash Lab/Stanford

The device simulates the organisms’ vertical travel over long distances. Samples go into a wheel about the size of a vintage film reel. The wheel spins constantly, allowing suspended marine-snow particles to sink while a camera captures their every move.

The apparatus adjusts for temperature, light, and pressure to emulate marine conditions. Computational tools assess flow around the sinking particles and custom software removes noise in the data from the ship’s vibrations. To accommodate for the tilt and roll of the ship, the researchers mounted the device on a two-axis gimbal.

Slower Marine Snow Reduces Carbon Sequestration

With this setup, the team observed that sinking marine snow generates an invisible halo-shaped comet tail made of viscoelastic transparent exopolymer—a mucus-like parachute. They discovered the invisible tail by adding small beads to the seawater sample in the wheel, and analyzing the way they flowed around the marine snow. “We found that the beads were stuck in something invisible trailing behind the sinking particles,” says Rahul Chajwa, a bioengineering postdoctoral fellow at Stanford.

The tail introduces drag and buoyancy, doubling the amount of time marine snow spends in the upper 100 meters of the ocean, the researchers concluded. “This is the sedimentation law we should be following,” says Prakash, who hopes to get the results into climate models.
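As a rough illustration of why the extra drag matters, here is a toy Stokes-settling model with assumed particle sizes and excess densities; it is not the sedimentation law the team derived, only a sketch of how a drag factor stretches residence time in the upper ocean:

```python
def stokes_velocity(radius_m, delta_rho_kg_m3, mu=1.0e-3, g=9.81):
    """Stokes settling speed (m/s) of a small sphere of excess density
    delta_rho in water of dynamic viscosity mu."""
    return 2.0 * delta_rho_kg_m3 * g * radius_m**2 / (9.0 * mu)

def residence_time_upper_ocean(v_sink_m_per_s, depth_m=100.0, drag_factor=1.0):
    """Time (s) to sink through the upper layer; drag_factor > 1 models
    the slowdown from a mucus parachute."""
    return depth_m * drag_factor / v_sink_m_per_s
```

In this sketch, a 50-micrometer particle with a 50 kg/m3 excess density sinks at a few tens of meters per day, and setting the drag factor to 2 doubles its time in the biologically active upper 100 meters, giving grazers and microbes twice as long to intercept the carbon before it reaches depth.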

The study will likely help models project carbon export—the process of transporting CO2 from the atmosphere to the deep ocean, says Lennart Bach, a marine biochemist at the University of Tasmania in Australia, who was not involved with the research. “The methodology they developed is very exciting and it’s great to see new methods coming into this research field,” he says.

But Bach cautions against extrapolating the results too far. “I don’t think the study will change the numbers on carbon export as we know them right now,” because these numbers are derived from empirical methods that would have unknowingly included the effects of the mucus tail, he says.

Marine snow may be slowed by “parachutes” of mucus while sinking, potentially lowering the rate at which the global ocean can sequester carbon in the depths.Prakash Lab/Stanford

Prakash and his team came up with the idea for the microscope while conducting research on a human parasite that can travel dozens of meters. “We would make 5- to 10-meter-tall microscopes, and one day, while packing for a trip to Madagascar, I had this ‘aha’ moment,” says Prakash. “I was like: Why are we packing all these tubes? What if the two ends of these tubes were connected?”

The group turned their linear tube into a closed circular channel—a hamster wheel approach to observing microscopic particles. Over five expeditions at sea, the team further refined the microscope’s design and fluid mechanics to accommodate marine samples, often tackling the engineering while on the boat and adjusting for flooding and high seas.

In addition to the sedimentation physics of marine snow, the team also studies other plankton that may affect climate and carbon-cycle models. On a recent expedition off the coast of Northern California, the group discovered a cell with silica ballast that makes marine snow sink like a rock, Prakash says.

The crafty gravity machine is one of Prakash’s many frugal inventions, which include an origami-inspired paper microscope, or “foldscope,” that can be attached to a smartphone, and a paper-and-string biomedical centrifuge dubbed a “paperfuge.”




re

U.S. Chip Revival Plan Chooses Sites



Last week, the organization tasked with running the biggest chunk of the U.S. CHIPS Act’s US $13 billion R&D program made some significant strides: The National Semiconductor Technology Center (NSTC) released a strategic plan and selected the sites of two of its three planned facilities. The locations of the two sites—a “design and collaboration” center in Sunnyvale, Calif., and a lab devoted to advancing the leading edge of chipmaking, in Albany, N.Y.—build on an existing ecosystem at each location, experts say. The location of the third planned center—a chip prototyping and packaging site that could be especially critical for speeding semiconductor startups—is still a matter of speculation.

“The NSTC represents a once-in-a-generation opportunity for the U.S. to accelerate the pace of innovation in semiconductor technology,” Deirdre Hanford, CEO of Natcast, the nonprofit that runs the NSTC centers, said in a statement. According to the strategic plan, which covers 2025 to 2027, the NSTC is meant to accomplish three goals: extend U.S. technology leadership, reduce the time and cost to prototype, and build and sustain a semiconductor workforce development ecosystem. The three centers are meant to do a mix of all three.

New York gets extreme ultraviolet lithography

NSTC plans to direct $825 million into the Albany project. The site will be dedicated to extreme ultraviolet lithography, a technology that’s essential to making the most advanced logic chips. The Albany Nanotech Complex, which has already seen more than $25 billion in investments from the state and industry partners over two decades, will form the heart of the future NSTC center. It already has an EUV lithography machine on site and has begun an expansion to install a next-generation version, called high-NA EUV, which promises to produce even finer chip features. Working with a tool recently installed in Europe, IBM, a long-time tenant of the Albany research facility, reported record yields of copper interconnects built every 21 nanometers, a pitch several nanometers tighter than possible with ordinary EUV.

“It’s fulfilling to see that this ecosystem can be taken to the national and global level through CHIPS Act funding,” said Mukesh Khare, general manager of IBM’s semiconductors division, speaking from the future site of the NSTC EUV center. “It’s the right time, and we have all the ingredients.”

While only a few companies are capable of manufacturing cutting-edge logic using EUV, the impact of the NSTC center will be much broader, Khare argues. It will extend down as far as early-stage startups with ideas or materials for improving the chipmaking process. “An EUV R&D center doesn’t mean just one machine,” says Khare. “It needs so many machines around it… It’s a very large ecosystem.”

Silicon Valley lands the design center

The design center is tasked with conducting advanced research in chip design, electronic design automation (EDA), chip and system architectures, and hardware security. It will also host the NSTC’s design enablement gateway—a program that provides NSTC members with secure, cloud-based access to design tools, reference processes and designs, and shared data sets, with the goal of reducing the time and cost of design. Additionally, it will house workforce development, member convening, and administration functions.

Situating the design center in Silicon Valley, with its concentration of research universities, venture capital, and workforce, seems like the obvious choice to many experts. “I can’t think of a better place,” says Patrick Soheili, co-founder of interconnect technology startup Eliyan, which is based in Santa Clara, Calif.

Abhijeet Chakraborty, vice president of engineering in the technology and product group at Silicon Valley-based Synopsys, a leading maker of EDA software, sees Silicon Valley’s expansive tech ecosystem as one of its main advantages in landing the NSTC’s design center. The region concentrates companies and researchers involved in the whole spectrum of the industry from semiconductor process technology to cloud software.

Access to such a broad range of industries is increasingly important for chip design startups, he says. “To design a chip or component these days you need to go from concept to design to validation in an environment that takes care of the entire stack,” he says. It’s prohibitively expensive for a startup to do that alone, so one of Chakraborty’s hopes for the design center is that it will help startups access the design kits and other data needed to operate in this new environment.

Packaging and prototyping still to come

A third promised center for prototyping and packaging is still to come. “The big question is where does the packaging and prototyping go?” says Mark Granahan, cofounder and CEO of Pennsylvania-based power semiconductor startup Ideal Semiconductor. “To me that’s a great opportunity.” He points out that because there is so little packaging technology infrastructure in the United States, any ambitious state or region should have a shot at hosting such a center. One of the original intentions of the act, after all, was to expand the number of regions of the country that are involved in the semiconductor industry.

But that hasn’t stopped some already tech-heavy regions from wanting it. “Oregon offers the strongest ecosystem for such a facility,” said a spokesperson for Intel, whose technology development is done in the state. “The state is uniquely positioned to contribute to the success of the NSTC and help drive technological advancements in the U.S. semiconductor industry.”

As the NSTC makes progress, Granahan’s concern is that bureaucracy will expand with it and slow efforts to boost the U.S. chip industry. Already the layers of control are multiplying: The CHIPS office at the National Institute of Standards and Technology executes the act; the NSTC is administered by the nonprofit Natcast, which directs the EUV center, which in turn sits in a facility run by another nonprofit, NY CREATES. “We want these things to be agile and make local decisions,” Granahan says.




re

This Mobile 3D Printer Can Print Directly on Your Floor



Waiting for each part of a 3D-printed project to finish, taking it out of the printer, and then installing it on location can be tedious for multi-part projects. What if there was a way for your printer to print its creation exactly where you needed it? That’s the promise of MobiPrint, a new 3D printing robot that can move around a room, printing designs directly onto the floor.

MobiPrint, designed by Daniel Campos Zamora at the University of Washington, consists of a modified off-the-shelf 3D printer atop a home vacuum robot. First it autonomously maps its space—be it a room, a hallway, or an entire floor of a house. Users can then choose from a prebuilt library or upload their own design to be printed anywhere in the mapped area. The robot then traverses the room and prints the design.

It’s “a new system that combines robotics and 3D printing that could actually go and print in the real world,” Campos Zamora says. He presented MobiPrint on 15 October at the ACM Symposium on User Interface Software and Technology.

Campos Zamora and his team started with a Roborock S5 vacuum robot and installed firmware that allowed it to communicate with the open source program Valetudo. Valetudo disconnects personal robots from their manufacturer’s cloud, connecting them to a local server instead. Data collected by the robot, such as environmental mapping, movement tracking, and path planning, can all be observed locally, enabling users to see the robot’s LIDAR-created map.

Campos Zamora built a layer of software that connects the robot’s perception of its environment to the 3D printer’s print commands. The printer, a modified Prusa Mini+, can print on carpet, hardwood, and vinyl, with maximum printing dimensions of 180 by 180 by 65 millimeters. The robot has printed pet food bowls, signage, and accessibility markers as sample objects.
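The park-and-print flow described above can be sketched in a few lines of code. This is a hypothetical illustration, not MobiPrint’s actual software: the class and function names are invented, and only the map-coordinate conversion and the Prusa Mini+’s stated 180-by-180-by-65-millimeter print volume come from the article.

```python
# Hypothetical sketch of a "park and print" pipeline: convert a user-chosen
# pixel on the robot's LIDAR map to world coordinates, and check that the
# requested design fits the printer's usable volume. Names are invented.

from dataclasses import dataclass

@dataclass
class MapInfo:
    resolution_m: float  # meters per map pixel
    origin_x: float      # world x-coordinate of map pixel (0, 0)
    origin_y: float      # world y-coordinate of map pixel (0, 0)

def pixel_to_world(map_info: MapInfo, px: int, py: int) -> tuple:
    """Convert a pixel on the robot's occupancy map to world coordinates."""
    return (map_info.origin_x + px * map_info.resolution_m,
            map_info.origin_y + py * map_info.resolution_m)

def fits_print_bed(width_mm: float, depth_mm: float, height_mm: float) -> bool:
    """Check a design against the modified Prusa Mini+'s 180 x 180 x 65 mm volume."""
    return width_mm <= 180 and depth_mm <= 180 and height_mm <= 65

def plan_print(map_info: MapInfo, target_px: tuple, design_size_mm: tuple):
    """Return (world_goal, ok): where the robot should park, and whether the design fits."""
    goal = pixel_to_world(map_info, *target_px)
    ok = fits_print_bed(*design_size_mm)
    return goal, ok

# Example: a 5 cm/pixel map whose origin sits 2 m behind the robot's start.
info = MapInfo(resolution_m=0.05, origin_x=-2.0, origin_y=-2.0)
goal, ok = plan_print(info, (40, 80), (120, 120, 10))
print(goal, ok)
```

In a real system, the goal coordinate would be handed to the vacuum base’s navigation stack and the design sliced to G-code only after the robot reports it has parked.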

MakeabilityLab/YouTube

Currently, MobiPrint can only “park and print.” The robot base cannot move during printing to make large objects, like a mobility ramp. Printing designs larger than the robot is one of Campos Zamora’s goals in the future. To learn more about the team’s vision for MobiPrint, Campos Zamora answered a few questions from IEEE Spectrum.

What was the inspiration for creating your mobile 3D printer?

Daniel Campos Zamora: My lab is focused on building systems with an eye towards accessibility. One of the things that really inspired this project was looking at the tactile surface indicators that help blind and low vision users find their way around a space. And so we were like, what if we made something that could automatically go and deploy these things? Especially in indoor environments, which are generally a little trickier and change more frequently over time.

We had to step back and build this entirely different thing, using the environment as a design element. We asked: how do you integrate the real world environment into the design process, and then what kind of things can you print out in the world? That’s how this printer was born.

What were some surprising moments in your design process?

Campos Zamora: When I was testing the robot on different surfaces, I was not expecting the 3D printed designs to stick extremely well to the carpet. It stuck way too well. Like, you know, just completely bonded down there.

I think there’s also just a lot of joy in seeing this printer move. When I was doing a demonstration of it at this conference last week, it almost seemed like the robot had a personality. A vacuum robot can seem to have a personality, but this printer can actually make objects in my environment, so I feel a different relationship to the machine.

Where do you hope to take MobiPrint in the future?

Campos Zamora: There’s several directions I think we could go. Instead of controlling the robot remotely, we could have it follow someone around and print accessibility markers along a path they walk. Or we could integrate an AI system that recommends objects be printed in different locations. I also want to explore having the robot remove and recycle the objects it prints.





re

Why Are Kindle Colorsofts Turning Yellow?



In physical books, yellowing pages are usually a sign of age. But brand-new users of Amazon’s Kindle Colorsofts, the tech giant’s first color e-reader, are already noticing yellow hues appearing at the bottoms of their displays.

Since the complaints began to trickle in, Amazon has reportedly suspended shipments and announced that it is working to fix the issue. (As of publication of this article, the US $280 Kindle had an average 2.6-star rating on Amazon.) It’s not yet clear what is causing the discoloration. But while the issue is new—and unexpected—the technology is not, says Jason Heikenfeld, an IEEE Fellow and engineering professor at the University of Cincinnati. The Kindle Colorsoft, which became available on 30 October, uses “a very old approach,” says Heikenfeld, who previously worked on developing e-paper technology. “It was the first approach everybody tried.”

Amazon’s e-reader uses reflective display technology developed by E Ink, a company that started in the 1990s as an MIT Media Lab spin-off before developing its now-dominant electronic paper displays. E Ink’s displays are used in Kindles, as well as in top e-readers from Kobo, reMarkable, Onyx, and others. E Ink first introduced Kaleido—the basis of the Colorsoft’s display—five years ago, though the road to full-color e-paper started well before.

How E-Readers Work

Monochromatic Kindles work by applying voltages to electrodes in the screen that bring black or white pigment to the top of each pixel. Those pixels then reflect ambient light, creating a paper-like display. To create a full-color display, companies like E Ink added an array of filters just above the ink. This approach didn’t work well at first because the filters lost too much light, making the displays dark and low resolution. But with a few adjustments, Kaleido was ready for consumer products in 2019. (Other approaches—like adding colored pigments to the ink—have been developed, but these come with their own drawbacks, including a higher price tag.)
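The light penalty of a color-filter array can be seen with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not E Ink specifications: each filtered subpixel passes roughly one color band of the ambient light, so the filtered display returns only a fraction of what a bare monochrome panel reflects.

```python
# Illustrative estimate (assumed numbers, not E Ink specs): how much ambient
# light a reflective display returns once a color-filter array is added.

def effective_reflectance(panel_reflectance: float,
                          filter_transmission: float,
                          bands_passed: int,
                          total_bands: int) -> float:
    """Fraction of ambient light returned through a color-filter array."""
    return panel_reflectance * filter_transmission * (bands_passed / total_bands)

mono = 0.45  # assumed reflectance of a bare monochrome e-paper panel
# Each RGB subpixel passes ~1 of 3 bands, through an imperfect filter.
color = effective_reflectance(mono, 0.9, 1, 3)
print(round(color, 3))  # roughly a third of the monochrome brightness
```

This is why early filtered prototypes looked dark, and why improvements are asymptotic rather than Moore’s Law-like: the filter can only approach, never exceed, the underlying panel’s reflectance.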

Given this design, it initially seemed to Heikenfeld that the issue stemmed from the software, which determines the voltages applied to each electrode. That would align with reports from some users that the discoloration appeared after a software update.

But industry analyst Ming-Chi Kuo suggested in a post on X that the issue is due to the e-reader’s hardware. Amazon switched the optically clear adhesive (OCA) used in the Colorsoft to a material that may not be so optically clear. In its announcement of the Colorsoft, the company boasted “custom formulated coatings” that would enhance the color display as one of the new e-reader’s innovations.

In terms of resolving the issue, Kuo’s post also stated that “While component suppliers have developed several hardware solutions, Amazon seems to be leaning toward a software-based fix.” Heikenfeld is not sure how a software fix would work, apart from blacking out the bottom of the screen.

Amazon did not reply to IEEE Spectrum’s request for comment. In an email to IEEE Spectrum, E Ink stated, “While we cannot comment on any individual partner or product, we are committed to supporting our partners in understanding and addressing any issues that arise.”

The Future of E-Readers

It took a long time for color Kindles to arrive, and the future of reflective e-reader displays isn’t likely to improve much, according to Heikenfeld. “I used to work a lot in this field, and it just really slowed down at some point, because it’s a tough nut to crack,” Heikenfeld says.

There are inherent limitations and inefficiencies to working with filter-based color displays that rely on ambient light, and there’s no Moore’s Law for these displays. Instead, their improvement is asymptotic—and we may already be close to the limit. Meanwhile, displays that emit light, like LCD and OLED, continue to improve. “An iPad does a pretty damn good job with battery life now,” says Heikenfeld.

At the same time, he believes there will always be a place for reflective displays, which remain a more natural experience for our eyes. “We live in a world of reflective color,” Heikenfeld says.

This story was updated on 12 November 2024 to correct that Jason Heikenfeld is an IEEE Fellow.




re

Get to Know the IEEE Board of Directors



The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide—while fulfilling the IEEE mission of advancing technology for the benefit of humanity.

This article features IEEE Board of Directors members ChunChe “Lance” Fung, Eric Grigorian, and Christina Schober.

IEEE Senior Member ChunChe “Lance” Fung

Director, Region 10: Asia Pacific


Fung has worked in academia and provided industry consultancy services for more than 40 years. His research interests include applying artificial intelligence, machine learning, computational intelligence, and other techniques to solve practical problems. He has authored more than 400 publications in the disciplines of AI, computational intelligence, and related applications. Fung currently works on the ethical applications and social impacts of AI.

A member of the IEEE Systems, Man, and Cybernetics Society, Fung has been an active IEEE volunteer for more than 30 years. As a member and chair of the IEEE Technical Program Integrity and Conference Quality committees, he oversaw the quality of technical programs presented at IEEE conferences. Fung also chaired the Region 10 Educational Activities Committee. He was instrumental in translating educational materials to local languages for the IEEE Reaching Locals project.

As chair of the IEEE New Initiatives Committee, he established and promoted the US $1 Million Challenge Call for New Initiatives, which supports potential IEEE programs, services, or products that will significantly benefit members, the public, the technical community, or customers and could have a lasting impact on IEEE or its business processes.

Fung has left an indelible mark as a dedicated educator at Singapore Polytechnic, Curtin University, and Murdoch University. He was appointed in 2015 as professor emeritus at Murdoch, and he takes pride in training the next generation of volunteers, leaders, teachers, and researchers in the Western Australian community. Fung received the IEEE Third Millennium Medal and the IEEE Region 10 Outstanding Volunteer Award.

IEEE Senior Member Eric Grigorian

Director, Region 3: Southern U.S. & Jamaica


Grigorian has extensive experience leading international cross-domain teams that support the commercial and defense industries. His current research focuses on implementing model-based systems engineering, creating models that depict system behavior, interfaces, and architecture. His work has led to streamlined processes, reduced costs, and faster design and implementation of capabilities due to efficient modeling and verification. Grigorian holds two U.S. utility patents.

Grigorian has been an active volunteer with IEEE since his time as a student member at the University of Alabama in Huntsville (UAH). He saw it as an excellent way to network and get to know people. He found his personality was suited for working within the organization and building leadership skills. During the past 43 years as an IEEE member, he has been affiliated with the IEEE Aerospace and Electronic Systems (AESS), IEEE Computer, and IEEE Communications societies.

As Grigorian’s career has evolved, his involvement with IEEE has also increased. He has been the IEEE Huntsville Section’s student activities chair, vice chair, and chair, as well as the section’s AESS chair. He served as IEEE SoutheastCon chair in 2008 and 2019, and he served on the IEEE Region 3 executive committee as area chair and conference committee chair, enhancing IEEE members’ benefits, engagement, and career advancement. He has contributed significantly to IEEE initiatives, including promoting preuniversity science, technology, engineering, and mathematics efforts in Alabama.

Grigorian’s professional achievements have been recognized with numerous awards from employers and local technical chapters, including the 2020 UAH Alumni of Achievement Award for the College of Engineering and the 2006 IEEE Region 3 Outstanding Engineer of the Year Award. He is a member of the IEEE–Eta Kappa Nu honor society.

IEEE Life Senior Member Christina Schober

Director, Division V


Schober is an innovative engineer with a diverse background in design and manufacturing engineering. In a career spanning more than 40 years, she has researched, designed, and manufactured sensors for space, commercial, and military aircraft navigation and tactical guidance systems. She was responsible for the successful transition from design to production of groundbreaking programs, including an integrated flight management system and the Stinger missile’s roll frequency sensor, and for the design of three phases of the DARPA atomic clock. She holds 17 U.S. patents and 24 other patents in the aerospace and navigation fields.

Schober started her career in the 1980s, at a time when female engineers were not widely accepted. The prevailing attitude required her to “stay tough,” she says, and she credits IEEE for giving her technical and professional support. Because of her experiences, she became dedicated to making diversity and inclusion systemic in IEEE.

Schober has held many leadership roles, including IEEE Division VIII Director, IEEE Sensors Council president, and IEEE Standards Sensors Council secretary. In addition to her membership in the IEEE Photonics Society, she is active with the IEEE Computer Society, IEEE Sensors Council, IEEE Standards Association, and IEEE Women in Engineering.

She is also active in her local community, serving as an invited speaker on STEM for the public school system, and she has volunteered at youth shelters. Schober has received numerous awards, including the IEEE Sensors Council Lifetime Contribution Award and the IEEE Twin Cities Section’s Young Engineer of the Year Award. She is an IEEE Computer Society Gold Core member, a member of the IEEE–Eta Kappa Nu honor society, and a recipient of the IEEE Third Millennium Medal.




re

Stranded Astronauts Set to Come Home After SpaceX Capsule With Extra Seats Reaches ISS

Two astronauts relinquished their seats on a four-person spacecraft so that their colleagues could return to Earth from the ISS, where they’ve been stuck since June.




re

A New Spacecraft Could Help Determine if There’s Life on a Moon of Jupiter

The Europa Clipper, set for launch in October, will explore a distant ocean world.




re

You Won’t Want to Miss October’s Rare Comet Sighting. Here’s How and When You Can See It

A “once in a lifetime” comet is expected to light up the night sky as it passes by Earth.




re

In Photos: Celebrating Hawaii’s Wonder a Year After the Maui Wildfires

In his latest book, The Blue on Fire: Hawaii, photographer Enzo Barracco hopes to inspire the world to protect the ocean.




re

4 Astronauts Return to Earth After Being Delayed by Boeing’s Capsule Trouble and Hurricane Milton

A SpaceX capsule carrying the crew parachuted before dawn into the Gulf of Mexico just off the Florida coast.




re

It’s Time to Redefine What a Megafire Is in the Climate Change Era

It's not the reach of a fire that matters most; it's the speed. Understanding this can help society better prepare.




