
Measuring Education Outcomes: Moving from Enrollment to Learning

Event Information

June 2, 2010
1:00 PM - 5:00 PM EDT

The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

On Wednesday, June 2, the Center for Universal Education at Brookings hosted a discussion on the need to refocus the international education dialogue from school enrollment to learning achieved in developing countries. Participants, who included education experts from academia, international organizations and government, assessed the current state of systematic efforts at the global level to measure learning outcomes.

Center for Universal Education Co-Director and Senior Fellow Jacques van der Gaag opened the event by charting the landscape of learning, including education outside the primary school classroom and during early childhood development, and stressed the importance of acquiring both cognitive and non-cognitive skills for ensuring learning outcomes.


Where is the Learning? Measuring Schooling Efforts in Developing Countries

Introduction

Achieving universal education is a twofold challenge: to get children and youth into school and then to teach them something meaningful while they are there. While important progress has been made on the first challenge, there is a crisis unfolding in relation to learning. Around the world, there have been major gains in primary school enrollment, partly due to the United Nations’ Millennium Development Goals and the abolition of school fees by many national governments. However, in many countries, students are spending years in school without learning core competencies, such as reading and writing. To address this learning crisis, the global community and national governments need to place a much greater focus on the ultimate objective of education—to acquire knowledge and develop skills.

This shift in focus away from just enrollment to enrollment plus quality learning requires measuring learning outcomes. However, the global education community is not yet systematically using effective instruments for measuring primary school learning in low- and middle-income countries. This policy brief reviews the global efforts among the primary donors to support the measurement of learning outcomes. It then suggests steps needed to transition global education policy into a new paradigm of enrollment plus quality learning, which includes: scaling up the implementation of national education accounts and national assessment systems; increasing attention to monitoring early learning during child development to improve readiness for school; and expanding the systematic use of simple assessments of basic cognitive functions in the early grades to help teachers improve their practice.


Putin’s not-so-excellent spring

Early this year, Vladimir Putin had big plans for an excellent spring: first, constitutional amendments approved by the legislative branch and the public, allowing him the opportunity to remain in power until 2036, followed by a huge patriotic celebration of the 75th anniversary of the defeat of Nazi Germany. Well, stuff happens—specifically, COVID-19. Putin’s spring has…


The Arab Spring five years later: Toward greater inclusiveness


Event Information

January 15, 2016
10:15 AM - 11:45 AM EST

Falk Auditorium
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036

Five years have passed since the self-immolation of Mohamed Bouazizi in Tunisia sparked revolts around the Arab world and the beginning of the Arab Spring. Despite high hopes that the Arab world was entering a new era of freedom, economic growth, and social justice, the transition turned out to be long and difficult, with the Arab world now in turmoil with revolutions, counter-revolutions, wars, civil strife, and the worst refugee crisis of our times. The response to the Arab Spring and its aftermath has focused almost exclusively on political and security issues, and on the very divisive questions of national identity and political regimes. Economic and social questions have been put on the back burner.

On January 15, Global Economy and Development at Brookings hosted a discussion on a new book, "The Arab Spring Five Years Later," which explores the critical economic and social issues driving the Arab Spring agenda, examines the real economic grievances that must be addressed in order to achieve peace, stability, and successful political transitions, and provides an approach to addressing those grievances.

Hafez Ghanem and Shinichi Yamanaka presented the key findings of the book, followed by a panel discussion.



New frameworks for countering terrorism and violent extremism


Event Information

February 16, 2016
10:00 AM - 11:00 AM EST

Saul/Zilkha Rooms
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036

A conversation with Deputy Secretary of State Antony Blinken



One year after the White House Summit on Countering Violent Extremism, the United States continues to adapt its efforts to blunt the appeal of violent extremism. As part of this effort, the State Department is launching a series of new initiatives to better coordinate the U.S. response to terrorist propaganda and recruitment.

On February 16, the Foreign Policy program at Brookings hosted The Honorable Antony J. Blinken, deputy secretary of state, for a discussion of the United States’ civilian-led initiatives to counter the spread of the Islamic State and other violent extremist groups. Blinken charted the path forward, including partnerships with industry and civil society, and outlined the challenges that lie ahead.

Brookings President Strobe Talbott offered welcoming remarks. General John Allen, senior fellow and co-director of the Center for 21st Century Security and Intelligence at Brookings, introduced Deputy Secretary Blinken, and Tamara Cofman Wittes, senior fellow and director of the Center for Middle East Policy at Brookings, joined Deputy Secretary Blinken in conversation following his remarks.


Caremongering in the time of coronavirus: Random acts of kindness and online enrichment

It is the middle of the night and I am cloistered in my apartment in downtown Washington, D.C. I am facing four screens, including my smartphone, a laptop, a Mac desktop and a large wall monitor. I am trying to make sense of the fast-changing data on the spread and deadliness of the virus around…


Exploring High-Speed Rail Options for the United States


When President Obama unveiled his budget allocation for high-speed rail, he said, “In France, high-speed rail has pulled regions from isolation, ignited growth [and] remade quiet towns into thriving tourist destinations.” His remarks emphasize how high-speed rail increases the accessibility of isolated places as an argument for similar investments here. So, what’s the source of this argument in the European context?

In November 2009, the European Union’s ESPON (the European Observation Network for Territorial Development and Cohesion) released a report called “Trends in Accessibility.” ESPON examined the extent to which accessibility has changed between 2001 and 2006. ESPON defines accessibility as how “easily people in one region can reach people in another region.” This measurement of accessibility helps determine the “potential for activities and enterprises in the region to reach markets and activities in other regions.”
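
ESPON’s “potential accessibility” indicators are commonly formulated as the sum of destination populations weighted by a decay function of travel time. As a rough illustration of that general idea, here is a minimal sketch in Python; the regions, populations, travel times, and decay parameter are all invented for the example, and ESPON’s actual specification may differ.

```python
import math

# Hypothetical regions with populations (millions) and rail travel times
# (hours) from one origin region; every number here is invented.
populations = {"A": 10.0, "B": 5.0, "C": 2.0}
travel_hours = {"A": 0.0, "B": 1.5, "C": 4.0}

BETA = 0.5  # decay parameter: how quickly travel time erodes access

def potential_accessibility(times, pops, beta=BETA):
    # A_i = sum_j P_j * exp(-beta * t_ij): destination populations,
    # discounted by the travel time needed to reach them.
    return sum(p * math.exp(-beta * times[r]) for r, p in pops.items())

before = potential_accessibility(travel_hours, populations)
# Suppose a new high-speed line cuts the trip to region C from 4 to 2 hours.
travel_hours["C"] = 2.0
after = potential_accessibility(travel_hours, populations)
print(f"accessibility before: {before:.2f}, after: {after:.2f}")
```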

ESPON’s research concluded that in this five-year period, rail accessibility grew an average of 13.1 percent. The report further concludes that high-speed rail lines have “influenced positively the potential accessibility of many European regions and cities.”

In particular, the research found that the core of Europe (Germany, France, Belgium, the Netherlands, and Switzerland) has the highest potential accessibility. Europe’s core produces the highest levels of economic output and has the highest population densities. ESPON argues that with such densities, the core has found reason to link its economic hubs (cities) with high-speed rail. These are the places in Europe where high-speed rail yields the greatest returns on investment.

But ESPON also found that high-speed rail is starting to increase the accessibility of isolated places such as France’s Tours, Lyon, and Marseille. This is a very important finding for Europe, which has a long-standing policy of social cohesion and balance, striving to create economic sustainability and population stability across the continent. The objective is for areas well beyond the core to thrive economically and to dissuade people from migrating in search of jobs. Fiscally, social cohesion translates into investing disproportionately more money in areas not producing sufficient levels of economic output. High-speed rail is but one of the many strategies intended to produce “economic and social cohesion,” states a European Commission report on high-speed rail.

But we are not Europe. While Europe’s thesis underpinning high-speed rail is social cohesion, what is our underlying thesis for high-speed rail? And what does this look like spatially? What was the logic behind the selection of Florida over other possible corridors? Is this line going to strengthen our national economy and GDP? Clarity on this score will help ensure the project is a success and offers a high return on investment. The lesson from this accessibility study is that places with large populations and high GDP output gain the most accessibility, and therefore offer the greatest chance of success.

It would be a pity if the U.S. finally jumped on the high-speed bandwagon but still missed the train.

Publication: The Avenue, The New Republic

The Arab Spring Five Years Later

The dilemma felt by Arab youth was captured in Tunisia by the self-immolation in 2010 of Mohamed Bouazizi, who was frustrated by restrictions on his small street-vending business. His death became the catalyst that seemed to light up revolts throughout the Middle East. The frustration had been building for some time:  large segments of society…


The Arab Spring Five Years Later: Vol 2

Volume 1 of The Arab Spring Five Years Later is based on extensive research conducted by scholars from a variety of backgrounds, including many associated with the Japan International Cooperation Agency. Now the original research papers are gathered in volume 2 and are available for readers who wish to go even further in understanding the…


The Arab Spring Five Years Later: Vol. 1 & Vol. 2

This two-volume set explores in-depth the economic origins and repercussions of the Arab Spring revolts. Volume 1 of The Arab Spring Five Years Later is based on extensive research conducted by scholars from a variety of backgrounds, including many associated with the Japan International Cooperation Agency (JICA). The original research papers are gathered in volume…


The Arab Spring five years later: Toward greater inclusiveness

Five years have passed since the self-immolation of Mohamed Bouazizi in Tunisia sparked revolts around the Arab world and the beginning of the Arab Spring. Despite high hopes that the Arab world was entering a new era of freedom, economic growth, and social justice, the transition turned out to be long and difficult, with the…


Global Manufacturing: Entering a New Era


Event Information

November 19, 2012
9:30 AM - 11:30 AM EST

Saul/Zilkha Rooms
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036

A decade into the 21st century, the role of manufacturing in global and metropolitan economies continues to evolve. After 20 years of rapid globalization in which manufacturing production shifted to emerging markets, consumer demand is growing there, too. Emerging market demand, in fact, has unprecedented momentum as 1.8 billion people enter the global consuming class. At the same time, a robust pipeline of innovation in products and manufacturing processes has opened new ways for U.S. manufacturing companies to compete.

On November 19, the Metropolitan Policy Program at Brookings hosted a forum to release a report from the McKinsey Global Institute that examines the role of manufacturing in advanced and developing economies and the choices that manufacturers grapple with in this new era of global competition. Following presentations by the authors, an expert panel discussed the key trends shaping manufacturing competitiveness, global strategies, the next era of manufacturing innovation, and what these changes imply for growth and employment in manufacturing across the globe.


Is Manufacturing "Cool" Again?


Once upon a time, ambitious young people with a knack for math and science went to work in manufacturing. They designed planes, computers, and furniture, figured out how to lay out an assembly line, helped to make new cars faster and refrigerators more efficient, pushed the limits of computer chips, and invented new medicines. But, as the role of manufacturing diminished in advanced economies, the brightest talents tended to gravitate to finance and other service fields that were growing rapidly – and paying well.

But here’s some news: global manufacturing has the potential to stage a renaissance and once again become a career of choice for the most talented.

Of course, any manufacturing rebound in the advanced economies will not generate mass employment; but it will create many high-quality jobs. There will be more demand for software programmers, engineers, designers, robotics experts, data analytics specialists, and myriad other professional and service-type positions. In some manufacturing sectors, more such people may be hired than will be added on the factory floor.

Exploding demand in developing economies and a wave of innovation in materials, manufacturing processes, and information technology are driving today’s new possibilities for manufacturing. Even as the share of manufacturing in global GDP has fallen – from about 20% in 1990 to 16% in 2010 – manufacturing companies have made outsize contributions to innovation, funding as much as 70% of private-sector R&D in some countries. From nanotechnologies that make possible new types of microelectronics and medical treatments to additive manufacturing systems (better known as 3D printing), emerging new materials and methods are set to revolutionize how products are designed and made.

But, to become a genuine driver of growth, the new wave of manufacturing technology needs a broad skills base. For example, it will take many highly-trained and creative workers to move 3D printing from an astounding possibility to a practical production tool.

Consider, too, the challenges of the auto industry, which is shifting from conventional, steel-bodied cars with traditional drive trains to lighter, more fuel-efficient vehicles in which electronics are as important as mechanical parts. The Chevrolet Volt has more lines of software code than the Boeing 787. So the car industry needs people fluent in mechanical engineering, battery chemistry, and electronics.

Manufacturing is already an intensive user of “big data” – the use of massive data sets to discover new patterns, perform simulations, and manage complex systems in real-time. Manufacturing stores more data than any other sector – an estimated two exabytes (two quintillion bytes) in 2010. By enabling more sophisticated simulations that discover glitches at an early stage, big data has helped Toyota, Fiat and Nissan cut the time needed to develop new models by 30-50%.

Manufacturers in many other branches are using big data to monitor the performance of machinery and equipment, fine-tune maintenance routines, and ferret out consumer insights from social-media chatter. But there aren’t enough people with big-data skills. In the United States alone, there is a potential shortfall of 1.5 million data-savvy managers and analysts needed to drive the emerging data revolution in manufacturing.

The shift of manufacturing demand to developing economies also requires new skills. A recent McKinsey survey of multinationals based in the U.S. and Europe found that, on average, these companies derive only 18% of sales from developing economies. But these economies are projected to account for 70% of global sales of manufactured goods (both consumer and industrial products) by 2025. To develop these markets, companies will need talented people, from ethnographers (to understand consumers’ customs and preferences) to engineers (to design products that fit a new definition of value).

Perhaps most important, manufacturing is becoming more “democratic,” and thus more appealing to bright young people with an entrepreneurial bent. Not only has design technology become more accessible, but an extensive virtual infrastructure exists that enables small and medium-size companies to outsource design, manufacturing, and logistics. Large and small companies alike are crowd-sourcing ideas online for new products and actual designs. “Maker spaces” – shared production facilities built around a spirit of open innovation – are proliferating.

And yet, across the board, manufacturing is vulnerable to a potential shortage of high-skill workers. Research by the McKinsey Global Institute finds that the number of college graduates in 2020 will fall 40 million short of what employers around the world need, largely owing to rapidly aging workforces, particularly in Europe, Japan, and China. In some manufacturing sectors, the gaps could be dauntingly large. In the U.S., workers over the age of 55 make up 40% of the workforce in agricultural chemicals manufacturing and more than one-third of the workforce in ceramics. Some 8% of the members of the National Association of Manufacturers report having trouble filling positions vacated by retirees.

Indeed, when the NAM conducted a survey of high-school students in Indianapolis, Indiana (which is already experiencing a manufacturing revival), the results were alarming: only 3% of students said that they were interested in careers in manufacturing. In response, the NAM launched a program to change students’ attitudes. But young people are not the only ones who need persuading: surveys of engineers who leave manufacturing for other fields indicate that a lack of career paths and slow advancement cause some to abandon the sector.

Manufacturing superstars such as Germany and South Korea have always attracted the brightest and the best to the sector. But now manufacturers in economies that do not have these countries’ superior track record must figure out how to be talent magnets. Manufacturing’s rising coolness quotient should prove useful, but turning it into a highly sought-after career requires that companies in the sector back up the shiny new image with the right opportunities – and the right rewards.

Publication: Project Syndicate

Alternative methods for measuring income and inequality


Editor’s note: The following remarks were prepared and delivered by Gary Burtless at a roundtable sponsored by the American Tax Policy Institute on January 7, 2016. Video of Burtless’ remarks is also available on the Institute’s website. Download the related slides at the right.

We are here to discuss income inequality, alternative ways to evaluate its size and trend over time, and how it might be affected by tax policy.  My job is to introduce you to the problem of defining income and to show how the definition affects our understanding of inequality.

To eliminate suspense from the start: Nothing I am about to say undermines the popular narrative about recent inequality trends.  For the past 35 years, U.S. inequality has increased.  Inequality has increased noticeably, no matter what income definition you care to use.  But a couple of things you read in the newspaper are untrue under some income definitions. For example, under a comprehensive income definition it is false to claim that all the income gains of the past two or three decades have gone to the top 1 percent, or the top 5 percent, or the top 10 percent of income recipients.  Middle- and low-income Americans have managed to achieve income gains, too, as we shall see.

Tax policy certainly affects overall inequality, but I shall leave it for Scott, David, and Tracy to take that up. Let me turn to my main job, which is to distinguish between different reasonable income measures.

The crucial thing to know is that contradictory statements can be made about some income trends because of differences in the definition of income.  In general, the most pessimistic statements about trends rely on an income definition that is restrictive in some way.  The definition may exclude important income items, such as items that tend to equalize or boost family incomes.  Or it may leave out adjustments to income that tend to boost the rate of income gain for low- or middle-income recipients, but not for top-income recipients.

The narrowest income definition commonly used to evaluate income trends is Definition #1 in my slide, “pretax private, cash income.”  Columnists and news reporters are unknowingly using this income definition when they make pronouncements about the income share of the “top 1 percent.”  The data about income under this definition are almost always based on IRS income tax returns, supplemented with a bit of information from the Commerce Department’s National Income and Product Account (NIPA) data file.

The single most common income definition used to assess income trends and inequality is the Census Bureau’s “money income” definition, Definition #2 on the slide.  It is just the same as the first definition I mentioned, except this income concept also includes government cash transfer payments – Social Security, unemployment insurance, cash public assistance, Veterans’ benefits, etc.

A slightly more expansive definition (#3) also adds food stamp (or SNAP) benefits plus other government benefits that are straightforward to evaluate. Items of this kind include the implicit rent subsidy low-income families receive in publicly subsidized housing, school lunch subsidies, and means-tested home heating subsidies.

Now we come to subtractions from income. These typically reflect families’ tax obligations.  The Census Bureau makes estimates of state and federal income tax liabilities as well as payroll taxes owed by workers (though not by their employers).  Since income and payroll taxes subtract from the income available to pay for other stuff families want to buy, it seems logical to also subtract them from countable income. This is done under income Definition #4.  Some items in the tax code – notably the Earned Income Credit (EIC) – are in fact subtractions from taxes owed, which poses no problem in the case of families that still owe positive taxes to the government.  However, the EIC is refundable to taxpayers, meaning that some families have negative tax liabilities:  The government owes them money.  In this case, if you do not take taxes into account you understate low-income families’ incomes, even as you overstate the net incomes available to middle- and high-income families.

Now let’s get a bit more complicated.  Forget what I said about taxes, because our next income definition (#5) also ignores them.  It is an even-more-comprehensive definition of gross or pretax income.  In addition to all those cash and near-cash items I mentioned in Definition #3, Definition #5 includes imputed income items, such as: 

• The value of your employer’s premium contribution to your employee health plan;
• The value of the government’s subsidy to your public health plan – Medicare, Medicaid, state CHIP plans, etc.;
• Realized taxable gains from the sale of assets; and
• Corporate income that is earned by companies in which you own a share, even though it is not income that is paid directly to you.

This is the most comprehensive income definition of which I am aware that refers to gross or pre-tax income.

Finally we have Definition #6, which subtracts your direct and indirect tax payments.  The only agency that uses this income definition is principally interested in the Federal budget, so the subtractions are limited to Federal income and payroll taxes, Federal corporate income taxes, and excise taxes.
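
To make the cascade concrete, here is a minimal sketch in Python that assembles countable income under Definitions #1 through #6 for a single hypothetical family, following the components described above. Every dollar amount is invented for illustration; the agencies that publish these series estimate the items from tax records and surveys, not from a simple ledger like this. Note how a negative tax liability (a refundable EIC) raises countable income under Definition #4.

```python
# Hypothetical annual amounts for one family; every number is invented.
family = {
    "private_cash_income": 30_000,   # wages, self-employment, interest, etc.
    "cash_transfers": 4_000,         # Social Security, UI, cash assistance
    "near_cash_benefits": 2_500,     # SNAP, housing and school lunch subsidies
    "income_payroll_taxes": -1_000,  # negative: refundable EIC exceeds taxes owed
    "imputed_income": 6_000,         # employer/government health contributions, etc.
    "all_federal_taxes": 500,        # income, payroll, corporate, and excise taxes
}

def countable_income(defn: int, f: dict) -> float:
    """Countable income under the six definitions described in the text."""
    d1 = f["private_cash_income"]        # 1: pretax private cash income
    d2 = d1 + f["cash_transfers"]        # 2: Census "money income"
    d3 = d2 + f["near_cash_benefits"]    # 3: plus near-cash benefits
    d4 = d3 - f["income_payroll_taxes"]  # 4: minus income and payroll taxes
    d5 = d3 + f["imputed_income"]        # 5: comprehensive gross income
    d6 = d5 - f["all_federal_taxes"]     # 6: net after-tax income (CBO-style)
    return (d1, d2, d3, d4, d5, d6)[defn - 1]

for n in range(1, 7):
    print(f"Definition #{n}: ${countable_income(n, family):,.0f}")
```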

Before we go into why you should care about any of these definitions, let me mention a somewhat less important issue, namely, how we define the income-sharing group over which we estimate inequality.  The most common assessment unit for income included under Definition #1 (“Pre-tax private cash income”) is the Federal income tax filing unit.  Sometimes this unit has one person; sometimes 2 (a married couple); and sometimes more than 2, including dependents.

The Census Bureau (and, consequently, most users of Census-published statistics) mainly uses “households” as reference units, without any adjustment for variations in the size of different households.  The Bureau’s median income estimate, for example, is estimated using the annual “money income” of households, some of which contain 1 person, some contain 2, some contain 3, and so on.

Many economists and sociologists find this unsatisfactory because they think a $20,000 annual income goes a lot farther if it is supporting just one person rather than 12.  Therefore, a number of organizations—notably, the Luxembourg Income Study (LIS), the Organisation for Economic Co-operation and Development (OECD), and the Congressional Budget Office (CBO)—assume household income is equally shared within each household, but that household “needs” increase with the square root of the number of people in the household.  That is, a household containing 9 members is assumed to require 1½ times as much income to enjoy the same standard of living as a family containing 4 members.  After an adjustment is made to account for the impact of household size, these organizations then calculate inequality among persons rather than among households.
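
The square-root rule is easy to state exactly: equivalized income is household income divided by the square root of household size, so a 9-person household needs sqrt(9)/sqrt(4) = 1.5 times the income of a 4-person household to be equally well off. A minimal sketch, with invented household data:

```python
import math

# Invented (annual income, household size) pairs for illustration.
households = [(20_000, 1), (20_000, 12), (60_000, 4), (90_000, 9)]

def equivalized_income(income: float, size: int) -> float:
    # Income per "equivalent adult" under the square-root scale
    # used by LIS, the OECD, and CBO, as described above.
    return income / math.sqrt(size)

for income, size in households:
    print(f"${income:,} supporting {size} person(s) -> "
          f"${equivalized_income(income, size):,.0f} equivalized")

# The 9-member vs. 4-member comparison from the text:
assert math.isclose(math.sqrt(9) / math.sqrt(4), 1.5)
```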

How are these alternative income definitions estimated?  Who uses them?  What do the estimates show?  I’ll only consider two or three basic cases.

First, pretax, private, cash income. By far the most famous users of this definition are Professors Thomas Piketty and Emmanuel Saez.  Their most celebrated product is an annual estimate of the share of total U.S. income (under this restricted definition) that is received by the top 1 percent of tax filing units.

Here is their most famous chart, showing the income share of the top 1 percent going back to 1913. (I use the Piketty-Saez estimates that exclude realized capital gains in the calculation of taxpayers’ incomes.) The notable feature of the chart is the huge rise in the top income share between 1970—when it was 8 percent of all pretax private cash income—and last year—when the comparable share was 18 percent.  

I have circled one part of the line—between 1986 and 1988—to show you how sensitive their income definition is to changes in the income tax code.  In 1986 Congress passed the Tax Reform Act of 1986 (TRA86). By 1988 the reform was fully implemented.  Wealthy taxpayers noticed that TRA86 sharply reduced the payoff to holding corporate earnings inside a separately taxed corporate entity. Rich business owners or shareholders could increase their after-tax income by arranging things so their business income was taxed only once, at the individual level.  The result was that a lot of income, once earned by and held within corporations, was now passed through to the tax returns of rich individual taxpayers. These taxpayers appeared to enjoy a sudden surge in their taxable incomes between 1986 and 1988.  No one seriously believes rich people failed to get the benefits of this income before 1987.  Before 1987 the same income simply showed up on corporate rather than on individual income tax returns.

A final point:  The chart displayed in SLIDE #6 is the source of the widely believed claim that U.S. inequality is nowadays about the same as it was at the end of the Roaring 1920s, before the Great Depression.  That is close to being true – under this income definition.

Census “money income”: This income definition is very similar to the one just discussed, except that it includes cash government transfer payments.  The producer of the series is the Census Bureau, and its most famous uses are to measure trends in real median household income and the official U.S. poverty rate. Furthermore, the Census Bureau uses the income definition to compile estimates of the Gini coefficient of household income inequality and the income shares received by each one-fifth of households, ranked from lowest to highest income, and received by the top 5 percent of households.

Here is a famous graph based on the Bureau’s “median household income” series.  I have normalized the historical series using the 1999 real median income level (1999 and 2000 were the peak income years according to Census data).  Since 1999 and 2000, median income has fallen about 10 percent.  If we accept this estimate without qualification, it certainly represents bad news for living standards of the nation’s middle class. The conclusion is contradicted by other government income statistics that use a broader, more inclusive income definition, however.

And here is the Bureau’s most widely cited distributional statistic (after its “official poverty rate” estimate).  Since 1979, the Gini coefficient has increased 17 percent under this income definition. (It is worth noting, however, that the portion of the increase that occurred between 1992 and 1993 is mainly the result of methodological changes in the way the Census Bureau ascertained incomes in its 1994 income survey.)

When you hear U.S. inequality compared with that in other rich countries, the numbers are most likely based on calculations of the LIS or OECD.  Their income definition is basically “Cash and Near-cash Public and Private income minus Income and Payroll taxes owed by households.”  Under this income definition, the U.S. looks very unequal and America appears to have an exceptionally high poverty rate.  U.S. inequality has been rising under this income definition, as indeed has also been the case in most other rich countries. The increase in the United States has been above average, however, helping us to retain our leadership position, both in income inequality and in relative poverty.

We turn last to the most expansive income definition:  CBO’s measure of net after-tax income.  I will use CBO’s tabulations using this income definition to shed light on some of the inequality and living standard trends implied by the narrower income definitions discussed above.

Let’s consider some potential limitations of a couple of those definitions.  The limitations do not necessarily make them flawed or uninteresting.  They do mean the narrower income measures cannot tell us some of the things that users claim they tell us.

An obvious shortcoming of the “cash pretax private income” definition is that it excludes virtually everything the government does to equalize Americans’ incomes.  Believe it or not, the Federal tax system is mildly progressive.  It claims a bigger percentage of the (declared) incomes of the rich than it does of middle-income families and especially the poor.  Any pretax income measure will miss that redistribution.

More seriously, it excludes all government transfer payments.  You may think the rich get a bigger percentage of their income from government handouts compared with middle class and poorer households.  That is simply wrong.  The rich get a lot less.  And the percentage of total personal income that Americans derive from government transfer payments has gone way up over the years.  In the Roaring 1920s, Americans received almost nothing in the form of government transfers. Less than 1 percent of Americans’ incomes were received as transfer payments.  By 1970—near the low point of inequality according to the Piketty-Saez measure—8.3 percent of Americans’ personal income was derived from government transfers.  Last year, the share was 17 percent. None of the increase in government transfers is reflected in Piketty and Saez’s estimates of the trend in inequality.  Inequality is nowadays lower than it was in the late 1920s, mainly because the government does more redistribution through taxes and transfers.

Both the Piketty-Saez and the Census “money income” statistics are affected by the exclusion of government- and employer-provided health benefits from the income definition. This slide contains numbers, starting in 1960, that show the share of total U.S. personal consumption consisting of personal health care consumption.  I have divided the total into two parts. The first is the share that is paid for out of our own cash incomes (the blue part at the bottom).  This includes our out-of-pocket spending for doctors’ charges, hospital fees, pharmaceutical purchases, and other provider charges as well as our out-of-pocket spending on health insurance premiums. The second is the share of our personal health consumption that is paid out of government subsidies to Medicare, Medicaid, CHIP, etc., or out of employer subsidies to employee health plans (the red part). 

As everyone knows, the share of total consumption that consists of health consumption has gone way up.  What few people recognize is that the share that is directly paid by consumers—through payments to doctors, hospitals, and household health insurance premium payments—has remained unchanged.  All of the increase in the health consumption share since 1960 has been financed through government and employer subsidies to health insurance plans. None of those government or employer contributions is counted as “income” under the Piketty-Saez and Census “money income” definitions.  You would have to be quite a cynic to claim the subsidies have brought households no living standard improvements since 1960, yet that is how they are counted under the Piketty-Saez and Census “money income” definitions.

Final slide: How much has inequality gone up under income definitions that count all income sources and subtract the Federal income, payroll, corporation, and excise taxes we pay?  CBO gives us the numbers, though unfortunately its numbers end in 2011.

Here are CBO’s estimates of real income gains between 1979 and 2011.  These numbers show that real net incomes increased in every income category, from the very bottom to the very top.  They also show that real incomes per person have increased much faster at the top—over on the right—than in the middle or at the bottom—over on the left.  Still, contrary to a common complaint that all the income gains in recent years have been received by folks at the top, the CBO numbers suggest net income gains have been nontrivial among the poor and middle class as well as among top income recipients.

Suppose we look at trends in the more recent past, say, between 2000 and 2011.  The lower panel in this slide presents a very different picture from the one implied by the Census Bureau’s “money income” statistics.  Unlike the “money income” numbers [SLIDE #9], these show that inequality has declined since 2000.  Unlike the “money income” numbers [SLIDE #8], these show that incomes of middle-income families have improved since 2000.  There are a variety of explanations for the marked contrast between the Census Bureau and CBO numbers, but a big one is the differing income definitions the two sets of conclusions are based on.  The more inclusive measure of income shows faster real income gains among middle-income and poorer households, and it suggests a somewhat different trend in inequality.



Six ways to handle Trump’s impeachment during holiday dinners

It is a holiday dinner and all hell is about to break out in the dining room. One of your relatives asks what you think about the President Donald Trump impeachment proceedings. There is silence around the table because your family is dreading what is about to happen. Everyone knows Uncle Charley loves Trump while…


The Arab Spring is 2011, Not 1989

The Arab revolutions are beginning to destroy the cliché of an Arab world incapable of democratic transformation. But another caricature is replacing it: according to the new narrative, the crowds in Cairo, Benghazi or Damascus, mobilized by Facebook and Twitter, are the latest illustration of the spread of Western democratic ideals; and while the “rise…


Statement of Martin Neil Baily to the public hearing concerning the Department of Labor’s proposed conflict of interest rule


Introduction

I would like to thank the Department for giving me the opportunity to testify on this important issue. The document I submitted to you is more general than most of the comments you have received, talking about the issues facing retirement savers and policymakers, rather than engaging in a point-by-point discussion of the detailed DOL proposal.[1]

Issues around Retirement Saving

1. Most workers in the bottom third of the income distribution will rely on Social Security to support them in retirement and will save little. Hence it is vital that we support Social Security in roughly its present form and make sure it remains funded, either by raising revenues or by scaling back benefits for higher-income retirees, or both.

2. Those in the middle and upper-middle income levels must now rely on 401(k) and IRA funds to provide income support in retirement. Many and perhaps most households lack a good understanding of the amount they need to save and how to allocate their savings. This is true even of many savers with high levels of education and capability.

3. The most important mistakes made are: not saving enough; withdrawing savings prior to retirement; taking Social Security benefits too early;[2] not managing tax liabilities effectively; and failing to adequately manage risk in investment choices. This last category includes those who are too risk averse and choose low-return investments as well as those who overestimate their own ability to pick stocks and time market movements. These points are discussed in the paper I submitted to DoL in July. They indicate that retirement savers can benefit substantially from good advice.

4. The market for investment advice is one where there is asymmetric information and such markets are prone to inefficiency. It is very hard to get incentives correctly aligned. Professional standards are often used as a way of dealing with such markets but these are only partially successful. Advisers may be compensated through fees paid by the investment funds they recommend, either a load fee or a wrap fee. This arrangement can create an incentive for advisers to recommend high fee plans.

5. At the same time, advisers who encourage increased saving, help savers select products with good returns and adequate diversification, and follow a strategy of holding assets until retirement provide benefits to their clients.

Implications for the DoL’s proposed conflict of interest rule

1. Disclosure. There should be a standardized and simple disclosure form provided to all households receiving investment advice, detailing the fees they will be paying based on the choices they make. Different investment choices offered to clients should be accompanied by a statement describing how the fees received by the adviser would be impacted by the alternative recommendations made to the client.

2. Implications for small-scale savers. The proposed rule will bring with it increased compliance costs. These costs, combined with a reluctance to assume more risk and a fear of litigation, may make some advisers less likely to offer retirement advice to households with modest savings. These households are the ones most in need of direction and education, but because their accounts will not turn profits for advisors, they may be abandoned. According to the Employee Benefits Security Administration (EBSA), the proposed rule will save families with IRAs more than $40 billion over the next decade. However, this benefit must be weighed against the attendant costs of implementing the rule. It is possible that the rule will leave low- and medium-income households without professional guidance, further widening the retirement savings gap. The DoL should consider ways to minimize or manage these costs. Options include incentivizing advisors to continue guiding small-scale savers, perhaps through the tax code, and promoting increased financial literacy training for households with modest savings. Streamlining and simplifying the rules would also help.

3. Need for Research on Online Solutions. The Administration has argued that online advice may be the solution for these savers, and for some fraction of this group that may be a good alternative. Relying on online sites to solve the problem seems a stretch, however. Maybe at some time in the future that will be a viable option but at present there are many people, especially in the older generation, who lack sufficient knowledge and experience to rely on web solutions. The web offers dangers as well as solutions, with the potential for sub-optimal or fraudulent advice. I urge the DoL to commission independent research to determine how well a typical saver does when looking for investment advice online. Do they receive good advice? Do they act on that advice? What classes of savers do well or badly with online advice? Can web advice be made safer? To what extent do persons receiving online advice avoid the mistakes described earlier?

4. Pitfalls of MyRA. Another suggestion from the Administration is that small savers use MyRA as a guide to their decisions. This option is low cost and safe, but the returns are very low and will not provide much of a cushion in retirement unless households set aside a much larger share of their income than has been the case historically.

5. Clarifications about education versus advice. The proposed rule distinguishes education from advice. An advisor can share general information on best practices in retirement planning, including making age-appropriate asset allocations and determining the ideal age at which to retire, without triggering fiduciary responsibility. This is certainly a useful distinction. However, some advisors could frame this general information in a way that encourages clients to make decisions that are not in their own best interest. The DoL ought to think carefully about the line between education and advice, and how to discourage advisors from sharing information in a way that leads to future conflicts of interest. One option may be standardizing the general information that may be provided without triggering fiduciary responsibility.

6. Implications for risk management. Under the proposed rule, advisors may be reluctant to assume additional risk and may worry about litigation. In addition to pushing small-scale savers out of the market, the rule may encourage excessive risk aversion in some advisors. General wisdom suggests that young savers should have relatively high-risk portfolios, de-risking as they age and ending with a relatively low-risk portfolio at the end of the accumulation period. The proposed rule could cause advisors to discourage clients from taking on risk, even when the risk is generally appropriate and the investor has healthy expectations. Extreme risk aversion could decrease both market returns for investors and the “value-add” of professional advisors. The DoL should think carefully about how it can discourage conflicted advice without encouraging overzealous risk reductions.

The proposed rule is an important effort to increase consumer protection and retirement security. However, in its current form, it may open the door to some undesirable or problematic outcomes. With some thoughtful revisions, I believe the rule can provide a net benefit to the country.



[1] Baily’s work has been assisted by Sarah E. Holmes. He is a Senior Fellow at the Brookings Institution and a Director of The Phoenix Companies, but the views expressed are his alone.

[2] As you know, postponing Social Security benefits yields an 8 percent real rate of return, far higher than most people earn on their investments. For most of those who can manage to do so, postponing the receipt of benefits is the best decision.

Publication: Public Hearing - Department of Labor’s Proposed Conflict of Interest Rule

The World Bank and IMF need reform but it may be too late to bring China back


Mercutio: I am hurt. A plague a’ both your houses! I am sped. Is he gone and hath nothing? — Romeo and Juliet, Act 3, scene 1, 90–92

The eurozone crisis, which includes the Greek crisis but is not restricted to it, has undermined the credibility of the EU institutions and left millions of Europeans disillusioned with the European Project. The euro was either introduced too early, or it included countries that should never have been included, or both were true. High rates of inflation left countries in the periphery uncompetitive and the constraint of a single currency removed a key adjustment mechanism. Capital flows allowed this problem to be papered over until the global financial crisis hit.

The leaders of the international institutions, the European Commission, the European Central Bank, and the International Monetary Fund, together with the governments of the stronger economies, were asked to figure out a solution, and they emphasized fiscal consolidation, which they made a condition of assistance to countries with heavy debt burdens. The eurozone as a whole has paid the price, with real GDP in the first quarter of 2015 about 1.5 percent below its peak in the first quarter of 2008, seven years earlier, and with a current unemployment rate of 11 percent. By contrast, the sluggish U.S. recovery looks rocket-powered, with GDP 8.6 percent above its previous peak and an unemployment rate of 5.5 percent.

The burden of the euro crisis has been very unevenly distributed, with Greece facing unemployment of 25 percent and rising, Spain 23 percent, Italy 12 percent, and Ireland 9.7 percent, while German unemployment is 4.7 percent. It is not surprising that so many Europeans are unhappy with their policy leaders who moved too quickly into a currency union and then dealt with the crisis in a way that pushed countries into economic depression. The common currency has been a boon to Germany, with its $287 billion current account surplus, but the bane of the southern periphery. Greece bears considerable culpability for its own problems, having failed to collect taxes or open up an economy full of competitive restrictions, but that does not excuse the policy failures among Europe’s leaders. A plague on both sides in the Greek crisis!

During the Great Moderation, it seemed that the Bretton Woods institutions were losing their usefulness because private markets could provide needed funding. The financial crisis and the global recession that followed it shattered this belief. The IMF did not foresee the crisis, nor was it a central player in dealing with the period of greatest peril from 2007 to 2009. National treasuries, the Federal Reserve, and the European Central Bank were the only institutions that had the resources and the power to deal with the bank failures, the shortage of liquidity, and the freezing up of markets. Still, the IMF became relevant again and played an important role in the euro crisis, although at the cost of sharing the unpopularity of the policy response to that crisis.

China’s new Asian Infrastructure Investment Bank is the result of China’s growing power and influence and the failure of the West, particularly the United States, to come to terms with this seismic shift. The Trans-Pacific Partnership trade negotiations have deliberately excluded China, the largest economy in Asia and the world’s largest trading nation. Reform of the governance structure of the World Bank and the IMF has stalled, with disproportionate power still held by the United States and Europe. Unsurprisingly, China has decided to exercise its influence in other ways, establishing the new Asian bank and increasing the role of the yuan in international transactions. U.S. policymakers underestimated China’s strength and the willingness of other countries to cooperate with it, and the result has been to reduce the role and influence of the Bretton Woods institutions.

Can the old institutions be reinvented and made more effective? In Europe, the biggest problem is that bad decisions were made by national governments and by the international institutions (although the ECB policies have been generally good). The World Bank and IMF do need to reform their governance, but it may be too late to bring China back into the fold.


This post originally appeared in the International Economy: Does the Industrialized World’s Economic and Financial Statecraft Need to Be Reinvented? (p.19)

Publication: The International Economy

U.S. manufacturing may depend on automation to survive and prosper


Can this sector be saved? We often hear sentiments like: "Does America still produce anything?" and "The good jobs in manufacturing have all gone." There is nostalgia for the good old days when there were plentiful well-paid jobs in manufacturing. And there is anger that successive U.S. administrations of both parties have negotiated trade deals, notably NAFTA and the admission of China into the World Trade Organization, that have undercut America's manufacturing base.

Those on the right suggest that if burdensome regulations were lifted, this would fire up a new era of manufacturing prowess. On the left, it is claimed that trade agreements are to blame and that, at the very least, we should not sign any more of them. Expanded union power and recruitment are another favorite solution. Despite his position on the right, Donald Trump has joined those on the left in blaming China for manufacturing’s problems.

What is the real story and what needs to be done to save this sector? The biggest factor transforming manufacturing has been technology; and technology will largely determine its future.

Disappearing jobs

Employment in the manufacturing sector declined slowly through the 1980s and 1990s, but since 2000 the decline has been much faster, with the sector shedding over 6 million workers between 2000 and 2010. There were hopes that manufacturing jobs would regain much of their lost ground once the recession ended, but the number of jobs has climbed by less than a million in the recovery so far, and employment has been essentially flat since the first quarter of 2015. Manufacturing used to be a road to the middle class for millions of workers with just a high school education, but that road is much narrower today—more like a footpath. In manufacturing’s prime, although not all jobs were good jobs, many were well paid and offered excellent fringe benefits. Now there are many fewer of these.

Sustained but slow output growth

The real output of the manufacturing sector from 2000 to the present gives a somewhat more optimistic view of the sector, showing positive trend growth despite sharp cyclical downturns. There was a peak of manufacturing production in 2000 with the boom in technology goods, most of which were still being produced in the U.S. But despite the technology bust and the shift of much of high-tech manufacturing overseas, real output in the sector in 2007 was still nearly 11 percent higher than its peak in 2000.

Production fell in the Great Recession at a breathtaking pace, dropping by 24 percent starting in Q3 2008. Manufacturing companies were hit by a bomb that wiped out a quarter of their output. Consumers were scared and postponed the purchase of anything they did not need right away. The production of durable goods, like cars and appliances, fell even more than the total. Unlike employment in the sector, output has reclaimed its previous peak and, by the third quarter of 2015, was 3 percent above it. The auto industry has recovered particularly strongly. While manufacturing output growth is not breaking any speed records, it is positive.

Understanding the pattern

The explanation for the jobs picture is not simple, but the CliffsNotes version is as follows: manufacturing employment has been declining as a share of total economy-wide employment for 50 years or more—a pattern that holds for all advanced economies, even Germany, a country known for its manufacturing strength. The most important reason for U.S. manufacturing job loss is that the overall economy is not creating jobs the way it once did, especially in the business sector. This conclusion probably comes as a surprise to most Americans, who believe that international trade, and trade with China in particular, is the key reason for the loss of jobs. In reality, trade is a factor in manufacturing weakness, but not the most important one.


The existence of our large manufacturing trade deficit with Asia means output and employment in the sector are smaller than they would be with balanced trade. Germany, as noted, has also seen manufacturing employment decline, but its manufacturing sector is larger than ours and runs a huge trade surplus. In addition, current global economic weakness has caused a shift of financial capital into the U.S. looking for safety, raising the value of the dollar and thus hurting our exports. In the next few years, it is unlikely that the U.S. trade deficit will improve—and it may well worsen.

Even though it will not spark a jobs revival, manufacturing is still crucial for the future of the U.S. economy, remaining a center for innovation and productivity growth. If the U.S. trade deficit is to be substantially reduced, manufacturing must become more competitive. The services sector runs a small trade surplus, and new technologies are eliminating our energy trade deficit. Nevertheless, a substantial expansion of manufactured exports is needed if there is to be overall trade balance.

Disruptive innovation in manufacturing

The manufacturing sector is still very much alive, and reports of its demise are not just premature but wrong. If we want to encourage the development of a robust, competitive manufacturing sector, industry leaders and policymakers must embrace new technologies. The sector will be revived not by blocking new technologies with restrictive labor practices or over-regulation but by adopting them—even if that means putting robots in place instead of workers. To speed the technology revolution, however, help must be provided to those whose jobs are displaced. If they end up as long-term unemployed, or in dead-end or low-wage jobs, then not only do these workers lose out but the benefits to society of the technology investment and the productivity increase are lost as well.

The manufacturing sector performs 69 percent of all business R&D in the U.S., which is powering a revolution that will drive growth not only in manufacturing but in the broader economy as well. The manufacturing revolution can be described by three key developments:

  1. In the internet of things, sensors are embedded in machines, transmitting information that allows them to work together and report impending maintenance problems before there is a breakdown (a minimal sketch of this idea follows the list).
  2. Advanced manufacturing includes 3-D printing, new materials, and the “digital thread” that connects suppliers to the factory and the factory to customers; it breaks down economies of scale, allowing new competitors to enter; and it enhances speed and flexibility.
  3. Distributed innovation uses crowdsourcing to find radical solutions to technical challenges much more quickly and cheaply than traditional R&D.
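
As a toy illustration of the first development, here is a minimal sketch of a machine-mounted sensor stream that flags impending maintenance before a breakdown. It is an invented example of the general pattern (a rolling average crossing a warning threshold), not any particular vendor’s system; the readings and thresholds are made up.

```python
from statistics import mean

# Invented vibration readings (mm/s) streamed from a machine sensor.
readings = [2.1, 2.2, 2.0, 2.3, 2.6, 3.0, 3.5, 4.1]

WARN_LEVEL = 3.0   # sustained vibration above this suggests wear
WINDOW = 3         # number of recent readings to average

def needs_maintenance(series, warn=WARN_LEVEL, window=WINDOW):
    # Flag maintenance when the recent average crosses the warning level.
    # Real industrial systems use far richer models; this only shows the
    # report-before-breakdown idea described in item 1 above.
    if len(series) < window:
        return False
    return mean(series[-window:]) > warn

for i in range(1, len(readings) + 1):
    if needs_maintenance(readings[:i]):
        print(f"maintenance alert after reading #{i}: {readings[i-1]} mm/s")
        break
```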

In a June 2015 Fortune 500 survey, 72 percent of CEOs named rapidly changing technology as their number one challenge. That technology churn is especially acute in manufacturing. The revolution is placing heavy demands on managers, who must adapt their businesses to become software companies, big data companies, and even media companies (as they develop a web presence). Value and profit in manufacturing are shifting to digital assets. For many manufacturers, the gap between current practice and what it takes to be good at these skills is wide, particularly in their ability to find the talent they need to transform their organizations.

Recent OECD analysis highlighted the large gap between best-practice companies and average companies. Although the gap is smaller in manufacturing than in services because of the heightened level of global competition in manufacturing, it is a sign that manufacturers must learn how to take advantage of new technologies quickly or be driven out of business.

Closing the trade deficit

A glaring weakness of U.S. manufacturing is its international trade performance. Chronic trade deficits have contributed to the sector’s job losses and have required large-scale foreign borrowing that has made us a net debtor to the rest of the world, to the tune of nearly $7 trillion by the end of 2014. Running up endless foreign debts is a disservice to our children and was one source of the instability that led to the financial crisis. America should try to regain its balance as a global competitor and that means, at the least, reducing the manufacturing trade deficit. Achieving a significant reduction in the trade deficit will be a major task, including new investment and an adjustment of today’s overvalued dollar.

The technology revolution provides an opportunity, making it profitable to manufacture in the U.S. using highly automated methods. Production can be brought home, but it won’t bring back a lot of the lost jobs. Although the revolution in manufacturing is underway and its fate is largely in the hands of the private sector, the policy environment can help speed it up and make sure the broad economy benefits.

First, policymakers must accept that trying to bring back the old days and old jobs is a mistake. Continuing to chase yesterday’s goals isn’t productive, and at this point it only puts off the inevitable. Prioritizing competitiveness, innovativeness, and the U.S. trade position over jobs could be politically difficult, however, so policymakers should look for ways to help workers who lose jobs and communities that are hard hit. Government training programs have a weak track record, but if companies do the training or partner with community colleges, then the outcomes are better. Training vouchers and wage insurance for displaced workers can help them start new careers that will mostly be in the service sector where workers with the right skills can find good jobs, not just dead-end ones.

Second, a vital part of the new manufacturing is the ecosystem around large companies. There were 50,000 fewer manufacturing firms in 2010 than in 2000, with most of the decline among smaller firms. Some of that was inevitable as the sector downsized, but it creates a problem because as large firms transition to the new manufacturing, they rely on small local firms to provide the skills and even the technologies they do not have in-house. The private sector has the biggest stake in developing the ecosystems it needs, but government can help and has helped, particularly at the state and local level. Sometimes infrastructure investment is needed; land can be set aside; mentoring programs can be established for young firms; help can be given in finding funding; and permitting processes can be simplified and expedited.

It is hard to let go of old ways of thinking. Policymakers have been trying for years to restore the number of manufacturing jobs, but that is not an achievable goal. Yes, manufacturing matters; it is a powerhouse of innovation for our economy and a vital source of competitiveness. There will still be good jobs in manufacturing, but it is no longer a conveyor belt to the middle class. Policymakers need to focus on speeding up the manufacturing revolution, funding basic science and engineering, and ensuring that tech talent and best-practice companies want to locate in the United States.


Could an Embassy in Jerusalem Bring Us Closer to Peace?


Measuring growth democratically

Abhijit Banerjee and Esther Duflo, two of this year’s recipients of the Nobel Memorial Prize in Economic Sciences, are the latest among leading economists to remind us that gross domestic product is an imperfect measure of human welfare. The Human Development Index, published by the United Nations Development Programme, aggregates indicators of life expectancy, education,…


Empowering young people to end Chicago’s gun violence problem

Former U.S. Secretary of Education Arne Duncan sits down with young men from Chicago CRED (Creating Real Economic Diversity) to discuss the steps they have taken to disrupt the cycle of gun violence in their community and transition into the legal economy. http://directory.libsyn.com/episode/index/id/6400344 Also in this episode, meet David M. Rubenstein Fellow Randall Akee in…


What’s the government done to relieve student loan borrowers of their burden during the corona crisis?

Forty-two million Americans, or one in every eight, have student loans, and they owe a total of $1.6 trillion, the second largest pool of consumer credit after mortgages. According to the Federal Reserve, 20 percent of adult borrowers who borrowed for their own educations were behind on their payments in 2018. Of those who are…


Making sense of the monthly jobs report during the COVID-19 pandemic

The monthly jobs report—the unemployment rate from one survey and the change in employer payrolls from another survey—is one of the most closely watched economic indicators, particularly at a time of an economic crisis like today. Here’s a look at how these data are collected and how to interpret them during the COVID-19 pandemic. What…


Made in Africa: manufacturing and economic growth on the continent

In this week’s episode, John Page, a senior fellow in the Global Economy and Development Program, assesses the potential role of several economic strategies in transforming Africa’s industrial development for the global economy. “Between now and about 2030, the estimates are that as many as 85 million jobs at [the] bottom end of manufacturing will…


The US-Africa Business Forum: Africa’s “middle class” and the “in-between” sector—A new opening for manufacturing?

Editor’s Note: On September 21, the Department of Commerce and Bloomberg Philanthropies are hosting the second U.S.-Africa Business Forum. Building on the forum in 2014, this year’s meeting again hosts heads of state, U.S. CEOs, and African business leaders, but aims to go beyond past commitments and towards effective implementation. This year’s forum will focus on six sectors important…


The Dangerous Price of Ignoring Syria

Vali Nasr says that President Obama has resisted American involvement in Syria because it challenges a central aim of his foreign policy: shrinking the U.S. footprint in the Middle East and downplaying the region’s importance to global politics. Nasr examines why doing more on Syria would reverse the U.S. retreat from the region.


Campaign Reform in the Networked Age: Fostering Participation through Small Donors and Volunteers

Event Information

January 14, 2010
10:30 AM - 12:00 PM EST

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

Register for the Event

The 2008 elections showcased the power of the Internet to generate voter enthusiasm, mobilize volunteers and increase small-donor contributions. While the political world has argued about campaign finance policy for decades, the digital revolution has altered the calculus of participation.

On January 14, a joint project of the Campaign Finance Institute, the American Enterprise Institute and the Brookings Institution unveiled a new report that seeks to change the ongoing national dialogue about money in politics. At this event, the four authors of the report detailed their findings and recommendations. Drawing on lessons from the record-shattering 2008 elections and the rise of Internet campaigning, the experts presented a new vision of how campaign finance and communications policy can help further democracy through broader participation.

Video

Audio

Transcript

Event Materials


How to increase financial support during COVID-19 by investing in worker training

It took just two weeks to exhaust one of the largest bailout packages in American history. Even the most generous financial support has limits in a recession. However, I am optimistic that a pandemic-fueled recession and mass underemployment could be an important opportunity to upskill the American workforce through loans for vocational training. Financially supporting…


Innovation and manufacturing labor: a value-chain perspective


Policies and initiatives to promote U.S. manufacturing would be well advised to take a value-chain perspective of this economic sector. Currently, our economic statistics do not include pre-production services to manufacturing, such as research and development or design, or post-production services, such as repair and maintenance or sales. Yet manufacturing firms invest heavily in these services because they are crucial to the success of their business.

In a new paper, Kate Whitefoot and Walter Valdivia offer a fresh insight into the sector’s labor composition and trends by examining employment in manufacturing from a value chain perspective. While the manufacturing sector shed millions of jobs in the 2002-2010 period—a period that included the Great Recession—employment in upstream services expanded 26 percent for market analysis, 13 percent for research and development, and 23 percent for design and technical services. Average wages for these services increased over 10 percent in that period. Going forward, this pattern is likely to be repeated. Technical occupations, particularly in upstream segments are expected to have the largest increases in employment and wages.

In light of the findings, the authors offer the following recommendations: 

  • Federal manufacturing policy: Expand PCAST’s Advanced Manufacturing Partnership recommendations—specifically, for developing a national system of certifications for production skills and establishing a national apprenticeship program for skilled trades in manufacturing—to include jobs outside the factory such as those in research and development, design and technical services, and market analysis.
  • Higher education: Institutions of higher education should consider adjustments to their curricula with a long view of the coming changes to high-skill occupations, particularly with respect to problem identification and the management of uncertainty in highly automated work environments. In addition, universities and colleges should disseminate information among prospective and current students about occupations where the largest gains in employment and wage premiums are expected.
  • Improve national statistics: Supplement the North American Industry Classification System (NAICS) with data that permits tracking the entire value chain, including the development of a demand-based classification system. This initiative could benefit from adding survey questions to replicate the data collection of countries with a Value Added Tax—without introducing the tax, that is—allowing in this manner a more accurate estimation of the value added by each participant in a production network.

Whitefoot and Valdivia stress that any collective efforts aimed at invigorating manufacturing must seize the opportunities throughout the entire value chain including upstream and downstream services to production.

Downloads

Authors

Image Source: © Jeff Tuttle / Reuters

Patent infringement suits have a reputational cost for universities


This post originally appeared on the Center for Technology Innovation’s TechTank blog.

Universities collect handsome awards in infringement cases

This October, a jury found Apple Inc. guilty of infringing a patent of the University of Wisconsin-Madison (UW) and ordered the tech giant to pay $234 million. The university scored a big financial victory, but the verdict hardly enhanced its good name.

The plaintiffs argued successfully in court that Apple infringed their 1998 patent on a predictor circuit that greatly improved the efficiency of microchips used in the popular iPhone 5s, 6, and 6 Plus. Apple first responded by challenging the validity of the patent, but the US Patent and Trademark Office ruled in favor of the university. Apple plans to appeal, but the appellate court is not likely to reverse the lower court’s decision.

This is not the first time this university has asserted its patent rights (UW sued Intel in 2008 over this same patent and reportedly settled for $110 million). Nor is this the first time universities in general have taken infringers to court. Prominent cases in recent memory include Boston University, which sued several companies for infringement of a patent for blue light-emitting diodes and settled out of court with most of them, and Carnegie Mellon, which was awarded $237 million by the federal appellate court in its infringement suit against Marvell, a semiconductor company, for its use of an enhanced detector of data in hard drives called Kavcic detectors.

Means not always aligned with aims in patent law

When patented university inventions emerge from federal research grants, infringement suits test the accepted interpretations of current patent law.

The Bayh-Dole Act of 1980 extended patent law and gave small businesses and universities the right to take title to patents arising from federal research grants—later it was amended to extend that right to all federal grantees regardless of size. The ostensible aim of the act is “to promote the utilization of inventions arising from federally supported research or development.” Under the law, a condition for universities (or any other government research performers) to keep their exclusive rights on those patents is that they or their licensees take “effective steps to achieve practical application” of those patents. Bayh-Dole was not designed to create a new source of revenue for universities. If companies are effectively using university technologies, Bayh-Dole’s purpose is served without need of patents.

To understand this point, consider a counterfactual: What if the text of Bayh-Dole had been originally composed to grant a conditional right to patents for federal research grantees? The condition could be stated like this: “This policy seeks to promote the commercialization of federally funded research and to this end it will use the patent system. Grantees may take title to patents if and only if other mechanisms for disseminating and developing those inventions into useful applications prove unsuccessful.” Under this imagined text, the universities could still take title to patents on their inventions if they or the U.S. Patent and Trademark Office were not aware that the technologies were being used in manufactured products.

But no court would find their infringement claim meritorious if the accused companies could demonstrate that, absent willful infringement, they had in fact used the technologies covered by university patents in their commercial products. In this case, other mechanisms for disseminating and developing the technologies would indeed have proven successful. The reality that Bayh-Dole did not mandate such a contingent assignment of rights creates a contradiction between its aims and the means chosen to advance those aims, at least for the subset of patents already in use by industry.

I should remark that UW’s predictor circuit resulted from grants from NSF and DARPA, and there is no indication that the university exercised its patent rights with any less vigor because the original research was publicly funded. In fact, universities are fully expected to assert their patent rights aggressively regardless of the source of funding for the original research.

You can have an answer for every question and still lose the debate

It is this litigious attitude that puts off many observers. While the law may very well allow universities to be litigious, universities could still refuse to exercise their rights under circumstances in which those rights are not easily reconciled with the public mission of the university.

University administrators, tech transfer personnel, and particularly the legal teams winning infringement cases have legitimate reasons to wonder why universities are publicly scorned. After all, they are acting within the law and simply protecting their patent rights; they are doing what any rational actor would do. They may be genuinely surprised when critics accuse universities of becoming allies of patent trolls, or of aiding and abetting their actions. Such accusations are unwarranted. Trolls are truants; the universities are venerable institutions. Patent trolls exploit the ambiguities of patent law and the burdens of due process to their own benefit and to the detriment of truly productive businesses and persons. In stark contrast, universities are long-established partners of democracy, respected beyond ideological divides for their abundant contributions to society.

The critics may not be fully considering the intricacies of patent law. Or they may forget that universities are in need of additional revenue—higher education has not seen public financial support increase in recent years, with federal grants roughly stagnant and state funding falling drastically in some states. Critics may also ignore that revenues collected from licensing of patents, favorable court rulings, and out-of-court settlements are to a large extent (usually two-thirds of the total) plowed back into the research enterprise.

University attorneys may have an answer for every point that critics raise, but the overall concern of critics should not be dismissed outright. Given that many if not most university patents can be traced back to research funded by tax dollars, there is a legitimate reason for observers to expect universities to manage their patents with a degree of restraint. There is also a legitimate reason for public disappointment when universities do not seem to endeavor to balance the tensions between their rights and duties.

Substantive steps to improve the universities’ public image

Universities can become more responsive to public expectations about their character not only by promoting their good work, but also by taking substantive steps to correct misperceptions.

First, when universities discover a case of proven infringement, they should take companies to court only as a measure of last resort. If a particular company refuses to negotiate in good faith and an infringement case ends up in court, the universities should be prepared to demonstrate to the court of public opinion that they tried, with sufficient insistence and time, to negotiate a license and even made concessions in pricing the license. In the case of the predictor circuit patent, it seems that the University of Wisconsin-Madison tried to license the technology and Apple refused, but the university would be in a much better position if it could demonstrate that the licensing deals offered to Apple would have turned out to be far less expensive for the tech company.

Second, universities would be well advised not to join any efforts to lobby Congress for stronger patent protection. At least two reasons substantiate this suggestion. First, as a matter of principle, the dogmatic belief that without patents there is no innovation is wrong. Second, as a matter of material interest, universities as a group do not have a financial interest in patenting. It’s worth elaborating these points a bit more.

Neither historians nor social science researchers have settled the question of the net effects of patents on innovation. While there is evidence of social benefits from patent-based innovation, there is also evidence of social costs associated with patent monopolies, and even more evidence of momentous innovations that required no patents. What’s more, the net social benefit varies across industries and over time. Research shows economic areas in which patents do spur innovation and others where they actually hinder it. This research explains, for instance, why some computer and Internet giants lobby Congress in the opposite direction to the biotech and big pharma industries. Rigorous industrial surveys of the 1980s and 1990s found that companies in most economic sectors did not use patents as their primary tool to protect their R&D investments.

Yet patenting has increased rapidly over the past four decades, including in industries that once were uninterested in patents. Economic analyses have shown that this new patenting is a business strategy against patent litigation: companies are building patent portfolios as a defensive strategy, not because they are innovating more. Universities’ public position on patent policy should acknowledge that the debate on the impact of patents on innovation is not settled and that this impact cannot be observed in the aggregate, but must be considered in the context of each specific economic sector, industry, or even market. From this vantage point, universities could then turn up or down the intensity with which they negotiate licenses and pursue compensation for infringement. Universities would better assert their commitment to their public mission if they computed, on a case-by-case basis, the balance between social benefits and costs for each of their controversial patents.

As to the material interest in patents, it is understandable that some patent attorneys or the biotech lobby publicly espouse the dogma of patents, that there is no innovation without patents. After all, their livelihood depends on it. However, research universities as a group do not have any significant financial interest in stronger patent protection. As I have shown in a previous Brookings paper, the vast majority of research universities earn very little from their patent portfolios and about 87% of tech transfer offices operate in the red. Universities as a group receive so little income from licensing and asserting their patents relative to the generous federal support (below 3%), that if the federal government were to declare that grant reviewers should give a preference to universities that do not patent, all research universities would stop the practice at once. It is true that a few universities (like the University of Wisconsin-Madison) raise significant revenue from their patent portfolio, and they will continue to do so regardless of public protestations. But the majority of universities do not have a material interest in patenting.

Time to get it right on anti-troll legislation

Last year, the House of Representatives passed legislation closing loopholes and introducing disincentives for patent trolls. Just as mirror legislation was about to be considered in the Senate, Sen. Patrick Leahy withdrew it from the Judiciary Committee. It was reported that Sen. Harry Reid forced Mr. Leahy’s hand to kill the bill in committee. In the public sphere, the shrewd lobbying efforts to derail the bill were perceived to serve pro-troll interests. The lobbying came from pharmaceutical companies, biotech companies, patent attorneys, and, to the surprise of everyone, universities. Little wonder that critics overreacted and suggested universities were in partnership with trolls: even if they were wrong, these accusations stung.

University associations took that position out of a sincere belief in the dogma of patents and out of fear that the proposed anti-troll legislation limited the universities’ ability to sue patent infringers. However, their convictions stand on shaky ground and only a few universities sue for infringement. In taking that policy position, university associations are representing neither the interests nor the beliefs of the vast majority of universities.

A reversal of that position is not only possible, but would be timely. When anti-troll legislation is again introduced in Congress, universities should distance themselves from efforts to protect the policy status quo that so benefits patent trolls. It is not altogether improbable that Congress sees fit to exempt universities from some of the requirements that the law would impose. University associations could show Congress the merit of such exemptions in consideration of the universities’ constant and significant contributions to states, regions, and the nation. However, no such concessions could ever be expected if the universities continue to place themselves in the company of those who profit from patent management.

No asset is more valuable for universities than their prestige. It is the ample recognition of their value in society that guarantees tax dollars will continue to flow into universities. While acting legally to protect their patent rights, universities are nevertheless toying with their own legitimacy. Let those universities that stand to gain from litigation act in their self-interest, but do not let them speak for all universities. When university associations advocate for stronger patent protection, they do the majority of universities a disservice. These associations should better represent the interests of all their members by advocating a more neutral position about patent reform, by publicly praising universities’ restraint on patent litigation, and by promoting a culture and readiness in technology transfer offices to appraise each patent not by its market value but by its social value. At the same time, the majority of universities that obtain neither private nor social benefits from patenting should press their political representatives to adopt a more balanced approach to policy advocacy, lest they squander the reputation of the entire university system.

Editor's Note: The post was corrected to state that UW’s predictor circuit did originate from federally funded research.

Image Source: © Stephen Lam / Reuters

The fair compensation problem of geoengineering


Geoengineering promises to place average global temperature under human control and is thus considered a powerful instrument for the international community to deal with global warming. While great energy has been devoted to learning more about the natural systems that it would affect, questions of a political nature have received far less consideration. Taking as a given that regional effects will be asymmetric, the nations of the world will only consent to deploying this technology if they can be assured of a fair compensation mechanism, something like an insurance policy. The question of compensation reveals that the politics of geoengineering are far more difficult than the technical aspects.

What is Geoengineering?

In June 1991, Mount Pinatubo exploded, throwing a massive amount of volcanic sulfate aerosols into the high skies. The resulting cloud dispersed over weeks throughout the planet and cooled its average temperature by 0.5° Celsius over the next two years. If this kind of natural phenomenon could be replicated and controlled, engineering the Earth’s climate would be within reach.

Spraying aerosols in the stratosphere is one method of solar radiation management (SRM), a class of climate engineering that focuses on increasing the albedo, i.e. reflectivity, of the planet’s atmosphere. Other SRM methods include brightening clouds by increasing their content of sea salt. A second class of geoengineering efforts focuses on carbon removal from the atmosphere and includes carbon sequestration (burying it deep underground) and increasing land or marine vegetation. Of all these methods, SRM is appealing for its effectiveness and low costs; a recent study put the cost at about $5 to $8 billion per year.1

Not only is SRM relatively inexpensive, but we already have the technological pieces that, assembled properly, would inject the skies with particles that reflect sunlight back into space. For instance, a fleet of modified Boeing 747s could deliver the necessary payload. Advocates of geoengineering are not too concerned about developing the technology to effect SRM, but about its likely consequences, not only for slowing global warming but also for regional weather. And there lies the difficult question for geoengineering: the effects of SRM are likely to be unequally distributed across nations.

Here is one example of these asymmetries: Julia Pongratz and colleagues at the Department of Global Ecology of the Carnegie Institution for Science estimated a net increase in yields of wheat, corn, and rice under SRM-modified weather. However, the study also found a redistributive effect, with equatorial countries experiencing lower yields.2 We can then expect that equatorial countries will demand fair compensation before signing on to the deployment of SRM, which leads to two problems: how to calculate compensation, and how to agree on a compensation mechanism.

The calculus of compensation

What should be the basis for fair compensation? One view of fairness could be that, every year, all economic gains derived from SRM are pooled together and distributed among the regions or countries that experience economic losses, in proportion to those losses.

If the system pools gains from SRM and distributes them in proportion to losses, questions about the balance will only be asked in years in which gains and losses are about the same. But if losses are far greater than the gains, then this would be a form of insurance that cannot underwrite some of the incidents it intends to cover. People will not buy such an insurance policy; which is to say, some countries will not authorize SRM deployment. Conversely, if the pool has a large balance left after paying out compensations, then the winners from SRM will demand lower compensation taxes.
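
To see the mechanics of this pooling scheme concretely, here is a minimal sketch in code. The country labels and the gain and loss figures are invented assumptions for illustration only, not estimates of actual SRM impacts.

    # Sketch of an SRM compensation pool: annual gains are pooled and paid
    # out to losing countries in proportion to their losses. All country
    # names and figures are invented for illustration.
    def settle_pool(outcomes):
        # outcomes maps country -> net economic effect of SRM (+ gain, - loss)
        pool = sum(v for v in outcomes.values() if v > 0)         # gains collected
        total_loss = -sum(v for v in outcomes.values() if v < 0)  # losses claimed
        if total_loss == 0:
            return {c: 0.0 for c in outcomes}
        # Each losing country gets a share of the pool proportional to its loss;
        # if the pool is smaller than total losses, compensation is only partial.
        return {c: (pool * -v / total_loss if v < 0 else 0.0)
                for c, v in outcomes.items()}

    payouts = settle_pool({"A": 30.0, "B": 5.0, "C": -40.0, "D": -10.0})
    print(payouts)  # {'A': 0.0, 'B': 0.0, 'C': 28.0, 'D': 7.0}

With pooled gains of 35 against claimed losses of 50, each losing country recovers only 70 percent of its loss, which is precisely the underwriting shortfall described above.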

Further complicating the problem is the question of how to separate gains or losses attributable to SRM from ordinary regional weather fluctuations. Separating the SRM effect could easily become an intractable problem because regional weather patterns are themselves affected by SRM. For instance, in any year in which El Niño is particularly strong, the uncertainty about the net effect of SRM will increase sharply, because SRM could affect the severity of the oceanic oscillation itself. Science can reduce uncertainty, but only to a degree, because the better we understand nature, the more we understand the contingency of natural systems. We can expect better explanations of natural phenomena from science, but it would be unfair to ask science to reduce greater understanding to a hard figure that we can plug into our compensation equation.

Still, greater complexity arises when separating SRM effects from policy effects at the local and regional level. Some countries will surely organize better than others to manage this change, and preparation will be a factor in determining the magnitude of gains or losses. Inherent to the problem of estimating gains and losses from SRM is the inescapably subjective element of assessing preparation.

The politics of compensation

Advocates of geoengineering tell us that their advocacy is not about deploying SRM; rather, it is about better understanding the scientific facts before we even consider deployment. It is tempting to believe that the accumulating science on SRM effects would be helpful. But when we consider the factors described above, it is quite possible that more science will also crystallize the uncertainty about exact amounts of compensation. The calculus of gain or loss, that is, the difference between reality and a counterfactual of what regions and countries would otherwise have experienced, requires certainty, but science yields only irreducible uncertainty about nature.

The epistemic problems of estimating compensation will only be compounded by political contestation of the resulting numbers. Even within the scientific community, different climate models will yield different results, and since economic compensation would be derived from those models’ output, we can expect serious contestation of the objectivity of the science of SRM impact estimation. Who should formulate the equation? Who should feed the numbers into it? A sure way to alienate scientists from the peoples of the world is to ask them to assert their cognitive authority over this calculus.

What’s more, other parts of the compensation equation, those related to regional efforts to cope with SRM effects, are inherently subjective. We should not forget the politics of asserting compensation commensurate with preparation effort; countries that experience low losses may also want compensation for their efforts in preparing for and coping with natural disasters.

Not only would a compensation equation be a sham, it would be unmanageable, and its legitimacy would always be in question. The calculus of compensation may seem a way to circumvent the impasses of politics and define fairness mathematically. Ironically, it is shot through with subjectivity; it is truly a political exercise.

Can we do without compensation?

Technological innovations are similar to legislative acts, observed Langdon Winner.3 Technical choices made at the earliest stages of design quickly “become strongly fixed in material equipment, economic investment, and social habit, [and] the original flexibility vanishes for all practical purposes once the initial commitments are made.” For that reason, he insisted, “the same careful attention one would give to the rules, roles, and relationships of politics must also be given to such things as the building of highways, the creation of television networks, and the tailoring of seemingly insignificant features on new machines.”

If technological change can be thought of as legislative change, we must consider how so momentous a technology as SRM can be deployed in a manner consonant with our democratic values. Engineering the planet’s weather is nothing short of passing an amendment to Planet Earth’s Constitution. One pesky clause in that constitutional amendment is a fair compensation scheme. It seems so small a clause in comparison to the extent of the intervention, the governance of deployment and consequences, and the international commitments to be made as a condition for deployment (such as emissions mitigation and adaptation to climate change). But in the short consideration afforded here, we get a glimpse of the intractable political problem of setting up a compensation scheme. And yet, unless such a clause is approved by a majority of nations, the deployment of SRM has little hope of being consonant with democratic aspirations.


1McClellan, Justin, David W Keith, Jay Apt. 2012. Cost analysis of stratospheric albedo modification delivery systems. Environmental Research Letters 7(3): 1-8.

2Pongratz, Julia, D. B. Lobell, L. Cao, K. Caldeira. 2012. Crop yields in a geoengineered climate. Nature Climate Change 2: 101–105.

3Winner, Langdon. 1980. Do artifacts have politics? Daedalus (109) 1: 121-136.

Image Source: © Antara Photo Agency / Reuters

In administering the COVID-19 stimulus, the president’s role model should be Joe Biden

As America plunges into recession, Congress and President Donald Trump have approved a series of aid packages to assist businesses, the unemployed, and others impacted by COVID-19. The first three aid packages will likely be supplemented by at least a fourth package, as the nation’s leaders better understand the depth and reach of the economic…


Policy insights from comparing carbon pricing modeling scenarios

Carbon pricing is an important policy tool for reducing greenhouse gas pollution. The Stanford Energy Modeling Forum exercise 32 convened eleven modeling teams to project emissions, energy, and economic outcomes of an illustrative range of economy-wide carbon price policies. The study compared a coordinated reference scenario involving no new policies with policy scenarios that impose…


Trump and military burden-sharing


Editors’ Note: Trump’s explanation of the economics of America’s security alliances misses several core realities, argues Michael O’Hanlon. The benefits of certain alliances can be debated—but they hardly constitute the wholesale drain on American coffers that he has made them out to be. This article was originally published on The National Interest.

In his April 27 foreign-policy speech in Washington, Donald Trump leveled a number of critiques at U.S. allies, fleshing out his now-familiar complaints about how America’s many allies and security partners—which number about sixty around the world—fail to do their fair share for the common defense.

It is only fair to acknowledge that some of Trump’s arguments about military burden sharing have merit. Most notably, America dramatically outspends most allies on its armed forces. Of course, the United States has the largest economy of any Western ally and thus, rather naturally, the largest defense budget by far. But relative to GDP, its contributions are still disproportionate. The United States spends about 3 percent of gross domestic product on its military. NATO allies are pledged to devote 2 percent of GDP each to their armed forces, but the alliance average is less than 1.4 percent. Only the UK, France, Poland, Greece, and Estonia are near or above 2 percent. Germany is at just 1.1 percent of GDP; Italy and the Netherlands and Turkey check in at 1.2 percent; Belgium and Canada do not even reach 1.0 percent. Yes, some of these countries contribute impressively—more than the United States does, relative to national economic strength—in areas such as development assistance and refugee receptivity, but Trump still has a fair point on this basic and important measure of military preparedness.

On balance, however, Trump’s explanation of the economics of America’s security alliances misses several core realities. The benefits of certain alliances can be debated—but they hardly constitute the wholesale drain on American coffers that he has made them out to be.

First and foremost, counting the United States as well, the broad coalition of U.S.-led Western alliances accounts for some two-thirds of world GDP and two-thirds of global military spending. This situation is exceedingly advantageous to America. Never before in history has such a powerful strategic block of countries been created, especially in the absence of a clear central threat. Of course, America’s allies do not always do as it would wish. But today’s situation is far better than having two or more rivalrous groups of strong countries jostling for position with each other, and potentially engaging in arms races or open conflict.

In terms of military burden sharing per se, other major alliances and security partnerships do a bit better than NATO, on average. In East Asia, South Korea devotes roughly 2.5 percent of GDP to its military. Taiwan and Australia are close to 2 percent. Japan is at 1.0 percent of GDP—but Washington has favored this level for decades itself, out of worry that higher spending could cause counterreactions among East Asian states fearing (rightly or wrongly) a return to Japanese militarism. In the Middle East, most of America’s security partners spend well over 5 percent of GDP on their militaries—for example, 6 percent for Israel, and more than 12 percent each for Iraq and Saudi Arabia.

The United States does not squander huge sums of money basing troops on the territories of its wealthy allies. The United States may spend $10 billion a year at most basing forces in key industrial or Western states—Japan, Korea and Australia in the Pacific region; Germany, Italy, and the UK in particular in Europe. That is only 2 percent of the defense budget. And of that $10 billion, half or more is paid by the host nations themselves. (The situation is different in places like Afghanistan, where specific crises or conflicts have led to more recent deployments of U.S. firepower, and where foreign basing is in fact quite expensive.)

Foreign basing of American forces can often save the United States money. For example, homeporting an aircraft carrier battle group in Japan obviates the need to have perhaps three more carrier battle groups in the U.S. Navy’s overall fleet (at an investment cost approaching $50 billion) to sustain the same level of presence in the broader western Pacific region. U.S. airfields in Germany facilitate deployments to the Middle East and Afghanistan; the alternative to such bases could well be a need for huge additional numbers of refueling aircraft.

Returning to the original argument: Trump is indeed right that the United States spends a great deal of its large defense budget to defend allies abroad. It is tough to attribute specific amounts to each region, because America’s military forces are flexible. Most are based in the United States in peacetime; most can swing east or west in times of need. But in broad terms, it is not unreasonable to divide up America’s $600 billion defense budget today into roughly four major categories: central defense needs (such as research and development, homeland security, global intelligence assets and operations), forces for Europe, forces for the Asia-Pacific and forces for the broader Middle East. This logic should not be taken too literally, but one could apportion roughly $100 billion to $200 billion for each of these four main purposes of U.S. military power.

In theory, Trump could propose eliminating the forces and defense expenditures that America devotes to any of these key strategic regions where local allies do not wind up doing their fair share, as he has insisted they must. With such a bold stroke, for example, one could imagine pulling the United States out of NATO and reducing the $600 billion annual defense budget to something less than $500 billion. However, Trump says that America’s military should be built up regardless of what happens with these various key alliances, arguing that spending on the nation’s armed forces is one of the most appealing possible investments the country could make. I tend to agree with that latter point—but it contradicts the earlier proposal to scale back U.S. defense spending for any region that shirks its own duties.

The verdict is simple: Trump raises a couple of valid specific critiques about alliance burden sharing in the world today. But he gets several specific points wrong, and misses the big picture: on balance, America’s alliances help this country to undergird a global security system that has dramatically reduced the prevalence of interstate war in modern times, while currently costing the country only 3 percent of its gross domestic product. To paraphrase Trump himself, this is a very good investment—and one that the U.S. global system of alliances and bases does much to make possible.

Publication: The National Interest

District Mineral Foundation funds crucial resource for ensuring income security in mining areas post COVID-19

The Prime Minister of India held a meeting on April 30, 2020 to consider reforms in the mines and coal sector to jump-start the Indian economy in the backdrop of COVID-19. The mining sector, which is a primary supplier of raw materials to the manufacturing and infrastructure sectors, is being considered to play a crucial…

       





Will Sharing Cyberthreat Information Help Defend the United States?

On Tuesday January 13th, 2015, the White House published several legislative proposals concerning cybersecurity. The purpose of one of the initiatives is to “codify mechanisms for enabling cybersecurity information sharing between private and government entities, as well as among private entities, to better protect information systems and more effectively respond to cybersecurity incidents.” How should…

       





Sharing Threat Intelligence: Necessary but Not Sufficient?

Chairman Johnson, ranking member Carper, members of the Committee, thank you for the opportunity to testify. I am Richard Bejtlich, Chief Security Strategist at FireEye. I am also a nonresident senior fellow at the Brookings Institution, and I am pursuing a PhD in war studies from King’s College London. I began my security career as…

       





Covering politics in a “post-truth” America

The American media were much criticized for their coverage (or lack thereof) of the candidates and issues during the 2016 presidential election cycle. But Susan Glasser, editor of Politico throughout the 2016 campaign, has a controversial opinion: that political journalism has never been better. Instead, she’s worried about something else. Although digital news organizations are […]

      
 
 





Remembering Helmut Sonnenfeldt, a major figure in US foreign policy

Helmut Sonnenfeldt was a consequential figure in 20th century American foreign policy. A career State Department Soviet affairs specialist and major architect of U.S. policy toward the Soviet Union, he served alongside Secretary of State Henry Kissinger during a highly uncertain period. Born in Berlin, he fled from Nazi Germany in 1938, spent six years…

       





The G-20 Los Cabos Summit 2012: Bolstering the World Economy Amid Growing Fears of Recession


Leaders will head to the G-20 Summit in Los Cabos, Mexico, amid renewed serious concern about the world economy. The turmoil that started with the U.S. subprime mortgage crisis has now produced almost five years of ongoing instability. The emerging market economies fared much better than the advanced economies and pulled out of the crisis as early as 2009, but the slowdown we are now facing in 2012 is again global, demonstrating the interdependence of the world economy. The emerging market economies have stronger underlying trend growth rates, but they remain vulnerable to a downturn in the advanced economies. The center of concern is now squarely on Europe, with a recession threatening most European countries, even those that had performed reasonably well so far. After an encouraging start in 2012, the U.S. economy, while not close to recession, is also showing signs of a slowdown rather than the hoped-for steady acceleration of growth. And the slowdown is spreading across the globe.

At a time like this it would be desirable and necessary that the G-20 show real initiative and cohesion. The essays in this collection look at the challenge from various angles. There is concern that the G-20 is losing its sense of purpose, that cohesion is decreasing rather than increasing, and that policy initiatives are reactive to events rather than proactive. Let us hope that at this moment of great difficulty, the G-20 will succeed in giving the world economy a new sense of direction and confidence. It is much needed.

Download » (PDF)

Image Source: Andrea Comas / Reuters

Getting to Scale: How to Bring Development Solutions to Millions of Poor People


Brookings Institution Press, 2013, 240 pp.

Winner of Choice Magazine's Outstanding Academic Title of 2014!

The global development community is teeming with different ideas and interventions to improve the lives of the world’s poorest people. Whether these succeed in having a transformative impact depends not just on their individual brilliance but on whether they can be brought to a scale where they reach millions of poor people.

Getting to Scale explores what it takes to expand the reach of development solutions beyond an individual village or pilot program so that they reach poor people everywhere. Each of the essays in this book documents one or more contemporary case studies, which together provide a body of evidence on how scale can be pursued. The book suggests that the challenge of scaling up can be divided in two: financing interventions at scale, and managing delivery to large numbers of beneficiaries. Neither governments, donors, charities, nor corporations are usually capable of overcoming these twin challenges alone, indicating that partnerships are key to success.

Scaling up is mission critical if extreme poverty is to be vanquished in our lifetime. Getting to Scale provides an invaluable resource for development practitioners, analysts, and students on a topic that remains largely unexplored and poorly understood.

ABOUT THE EDITORS

Laurence Chandy
Akio Hosono
Akio Hosono is the director of the Research Institute of the Japanese International Cooperation Agency.
Homi Kharas
Johannes F. Linn

Downloads

Ordering Information:
  • ISBN: 978-0-8157-2419-3, $29.95

The six keys to securing ethical government: A U.S. view


Editor's Note: On Thursday, Brookings Visiting Fellow Amb. Norm Eisen addressed the Italian Parliament to discuss ethics in government, highlighting efforts in the U.S. to improve transparency and accountability. In the speech, Amb. Eisen argues that while ethics reform can be difficult, it is an absolutely essential part of any democratic system.


As Prepared For Delivery

Signora Presidente Boldrini, Madam President Brasseur, onorevoli parlamentari, fellow panelists and distinguished guests, buon pomeriggio. Thanks for inviting me to address the urgent subject of ethical standards in political life. It is an honor to be here in the Italian Chamber of Deputies, in this beautiful and ancient city, to which we Americans owe so many elements of our system of government. And in my town of Washington, DC, we also borrowed a lot of your architecture, so we owe you for that as well.

In exchange for all that, as a small form of repayment, I would like to offer some perspectives from the U.S. as you consider the adoption of a code of ethical conduct for the Italian Parliament.

Since we are in the Chamber of Deputies, the equivalent of our U.S. Congress’ House of Representatives, I will start with best practices in that body, based on years of my professional life—too many—spent addressing alleged violations of its codes of conduct, including as a defense lawyer and later as the co-founder of a government watchdog group.

And I think there are also important lessons to be drawn from the new, innovative code of conduct for White House officials that we established while I served as the "Ethics Czar" of President Obama. At his direction I helped write that new code, the Obama "ethics pledge," and although I am biased, I think it has been effective so far; knock wood, there have been no major White House scandals. So I will talk about the lessons of that code of conduct a bit as well. My reflections are those of a friend and partner nation with plenty of challenges of our own. So I approach the issue with genuine humility in sharing our successes and failures.

I. Government ethics while standing on one leg

So—what is our U.S. view of best practices for the contents of government codes of ethics? In the U.S., and dare I say internationally, there is a pretty well developed set of best practices. In our House of Representatives, for example, our equivalent of this Chamber of Deputies, Rule XXIII is the Code of Official Conduct. It provides rules in four core areas:

  • one, for regulating conflicts of interest, that is, situations where personal interests or financial holdings may conflict with official parliamentary duties;
  • two, for gifts, particularly those from lobbyists and other persons interested in parliamentary decisions;
  • three, for outside employment of parliamentarians before, during and after government service, particularly with lobbyists, a situation which we call in the US the revolving door; and
  • four, for parliamentarians’ proper use of official resources, that is, hiring, staff, budget, travel and such.

There is much more detail in our code of conduct, and a few other rules as well, but those four items—conflicts, gifts, employment and resources—are the key. These same four key areas are also at the center of our codes of conduct for employees of our executive branch, as codified in our statutes and regulations, as well as in the Obama ethics pledge.

I emphasize these four key items because, having helped draft one code of conduct and having often delved into many other codes, I sometimes find that I lose the forest for the trees; the priorities, at least for me, can get lost in the detail. So I try to keep the core always in mind, though I should add that the content of any such code must of course be particularized for the circumstances of particular government bodies and jurisdictions. Thus our U.S. House code is five pages long, elaborating on those four core items, and the House Ethics Manual of official guidance for the code is 456 pages long. Our Obama ethics pledge we got onto one page, and we were proud of that. And we made everyone read and sign that page. To be fair, we could do that because we built on and added to other rules which already existed, and we did have several pages of definitions and references attached to the pledge.

II. Enforcement and transparency

But a good code is only the beginning. In our U.S. experience, just as important as the code, maybe even more important, is its enforcement. And here is where I want to share some lessons drawn from U.S. challenges in recent years, and how we responded. I am going to add two more items to our checklist: enforcement and transparency.

Candidly, even with our parliamentary code of conduct in the U.S., our enforcement has sometimes lagged. That is in part because under our Constitution, the ultimate enforcers are the parliamentarians themselves, and so they can at times be understandably reluctant to sanction their colleagues and friends. It's human nature.

For example, from about 1998 to 2004, there was a seven-year truce in filing complaints in our House of Representatives. The government watchdog organization I co-founded helped end that in 2004 by writing a complaint together with a brave but lonely member of Congress who was willing to file it with the House Ethics Committee. The resulting investigation led to the discipline of the member in question, and ultimately helped lead to his party losing majority control of the body.

Out of all of that came a new enforcement tool in 2008, in our House of Representatives, that I strongly recommend to you: the creation of a new, independent entity, the Office of Congressional Ethics (OCE). This is a nonpartisan fact-finding body that investigates allegations from any source, including whistleblowers who might otherwise be afraid to step forward. OCE then recommends action to the parliamentarians who constitutionally maintain the ultimate decision-making power. Most importantly, the OCE referrals become public, allowing press, civil society and voter accountability. As a result, I believe, since the creation of the OCE for our House in 2008, there have been a significantly higher number of meritorious investigations there than in our Senate, which does not have a comparable body. The total is about 46 OCE referrals and about 20 House disciplinary actions versus just four letters of admonition by our Senate in that period. To be fair the Senate is a smaller body—but not that much smaller!

That last aspect of OCE enforcement—transparency, and the accountability it brings from media, NGOs and the public—is the sixth and final point I want to emphasize. In our U.S. parliamentary ethics system we have many transparency mechanisms: asset disclosures that our parliamentarians file, disclosures that lobbyists must make about their activities, information in campaign finance filings, and more.

To explain the value of transparency, I would like to close by turning to one of our Obama White House ethics transparency innovations. Starting in 2009, we for the first time put on the Internet virtually all visitor records of those coming to the White House. It used to be that just to get a handful of these records you had to file litigation and wait for years to know who was coming to the White House, who they were meeting with and what the subject of the meeting was. Now millions of Obama White House visitor records are online, each with a dozen or so basic categories of information: the name of the visitor, the person visited, the subject of the meeting and so on.

Why is that important? I began by referencing the Obama White House's record in avoiding major scandal. I think there are a number of reasons for that, including the President's own integrity and the new code of conduct we put into place. But an important part of that success story has also been the fact that records of White House meetings go on the Internet for everyone to see. That transparency brings accountability from the press, civil society and the public. That transparency and accountability has in turn powerfully reinforced the code of conduct: it has discouraged people from having meetings they shouldn't have, and if you don't have the meeting, you can't get in trouble for it.

So the U.S. view in one sentence: regulate conflicts, gifts, employment, and resource use, with strong enforcement and above all transparency. Thanks again for inviting me to share the U.S. perspective. Grazie!

Authors

Three keys to reforming government: Lessons from repairing the VA


On June 20, I moderated a conversation on the future of the Department of Veterans Affairs with Secretary Robert McDonald. When he took office almost two years ago, Secretary McDonald inherited an organization in crisis: too many veterans faced shockingly long wait-times before they received care, VA officials had allegedly falsified records, and other allegations of mismanagement abounded.


Since he was sworn into office, Secretary McDonald has led the VA through a period of ambitious reform, anchored by the MyVA program. He and his team have embraced three core strategies that are securing meaningful change. They are important insights for all government leaders, and private sector ones as well.

1. Set bold goals

Secretary McDonald’s vision is for the VA to become the number one customer-service agency in the federal government. But he and his team know that words alone won’t make this happen. They developed twelve breakthrough priorities for 2016 that will directly improve service to veterans. These actionable short-term objectives support the VA’s longer term aim to deliver an exceptional experience for our veterans. By aiming high, and also drafting a concrete roadmap, the VA has put itself on a path to success.

2. Hybridize the best of public and private sectors

To accomplish their ambitious goal, VA leadership is applying the best practices of customer-service businesses around the nation. The Secretary and his colleagues are leveraging the goodwill, resources, and expertise of both the private and public sectors. To do that, the VA has brought together diverse groups of business leaders, medical professionals, government executives, and veteran advocates under their umbrella MyVA Advisory Committee. Following the examples set by private sector leaders in service provision and innovation, the VA is developing user-friendly mobile apps for veterans, modernizing its website, and seeking to make hiring practices faster, more competitive, and more efficient. And so that no good idea is left unheard, the VA has created a “shark tank” to capture and enact suggestions and recommendations for improvement from the folks who best understand daily VA operations—VA employees themselves.

3. Data, data, data

The benefits of data-driven decision making in government are well known. Under Secretary McDonald's leadership, the VA has continued to embrace the use of data to inform its policies and improve its performance. Already a leader in the collection and publication of data, the VA has recently taken even greater strides in sharing information between its healthcare delivery agencies. In addition to collecting administrative and health-outcomes information, the VA is gathering data from veterans about what they think. Automated kiosks allow veterans to check in for appointments, and to record their level of satisfaction with the services provided.

The results that the Secretary and his team have achieved speak for themselves:

  • 5 million more appointments completed last fiscal year than in the previous fiscal year
  • 7 million additional hours of care for veterans over the last two years (an 11 percent increase in clinical workload)
  • 97 percent of appointments completed within 30 days of the veteran’s preferred date; 86 percent within 7 days; 22 percent the same day
  • Average wait times of 5 days for primary care, 6 days for specialty care, and 2 days for mental health care
  • 90 percent of veterans saying they are satisfied or completely satisfied with when they got their appointment (less than 3 percent dissatisfied or completely dissatisfied)
  • The backlog for disability claims—once over 600,000 claims more than 125 days old—down almost 90 percent

Thanks to Secretary McDonald’s continued commitment to modernization, the VA has made significant progress. Problems, of course, remain at the VA and the Secretary has more work to do to ensure America honors the debt it owes its veterans, but the past two years of reform have moved the Department in the right direction. His strategies are instructive for managers of change everywhere.

Fred Dews and Andrew Kenealy contributed to this post.

Authors

Image Source: © Jim Bourg / Reuters

Climate change brings disasters on steroids

Editor’s Note: Nonresident Senior Fellow Jane McAdam says that climate change-related displacement is happening now and that band-aid solutions to natural disasters are not enough. The time to be proactive is now, because the cost of inaction will be much higher. This article was originally published in The Sydney Morning Herald and on smh.com.au.…

Measuring effects of the Common Core


Part II of the 2015 Brown Center Report on American Education

Over the next several years, policy analysts will evaluate the impact of the Common Core State Standards (CCSS) on U.S. education.  The task promises to be challenging.  The question most analysts will focus on is whether the CCSS is good or bad policy.  This section of the Brown Center Report (BCR) tackles a set of seemingly innocuous questions compared to the hot-button question of whether Common Core is wise or foolish.  The questions all have to do with when Common Core actually started, or more precisely, when the Common Core started having an effect on student learning.  And if it hasn’t yet had an effect, how will we know that CCSS has started to influence student achievement? 

The analysis below probes this issue empirically, hopefully persuading readers that deciding when a policy begins is elemental to evaluating its effects.  The question of a policy’s starting point is not always easy to answer.  Yet the answer has consequences.  You can’t figure out whether a policy worked or not unless you know when it began.[i] 

The analysis uses surveys of state implementation to model different CCSS starting points for states and produces a second early report card on how CCSS is doing.  The first report card, focusing on math, was presented in last year’s BCR.  The current study updates state implementation ratings that were presented in that report and extends the analysis to achievement in reading.  The goal is not only to estimate CCSS’s early impact, but also to lay out a fair approach for establishing when the Common Core’s impact began—and to do it now before data are generated that either critics or supporters can use to bolster their arguments.  The experience of No Child Left Behind (NCLB) illustrates this necessity.

Background

After the 2008 National Assessment of Educational Progress (NAEP) scores were released, former Secretary of Education Margaret Spellings claimed that the new scores showed “we are on the right track.”[ii] She pointed out that NAEP gains in the previous decade, 1999-2009, were much larger than in prior decades.  Mark Schneider of the American Institutes for Research (and a former Commissioner of the National Center for Education Statistics [NCES]) reached a different conclusion. He compared NAEP gains from 1996-2003 to 2003-2009 and declared NCLB’s impact disappointing.  “The pre-NCLB gains were greater than the post-NCLB gains.”[iii]  It is important to highlight that Schneider used the 2003 NAEP scores as the starting point for assessing NCLB.  A report from FairTest on the tenth anniversary of NCLB used the same demarcation for pre- and post-NCLB time frames.[iv]  FairTest is an advocacy group critical of high stakes testing—and harshly critical of NCLB—but if the 2003 starting point for NAEP is accepted, its conclusion is indisputable, “NAEP score improvement slowed or stopped in both reading and math after NCLB was implemented.”

Choosing 2003 as NCLB’s starting date is intuitively appealing.  The law was introduced, debated, and passed by Congress in 2001.  President Bush signed NCLB into law on January 8, 2002.  It takes time to implement any law.  The 2003 NAEP is arguably the first chance that the assessment had to register NCLB’s effects. 

Selecting 2003 is consequential, however.  Some of the largest gains in NAEP’s history were registered between 2000 and 2003.  Once 2003 is established as a starting point (or baseline), pre-2003 gains become “pre-NCLB.”  But what if the 2003 NAEP scores were influenced by NCLB? Experiments evaluating the effects of new drugs collect baseline data from subjects before treatment, not after the treatment has begun.  Similarly, evaluating the effects of public policies requires that baseline data not be influenced by the policies under evaluation.

Avoiding such problems is particularly difficult when state or local policies are adopted nationally.  The federal effort to establish a speed limit of 55 miles per hour in the 1970s is a good example.  Several states already had speed limits of 55 mph or lower prior to the federal law’s enactment.  Moreover, a few states lowered speed limits in anticipation of the federal limit while the bill was debated in Congress.  On the day President Nixon signed the bill into law—January 2, 1974—the Associated Press reported that only 29 states would be required to lower speed limits.  Evaluating the effects of the 1974 law with national data but neglecting to adjust for what states were already doing would obviously yield tainted baseline data.

There are comparable reasons for questioning 2003 as a good baseline for evaluating NCLB’s effects.  The key components of NCLB’s accountability provisions—testing students, publicizing the results, and holding schools accountable for results—were already in place in nearly half the states.  In some states they had been in place for several years.  The 1999 iteration of Quality Counts, Education Week’s annual report on state-level efforts to improve public education, entitled Rewarding Results, Punishing Failure, was devoted to state accountability systems and the assessments underpinning them. Testing and accountability are especially important because they have drawn fire from critics of NCLB, a law that wasn’t passed until years later.

The Congressional debate of NCLB legislation took all of 2001, allowing states to pass anticipatory policies.  Derek Neal and Diane Whitmore Schanzenbach reported that “with the passage of NCLB lurking on the horizon,” Illinois placed hundreds of schools on a watch list and declared that future state testing would be high stakes.[v] In the summer and fall of 2002, with NCLB now the law of the land, state after state released lists of schools falling short of NCLB’s requirements.  Then the 2002-2003 school year began, during which the 2003 NAEP was administered.  Using 2003 as a NAEP baseline assumes that none of these activities—previous accountability systems, public lists of schools in need of improvement, anticipatory policy shifts—influenced achievement.  That is unlikely.[vi]

The Analysis

Unlike NCLB, there was no “pre-CCSS” state version of Common Core.  States vary in how quickly and aggressively they have implemented CCSS.  For the BCR analyses, two indexes were constructed to model CCSS implementation.  They are based on surveys of state education agencies and named for the two years that the surveys were conducted.  The 2011 survey reported the number of programs (e.g., professional development, new materials) on which states reported spending federal funds to implement CCSS.  Strong implementers spent money on more activities.  The 2011 index was used to investigate eighth grade math achievement in the 2014 BCR.  A new implementation index was created for this year’s study of reading achievement.  The 2013 index is based on a survey asking states when they planned to complete full implementation of CCSS in classrooms.  Strong states aimed for full implementation by 2012-2013 or earlier.      

Fourth grade NAEP reading scores serve as the achievement measure.  Why fourth grade and not eighth?  Reading instruction is a key activity of elementary classrooms but by eighth grade has all but disappeared.  What remains of “reading” as an independent subject, which has typically morphed into the study of literature, is subsumed under the English-Language Arts curriculum, a catchall term that also includes writing, vocabulary, listening, and public speaking.  Most students in fourth grade are in self-contained classes; they receive instruction in all subjects from one teacher.  The impact of CCSS on reading instruction—the recommendation that non-fiction take a larger role in reading materials is a good example—will be concentrated in the activities of a single teacher in elementary schools. The burden for meeting CCSS’s press for non-fiction, on the other hand, is expected to be shared by all middle and high school teachers.[vii] 

Results

Table 2-1 displays NAEP gains using the 2011 implementation index.  The four year period between 2009 and 2013 is broken down into two parts: 2009-2011 and 2011-2013.  Nineteen states are categorized as “strong” implementers of CCSS on the 2011 index, and from 2009-2013, they outscored the four states that did not adopt CCSS by a little more than one scale score point (0.87 vs. -0.24 for a 1.11 difference).  The non-adopters are the logical control group for CCSS, but with only four states in that category—Alaska, Nebraska, Texas, and Virginia—it is sensitive to big changes in one or two states.  Alaska and Texas both experienced a decline in fourth grade reading scores from 2009-2013.

The 1.11 point advantage in reading gains for strong CCSS implementers is similar to the 1.27 point advantage reported last year for eighth grade math.  Both are small.  The reading difference in favor of CCSS is equal to approximately 0.03 standard deviations of the 2009 baseline reading score.  Also note that the differences were greater in 2009-2011 than in 2011-2013 and that the “medium” implementers performed as well as or better than the strong implementers over the entire four year period (gain of 0.99).

Table 2-2 displays calculations using the 2013 implementation index.  Twelve states are rated as strong CCSS implementers, seven fewer than on the 2011 index.[viii]  Data for the non-adopters are the same as in the previous table.  In 2009-2013, the strong implementers gained 1.27 NAEP points compared to -0.24 among the non-adopters, a difference of 1.51 points.  The thirty-four states rated as medium implementers gained 0.82.  The strong implementers on this index are states that reported full implementation of CCSS-ELA by 2013.  Their larger gain in 2011-2013 (1.08 points) distinguishes them from the strong implementers in the previous table.  The overall advantage of 1.51 points over non-adopters represents about 0.04 standard deviations of the 2009 NAEP reading score, not a difference with real world significance.  Taken together, the 2011 and 2013 indexes estimate that NAEP reading gains from 2009-2013 were one to one and one-half scale score points larger in the strong CCSS implementation states compared to the states that did not adopt CCSS.
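
To make the effect-size arithmetic explicit, the worked computations are below; the baseline standard deviation of roughly 37 NAEP scale points is inferred from the ratios reported in the text rather than read directly off the tables:

    2011 index: 0.87 − (−0.24) = 1.11 points;  1.11 / 37 ≈ 0.03 SD
    2013 index: 1.27 − (−0.24) = 1.51 points;  1.51 / 37 ≈ 0.04 SD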

Common Core and Reading Content

As noted above, the 2013 implementation index is based on when states scheduled full implementation of CCSS in classrooms.  Other than reading achievement, does the index seem to reflect changes in any other classroom variable believed to be related to CCSS implementation?  If the answer is “yes,” that would bolster confidence that the index is measuring changes related to CCSS implementation. 

Let’s examine the types of literature that students encounter during instruction.  Perhaps the most controversial recommendation in the CCSS-ELA standards is the call for teachers to shift the content of reading materials away from stories and other fictional forms of literature in favor of more non-fiction.  NAEP asks fourth grade teachers the extent to which they teach fiction and non-fiction over the course of the school year (see Figure 2-1). 

Historically, fiction dominates fourth grade reading instruction.  It still does.  The percentage of teachers reporting that they teach fiction to a “large extent” exceeded the percentage answering “large extent” for non-fiction by 23 points in 2009 and 25 points in 2011.  In 2013, the difference narrowed to only 15 percentage points, primarily because of non-fiction’s increased use.  Fiction still dominated in 2013, but not by as much as in 2009.

The differences reported above are national indicators of fiction’s declining prominence in fourth grade reading instruction.  What about the states?  We know that they were involved to varying degrees with the implementation of Common Core from 2009-2013.  Is there evidence that fiction’s prominence was more likely to weaken in states most aggressively pursuing CCSS implementation?

Table 2-3 displays the data tackling that question.  Fourth grade teachers in strong implementation states decisively favored the use of fiction over non-fiction in 2009 and 2011.  But the prominence of fiction in those states experienced a large decline in 2013 (-12.4 percentage points).  The decline for the entire four year period, 2009-2013, was larger in the strong implementation states (-10.8) than in the medium implementation (-7.5) or non-adoption states (-9.8).  
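
The two figures just cited for the strong implementation states also imply a worked decomposition (inferred from the numbers above, not stated separately in the report):

    −10.8 points (2009-2013) = Δ(2009-2011) + (−12.4 points) (2011-2013), so Δ(2009-2011) ≈ +1.6 points.

In other words, essentially all of fiction’s decline in the strong implementation states occurred after 2011, as those states moved toward full classroom implementation.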

Conclusion

This section of the Brown Center Report analyzed NAEP data and two indexes of CCSS implementation, one based on data collected in 2011, the second from data collected in 2013.  NAEP scores for 2009-2013 were examined.  Fourth grade reading scores improved by 1.11 scale score points in states with strong implementation of CCSS compared to states that did not adopt CCSS.  A similar comparison in last year’s BCR found a 1.27 point difference on NAEP’s eighth grade math test, also in favor of states with strong implementation of CCSS.  These differences, although certainly encouraging to CCSS supporters, are quite small, amounting to (at most) 0.04 standard deviations (SD) on the NAEP scale.  A threshold of 0.20 SD—five times larger—is often invoked as the minimum size for a test score change to be regarded as noticeable.  The current study’s findings are also merely statistical associations and cannot be used to make causal claims.  Perhaps other factors are driving test score changes, unmeasured by NAEP or the other sources of data analyzed here. 

The analysis also found that fourth grade teachers in strong implementation states are more likely to be shifting reading instruction from fiction to non-fiction texts.  That trend should be monitored closely to see if it continues.  Other events to keep an eye on as the Common Core unfolds include the following:

1.  The 2015 NAEP scores, typically released in the late fall, will be important for the Common Core.  In most states, the first CCSS-aligned state tests will be given in the spring of 2015.  Based on the earlier experiences of Kentucky and New York, results are expected to be disappointing.  Common Core supporters can respond by explaining that assessments given for the first time often produce disappointing results.  They will also claim that the tests are more rigorous than previous state assessments.  But it will be difficult to explain stagnant or falling NAEP scores in an era when implementing CCSS commands so much attention.   

2.  Assessment will become an important implementation variable in 2015 and subsequent years.  For analysts, the strategy employed here, modeling different indicators based on information collected at different stages of implementation, should become even more useful.  Some states are planning to use Smarter Balanced Assessments, others are using the Partnership for Assessment of Readiness for College and Careers (PARCC), and still others are using their own homegrown tests.   To capture variation among the states on this important dimension of implementation, analysts will need to use indicators that are up-to-date.

3.  The politics of Common Core injects a dynamic element into implementation.  The status of implementation is constantly changing.  States may choose to suspend, to delay, or to abandon CCSS.  That will require analysts to regularly re-configure which states are considered “in” Common Core and which states are “out.”  To further complicate matters, states may be “in” some years and “out” in others.

A final word.  When the 2014 BCR was released, many CCSS supporters commented that it is too early to tell the effects of Common Core.  The point that states may need more time operating under CCSS to realize its full effects certainly has merit.  But that does not discount everything states have done so far—including professional development, purchasing new textbooks and other instructional materials, designing new assessments, buying and installing computer systems, and conducting hearings and public outreach—as part of implementing the standards.  Some states are in their fifth year of implementation.  It could be that states need more time, but innovations can also produce their biggest “pop” earlier in implementation rather than later.  Kentucky was one of the earliest states to adopt and implement CCSS.  That state’s NAEP fourth grade reading score declined in both 2009-2011 and 2011-2013.  The optimism of CCSS supporters is understandable, but a one and a half point NAEP gain might be as good as it gets for CCSS.



[i] These ideas were first introduced in a 2013 Brown Center Chalkboard post I authored, entitled, “When Does a Policy Start?”

[ii] Maria Glod, “Since NCLB, Math and Reading Scores Rise for Ages 9 and 13,” Washington Post, April 29, 2009.

[iii] Mark Schneider, “NAEP Math Results Hold Bad News for NCLB,” AEIdeas (Washington, D.C.: American Enterprise Institute, 2009).

[iv] Lisa Guisbond with Monty Neill and Bob Schaeffer, NCLB’s Lost Decade for Educational Progress: What Can We Learn from this Policy Failure? (Jamaica Plain, MA: FairTest, 2012).

[v] Derek Neal and Diane Schanzenbach, “Left Behind by Design: Proficiency Counts and Test-Based Accountability,” NBER Working Paper No. W13293 (Cambridge: National Bureau of Economic Research, 2007), 13.

[vi] Careful analysts of NCLB have allowed different states to have different starting dates: see Thomas Dee and Brian A. Jacob, “Evaluating NCLB,” Education Next 10, no. 3 (Summer 2010); Manyee Wong, Thomas D. Cook, and Peter M. Steiner, “No Child Left Behind: An Interim Evaluation of Its Effects on Learning Using Two Interrupted Time Series Each with Its Own Non-Equivalent Comparison Series,” Working Paper 09-11 (Evanston, IL: Northwestern University Institute for Policy Research, 2009).

[vii] Common Core State Standards Initiative. “English Language Arts Standards, Key Design Consideration.” Retrieved from: http://www.corestandards.org/ELA-Literacy/introduction/key-design-consideration/

[viii] Twelve states shifted downward from strong to medium and five states shifted upward from medium to strong, netting out to a seven state swing.

Downloads

Authors

The Death of the Fringe Suburb

Drive through any number of outer-ring suburbs in America, and you’ll see boarded-up and vacant strip malls, surrounded by vast seas of empty parking spaces. These forlorn monuments to the real estate crash are not going to come back to life, even when the economy recovers. And that’s because the demand for the housing that…

       





Kobe Bryant and his enduring impact on the Sino-American friendship

The tragic loss of Kobe Bryant on January 26, 2020 came as a devastating shock to sports fans around the world, including millions of people in China who awoke to this terrible news. Two circumstantial factors made the emotional reaction by the Chinese people­­––and their heartfelt affection and admiration for this legendary basketball player and…

       





Restoring the Balance: A Middle East Strategy for the Next President

When President-elect Barack Obama assumes office in January, he will face a series of critical, complex and interrelated challenges in the Middle East. Each of these issues demands immediate attention: the ongoing war in Iraq; Iran’s regional and nuclear aspirations; the faltering Israeli-Palestinian peace process; and weak governments in Lebanon and Palestine. Recognizing the critical nature…