
Made in Africa: Toward an industrialization strategy for the continent

Since 1995, Africa’s explosive economic growth has taken place without the changes in economic structure that normally occur as incomes per person rise. In particular, Africa’s experience with industrialization has been disappointing, especially as, historically, industry has been a driving force behind structural change. The East Asian “Miracle” is a manufacturing success story, but sub-Saharan…


Africa’s industrialization in the era of the 2030 Agenda: From political declarations to action on the ground

Although African countries enjoyed fast economic growth based on high commodity prices over the past decade, this growth has not translated into the economic transformation the continent needs to eradicate extreme poverty and enjoy economic prosperity. Now, more than ever, the necessity for Africa to industrialize is being stressed at various international forums, ranging from…


Overcoming barriers: Sustainable development, productive cities, and structural transformation in Africa

Against a background of protracted decline in global commodity prices and renewed scrutiny of the “Africa rising” narrative, Africa is proving resilient, underpinned by strong economic performance in non-commodity-exporting countries. The rise of African cities contains the potential for new engines for the continent’s structural transformation, if harnessed properly. However, the susceptibility of Africa’s…


Africa Industrialization Day: Moving from rhetoric to reality

Sunday, November 20 marked another United Nations “Africa Industrialization Day.” If anything, the level of attention to industrializing Africa coming from regional organizations, the multilateral development banks, and national governments has increased since the last one. This year, the new president of the African Development Bank flagged industrial development as one of his “high five”…


Italy’s hazardous new experiment: Genetically modified populism

Finally, three months after its elections, Italy has produced a new creature in the political biosphere: a “populist but technocratic” government. What we will be watching is not really the result of a Frankenstein experiment, but rather something closer to a genetically modified organism. Such a pairing is probably unheard of in history: Into a…


“The people vs. finance”: Europe needs a new strategy to counter Italian populists

Rather than Italy leaving the euro, it’s now the euros that are leaving Italy. In recent weeks, after doubts emerged about the government’s will to remain in the European monetary union, Italians have transferred tens of billions of euros across the border. Only a few days after the formation of the new government, the financial situation almost slid out of control. Italy’s liabilities with the euro area (as tracked by…


Secular divergence: Explaining nationalism in Europe

The doctrine of nationalism will continue eroding Europe’s integration until its hidden cause is recognized and addressed. To do so, Europe’s policymakers must acknowledge a new, powerful, and pervasive factor of social and political change: divergence within countries, sectors, jobs, or local communities. The popularity of nationalist rhetoric should not…


Europe votes: How populist Italy is missing out

According to the current projections, after the European Parliament elections this weekend Italy might find itself excluded from Europe’s decisionmaking. A sense of marginalization and distance from the EU might grow in Italy’s public opinion, with hard-to-fathom political consequences. Both parties forming the current government coalition—the League and the Five Star Movement (M5S)—are likely to…


Italy’s political turmoil shows that parliaments can confront populists

Italy has a certain experience in changes of government, having seen 68 different governments in 73 years. However, even by Italian standards, what happened this summer to the first populist government in an advanced economy is unusual, to say the least. It is also instructive for other countries, showing the key roles of parliaments and…


Why Italy cannot exit the euro

The rise of strong euroskeptic parties in Italy in recent years raised serious concerns about whether the country would permanently remain in the euro area. Although anti-euro rhetoric is now more muted, the fear of an “Italexit” still lingers in the economy. Italy’s notoriously high public debt is generally considered sustainable and not at…


First Thing We Do, Let’s Deregulate All the Lawyers

Not many Americans think of the legal profession as a monopoly, but it is. Abraham Lincoln, who practiced law for nearly twenty-five years, would likely not have been allowed to practice today. Without a law degree from an American Bar Association–sanctioned institution, a would-be lawyer is allowed to practice law in only a few states. […]


The U.S. Should Focus on Asia: All of Asia

President Obama made "pivoting" away from the Middle East and toward Asia the cornerstone of his foreign policy. Vali Nasr explains why Washington's renewed attention to East Asia shouldn't come at the expense of the rest of the continent.


American Foreign Policy in Retreat? A Discussion with Vali Nasr

On May 14, Foreign Policy at Brookings hosted Vali Nasr, author of The Dispensable Nation: American Foreign Policy in Retreat (Knopf Doubleday Publishing, 2013), for a discussion on the state of U.S. power globally and whether American foreign policy under the Obama administration is in retreat.


Iran, Turkey’s New Ally?

A bribery and corruption scandal has plunged Turkey into crisis. Vali Nasr writes that by improving ties with Iran, Prime Minister Recep Tayyip Erdogan has an opportunity to repair his weakened authority and restore Turkey's international standing if he shows that Turkey can once again play a central role in the Middle East.


Understanding Iran beyond the deal

On October 15, the Center for Middle East Policy hosted a conversation with Suzanne Maloney, deputy director of Brookings Foreign Policy program and author of the recently released book, Iran’s Political Economy since the Revolution (Cambridge University Press, 2015); Javier Solana, Brookings distinguished fellow and former EU High Representative for the Common Foreign and Security Policy; and Vali Nasr, Dean of Johns Hopkins University School of Advanced International Studies and nonresident senior fellow at Brookings. The three experts discussed Iran today, the implications of the nuclear agreement, and more.


Campaign 2008: The Final Weeks

Event Information

October 31, 2008
10:00 AM - 11:30 AM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


With the presidential debates completed, the campaigns of Senators John McCain and Barack Obama are focusing on persuading remaining undecided voters and mobilizing their supporters for Election Day. The Opportunity 08 project at Brookings and Princeton University examined key questions in the final stretch of the 2008 campaign, including money, ads and mobilization.

Have the candidates’ ads been effective at swaying voters thus far, and what form will they take in the campaign’s final week? With Obama taking the unprecedented step of opting out of public funding for the general election, has McCain been able to leverage party resources to keep pace? Will either candidate be able to match the Republican National Committee’s massive get-out-the-vote efforts of 2004? To examine these and related matters, the Brookings Institution’s Opportunity 08 project, in partnership with the Center for the Study of Democratic Politics at Princeton University’s Woodrow Wilson School of Public and International Affairs, hosted the final roundtable discussion on key questions about American electoral politics in connection with the 2008 campaign.

Featuring panelists Anthony Corrado, a nonresident senior fellow at Brookings and professor at Colby College; Diana Mutz, a nonresident senior fellow at Brookings and professor at the University of Pennsylvania; Lynn Vavreck of UCLA; and Mike Allen of Politico, and moderated by Larry Bartels of Princeton and Thomas Mann of Brookings, the session explored how money, ads, and mobilization are likely to affect the outcome of the presidential election.

After initial presentations, panelists took audience questions.

Event Materials

View Anthony Corrado's handout »
View Diana Mutz's handout »

Campaign Reform in the Networked Age: Fostering Participation through Small Donors and Volunteers

Event Information

January 14, 2010
10:30 AM - 12:00 PM EST

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


The 2008 elections showcased the power of the Internet to generate voter enthusiasm, mobilize volunteers, and increase small-donor contributions. While the political world has argued about campaign finance policy for decades, the digital revolution has altered the calculus of participation.

On January 14, a joint project of the Campaign Finance Institute, the American Enterprise Institute, and the Brookings Institution unveiled a new report that seeks to change the ongoing national dialogue about money in politics. At the event, the report's four authors detailed their findings and recommendations. Drawing lessons from the record-shattering 2008 elections and the rise of Internet campaigning, they presented a new vision of how campaign finance and communications policy can help further democracy through broader participation.


Beyond great forces: How individuals still shape history


Artificial Intelligence Won’t Save Us From Coronavirus


Webinar: Telehealth before and after COVID-19

The coronavirus outbreak has generated an immediate need for telehealth services to prevent further infections in the delivery of health care. Before the global pandemic, federal and state regulations around reimbursement and licensure requirements limited the use of telehealth. Private insurance programs and Medicaid have historically excluded telehealth from their coverage, and state parity laws…


COVID-19 has taught us the internet is critical and needs public interest oversight

The COVID-19 pandemic has graphically illustrated the importance of digital networks and service platforms. Imagine the shelter-in-place reality we would have experienced at the beginning of the 21st century, only two decades ago: a slow internet and (because of that) nothing like Zoom or Netflix. Digital networks that deliver the internet to our homes, and…


Removing regulatory barriers to telehealth before and after COVID-19

A combination of escalating costs, an aging population, and rising chronic health conditions that account for 75% of the nation’s health-care costs paints a bleak picture of the current state of American health care.1 In 2018, national health expenditures grew to $3.6 trillion and accounted for 17.7% of GDP.2 Under current laws, national health…


How to increase financial support during COVID-19 by investing in worker training

It took just two weeks to exhaust one of the largest bailout packages in American history. Even the most generous financial support has limits in a recession. However, I am optimistic that a pandemic-fueled recession and mass underemployment could be an important opportunity to upskill the American workforce through loans for vocational training. Financially supporting…


France needs its own National Counterterrorism Center

The horrific attack in Nice last week underscores the acute terrorist threat France is facing, writes Bruce Riedel. The French parliamentary recommendation to create a French version of the National Counterterrorism Center is a smart idea that Paris should implement.


Was Saudi King Salman too sick to attend this week’s Arab League summit?

King Salman failed to show at the Arab League summit this week in Mauritania, allegedly for health reasons. The king’s health has been a question since his accession to the throne last year.


In defense of John Allen

This past weekend, retired Chairman of the Joint Chiefs of Staff General Martin Dempsey criticized retired General John Allen for his involvement in this year’s presidential race in support of Hillary Clinton and in strong opposition to Donald Trump. Allen (who is currently on a leave of absence from Brookings) believes the latter could cause a historic crisis […]


Congo’s political crisis: What is the way forward?

On August 15, the Africa Security Initiative, part of the Brookings Center for 21st Century Security and Intelligence, will host an event focused on Congo and the broader region.


Hey, Kremlin: Americans can make loose talk about nukes, too

Over the past several years, Vladimir Putin and senior Russian officials have talked loosely about nuclear weapons, suggesting the Kremlin might not fully comprehend the awful consequences of their use. That has caused a degree of worry in the West. Now, the West has in Donald Trump—the Republican nominee to become the next president of […]


The Marketplace of Democracy: Electoral Competition and American Politics


Brookings Institution Press and Cato Institute, 2006, 312 pp.

Since 1998, U.S. House incumbents have won a staggering 98 percent of their reelection races. Electoral competition is also low and in decline in most state and primary elections. The Marketplace of Democracy combines the resources of two eminent research organizations—the Brookings Institution and the Cato Institute—to address the startling lack of competition in our democratic system. The contributors consider the historical development, legal background, and political aspects of a system that is supposed to be responsive and accountable yet for many is becoming stagnant, self-perpetuating, and tone-deaf. How did we get to this point, and what—if anything—should be done about it?

In The Marketplace of Democracy, top-tier political scholars also investigate the perceived lack of competition in arenas only previously speculated on, such as state legislative contests and congressional primaries. Michael McDonald, John Samples, and their colleagues analyze previous reform efforts such as direct primaries and term limits, and the effects they have had on electoral competition. They also examine current reform efforts in redistricting and campaign finance regulation, as well as the impact of third parties. In sum, what does all this tell us about what might be done to increase electoral competition?

Elections are the vehicles through which Americans choose who governs them, and the power of the ballot enables ordinary citizens to keep public officials accountable. This volume considers different policy options for increasing the competition needed to keep American politics vibrant, responsive, and democratic.


Brookings Forum: "The Marketplace of Democracy: A Groundbreaking Survey Explores Voter Attitudes About Electoral Competition and American Politics," October 27, 2006.

Podcast: "The Marketplace of Democracy: Electoral Competition and American Politics," a Capitol Hill briefing featuring Michael McDonald and John Samples, September 22, 2006.


Contributors: Stephen Ansolabehere (Massachusetts Institute of Technology), William D. Berry (Florida State University), Bruce Cain (University of California-Berkeley), Thomas M. Carsey (Florida State University), James G. Gimpel (University of Maryland), Tim Groseclose (University of California-Los Angeles), John Hanley (University of California-Berkeley), John Mark Hansen (University of Chicago), Paul S. Herrnson (University of Maryland), Shigeo Hirano (Columbia University), Gary C. Jacobson (University of California-San Diego), Thad Kousser (University of California-San Diego), Frances E. Lee (University of Maryland), John C. Matsusaka (University of Southern California), Kenneth R. Mayer (University of Wisconsin-Madison), Michael P. McDonald (Brookings Institution and George Mason University), Jeffrey Milyo (University of Missouri-Columbia), Richard G. Niemi (University of Rochester), Nathaniel Persily (University of Pennsylvania Law School), Lynda W. Powell (University of Rochester), David Primo (University of Rochester), John Samples (Cato Institute), James M. Snyder Jr. (Massachusetts Institute of Technology), Timothy Werner (University of Wisconsin-Madison), and Amanda Williams (University of Wisconsin-Madison).

ABOUT THE EDITORS

John Samples
John Samples directs the Center for Representative Government at the Cato Institute and teaches political science at Johns Hopkins University.
Michael P. McDonald

Ordering Information:
  • 978-0-8157-5579-1, $24.95
  • 978-0-8157-5580-7, $54.95

The Marketplace of Democracy: A Groundbreaking Survey Explores Voter Attitudes About Electoral Competition and American Politics

Event Information

October 27, 2006
10:00 AM - 12:00 PM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


Despite the attention on the mid-term races, few elections are competitive. Electoral competition, already low at the national level, is in decline in state and primary elections as well. Reformers, who point to gerrymandering and a host of other targets for change, argue that improving competition will produce voters who are more interested in elections, better-informed on issues, and more likely to turn out to the polls.

On October 27, the Brookings Institution—in conjunction with the Cato Institute and The Pew Research Center—presented a discussion and a groundbreaking survey exploring the attitudes and opinions of voters in competitive and noncompetitive congressional districts. The survey, part of Pew's regular polling on voter attitudes, was conducted through the weekend of October 21. A series of questions explored the public's perceptions, knowledge, and opinions about electoral competitiveness.

The discussion also explored a publication that addresses the startling lack of competition in our democratic system. The Marketplace of Democracy: Electoral Competition and American Politics (Brookings, 2006), considers the historical development, legal background, and political aspects of a system that is supposed to be responsive and accountable, yet for many is becoming stagnant, self-perpetuating, and tone-deaf. Michael McDonald, editor and Brookings visiting fellow, moderated a discussion among co-editor John Samples, director of the Center for Representative Government at the Cato Institute, and Andrew Kohut and Scott Keeter from The Pew Research Center, who also discussed the survey.


The Generational Turnout War

Senator Barack Obama’s Iowa victory has been largely attributed to his success among young voters.  According to the entrance polls, not only did he win an outright majority of the youth vote, the 24-and-under crowd also turned out to vote with unusual strength.

Can he do it again in New Hampshire and beyond?

The Iowa caucuses are unusual in three key respects when it comes to mobilization of young voters and their influence on the election outcome.

First, Obama and the other candidates have spent the last year building impressive organizations within Iowa to mobilize their supporters.  In this decade, campaigns have retooled their get-out-the-vote efforts to emphasize person-to-person contact, which has been demonstrated to significantly increase turnout among all voters.  Turnout in both parties’ caucuses—particularly the record 236,000 on the Democratic side—benefited from piqued voter interest and this new campaign tactic.

Unlike previous efforts that relied on concerts and celebrities, young voters are particularly energized when encouraged to vote by their peers.  Obama’s campaign specifically tailored its mobilization efforts to young voters.  It clearly worked: youth made up a larger share of caucus attendees than they did four years ago.

Second, the caucuses occur in the evening when people with families, and/or working night shifts, are unable to participate.  The caucuses favor turnout among people who have time on their hands, like students who have yet to return to college from their winter break. 

Third, despite the historically high turnout on the Democratic side of the Iowa caucuses, the caucuses are still low-turnout affairs, with only about 16 percent of eligible Iowans participating on January 3.  Where organization and time can galvanize youth relative to other Iowa caucus attendees, it is highly unlikely that young voters will be as large a share of the electorate in primary states like New Hampshire where more people participate simply because voting is less burdensome.

These factors suggest that Obama will be disadvantaged in upcoming elections. 

But surprisingly, no; it is Hillary Clinton who will be disadvantaged because of the age of her supporters.

Where Obama’s support comes from the youth, Clinton’s comes from the elderly.  She was just shy of winning a majority of their vote in the Iowa caucuses.

Like the youth, the elderly also traditionally constitute a larger share of Iowa caucus attendees than of primary voters.  Older Americans are habitual voters and have time on their hands.

When candidate support among the different ages of Iowa caucus attendees is applied to the age distribution of the 2004 New Hampshire Democratic primary electorate, support for Obama and John Edwards rises, while support for Clinton actually decreases.

Obama’s strength among people in their 30’s—a demographic he also won—will likely pack a larger wallop among the larger New Hampshire electorate, offsetting the youth’s lower share of the electorate.

Edwards, who eked out a win among middle-aged voters, benefits from their higher turnout. Edwards’s attacks on Clinton following Iowa make strategic sense: he believes that if he can become the alternative to Obama, Clinton’s older supporters will flock to him, setting up an all-out generational war on the Democratic side.

Clinton sees her elderly support base diminish, and it is not replenished with fresh voters elsewhere.

Of course, the situation is still fluid.  2008 is not 2004, New Hampshire is not Iowa and we have yet to see where Joe Biden’s and Chris Dodd’s supporters go now that those contenders are out. 

Yet, Obama’s eggs are not all in one basket.  He does not need to rely on young voters solely to win New Hampshire; he just needs them to be as animated as they were in Iowa to add to his support among their slightly older peers. 

On the Republican side, we have to look back eight years to the last contested Republican nomination to understand what increased youth turnout means to the election outcome. It does not appear to be much. The age profile of the Republican Iowa 2000 electorate looks similar to that of 2008, with the exception that the 2008 Republican electorate is more middle-aged. When the Republican contest moved from the Iowa caucuses to the New Hampshire primary in 2000, the age profile remained relatively steady with the exception that the share of the electorate of those in their 30's increased while those 60 and older decreased.

Mike Huckabee won every age demographic category in 2008, but so did George W. Bush in 2000. John McCain came roaring back from a fifth-place Iowa finish in 2000 to win New Hampshire and is poised to do so again. The difference between the Iowa and New Hampshire Republican electorates is more about ideology than age.

There may still be something to learn from the age distribution of support for the Republican candidates. McCain drew his support in 2000 from middle-aged and older voters, who together will likely make up a majority of the New Hampshire Republican electorate. Will he do it again in 2008?

Looking past Huckabee's Iowa support, McCain and Mitt Romney both drew more support from older voters. Three candidates are thus vying for votes from older New Hampshire independents, who may choose to vote in either the Democratic or Republican primary: McCain, Romney, and Clinton. This may favor Obama, too, as his independent supporters do not face the same difficult choice of which primary to vote in as Clinton's do.


@ Brookings Podcast: The Politics and Process of Congressional Redistricting

Now that the 2010 Census is concluded, states will begin the process of reapportionment—re-drawing voting district lines to account for population shifts. Nonresident Senior Fellow Michael McDonald says redistricting has been fraught with controversy and corruption since the nation’s early days, when the first “gerrymandered” district was drawn. Two states—Arizona and California—have instituted redistricting commissions intended to insulate the process from political shenanigans, but politicians everywhere will continue to work the system to gain electoral advantage and the best chance of re-election for themselves and their parties.



A Status Report on Congressional Redistricting


Event Information

July 18, 2011
10:00 AM - 11:30 AM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


A full video archive of this event is also available via C-SPAN.

The drawing of legislative district boundaries is arguably among the most self-interested and least transparent systems in American democracy. Every ten years redistricting authorities, usually state legislatures, redraw congressional and legislative lines in accordance with Census reapportionment and population shifts within states. Most state redistricting authorities are in the midst of their redistricting process, while others have already finished redrawing their state and congressional boundaries. A number of initiatives—from public mapping competitions to independent shadow commissions—have been launched to open up the process to the public during this round of redrawing district lines.

On July 18, Brookings hosted a panel of experts to review the results coming in from the states and discuss how the rest of the process is likely to unfold. Panelists focused on evidence of partisan or bipartisan gerrymandering, the outcome of transparency and public mapping initiatives, and minority redistricting.

After the panel discussion, participants took audience questions.


Early Voting: A Live Web Chat with Michael McDonald


Event Information

September 26, 2012
12:30 PM - 1:00 PM EDT

Online Only
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


Thousands of Americans are already casting their votes in the 2012 elections through a variety of vote-by-mail and in-person balloting that allows citizens to cast their votes well in advance of November 6. From military personnel posted overseas to absentee voters, these early voting opportunities give voters the opportunity to make their voices heard even when they can’t stand in line on Election Day. However, there are pitfalls in the process.

Expert Michael McDonald says that while a great deal of attention has been focused on voter fraud, the untold story is that during the last presidential election, some 400,000 absentee ballots were discarded as improperly submitted. How can early voters make sure their voices are heard? What effect will absentee and other early voting programs have in this election year? On September 26, McDonald took your questions and comments in a live web chat moderated by Vivyan Tran of POLITICO.

12:30 Vivyan Tran: Welcome everyone, let's get started.

12:30 Michael McDonald: Early voting was 30% of all votes cast in the 2008 election. My expectation is that 35% of all votes in 2012 will be cast prior to Election Day. In some states, the volume will be much higher. In the battleground state of CO, about 85% of the votes will be cast early; 70% in FL; and 45% in Ohio.

What does it all mean? Hopefully I will be able to answer that question in today's chat!

12:30 Comment from JMC: At what point do you think that the in person early voters become less partisan types eager to cast their vote and more "regular folks" who would be more swayed by debate performances, TV ads, and the like?

12:30 Comment from Jason: 400,000 absentee ballots were discarded in 2008? How?

12:30 Michael McDonald: Reasons why election officials reject mail ballots: unsigned, envelope not sealed, multiple ballots in one envelope, etc. 400K rejected in 2008 does not include the higher rate of spoiled ballots that typically occur with paper mail ballots compared to electronic recording devices used in polling places. Moral: make sure you follow closely the proper procedures to cast your mail ballot!

12:31 Michael McDonald: @JMC: If they are going to vote early, most people wait until the week prior to the election. Those voting now have already made up their minds. But, the polls indicate many people have already done so, so maybe we see more early voting in 2012 as a consequence.

12:31 Comment from User: It was my understanding that absentee ballots are never counted unless the race is incredibly close in a particular state? Is that true - or do the rules for that vary by state?

12:32 Michael McDonald: No, all early votes are counted. What may not be counted, depending on state law and if the election is close enough for them to matter, are provisional ballots.

12:33 Comment from Damion: The blurb here says 400,000 early votes were discarded. Shouldn't the board of elections be reprimanded for that? Who was at fault and what consequences were there?

12:33 Michael McDonald: No, these are ballots "discarded" because people did not follow proper procedures and they must be rejected by law.

12:33 Comment from Shirley: Can you Facebook your vote in?

12:34 Michael McDonald: No. However, election officials are transmitting ballots electronically to overseas citizens and military voters. Voters must print the ballot, fill it out, sign it, scan it, and return it. There are ways for these voters to verify that their ballot was received.

12:35 Comment from Karen K: What kind of impact could these discards have on the 2012 election?

12:36 Michael McDonald: Difficult to say. More Republicans vote by mail (excluding all mail ballot states). But, we don't know much about those who fail to follow the procedures. They might be less educated or elderly, and thus might counter the overall trend we see in mail balloting. Who knows?

12:37 Comment from User: This is the first I've heard of so many early votes getting discarded. Is this an issue people are addressing in a serious way?

12:38 Michael McDonald: Unfortunately, we are too focused on issues like voter fraud, which are low occurrence events, when there are many more important ways in which votes are lost in the system. Hopefully we can get the message out so fewer people disenfranchise themselves.

12:39 Comment from Anonymous: What do we know so far about absentee votes for 2012? Can we tell who they're leaning toward in specific states and how?

12:40 Michael McDonald: It's a little early :) yet. One of the major changes from 2008 is that the overseas civilian ballots -- a population that leans D -- was sent ballots much earlier this year than in 2008. We'll get a much better sense of the state of play in the two weeks prior to the election.

12:41 Michael McDonald: That said, the number of absentee ballot requests is running about the same as in 2008, if not a little higher, suggesting that the early vote will indeed be higher than in 2008, and perhaps that overall turnout will be on par with 2008, too.

12:41 Comment from Leslie: So, how can I ensure my early ballot is counted? There are so many rules and regulations, I'm never sure I've brought/filled out the paperwork.

12:42 Michael McDonald: Many states and localities allow people to check the status of their ballot online. Do a search for your local election official's webpage to see if that is available to you.

12:42 Comment from Daryyl: Can you define provisional ballots then?

12:44 Michael McDonald: Provisional ballots are required under federal law to allow people to vote if there is a problem with their voter registration. Election officials work after the election to resolve the situation.

If you vote in-person early, then you can resolve provisional ballot situations much sooner, which is good.

12:45 Michael McDonald: Some states use provisional ballots for other purposes: e.g., for a person who does not have the required id or to manage a change in voter registration address. One of the untold stories of this cycle is that FL will manage change of reg. address through provisional ballots. OH does so, and 200K provisionals were cast in 2008. Expect 300K in FL, which may mean we will not know the outcome in FL until weeks after the election. Can you say 2000?

12:45 Comment from Mark, Greenbelt: Is early voting a new phenomenon, or is it increasing? It seems we should make it easier for people to vote when they can.

12:46 Michael McDonald: We are seeing more people vote early, particularly in states that offer the option. However, only MD changed its law from 2008 to allow in-person early voting. OH is sending absentee ballot requests to all registered voters, which is not a change in law, but a change in procedure that is expected to significantly increase early voting there.

12:47 Comment from Jennifer S. : Why do we vote on Tuesday? It seems inconvenient. Wouldn't more people vote if we did it on the weekend? Or over a period of days that offered both morning and evening hours?

12:48 Michael McDonald: We used to have early voting in the US! Back at the Founding, elections were held over several days to allow people living in remote areas to get to the courthouse (the polling place back in the day) to vote. In the mid-1840s, the federal gov't set the current single day for voting because -- what else? -- claims of vote fraud: that people could vote more than once.

12:49 Comment from Winston: What percentage of the U.S. population votes? And, if you could make one change that would increase voting in the U.S., what would it be?

12:50 Michael McDonald: I also calculate turnout rates for the country for the media and academics. 62.2% of the eligible voters cast a ballot that counted in 2008. If I were to wave a magic wand, I would have election day registration. California just adopted it yesterday (but starting 2015). States with EDR have +5-7 percentage points of turnout.

12:50 Comment from Bernie S.: One of your colleagues at Brookings, Bill Galston, has suggested that we make voting mandatory, as they do in Australia. What do you think of that idea? Is it even possible here?

12:51 Michael McDonald: That will never happen in a country that values individual freedom as deeply as the US does. Fun fact: a few years back, AZ voters rejected a ballot initiative to have voters entered into a lottery.

12:51 Comment from James: If early voting becomes more and more common, shouldn't candidates start campaigning earlier?

12:53 Michael McDonald: They do. In fact, you will see the presidential candidates visit battleground states that have in-person early voting at the start of the period. In 2008, you could see how early voting increased in places where Obama held rallies.

12:53 Comment from Devi P. : What are the factors that drive turnout? How do we get people to the polls? And what can you say about the "microtargeting" strategies the political parties are using to get their voters out?

12:54 Michael McDonald: One of the major ways in which elections have changed in the past decade is that campaigns now put more effort into voter contacts. Over 50% of people reported a contact in 2008. These contacts are known to increase turnout rates by upwards of 10 percentage points. Even contacts from Facebook friends seem to matter!

12:54 Comment from Wendy P, Ohio: What's your position on electronic voting? Can't every voting machine be hacked? Isn't plain old paper balloting more secure?

12:56 Michael McDonald: I went to Caltech, so I am sensitive to the potential for hacking. That said, I encourage experimentation so that we can build a better system. There are counties that do hold electronic elections!

12:56 Comment from Leslie: 400,000 seems like a lot - does this actually have impact on the electoral votes, and if so, should we be worried in this coming election that a lengthy recall may occur?

12:57 Michael McDonald: It could affect the outcome. So please spread the word through your networks. This is the #1 way in which votes are lost in the system!

12:57 Comment from JVotes: Perhaps we should microtarget with ballot issues. Many Americans seem disappointed with the two candidates we have to choose from.

12:58 Michael McDonald: Actually, ballot issues are known to increase turnout. But only a small amount in a presidential election, about 1 percentage point. People vote in the main show: the presidential election.

12:58 Michael McDonald: Interesting aside on that: early voting seems to have a small turnout effect in presidential elections, but a larger effect in state and local elections.

12:58 Comment from Jaime Ravenet: Is there a reading of the new voter ID requirements (in at least the 9 most contested states) that does not constitute an "abridgment" of citizens' voting rights?

1:00 Michael McDonald: Perhaps under state constitutions. But the US Supreme Court has already ruled in favor of Indiana's id law. Still, that does not mean lawyers won't try to find some way under federal law to overturn them. TX was blocked because its law was determined to be discriminatory under Sec. 5 of the Voting Rights Act.

1:00 Vivyan Tran: Thanks for the questions everyone, see you next week!

      
 
 





Social Security Smörgåsbord? Lessons from Sweden’s Individual Pension Accounts

President Bush has proposed adding optional personal accounts as one of the central elements of a major Social Security reform proposal. Although many details remain to be worked out, the proposal would allow individuals who choose to do so to divert part of the money they currently pay in Social Security taxes into individual investment…

       





Bridging the Social Security Divide: Lessons From Abroad

Executive Summary Efforts by President George W. Bush to promote major reforms in the Social Security retirement program have not led to policy change, but rather to increased polarization between the two parties. And the longer we wait to address Social Security’s long-term funding problem, the bigger and more painful the changes will need to…

       





Target Compliance: The Final Frontier of Policy Implementation

Abstract Surprisingly little theoretical attention has been devoted to the final step of the public policy implementation chain: understanding why the targets of public policies do or do not “comply” — that is, behave in ways that are consistent with the objectives of the policy. This paper focuses on why program “targets” frequently fail to…

       





But Will It Work?: Implementation Analysis to Improve Government Performance

Executive Summary Problems that arise in the implementation process make it less likely that policy objectives will be achieved in many government programs. Implementation problems may also damage the morale and external reputations of the agencies in charge of implementation. Although many implementation problems occur repeatedly across programs and can be predicted in advance, legislators…

       





Policy Leadership and the Blame Trap: Seven Strategies for Avoiding Policy Stalemate

Editor’s Note: This paper is part of the Governance Studies Management and Leadership Initiative. Negative messages about political opponents increasingly dominate not just election campaigns in the United States, but the policymaking process as well.  And politics dominated by negative messaging (also known as blame-generating) tends to result in policy stalemate. Negative messaging is attractive…

       





Can We Design A Good Technical Fix?


Wouldn’t it be great if complex social problems could be solved by technology? Alvin Weinberg suggested in 1967 that technical engineering could work better than social engineering; the argument advocated quick fixes to humanity's most urgent problems, at least to alleviate pain while more complete solutions were worked out. However controversial this idea was, our reliance on technology has only increased since then. Yet over the same period, we have also come to better appreciate the unanticipated consequences of technological advancement. In light of our experience leaping forward, as well as our tripping and tumbling along the way, two considerations should guide the design of any technological fix.

Consideration 1: Serious attention to unwanted consequences

A first-order consideration is the study of the unwanted effects and tradeoffs introduced by the technology. Take, for instance, nanoparticles—particles in the range of one to a hundred nanometers—which enable new properties in the materials in which they are mixed; for example, maintaining permeability in fine-particle filtration to make inexpensive water purification devices available to vulnerable populations. Once these nano-enabled filters reach the end of their usable life and are discarded, those minuscule particles could be released into the environment and dramatically increase the toxicity of the resulting waste.

No less important than health and environmental effects are social, economic, and cultural consequences. The natural and social sciences are thus partners in the design of this kind of technological solution, and transdisciplinary research is needed to improve our understanding of the various dimensions relevant to these projects. What is more, the incremental choices that set a particular technology along a developmental pathway demand a different kind of knowledge, because those choices are not merely technical: they involve values and preferences.

Consideration 2: Stakeholder engagement

But whose values and preferences matter? Surely everyone with a stake in the problem the fix is trying to solve will want to answer that question. If tech fixes are meant to address a specific social problem, those who will live with the consequences must have a say in the development of that solution. This prescription does not imply doing away with the current division of labor in technological development completely. Scientists and engineers need a degree of autonomy to work productively. Yet, input from and participation by stakeholders must occur far in advance of the completion of the development process because along the way a host of questions arise as to what trade-offs are acceptable. Non-experts are perfectly capable of answering questions about their values and preferences.

The market system provides this kind of check, to some extent, for technologies advancing incrementally. In an ideal market scenario, one of high competition, the stakeholders on the demand side vote with their wallets, and companies refine their products to gain market share. But the development of a technological fix is neither incremental nor distributed in that manner. It is generally concentrated in a few hands and it is, by design, disruptive and revolutionary. That's why stakeholders must have a say in key developmental decisions, so as to calibrate those technologies carefully to the values and preferences of the very people they intend to help.

Translating these considerations into policy

The federal government first funded a program for the analysis of Ethical, Legal, and Social Implications (ELSI) in 1989, within the Human Genome Project. The influence this program had on the direction and key decisions of the HGP was at best modest; rather, it practically institutionalized a separation between the hard science and the understanding of the human and social dimensions of that science.[i] By the time the National Nanotechnology Initiative was launched in 1999, some ELSI-type programs sought to bridge that separation. With grants from the National Science Foundation, two centers for the study of nanotechnology in society were established at the University of California Santa Barbara and Arizona State University. CNS-UCSB and CNS-ASU have become hubs for research on the governance of technological development that integrates the technical, social, and human dimensions. One such effort is a pilot program of real-time technology assessment (RTTA) that achieved a more robust engagement with the various stakeholders of emerging nanotechnologies (see citizens tech-forum) and tested interventions at several points in the research and development of nanotechnologies to integrate concerns from the social sciences and humanities (see socio-technical integration). Building upon those experiences, future federal funding of technological fixes must include ELSI analyses more like the aforementioned RTTA program, which, rather than being addenda to technical programs, are fully integrated into the decision structure of research and development efforts.

Whenever emerging technologies such as additive manufacturing, synthetic biology, big data, or climate engineering are considered as the kernel of a technological fix, developers must understand that engineering the artifact itself does not suffice. An effective solution also requires careful analysis of unwanted effects and a serious effort at stakeholder engagement, lest the solution be worse than the problem.


[i] See the ELSI Research Planning and Evaluation Group (ERPEG) final report published in 2000. ERPEG was created in 1997 by the NIH's National Advisory Council for Human Genome Research (NACHGR) and DOE's Biological and Environmental Research Advisory Committee (BERAC) to evaluate ELSI within the HGP and propose new directions for the 1998 five-year plan. After the final report, NIH and DOE ran ELSI programs separately, although with the ostensible intention of coordinating efforts. The separation between the technical and the social/human dimensions of scientific advancement institutionalized by the HGP ELSI program, and the radical alternative to it proposed by RTTA within the NNI, is elegantly described in Brice Laurent's The Constitutional Effect of the Ethics of Emerging Technologies (2013, Ethics and Politics XV(1), 251-271).

Image Source: © Suzanne Plunkett / Reuters
     
 
 





Innovation Is Not an Unqualified Good


Innovation is the driver of long-term economic growth and a key ingredient for improvements in healthcare, safety, and security, not to mention those little comforts and conveniences to which we have grown so accustomed. But innovation is not an unqualified good; it taxes society with costs.

The market system internalizes only a portion of the total costs of innovation. Other costs, however, are not included in market prices. Among the most important sources for those unaccounted costs are creative destruction, externalities, and weak safeguards for unwanted consequences.

Creative Destruction and Innovation

Schumpeter described creative destruction as the process by which innovative entrepreneurs outcompete older firms that, unable to adapt to a new productive platform, go out of business, laying off their employees and writing off their productive assets. Innovation, thus, also produces job loss and wealth destruction. Externalities are side effects whose costs are not priced in the marketplace, such as environmental degradation and pollution. While externalities are largely invisible in the accounting books, they impose very real costs on society in terms of human health and increased vulnerability to environmental shocks. In addition, new technologies are bound to have unwanted deleterious effects, some of which harm workers and consumers, and often even third parties not participating in those markets. Yet there are few financial or cultural incentives for innovators to design new technologies with safeguards against those effects.

Indeed, innovation imposes unaccounted costs, and those costs are not allocated in proportion to the benefits. Nothing in the market system obligates the winners of creative destruction to compensate the unemployed of phased-out industries, mandates producers to compensate those shouldering the costs of externalities, or creates incentives to invest in preventing unwanted effects in new production processes and new products. It is the role of policy to create the appropriate incentives for a fair distribution of those social costs. As a matter of national policy we must continue every effort to foster innovation, but we must do so recognizing the trade-offs.

Strengthening the Social Safety Net

Society as a whole benefits from creative destruction; society as a whole must then strengthen the safety net for the unemployed and redouble efforts to help workers retrain and find employment in emerging industries. Regulators and industry will always disagree on many things, but they could agree to collaborate on a system of regulatory incentives to ease the transition to productive platforms with low externality costs. Fostering innovation should also mean promoting a culture of anticipation to better manage unwanted consequences.

Let’s invest in innovation with optimism, but let’s be pragmatic about it. To reap the greatest net social benefit from innovation, we must work on two fronts: maximizing benefits and minimizing the social costs, particularly those costs not traditionally accounted for. The challenge for policymakers is to do this fairly and smartly, creating a correspondence between benefits and costs without unnecessarily encumbering innovative activity.

Commentary published in The International Economy magazine, Spring 2014 issue, as part of a symposium of experts responding to the question: Does Innovation Lead to Prosperity for All?

Image Source: © Suzanne Plunkett / Reuters
     
 
 





The Study of the Distributional Outcomes of Innovation: A Book Review


Editor's Note: This post is an extended version of a previous post.

Cozzens, Susan and Dhanaraj Thakur (Eds). 2014. Innovation and Inequality: Emerging technologies in an unequal world. Northampton, Massachusetts: Edward Elgar.

Historically, the debate on innovation has focused on the determinants of the pace of innovation, on the premise that innovation is the driver of long-term economic growth. Analysts and policymakers have taken less interest in how innovation-based growth affects income distribution. Even less attention has been paid to how innovation affects other forms of inequality, such as economic opportunity, social mobility, and access to education, healthcare, and legal representation, or inequalities in exposure to insalubrious environments, be these physical (through exposure to polluted air, water, or food, or to harmful work conditions) or social (neighborhoods ridden with violence and crime). The relationship between innovation, equal political representation, and the right of people to have a say in the collective decisions that affect their lives can also be added to the list of neglect.

But neglect has not been universal. A small but growing group of analysts has been working for at least three decades to produce a more careful picture of the relationship between innovation and the economy. A distinguished vanguard of this group has recently published a collection of case studies that illuminates our understanding of innovation and inequality—which is the title of the book. The book is edited by Susan Cozzens and Dhanaraj Thakur. Cozzens is a professor in the School of Public Policy and Vice Provost of Academic Affairs at Georgia Tech. She studied innovation and inequality long before inequality was a hot topic and led the group that collaborated on this book. Thakur is a faculty member of the College of Public Service and Urban Affairs at Tennessee State University (while writing the book he taught at the University of the West Indies in Jamaica). He is an original and sensible voice in the study of the social dimensions of communication technologies.

We’d like to highlight here three aspects of the book: the research design, the empirical focus, and the conceptual framework developed from the case studies in the book.

Edited volumes are all too often a collection of disparate papers, but not in this case. This book is patently the product of a research design that probes the evolution of a set of technologies across a wide variety of national settings and, at the same time, examines the different reactions to new technologies within specific countries. The second part of the book devotes five chapters to studying five emerging technologies—recombinant insulin, genetically modified corn, mobile phones, open-source software, and tissue culture—observing the contrasts and similarities of their evolution in different national environments. In turn, part three considers the experience of eight countries: four high-income—Canada, Germany, Malta, and the U.S.—and four middle- or low-income—Argentina, Costa Rica, Jamaica, and Mozambique. The stories in part three tell how these countries assimilated these diverse technologies into their economies and policy environments.

The second aspect to highlight is the deliberate choice of elements for empirical focus. First, the object of inquiry is not all of technology but a discrete set of emerging technologies, gaining a specificity that would be lost in handling the unwieldy concept of "technology" broadly construed. At the same time, this choice reveals the policy orientation of the book, because these new entrants have just started to shape the socio-technical spaces they inhabit, while the spaces of older technologies have likely ossified. Second, the study offers ample variance in the jurisdictions under study, i.e., countries of all income levels; a decision that makes theory construction more difficult but the test of general premises more robust.[i] We can add that the book avoids sweeping generalizations. Third, the authors focus on technological projects and their champions, a choice that increases the rigor of the empirical analysis. This choice naturally narrows the space of generality, but the lessons are more precise and the conjectures are presented with corresponding modesty. The combination of a solid design and clear empirical focus allows the reader to obtain a sense of general insight from the cases taken together that could not be derived from any individual case standing alone.

Economic and technology historians have tackled the effects of technological advancement, from the steam engine to the Internet, but those lessons are not easily applicable to the present, because emerging technologies intimate a different kind of reconfiguration of economic and social structures. It is still too early to know the long-term effects of new technologies like genetically modified crops or mobile-phone cash transfers, but this book does a good job of providing useful concepts that begin to form an analytical framework. In addition, the mix of country case studies subverts the disciplinary separation between the economics of innovation (devoted mostly to high-income countries) and development studies (interested in middle- and low-income economies). As a consequence of these selections, the reader can draw lessons that are likely to apply to technologies and countries other than the ones discussed in this book.

The third aspect we would like to underscore in this review is the conceptual framework. Cozzens, Thakur and their colleagues have done a service to anyone interested in pursuing the empirical and theoretical analysis of innovation and inequality.

For these authors, income distribution is only one part of the puzzle. They observe that inequalities are also part of social, ethnic, and gender cleavages in society. Frances Stewart, from Oxford University, introduced the notion of horizontal inequalities, or inequalities at the social-group level (for instance, across ethnic groups or genders). She developed the concept in contrast to vertical inequalities, or inequalities operating at the individual level (such as household income or wealth). The authors of this book borrow Stewart's concept, pay attention to horizontal inequalities in the technologies they examine, and observe that new technologies enter marketplaces already configured by historical forms of exclusion. A dramatic example is the lack of access to recombinant insulin in the U.S., because it is expensive and minorities are less likely to have health insurance (see Table 3.1, p. 80).[ii] Another example is how innovation opens opportunities for entrepreneurs but closes them for women in cultures that systematically exclude women from entrepreneurial activities.

Another key concept is that of complementary assets. A poignant example is the failure of recombinant insulin to reach poor patients in Mozambique, who are sent home with old medicine even though insulin is subsidized by the government. The reason doctors deny the poor the new treatment is that these patients lack the literacy and household resources (e.g., a refrigerator, a clock) necessary to preserve the shots, inject themselves periodically, and read blood sugar levels. Technologies aimed at fighting poverty require complementary assets to already be in place; in their absence, they fail to mitigate suffering and ultimately to ameliorate inequality. Another illustration of the importance of complementary assets is the case of open-source software. This technology has a nominal price of zero; however, only individuals who have computers and the time, disposition, and resources to learn how to use open-source operating systems benefit. Likewise, companies without the internal resources to adapt open-source software will not adopt it and will remain economically tied to proprietary software.

These observations lead to two critical concepts elaborated in the book: distributional boundaries and inequalities across technological transitions. Distributional boundaries refer to the reach of the benefits of new technologies; boundaries that can be geographic (as in urban/suburban or center/periphery) or run across social cleavages or income levels. Standard models of technological diffusion assume the entire population will gradually adopt a new technology, but in reality, the authors observe, several factors intervene to limit the scope of diffusion to certain groups. The most insidious factors are monopolies that exercise sufficient control over markets to levy high prices. In these markets, the price becomes an exclusionary barrier to diffusion. This is quite evident in the case of mobile phones (see table 5.1, p. 128), where monopolies (or oligopolies) have the market power to create and maintain a distributional boundary between post-pay, high-quality service for middle- and high-income clients and pre-pay, low-quality service for poor customers. This boundary renders pre-pay plans doubly regressive: per-minute rates are higher than post-pay, and phone expenses represent a far larger share of poor people's income. Another example of exclusion occurs with GMOs, because in some countries subsistence farmers cannot afford the prices of engineered seeds; a disadvantage that compounds their cost and health problems, as they have to use more, and stronger, pesticides.

A technological transition, as used here, is an inflection point in the adoption of a technology that reshapes its distributional boundaries. When smart phones were introduced, a new market for second-hand or hand-me-down phones was created in Maputo; people who could not access the top technology got stuck with a sub-par system. Looking at tissue culture, the authors find that "whether it provides benefits to small farmers as well as large ones depends crucially on public interventions in the lower-income countries in our study" (p. 190). In fact, farmers in Costa Rica enjoy much better protections compared to those in Jamaica and Mozambique, because the governmental program created to support banana tissue culture was designed and implemented as an extension program aimed at disseminating know-how among small farmers, not exclusively among large multinational-owned farms. Because of this different policy environment, when the same technology was introduced, the distributional boundaries were made much more extensive in Costa Rica.

This is a book devoted to presenting the complexity of the innovation-inequality link. The authors are generous in their descriptions, punctilious in the analysis of their case studies, and cautious and measured in their conclusions. Readers who seek an overarching theory of inequality, a simple story, or a test of causality are bound to be disappointed. But those readers may find the greatest reward in carefully reading all the case studies presented in this book, not only because of the edifying richness of their detail but also because they will be invited to rethink the proper way to understand and address the problem of inequality.[iii]
 


[i] These are clearly spelled out: “we assumed that technologies, societies, and inequalities co-evolved; that technological projects are always inherently distributional; and that the distributional aspects of individual projects and portfolios of projects are open to choice.” (p. 6)

[ii] This problem has been somewhat mitigated since the Affordable Care Act entered into effect.

[iii] Kevin Risser contributed to this posting.

 

Image Source: © Akhtar Soomro / Reuters
     
 
 





Innovation and manufacturing labor: a value-chain perspective


Policies and initiatives to promote U.S. manufacturing would be well advised to take a value-chain perspective on this economic sector. Currently, our economic statistics do not include pre-production services to manufacturing, such as research and development or design, or post-production services, such as repair, maintenance, and sales. Yet manufacturing firms invest heavily in these services because they are crucial to the success of their business.

In a new paper, Kate Whitefoot and Walter Valdivia offer a fresh insight into the sector’s labor composition and trends by examining employment in manufacturing from a value chain perspective. While the manufacturing sector shed millions of jobs in the 2002-2010 period—a period that included the Great Recession—employment in upstream services expanded 26 percent for market analysis, 13 percent for research and development, and 23 percent for design and technical services. Average wages for these services increased over 10 percent in that period. Going forward, this pattern is likely to be repeated. Technical occupations, particularly in upstream segments are expected to have the largest increases in employment and wages.

In light of the findings, the authors offer the following recommendations: 

  • Federal manufacturing policy: Expand PCAST’s Advanced Manufacturing Partnership recommendations—specifically, for developing a national system of certifications for production skills and establishing a national apprenticeship program for skilled trades in manufacturing—to include jobs outside the factory such as those in research and development, design and technical services, and market analysis.
  • Higher education: Institutions of higher education should consider adjusting their curricula with a long view of the coming changes to high-skill occupations, particularly with respect to problem identification and the management of uncertainty in highly automated work environments. In addition, universities and colleges should disseminate information among prospective and current students about occupations where the largest gains in employment and wage premiums are expected.
  • Improve national statistics: Supplement the North American Industry Classification System (NAICS) with data that permit tracking the entire value chain, including the development of a demand-based classification system. This initiative could benefit from adding survey questions that replicate the data collection of countries with a Value Added Tax—without introducing the tax, that is—thereby allowing a more accurate estimation of the value added by each participant in a production network.
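The value-added estimation the last recommendation calls for can be sketched with a toy example: if statistics captured each participant’s sales and purchased inputs, as VAT records do, value added would fall out as the difference. All participants and figures below are hypothetical.

```python
# Toy sketch of value-added accounting along a production chain.
# Each participant's value added is its sales minus its purchased inputs;
# VAT-style records capture both sides of every transaction.
# All participants and figures below are hypothetical.

chain = [
    # (participant, sales, purchased inputs)
    ("market analysis", 10.0, 0.0),
    ("R&D and design", 25.0, 10.0),
    ("factory production", 60.0, 25.0),
    ("repair and sales services", 80.0, 60.0),
]

for name, sales, inputs in chain:
    print(f"{name}: value added = {sales - inputs:.1f}")

total_value_added = sum(sales - inputs for _, sales, inputs in chain)
print(f"total value added = {total_value_added:.1f}")
```

When the chain is complete, the participants’ value added sums to the final sales figure (80.0 here), which is why VAT-style records permit attributing output to each stage without double counting.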

Whitefoot and Valdivia stress that any collective efforts aimed at invigorating manufacturing must seize opportunities throughout the entire value chain, including services upstream and downstream of production.

Image Source: © Jeff Tuttle / Reuters
     
 
 





NASA considers public values in its Asteroid Initiative


NASA’s Asteroid Initiative encompasses efforts for the human exploration of asteroids as well as the Asteroid Grand Challenge, which aims to enhance asteroid detection capabilities and mitigate the threat asteroids pose to Earth. The human spaceflight portion of the initiative primarily includes the Asteroid Redirect Mission (ARM), a proposal to place an asteroid in orbit around the moon and send astronauts to it. The program originally contemplated two alternatives for closer study: capturing a small asteroid about 10 meters in diameter, or simply recovering a boulder from a much larger asteroid. Late in March, NASA offered an update on its plans: it has decided to retrieve a boulder from an asteroid near Earth’s orbit—candidates are the asteroids 2008 EV5, Bennu, and Itokawa—and will place the boulder in lunar orbit for further study.

This mission will help NASA develop a host of technical capabilities. For instance, solar electric propulsion uses solar power to ionize and accelerate propellant atoms—in the absence of gravity, even a modicum of force can alter the trajectory of a body in outer space. Another related capability under development is the gravity tractor, which is based on the notion that even the modest mass of a spacecraft can exert sufficient gravitational force on an asteroid to ever so slightly change its orbit. By capturing a boulder, the ARM spacecraft would increase its mass and thus its gravitational pull, enabling a test, on an asteroid that poses no threat to Earth, of how humans might deflect one that does. NASA will thus have a second method for deflecting near-Earth objects on a hazardous trajectory. The first, implemented as part of the Deep Impact mission, is the kinetic impactor; that is, crashing a spacecraft into an approaching object to change its trajectory.
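The gravity-tractor arithmetic is worth a quick sketch. The spacecraft mass, asteroid mass, and hover distance below are hypothetical placeholders, not ARM parameters; the point is that the acceleration, though minute, compounds into a measurable velocity change over years.

```python
# Back-of-the-envelope gravity-tractor calculation (all figures hypothetical).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_spacecraft = 2.0e4   # 20-tonne spacecraft, boulder included (hypothetical)
r = 200.0              # hover distance from the asteroid's center, meters (hypothetical)

# Acceleration imparted on the asteroid: a = G * m_spacecraft / r^2
# (the asteroid's own mass cancels out of its acceleration).
a = G * m_spacecraft / r**2

seconds_per_year = 365.25 * 24 * 3600
dv = a * 10 * seconds_per_year  # velocity change after ten years of hovering

print(f"acceleration: {a:.3e} m/s^2")
print(f"delta-v after 10 years: {dv:.3e} m/s")
```

A nudge on the order of a centimeter per second sounds negligible, but applied a decade or more before a predicted close approach it can turn an impact trajectory into a miss.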

The Asteroid Initiative is a partner of the agency’s Near Earth Object Observation (NEOO) program. The goal of this program is to discover and monitor space objects traveling on a trajectory that could pose the risk of hitting Earth with catastrophic effects. The program also seeks to develop mitigation strategies. The capabilities developed by ARM could also support other programs of NASA, such as the manned exploration of Mars.

NEOO has recently enjoyed an uptick in public support. It was funded at about $4 million a year in the 1990s, and as recently as 2010 was allocated a paltry $6 million. But then a redirection of priorities, linked to the transition from the Bush to the Obama administration, increased funding for NEOO to about $20 million in 2012 and $40 million in 2014, and NASA is seeking $50 million for 2015. It is clear that NASA officials made a compelling case for the importance of NEOO; in fact, what they are asking for seems quite modest if asteroids indeed pose an existential risk to life on Earth. At the same time, the instrumental importance of the program and the public funds devoted to it raise the question of whether taxpayers should have a say in the decisions NASA is making regarding how to proceed with the program.

NASA has done something remarkable to help answer this question.

Last November, NASA partnered with the ECAST network (Expert and Citizen Assessment of Science and Technology) to host a citizen forum assessing the Asteroid Initiative. ECAST is a consortium of science policy and advocacy organizations which specializes in citizen deliberations on science policy. The forum consisted of a dialogue with 100 citizens in Phoenix and Boston who learned more about the asteroid initiative and then commented on various aspects of the project.

The participants, who were selected to approximate the demographics of the U.S. population, were asked to assess mitigation strategies to protect against asteroids. They were introduced to four strategies: civil defense, gravity tractor, kinetic impactor, and nuclear blast deflection. As part of the deliberations, they were also asked to consider the two aforementioned approaches to performing ARM. A consensus emerged in favor of the boulder-retrieval option, primarily because citizens thought it offered better prospects for developing planetary-defense technologies. This preference held despite the excitement of capturing a full asteroid, which could potentially have additional economic impacts. The participants showed at least as much interest in promoting the development of mitigation capabilities as in protecting traditional NASA goals such as the advancement of science and spaceflight technology. This is not surprising, given that concerns about doomsday should reasonably take precedence over traditional research and exploration concerns.

NASA could have decided to set ARM along the path of boulder retrieval exclusively on technical merits, but having conducted a citizen forum, the agency can now claim that this decision is also socially robust, which is to say, responsive to public values. In this manner, NASA has demonstrated a promising method by which federal research agencies can increase their public accountability.

In the same spirit of responsible research and innovation, a recent Brookings paper I authored with David Guston—who is a co-founder of ECAST—proposes a number of other innovative ways in which the innovation enterprise can be made more responsive to public values and social expectations.

Kudos to NASA for being at the forefront of innovation in space exploration and public accountability.

Image Source: © Handout / Reuters
     
 
 





The politics of federal R&D: A punctuated equilibrium analysis


The fiscal budget has become a casualty of political polarization, and even functions that had enjoyed bipartisan support, like research and development (R&D), are becoming divisive issues on Capitol Hill. As a result, federal R&D is likely to grow only at the pace of inflation or, worse, decline.

With the size of the pie fixed or shrinking, requests for R&D funding increases will trigger an inter-agency zero-sum game that will play out as pointless comparisons of agencies’ merit, or worse, as a contest to attract the favor of Congress or the White House. This insidious politics will be made worse by the growing tendency to equate public accountability with the measurement of performance. Political polarization, tight budgets, and pressure for quantifiable results threaten to undermine the sustainability of public R&D. The situation raises the question: What can federal agencies do to deal with the changing politics of federal R&D?

In a new paper, Walter D. Valdivia and Benjamin Y. Clark apply punctuated equilibrium theory to examine the last four decades of federal R&D, both at the aggregate and the agency level. Valdivia and Clark observe a general upward trend driven by gradual increases. In turn, budget leaps or punctuations are few and far between and do not appear to have lasting effects. As the politics of R&D are stirred up, federal departments and agencies are sure to find that proposing punctuations is becoming more costly and risky. Consequently, agencies will be well advised to secure stable growth in their R&D budgets over the long run rather than to push for short-term budget leaps.
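The punctuated-equilibrium lens can be illustrated with a toy detector (this is not Valdivia and Clark’s actual method): flag any year whose year-over-year budget change exceeds a threshold as a punctuation, and treat the rest as incremental drift. The budget series below is invented.

```python
# Toy punctuation detector for a budget time series (hypothetical data).
budgets = [100, 103, 106, 108, 140, 143, 146, 148, 151]

THRESHOLD = 0.15  # flag year-over-year changes above 15 percent
punctuations = []
for year, (prev, curr) in enumerate(zip(budgets, budgets[1:]), start=1):
    change = (curr - prev) / prev
    if abs(change) > THRESHOLD:
        punctuations.append((year, change))

print(punctuations)  # a single leap (108 -> 140); every other year is gradual
```

On this toy series the detector isolates the single leap; the paper’s finding is that, in real federal R&D budgets, such leaps are rare and their effects rarely last.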

While appropriations history would suggest that the stability of R&D spending resulted from the character of budget politics, in the future stability will need the stewardship of R&D champions who work to institutionalize gradualism, this time in spite of the politics.

      
 
 





Federal R&D: Why is defense dominant yet less talked about?


Federal departments and agencies received just over $133 billion in R&D funds in 2013. To put that figure in perspective, World Bank data for 2013 show that 130 countries had a GDP below that level; U.S. federal R&D is larger than the entire economy of 60 percent of all countries in the world.

The chart below shows how those funds are allocated among the most important federal departments and agencies in terms of R&D.

Those looking at these figures for the first time may be surprised to see that the Department of Defense takes about half of the pie. It should be noted, however, that not all federal R&D is destined to preserve U.S. military preeminence in the world. Of non-defense research funds, 42 percent goes to the much-needed research conducted by the National Institutes of Health, 17 percent to the research of the Department of Energy, owner of 17 celebrated national laboratories, 16 percent to space exploration, and 8 percent to understanding the natural and social worlds at a fundamental level. The balance category is lumped together only for visual display, not for lack of importance; it includes, for instance, the significant work of the National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology.

Despite the impressive size of defense R&D, we hear little about it. Much of defense research and development is classified, although, in time, civilian applications find their way into mainstream commercial uses—the Internet and GPS emerged from research done at DARPA. Far more visible than defense R&D are biomedical research, clean energy research, and news about truly impressive discoveries, whether in distant galaxies or in the depths of our oceans.

What produces this asymmetry of visibility of federal R&D work?

In a recent Brookings paper, a colleague and I suggest that the answer lies in the prominence of R&D in the agencies’ accounting books. In short: how visible an agency’s R&D is, and how much the agency seeks to discuss it in public fora, depends not on its relative importance but on how large a portion of the agency’s budget is dedicated to R&D.

From a budget perspective, we identified two types of agencies performing R&D: those agencies whose main mission is to perform research and development, and those agencies that perform many functions in addition to R&D. For the former, the share of R&D in the discretionary budget is consistently high, while for the latter group, R&D is only a small part of their total budget (see the chart below). This distinction influences how agencies will argue for their R&D money, because they will make their case on the most important uses of their budget. If agencies have a low R&D share, they will keep it mixed with other functions and programs; for instance, research efforts will be justified only as supporting the main agency mission. In turn, agencies with a high R&D share must argue for their budgets highlighting the social outcomes of their work. These include three agencies whose primary mission is research (NASA, NSF, NIH), and a fourth (DoE) where research is a significant element of its mission.
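The distinction can be made concrete with a small sketch; the agency labels and budget figures below are hypothetical placeholders, not actual appropriations.

```python
# Classify agencies by the share of R&D in their discretionary budget.
# All agency labels and figures are hypothetical.
agencies = {
    # agency: (R&D budget, total discretionary budget), in $billions
    "research-mission agency A": (6.0, 7.0),
    "research-mission agency B": (30.0, 33.0),
    "multi-mission agency C": (2.0, 40.0),
    "multi-mission agency D": (7.0, 70.0),
}

profiles = {}
for name, (rd, total) in agencies.items():
    share = rd / total
    profiles[name] = "high R&D share" if share > 0.5 else "low R&D share"
    print(f"{name}: {share:.0%} of budget is R&D -> {profiles[name]}")
```

High-share agencies must justify their budgets by the outcomes of the research itself; low-share agencies can fold R&D into their broader mission, which is the concealment dynamic described above.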

There is little question that the four agencies with high R&D share produce greatly beneficial research for society. Their strategy of promoting their work publicly is not only smart budget politics but also civic and pedagogical in the sense of helping taxpayers understand that their tax dollars are well-spent. However, it is interesting to observe that other agencies may be producing research of equal social impact that flies under the public radar, mainly because those agencies prefer as a matter of good budget policy to keep a low profile for their R&D work.

One interesting conclusion for institutional design from this analysis is that promoting a research agency to the level of a department of government, or its director to a cabinet-rank position, may bring prominence to its research, not because more and better research will necessarily get done but simply because that agency will seek public recognition for its work in order to justify its budget. Likewise, placing a research agency within a larger department may help conceal and protect its R&D funding; the politics of the department will focus on its main goals, and R&D will recede to a concern of secondary interest in political battles.

In The Politics of Federal R&D, we discuss in more detail the changing politics of the budget and how R&D agencies can respond. The general strategies of concealment and self-promotion are likely to become more important for agencies seeking to protect steady growth of their research and development budgets.

Data sources: R&D data from the American Association for the Advancement of Sciences historical trends in Federal R&D. Total non-discretionary spending by federal agency from the Office of Management and Budget.

Image Source: © Edgar Su / Reuters
      
 
 





Patent infringement suits have a reputational cost for universities


Universities win handsome awards in infringement cases

Last month, a jury found Apple Inc. guilty of infringing a patent of the University of Wisconsin-Madison (UW) and ordered the tech giant to pay $234 million. The university scored a big financial victory, but this hardly meant any gain for the good name of the university.

The plaintiffs argued successfully in court that Apple infringed their 1998 patent on a predictor circuit that greatly improved the efficiency of microchips used in the popular iPhone 5s, 6, and 6 Plus. Apple first responded by challenging the validity of the patent, but the US Patent and Trademark Office ruled in favor of the university. Apple plans to appeal, but the appellate court is not likely to reverse the lower court’s decision.

This is not the first time this university has asserted its patent rights (UW sued Intel in 2008 for this exact same patent and reportedly settled for $110 million). Nor is this the first time universities in general have taken infringers to court. Prominent cases in recent memory include Boston University, which sued several companies for infringement of a patent for blue light-emitting diodes and settled out of court with most of them, and Carnegie Mellon, which was awarded $237 million by the federal appellate court in its infringement suit against Marvell, a semiconductor company, for its use of an enhanced detector of data in hard drives called Kavcic detectors.

Means not always aligned with aims in patent law

When university inventions emerge from federal research grants, universities can also sue the infringers, but in those cases they would be testing the accepted interpretations of current patent law.

The Bayh-Dole Act of 1980 extended patent law and gave small businesses and universities the right to take title to patents arising from federal grants—later it was amended to extend that right to all federal grantees regardless of size. The ostensible aim of this act is “to promote the utilization of inventions arising from federally supported research or development.” Under the law, a condition for universities to keep their exclusive rights on those patents is that they or their licensees take “effective steps to achieve practical application” of those patents. Bayh-Dole was not designed to create a new source of revenue for universities. If companies are effectively using university technologies, Bayh-Dole’s purpose is served without need of the patents.

To understand this point, consider a counterfactual: What if the text of Bayh-Dole had been originally composed to grant a conditional right to patents for federal research grantees? The condition could be stated like this: “This policy seeks to promote the commercialization of federally funded research and to this end it will use the patent system. Grantees may take title to patents if and only if other mechanisms for disseminating and developing those inventions into useful applications prove unsuccessful.” Under this imagined text, the universities could still take title to patents on their inventions if they or the U.S. Patent and Trademark Office were not aware that the technologies were being used in manufactures.

But no court would find their infringement claim meritorious if the accused companies could demonstrate that, absent willful infringement, they had in fact used the technologies covered by university patents in their commercial products. In this case, other mechanisms for disseminating and developing the technologies would have proven successful indeed. The reality that Bayh-Dole did not mandate such a contingent assignation of rights creates a contradiction between its aims and the means chosen to advance those aims for the subset of patents that were already in use by industry.

I should clarify that the predictor circuit, the blue-light diode, and the Kavcic detectors are not in that subset of patents. But even if they were, there is no indication that the University of Wisconsin-Madison would have exercised its patent rights with any less vigor just because the original research was funded by public funds. Today, universities are fully expected to assert their patent rights aggressively regardless of the source of funding for the original research.

You can have an answer for every question and still lose the debate

It is this litigious attitude that puts off many observers. While the law may very well allow universities to be litigious, universities could still refuse to exercise their rights under circumstances in which those rights are not easily reconciled with the public mission of the university.

University administrators, tech transfer personnel, and particularly the legal teams winning infringement cases have legitimate reasons to wonder why universities are publicly scorned. After all, they are acting within the law and simply protecting their patent rights; they are doing what any rational person would do. They may be really surprised when critics accuse universities of becoming allies of patent trolls, or of aiding and abetting their actions. Such accusations are unwarranted. Trolls are truants; the universities are venerable institutions. Patent trolls would exploit the ambiguities of patent law and the burdens of due process to their own benefit and to the detriment of truly productive businesses and persons. In stark contrast, universities are long established partners of democracy, respected beyond ideological divides for their abundant contributions to society.

The critics may not be fully considering the intricacies of patent law. Or they may forget that universities are in need of additional revenue—higher education has not seen public financial support increase in recent years, with federal grants roughly stagnant and state funding falling drastically in some states. Critics may also ignore that revenues collected from licensing of patents, favorable court rulings, and out-of-court settlements are to a large extent (usually two-thirds of the total) plugged back into the research enterprise.

University attorneys may have an answer for every point that critics raise, but the overall concern of critics should not be dismissed outright. Given that many if not most university patents can be traced back to research funded by tax dollars, there is a legitimate reason for observers to expect universities to manage their patents with a degree of restraint. There is also a legitimate reason for public disappointment when universities do not seem to endeavor to balance the tensions between their rights and duties.

Substantive steps to improve the universities’ public image

Universities can become more responsive to public expectations about their character not only by promoting their good work, but also by taking substantive steps to correct misperceptions.

First, when universities discover a case of proven infringement, they should take companies to court only as a measure of last resort. If a particular company refuses to negotiate in good faith and an infringement case ends up in court, the universities should be prepared to demonstrate to the court of public opinion that they have tried, with sufficient insistence and time, to negotiate a license and even made concessions in pricing the license. In the case of the predictor circuit patent, it seems that the University of Wisconsin-Madison tried to license the technology and Apple refused, but the university would be in a much better position if it could demonstrate that the licensing deals offered to Apple would have turned out to be far less expensive for the tech company.

Second, universities would be well advised not to join any efforts to lobby Congress for stronger patent protection. At least two reasons substantiate this suggestion. First, as a matter of principle, the dogmatic belief that without patents there is no innovation is wrong. Second, as a matter of material interest, universities as a group do not have a financial interest in patenting. It’s worth elaborating these points a bit more.

Neither historians nor social science researchers have settled the question about the net effects of patents on innovation. While there is evidence of social benefits from patent-based innovation, there is also evidence of social costs associated with patent monopolies, and even more evidence of momentous innovations that required no patents. What’s more, the net social benefit varies across industries and over time. Research shows economic areas in which patents do spur innovation and economic sectors where they actually hinder it. This research explains, for instance, why some computer and Internet giants lobby Congress in the opposite direction to the biotech and big pharma industries. Rigorous industrial surveys of the 1980s and 1990s found that companies in most economic sectors did not use patents as their primary tool to protect their R&D investments.

Yet patenting has increased rapidly over the past four decades, including in industries that once were uninterested in patents. Economic analyses have shown that this new patenting is a business strategy against patent litigation: companies are building patent portfolios as a defensive strategy, not because they are innovating more. The university’s public position on patent policy should acknowledge that the debate on the impact of patents on innovation is not settled and that this impact cannot be observed in the aggregate, but must be considered in the context of each specific economic sector, industry, or even market. From this vantage point, universities could then turn up or down the intensity with which they negotiate licenses and pursue compensation for infringement. Universities would better assert their commitment to their public mission if they computed, on a case-by-case basis, the balance between social benefits and costs for each of their controversial patents.

As to the material interest in patents, it is understandable that some patent attorneys or the biotech lobby publicly espouse the dogma of patents, that there is no innovation without patents. After all, their livelihood depends on it. However, research universities as a group do not have any significant financial interest in stronger patent protection. As I have shown in a previous Brookings paper, the vast majority of research universities earn very little from their patent portfolios and about 87% of tech transfer offices operate in the red. Universities as a group receive so little income from licensing and asserting their patents relative to the generous federal support (below 3%), that if the federal government were to declare that grant reviewers should give a preference to universities that do not patent, all research universities would stop the practice at once. It is true that a few universities (like the University of Wisconsin-Madison) raise significant revenue from their patent portfolio, and they will continue to do so regardless of public protestations. But the majority of universities do not have a material interest in patenting.

Time to get it right on anti-troll legislation

Last year, the House of Representatives passed legislation closing loopholes and introducing disincentives for patent trolls. Just as mirror legislation was about to be considered in the Senate, Sen. Patrick Leahy withdrew it from the Judiciary Committee. It was reported that Sen. Harry Reid forced Mr. Leahy’s hand to kill the bill in committee. In the public sphere, the shrewd lobbying efforts to derail the bill were perceived to serve pro-troll interests. The lobbying came from pharmaceutical companies, biotech companies, patent attorneys, and, to the surprise of everyone, universities. Little wonder that critics overreacted and suggested universities were in partnership with trolls: even if they were wrong, these accusations stung.

University associations took that position out of a sincere belief in the dogma of patents and out of fear that the proposed anti-troll legislation limited their ability to sue patent infringers. However, their convictions stand on shaky ground and their material interests are not those of the vast majority of universities.

A reversal of that position is not only possible, but would be timely. When anti-troll legislation is again introduced in Congress, universities should distance themselves from efforts to protect the policy status quo that so benefits patent trolls. It is not altogether improbable that Congress sees fit to exempt universities from some of the requirements that the law would impose. University associations could show Congress the merit of such exemptions in consideration of the universities’ constant and significant contributions to states, regions, and the nation. However, no such concessions could ever be expected if the universities continue to place themselves in the company of those who profit from patent management.

No asset is more valuable for universities than their prestige. It is the ample recognition of their value in society that guarantees tax dollars will continue to flow into universities. While acting legally to protect their patent rights, universities are nevertheless toying with their own legitimacy. Let those universities that stand to gain from litigation act in their self-interest, but do not let them speak for all universities. When university associations advocate for stronger patent protection, they do the majority of universities a disservice. These associations should better represent the interests of all their members by advocating a more neutral position about patent reform, by publicly praising universities’ restraint on patent litigation, and by promoting a culture and readiness in technology transfer offices to appraise each patent not by its market value but by its social value. At the same time, the majority of universities that obtain neither private nor social benefits from patenting should press their political representatives to adopt a more balanced approach to policy advocacy, lest they squander the reputation of the entire university system.

Image Source: © Stephen Lam / Reuters
      
 
 




al

Patent infringement suits have a reputational cost for universities


This post originally appeared on the Center for Technology Innovation’s TechTank blog.

Universities cash handsome awards on infringement cases

This October, a jury found Apple Inc. guilty of infringing a patent of the University of Wisconsin-Madison (UW) and ordered the tech giant to pay $234 million. The university scored a big financial victory, but this hardly meant any gain for the good name of the university.

The plaintiffs argued successfully in court that Apple infringed their 1998 patent on a predictor circuit that greatly improved the efficiency of microchips used in the popular iPhone 5s, 6, and 6 Plus. Apple first responded by challenging the validity of the patent, but the US Patent and Trademark Office ruled in favor of the university. Apple plans to appeal, but the appellate court is not likely to reverse the lower court’s decision.

This is not the first time this university has asserted its patents rights (UW sued Intel in 2008 for this exact same patent and reportedly settled for $110 million). Nor is this the first time universities in general have taken infringers to court. Prominent cases in recent memory include Boston University, which sued several companies for infringement of a patent for blue light-emitting diodes and settled out of court with most of them, and Carnegie Mellon, who was awarded $237 million by the federal appellate court on its infringement suit against Marvell, a semiconductor company, for its use of an enhanced detector of data in hard drives called Kavcic detectors.

Means not always aligned with aims in patent law

When university patented inventions emerge from federal research grants, infringement suits test the accepted interpretations of current patent law.

The Bayh-Dole Act of 1980 extended patent law and gave small-business and universities the right to take title to patents from federal research grants—later it was amended to extend the right to all federal grantees regardless of size. The ostensible aim of this act is to “to promote the utilization of inventions arising from federally supported research or development.” Under the law, a condition for universities (or any other government research performers) to keep their exclusive rights on those patents is that they or their licensees take “effective steps to achieve practical application” of those patents. Bayh-Dole was not designed to create a new source of revenue for universities. If companies are effectively using university technologies, Bayh-Dole’s purpose is served without need of patents.

To understand this point, consider a counterfactual: What if the text of Bayh-Dole had been originally composed to grant a conditional right to patents for federal research grantees? The condition could be stated like this: “This policy seeks to promote the commercialization of federally funded research and to this end it will use the patent system. Grantees may take title to patents if and only if other mechanisms for disseminating and developing those inventions into useful applications prove unsuccessful.” Under this imagined text, the universities could still take title to patents on their inventions if they or the U.S. Patent and Trademark Office were not aware that the technologies were being used in manufactures.

But no court would find their infringement claim meritorious if the accused companies could demonstrate that, absent of willful infringement, they had in fact used the technologies covered by university patents in their commercial products. In this case, other mechanisms for disseminating and developing the technologies would have proven successful indeed. The reality that Bayh-Dole did not mandate such a contingent assignation of rights creates a contradiction between its aims and the means chosen to advance those aims for the subset of patents that were already in use by industry.

I should remark that UW’s predictor circuit resulted from grants from NSF and DARPA, and there is no indication that the university exercised its patent rights with any less vigor just because the original research was publicly funded. In fact, universities are fully expected to assert their patent rights aggressively regardless of the source of funding for the original research.

You can have an answer for every question and still lose the debate

It is this litigious attitude that puts off many observers. While the law may very well allow universities to be litigious, universities could still refuse to exercise their rights under circumstances in which those rights are not easily reconciled with the public mission of the university.

University administrators, tech transfer personnel, and particularly the legal teams winning infringement cases have legitimate reasons to wonder why universities are publicly scorned. After all, they are acting within the law and simply protecting their patent rights; they are doing what any rational actor would do. They may be genuinely surprised when critics accuse universities of becoming allies of patent trolls, or of aiding and abetting their actions. Such accusations are unwarranted. Trolls are truants; universities are venerable institutions. Patent trolls exploit the ambiguities of patent law and the burdens of due process to their own benefit and to the detriment of truly productive businesses and persons. In stark contrast, universities are long-established partners of democracy, respected across ideological divides for their abundant contributions to society.

The critics may not be fully considering the intricacies of patent law. Or they may forget that universities are in need of additional revenue—higher education has not seen public financial support increase in recent years, with federal grants roughly stagnant and state funding falling drastically in some states. Critics may also ignore that revenues collected from patent licensing, favorable court rulings, and out-of-court settlements are to a large extent (usually two thirds of the total) plowed back into the research enterprise.

University attorneys may have an answer for every point that critics raise, but the overall concern of critics should not be dismissed outright. Given that many if not most university patents can be traced back to research funded by tax dollars, there is a legitimate reason for observers to expect universities to manage their patents with a degree of restraint. There is also a legitimate reason for public disappointment when universities do not seem to endeavor to balance the tensions between their rights and duties.

Substantive steps to improve the universities’ public image

Universities can become more responsive to public expectations about their character not only by promoting their good work, but also by taking substantive steps to correct misperceptions.

First, when universities discover a case of proven infringement, they should take companies to court only as a measure of last resort. If a particular company refuses to negotiate in good faith and an infringement case ends up in court, the universities should be prepared to demonstrate to the court of public opinion that they have tried, with sufficient insistence and time, to negotiate a license and even made concessions in pricing the license. In the case of the predictor circuit patent, it seems that the University of Wisconsin-Madison tried to license the technology and Apple refused, but the university would be in a much better position if it could demonstrate that the licensing deals offered to Apple would have turned out to be far less expensive for the tech company.

Second, universities would be well advised not to join any efforts to lobby Congress for stronger patent protection. At least two reasons substantiate this suggestion. First, as a matter of principle, the dogmatic belief that without patents there is no innovation is wrong. Second, as a matter of material interest, universities as a group do not have a financial interest in patenting. It’s worth elaborating on these points a bit more.

Neither historians nor social science researchers have settled the question of the net effects of patents on innovation. While there is evidence of social benefits from patent-based innovation, there is also evidence of social costs associated with patent monopolies, and even more evidence of momentous innovations that required no patents. What’s more, the net social benefit varies across industries and over time. Research shows economic areas in which patents do spur innovation and economic sectors where they actually hinder it. This research explains, for instance, why some computer and Internet giants lobby Congress in the opposite direction to the biotech and big pharma industries. Rigorous industrial surveys of the 1980s and 1990s found that companies in most economic sectors did not use patents as their primary tool to protect their R&D investments.

Yet patenting has increased rapidly over the past four decades. This increase includes industries that once were uninterested in patents. Economic analyses have shown that this new patenting is a business strategy against patent litigation. Companies are building patent portfolios as a defensive strategy, not because they are innovating more. The universities’ public position on patent policy should acknowledge that the debate on the impact of patents on innovation is not settled and that this impact cannot be observed in the aggregate, but must be considered in the context of each specific economic sector, industry, or even market. From this vantage point, universities could then turn up or down the intensity with which they negotiate licenses and pursue compensation for infringement. Universities would better assert their commitment to their public mission if they computed, on a case-by-case basis, the balance between social benefits and costs for each of their controversial patents.

As to the material interest in patents, it is understandable that some patent attorneys or the biotech lobby publicly espouse the dogma of patents, that there is no innovation without patents. After all, their livelihood depends on it. However, research universities as a group do not have any significant financial interest in stronger patent protection. As I have shown in a previous Brookings paper, the vast majority of research universities earn very little from their patent portfolios and about 87% of tech transfer offices operate in the red. Universities as a group receive so little income from licensing and asserting their patents relative to the generous federal support (below 3%), that if the federal government were to declare that grant reviewers should give a preference to universities that do not patent, all research universities would stop the practice at once. It is true that a few universities (like the University of Wisconsin-Madison) raise significant revenue from their patent portfolio, and they will continue to do so regardless of public protestations. But the majority of universities do not have a material interest in patenting.

Time to get it right on anti-troll legislation

Last year, the House of Representatives passed legislation closing loopholes and introducing disincentives for patent trolls. Just as mirror legislation was about to be considered in the Senate, Sen. Patrick Leahy withdrew it from the Judiciary Committee. It was reported that Sen. Harry Reid forced Mr. Leahy’s hand to kill the bill in committee. In the public sphere, the shrewd lobbying efforts to derail the bill were perceived as serving pro-troll interests. The lobbying came from pharmaceutical companies, biotech companies, patent attorneys, and, to everyone’s surprise, universities. Little wonder that critics overreacted and suggested universities were in partnership with trolls: even if they were wrong, these accusations stung.

University associations took that position out of a sincere belief in the dogma of patents and out of fear that the proposed anti-troll legislation limited the universities’ ability to sue patent infringers. However, their convictions stand on shaky ground and only a few universities sue for infringement. In taking that policy position, university associations are representing neither the interests nor the beliefs of the vast majority of universities.

A reversal of that position is not only possible but would be timely. When anti-troll legislation is again introduced in Congress, universities should distance themselves from efforts to protect the policy status quo that so benefits patent trolls. It is not altogether improbable that Congress would see fit to exempt universities from some of the requirements the law would impose. University associations could show Congress the merit of such exemptions in consideration of the universities’ constant and significant contributions to states, regions, and the nation. However, no such concessions can be expected if universities continue to place themselves in the company of those who profit from patent management.

No asset is more valuable for universities than their prestige. It is the ample recognition of their value in society that guarantees tax dollars will continue to flow into universities. While acting legally to protect their patent rights, universities are nevertheless toying with their own legitimacy. Let those universities that stand to gain from litigation act in their self-interest, but do not let them speak for all universities. When university associations advocate for stronger patent protection, they do the majority of universities a disservice. These associations should better represent the interests of all their members by advocating a more neutral position about patent reform, by publicly praising universities’ restraint on patent litigation, and by promoting a culture and readiness in technology transfer offices to appraise each patent not by its market value but by its social value. At the same time, the majority of universities that obtain neither private nor social benefits from patenting should press their political representatives to adopt a more balanced approach to policy advocacy, lest they squander the reputation of the entire university system.

Editor's Note: The post was corrected to state that UW’s predictor circuit did originate from federally funded research.

Image Source: © Stephen Lam / Reuters

State of the Union’s challenge: How to make tech innovation work for us?


Tuesday night, President Obama presented four critical questions about the future of America and I should like to comment on the first two:

  1. How to produce equal opportunity, emphasizing economic security for all.
  2. In his words, “how do we make technology work for us, and not against us,” particularly to meet the “urgent challenges” of our days.

The challenges the president wishes to meet by means of technological development are climate change and cancer. Let’s consider cancer first. There are plenty of reasons to be skeptical: this is not the first presidential war on cancer; President Nixon tried it once and, alas, cancer still has the upper hand. It is ironic that Mr. Obama chose this particular “moonshot,” because not only are the technical aspects of cancer more uncertain than those of space travel, but political support for the project is also vastly different, and we cannot be sure that even another Democrat in the White House would see this project to fruition. In effect, neither Mr. Obama nor his appointed “mission control,” Vice President Biden, has time in office to see fruits from their efforts on this front.

The second challenge the president wishes to address with technology is problematic beyond technical and economic feasibility (producing renewable energy at competitive prices): curbing carbon emissions has become politically intractable. The president correctly suggested that being leaders in the renewable energy markets of the future makes perfect business sense, even for global warming skeptics. Nevertheless, markets have a political economy, and current energy giants have a material interest in blocking any changes to the rules that so favor them (including significant federal subsidies). Only when the costs of exploration, extraction, and distribution of fossil fuels rise above those of renewable sources can we expect policy changes enabling an energy transition to become feasible. And even when renewables become competitive on a large scale, it is not very likely that their production will be controlled by new industrial players. Such is the political economy of free markets. What’s more, progressives should be wary of standard solutions that would raise the cost of energy (such as a tax on carbon emissions), because low-income families are quite sensitive to energy prices; the cost of electricity, gas, and transportation is a far larger share of their income than of their wealthier neighbors’.

It’s odd that the president proposes technological solutions to challenges that call for political solutions. Again, in saying this, I’m allowing the assumption that the technical side is manageable, which is not necessarily sound. The technical and economic complexity of these problems should only compound the political hurdles. If I’m skeptical that technological fixes can curb carbon emissions or cure cancer, I am simply vexed by the president’s answer to the question on economic opportunity and security: expand the safety net. It is not that it wouldn’t work; it worked wonders creating prosperity and enlarging the middle class in the post-World War II period. The problem is that enacting welfare state policies promises to be a hard political battle that, even if won, could result in pyrrhic victories. Mr. Obama’s greatest achievement in expanding the safety net was, of course, the Affordable Care Act. But this policy success came at a very high cost: a majority of voters question the legitimacy of that policy. Even its eponymous name, Obamacare, was coined as a term of derision. It is bizarre that opposition to the reform is often found among people who benefit from it. We can blame the systematic campaign against it in every electoral contest, the legal subterfuges brought up to dismantle it (which the ACA survived, severely bruised), and the AM radio vitriol, but even controlling for the dirty war on healthcare reform, passing such monumental legislation strictly along party lines has made it a lightning rod of distrust in government.

Progressives are free to try to increase economic opportunity following the welfare state textbook. They will meet the same opposition that Mr. Obama encountered. However, where progressives and conservatives could agree is on increasing opportunities for entrepreneurs, and nothing gives an edge to free enterprise more than innovation. Market competition is the selection mechanism by which an elite of enterprises rises from the legion created in any given year; this elite, equipped with a new productive platform, can wrest markets from the old guard of incumbents. This is not the only way innovation takes place: monopolies and cartels can produce innovation, but with different outcomes. In competitive markets, innovation is the instrument of product differentiation; therefore, it improves quality and cuts consumer prices. In monopolistic markets, innovation also takes place, but generally as a monopolist’s effort to raise barriers to entry and secure high profits. Innovation can take place while preserving social protections for the employees of new industries, or it can undermine the job security of their labor force (a concern with the sharing economy). These different modes of innovation are a function of the institutions that govern innovation, including industrial organization and labor and consumer protections.

What the president did not mention is that question two can answer question one: technological development can improve economic opportunity and security, and doing so is likely to be more politically feasible than addressing the challenges of climate change and cancer. Shaping the institutions that govern innovative activity to favor modes of innovation that benefit a broad base of society is an achievable goal, and could indeed be a standard by which his and future administrations are measured. This is because these policy levers are not the province of the welfare state. They are policy domains that have historically enjoyed bipartisan consensus (such as federal R&D funding and private R&D tax credits) or low contestation (support for small business, tech transfer, loan guarantees).

As Mr. Obama himself suggested, technology can indeed be made to work for us, all of us.

Image Source: © POOL New / Reuters