American Foreign Policy in Retreat? A Discussion with Vali Nasr
By webfeeds.brookings.edu
On May 14, Foreign Policy at Brookings hosted Vali Nasr, author of The Dispensable Nation: American Foreign Policy in Retreat (Knopf Doubleday Publishing, 2013), for a discussion on the state of U.S. power globally and whether American foreign policy under the Obama administration is in retreat.
Iran, Turkey’s New Ally?
A bribery and corruption scandal has plunged Turkey into crisis. Vali Nasr writes that by improving ties with Iran, Prime Minister Recep Tayyip Erdogan has an opportunity to repair his weakened authority and restore Turkey's international standing, if he shows that Turkey can once again play a central role in the Middle East.
Diplomacy Can Still Save Iraq
With the Islamic State of Iraq and Syria's swift sweep across northern Iraq, many believe the conflict will only end with the Middle East's borders redrawn. Vali Nasr writes that such an outcome can still be avoided if the United States turns to diplomacy rather than staging a military intervention.
Understanding Iran beyond the deal
On October 15, the Center for Middle East Policy hosted a conversation with Suzanne Maloney, deputy director of the Brookings Foreign Policy program and author of the recently released book Iran’s Political Economy since the Revolution (Cambridge University Press, 2015); Javier Solana, Brookings distinguished fellow and former EU High Representative for the Common Foreign and Security Policy; and Vali Nasr, dean of the Johns Hopkins University School of Advanced International Studies and nonresident senior fellow at Brookings. The three experts discussed Iran today, the implications of the nuclear agreement, and more.
In the Wake of BCRA: An Early Report on Campaign Finance in the 2004 Elections
Published On: Tue, 15 Jun 2004
ABSTRACT: Early experience with federal campaign finance reform suggests that the new law is fulfilling its primary objective of severing links between policymakers and large donors, and thus reducing the potential for corruption in the political process. Instead of languishing or seeking to circumvent the law, the national political parties have responded to the ban on soft money by increasing their hard money resources. While outside groups appear active, particularly on the Democratic side, their soft money financing should remain a small fraction of what candidates and parties will raise and spend in the 2004 elections.
To read the full article, please visit The Forum's website.
Authors: Anthony Corrado, Thomas E. Mann
Publication: The Forum
Party Fundraising Success Continues Through Mid-Year
Published On: Mon, 02 Aug 2004
With only a few months remaining before the 2004 elections, national party committees continue to demonstrate financial strength and noteworthy success in adapting to the more stringent fundraising rules imposed by the Bipartisan Campaign Reform Act (BCRA). A number of factors, including the deep partisan divide in the electorate, the expectations of a close presidential race, and the growing competition in key Senate and House races, have combined with recent party investments in new technology and the emergence of the Internet as a major fundraising tool to produce what one party chairman has described as a "perfect storm" for party fundraising.[1] Consequently, both national parties have exceeded the mid-year fundraising totals achieved in 2000, and both approach the general election with substantial amounts of money in the bank.
After eighteen months of experience under the new rules, the national parties are still outpacing their fundraising efforts of four years ago. As of June 30, the national parties have raised $611.1 million in federally regulated hard money alone, as compared to $535.6 million in hard and soft money combined at a similar point in the 2000 election cycle. The Republicans lead the way, taking in more than $381 million, as compared to about $309 million in hard and soft money by the end of June 2000. The Democrats have also raised more, bringing in $230 million, as compared to about $227 million in hard and soft money four years ago.
Furthermore, with six months remaining in the election cycle, both national parties have already raised more hard money than they did in the 2000 election cycle.[2] In fact, by the end of June, every one of the Democratic and Republican national party committees had already exceeded its hard money total for the entire 2000 campaign.[3] This surge in hard money fundraising has allowed the national party committees to replace a substantial portion of the revenues they previously received through unlimited soft money contributions. Through June, these committees have already taken in enough additional hard money to compensate for the $254 million of soft money that they had garnered by this point in 2000, which represented a little more than half of their $495 million in total soft money receipts in the 2000 election cycle.
View the accompanying data tables (PDF - 11.4 KB)
[1] Terrence McAuliffe, Democratic National Committee chairman, quoted in Paul Fahri, "Small Donors Grow Into Big Political Force," Washington Post, May 3, 2004, p. A11.
[2] In 2000, the Republican national party committees raised $361.6 million in hard money, while the Democratic national committees raised $212.9 million. These figures are based on unadjusted data and do not take into account any transfers of funds that may have taken place among the national party committees.
[3] The election cycle totals for 2000 can be found in Federal Election Commission, "FEC Reports Increase in Party Fundraising for 2000," press release, May 15, 2001. Available at http://www.fec.gov/press/press2001/051501partyfund/051501partyfund.html (viewed July 28, 2004).
Author: Anthony Corrado
Party Polarization and Campaign Finance
Published On: Tue, 15 Jul 2014
There is a lively debate today over whether campaign finance reforms have weakened the role of political parties in campaigns. This seems an odd argument in an era of historically high levels of party loyalty, both on roll calls in Congress and in voting in the electorate. Are parties too strong and unified or too weak and fragmented? Have they been marginalized in the financing of elections, or is their role at least as strong as it has ever been? Does the party role in campaign finance (weak or strong) materially shape the capacity to govern?
In addition, the increasing involvement in presidential and congressional campaigns of large donors, especially through Super PACs and politically active nonprofit organizations, has raised serious concerns about whether the super-wealthy are buying American democracy. Ideologically based outside groups financed by wealthy donors appear to be sharpening partisan differences and resisting efforts to forge agreement across parties. Many reformers have advocated steps to increase the number of small donors to balance the influence of the wealthy. But some scholars have found evidence suggesting that small donors are more polarizing than large donors. Can that be true? If so, are there channels other than the ideological positioning of the parties through which small donors might play a more constructive role in our democracy?
In this paper, Thomas Mann and Anthony Corrado attempt to shed light on both of these disputed features of our campaign finance system and then assess whether campaign finance reform offers promise for reducing polarization and strengthening American democracy.
They conclude not only that campaign finance reform is a weak tool for depolarizing American political parties, but also that some break in the party wars is probably a prerequisite to any serious pushback against the broader deregulation of campaign finance now underway.
Authors: Thomas E. Mann, Anthony Corrado
Image Source: © Gary Cameron / Reuters
New Paper: Party Polarization and Campaign Finance
Published On: Thu, 17 Jul 2014
The Supreme Court’s recent McCutcheon decision has reinvigorated the discussion on how campaign finance affects American democracy. Seeking to dissect the complex relationship between political parties, partisan polarization, and campaign finance, Tom Mann and Anthony Corrado’s new paper on Party Polarization and Campaign Finance reviews the landscape of hard and soft money in federal elections and asks whether campaign finance reform can abate polarization and strengthen governing capacity in the United States. The paper tackles two popular contentions within the campaign finance debate: First, has campaign finance reform altered the role of political parties as election financiers and therefore undermined deal making and pragmatism? Second, would a change in the composition of small and large individual donors decrease polarization in the parties?
The Role of Political Parties in Campaign Finance
Political parties have witnessed a number of shifts in their campaign finance role, including McCain-Feingold’s ban on party soft money in 2002. This has led many to ask whether the breakdown in compromise and governance and the rise of polarization have come about because parties have lost the power to finance elections. To assess that claim, the authors track the amount of money crossing national and state party books as an indicator of party strength. The empirical evidence shows no significant decrease in party strength after 2002 and holds that “both parties have compensated for the loss of soft money with hard money receipts.” In fact, the parties have increased their spending on congressional candidates more than six-fold since 1980. Despite the ban on soft money, the parties remain major players in federal elections.
Large and Small Donors in National Campaigns
Mann and Corrado then turn to non-party money and survey the universe of individual donors to evaluate “whether small, large or mega-donors are most likely to fuel or diminish the polarization that increasingly defines the political landscape.” The authors map the size and shape of individual giving and confront the concern that Super PACs, politically active nonprofits, and the super-wealthy are buying out American democracy. They ask: would a healthier mix of small and large donors reduce radicalization and balance out asymmetric polarization between the parties? The evidence suggests that increasing the role of small donors would have little effect on partisan polarization in either direction, because small donors tend to be highly polarized. Mann and Corrado note, however, that a healthier mix would champion democratic ideals like civic participation and equality of voice.
Taking both points together, Mann and Corrado find that campaign finance reform is insufficient for depolarizing the parties and improving governing capacity. They argue forcefully that polarization emerges from a broader political and partisan problem. Ultimately, they assert that “some break in the party wars is probably a prerequisite to any serious pushback to the broader deregulation of campaign finance now underway.”
Click to read Mann and Corrado’s full paper, Party Polarization and Campaign Finance.
Author: Ashley Gabriele
Image Source: © Gary Cameron / Reuters
Beyond great forces: How individuals still shape history
Published On: Tue, 15 Oct 2019
COVID-19 trends from Germany show different impacts by gender and age
Published On: Fri, 01 May 2020
The world is in the midst of a global pandemic, and all countries have been impacted significantly. In Europe, the most successful policy response to the pandemic has been Germany's, as measured by the decline in new COVID-19 cases in recent weeks and the consistent increase in recovered cases. This is also reflected in the…
Removing regulatory barriers to telehealth before and after COVID-19
Published On: Wed, 06 May 2020
Introduction
A combination of escalating costs, an aging population, and rising chronic health-care conditions that account for 75% of the nation's health-care costs paints a bleak picture of the current state of American health care.[1] In 2018, national health expenditures grew to $3.6 trillion and accounted for 17.7% of GDP.[2] Under current laws, national health…
How to increase financial support during COVID-19 by investing in worker training
Published On: Wed, 06 May 2020
It took just two weeks to exhaust one of the largest bailout packages in American history. Even the most generous financial support has limits in a recession. However, I am optimistic that a pandemic-fueled recession and mass underemployment could be an important opportunity to upskill the American workforce through loans for vocational training. Financially supporting…
Why AI systems should disclose that they’re not human
Published On: Thu, 07 May 2020
Why France? Understanding terrorism’s many (and complicated) causes
The terrible attack in Nice on July 14—Bastille Day—saddened us all. For a country that has done so much historically to promote democracy and human rights at home and abroad, France is paying a terrible and unfair price, even more than most countries. This attack will again raise the question: Why France?
Turkey after the coup attempt
On July 20, the Foreign Policy program at Brookings will host a panel discussion to consider the domestic and international consequences of the coup attempt in Turkey.
Obama’s legacy in African security and development
Published On: Mon, 25 Jul 2016
Obama’s presidency has witnessed widespread change throughout Africa. What legacy will he leave on the continent?
What Clinton should say in her DNC speech tonight
Published On: Fri, 29 Jul 2016
When she gives her speech tonight at the Democratic National Convention, Hillary Clinton will of course be at a crucial point in her campaign for the presidency. Her fellow Democrats—including her running mate Senator Tim Kaine, as well as Michael Bloomberg—have roundly criticized her Republican opponent Donald Trump this week. Vice President Biden and President Obama usefully offered a counterpoint to the…
What to do when containing the Syrian crisis has failed
Published On: Mon, 01 Aug 2016
Attacks across the Western world—including most recently in Nice, but also of course in Brussels, Paris, San Bernardino, and elsewhere—highlight the growing threat from extremism, with Syria as its home base. It’s time to recognize, therefore, that containment of the Syria crisis (which I think is essentially President Obama’s policy and which many in the…
Congo’s political crisis: What is the way forward?
Published On: Thu, 04 Aug 2016
On August 15, the Africa Security Initiative, part of the Brookings Center for 21st Century Security and Intelligence, will host an event focused on Congo and the broader region.
Hey, Kremlin: Americans can make loose talk about nukes, too
Published On: Thu, 04 Aug 2016
Over the past several years, Vladimir Putin and senior Russian officials have talked loosely about nuclear weapons, suggesting the Kremlin might not fully comprehend the awful consequences of their use. That has caused a degree of worry in the West. Now, the West has in Donald Trump—the Republican nominee to become the next president of the…
The Marketplace of Democracy: Electoral Competition and American Politics
Published On: Fri, 01 Sep 2006
Brookings Institution Press and Cato Institute, 2006. 312 pp.
Since 1998, U.S. House incumbents have won a staggering 98 percent of their reelection races. Electoral competition is also low and in decline in most state and primary elections. The Marketplace of Democracy combines the resources of two eminent research organizations, the Brookings Institution and the Cato Institute, to address the startling lack of competition in our democratic system. The contributors consider the historical development, legal background, and political aspects of a system that is supposed to be responsive and accountable yet for many is becoming stagnant, self-perpetuating, and tone-deaf. How did we get to this point, and what, if anything, should be done about it?
In The Marketplace of Democracy, top-tier political scholars also investigate the perceived lack of competition in arenas only previously speculated on, such as state legislative contests and congressional primaries. Michael McDonald, John Samples, and their colleagues analyze previous reform efforts, such as direct primaries and term limits, and the effects they have had on electoral competition. They also examine current reform efforts in redistricting and campaign finance regulation, as well as the impact of third parties. In sum, what does all this tell us about what might be done to increase electoral competition?
Elections are the vehicles through which Americans choose who governs them, and the power of the ballot enables ordinary citizens to keep public officials accountable. This volume considers different policy options for increasing the competition needed to keep American politics vibrant, responsive, and democratic.
Brookings Forum: "The Marketplace of Democracy: A Groundbreaking Survey Explores Voter Attitudes About Electoral Competition and American Politics," October 27, 2006.
Podcast: "The Marketplace of Democracy: Electoral Competition and American Politics," a Capitol Hill briefing featuring Michael McDonald and John Samples, September 22, 2006.
Contributors: Stephen Ansolabehere (Massachusetts Institute of Technology), William D. Berry (Florida State University), Bruce Cain (University of California-Berkeley), Thomas M. Carsey (Florida State University), James G. Gimpel (University of Maryland), Tim Groseclose (University of California-Los Angeles), John Hanley (University of California-Berkeley), John Mark Hansen (University of Chicago), Paul S. Herrnson (University of Maryland), Shigeo Hirano (Columbia University), Gary C. Jacobson (University of California-San Diego), Thad Kousser (University of California-San Diego), Frances E. Lee (University of Maryland), John C. Matsusaka (University of Southern California), Kenneth R. Mayer (University of Wisconsin-Madison), Michael P. McDonald (Brookings Institution and George Mason University), Jeffrey Milyo (University of Missouri-Columbia), Richard G. Niemi (University of Rochester), Nathaniel Persily (University of Pennsylvania Law School), Lynda W. Powell (University of Rochester), David Primo (University of Rochester), John Samples (Cato Institute), James M. Snyder Jr. (Massachusetts Institute of Technology), Timothy Werner (University of Wisconsin-Madison), and Amanda Williams (University of Wisconsin-Madison).
ABOUT THE EDITORS
John Samples directs the Center for Representative Government at the Cato Institute and teaches political science at Johns Hopkins University.
Michael P. McDonald
Ordering Information: ISBN 978-0-8157-5579-1, $24.95; ISBN 978-0-8157-5580-7, $54.95
The Marketplace of Democracy: A Groundbreaking Survey Explores Voter Attitudes About Electoral Competition and American Politics
Event Information
October 27, 2006, 10:00 AM - 12:00 PM EDT
Falk Auditorium, The Brookings Institution, 1775 Massachusetts Ave., NW, Washington, DC
Despite the attention on the mid-term races, few elections are competitive. Electoral competition, already low at the national level, is in decline in state and primary elections as well. Reformers, who point to gerrymandering and a host of other targets for change, argue that improving competition will produce voters who are more interested in elections, better informed on issues, and more likely to turn out to the polls.
On October 27, the Brookings Institution—in conjunction with the Cato Institute and The Pew Research Center—presented a discussion and a groundbreaking survey exploring the attitudes and opinions of voters in competitive and noncompetitive congressional districts. The survey, part of Pew's regular polling on voter attitudes, was conducted through the weekend of October 21. A series of questions explored the public's perceptions, knowledge, and opinions about electoral competitiveness.
The discussion also explored a publication that addresses the startling lack of competition in our democratic system. The Marketplace of Democracy: Electoral Competition and American Politics (Brookings, 2006) considers the historical development, legal background, and political aspects of a system that is supposed to be responsive and accountable, yet for many is becoming stagnant, self-perpetuating, and tone-deaf. Michael McDonald, editor and Brookings visiting fellow, moderated a discussion with co-editor John Samples, director of the Center for Representative Government at the Cato Institute, and Andrew Kohut and Scott Keeter from The Pew Research Center, who also discussed the survey.
Five Myths About Turning Out the Vote
Published On: Sun, 29 Oct 2006
If you're an upstanding U.S. citizen, you'll stand up and be counted this Election Day, right? Well, maybe not. Just because Americans can vote doesn't mean they do. But who shows up is what decides the tight races, which makes turnout one of the most closely watched aspects of every election -- and one that has fostered a number of myths. Here are five, debunked:
1. Thanks to increasing voter apathy, turnout keeps dwindling.
This is the mother of all turnout myths. There may be plenty of apathetic voters out there, but the idea that ever fewer Americans are showing up at the polls should be put to rest. What's really happening is that the number of people not eligible to vote is rising -- making it seem as though turnout is dropping. Those who bemoan a decline in American civic society point to the drop in turnout from 55.2 percent in 1972, when 18-year-olds were granted the right to vote, to the low point of 48.9 percent in 1996. But that's looking at the total voting-age population, which includes lots of people who aren't eligible to vote -- namely, noncitizens and convicted felons. These ineligible populations have increased dramatically over the past three decades, from about 2 percent of the voting-age population in 1972 to 10 percent today. When you take them out of the equation, the post-1972 "decline" vanishes. Turnout rates among those eligible to vote have averaged 55.3 percent in presidential elections and 39.4 percent in midterm elections for the past three decades. There has been variation, of course, with turnout as low as 51.7 percent in 1996 and rebounding to 60.3 percent by 2004. Turnout in the most recent election, in fact, is on a par with the low-60 percent turnout rates of the 1950s and '60s.
2. Other countries' higher turnout indicates more vibrant democracies.
You can't compare apples and oranges.
Voting rules differ from nation to nation, producing different turnout rates. Some countries have mandatory voting. If Americans were fined $100 for playing voter hooky on Election Day, U.S. participation might increase dramatically. But in fact, many people with a ballot pointed at their head simply cast a blank one or a nonsense vote for Mickey Mouse. Moreover, most countries have national elections maybe once every five years; the United States has presidential or congressional elections every two years. Frequent elections may lead to voter fatigue. New European Union elections, for instance, seem to be depressing turnout in member countries. After decades of trailing turnout in the United Kingdom, U.S. turnout in 2004 was on a par with recent British elections, in which turnout was 59.4 percent in 2001 and 61.4 percent in 2005. Americans are asked to vote more often -- in national, state, local and primary contests -- than the citizens of any other country. They can be forgiven for missing one or two elections, can't they? Even then, over the course of several elections, Americans have more chances to participate, and their turnout may be higher than that in countries where people vote only once every five years.
3. Negative ads turn off voters and reduce turnout.
Don't be so sure. The case on this one is still open. Negative TV advertising increased in the mid-1980s, but turnout hasn't gone down correspondingly. The negative Swift boat campaign against Sen. John F. Kerry (D-Mass.) apparently did little to depress turnout in the 2004 presidential race. Some academic studies have found that negative advertising increases turnout. And that's not so surprising: A particularly nasty ad grabs people's attention and gets them talking. People participate when they're interested. A recent GOP attack ad on Rep. Harold E. Ford Jr.
(D-Tenn.), a Senate candidate, has changed the dynamic of the race, probably not because it changed minds or dissuaded Democrats, but because it energized listless Republicans. We'll have to wait to see whether the attack on Ford backfires because voters perceive it as unfair. That's the danger of going negative. So campaigns tend to stick to "contrast ads," in which candidates contrast their records with those of their opponents. When people see stark differences between candidates, they're more likely to vote.
4. The Republican "72-hour campaign" will win the election.
Not necessarily. You can lead citizens to the ballot, but you can't make them vote. Republicans supposedly have a super-sophisticated last-minute get-out-the-vote effort that identifies voters who'll be pivotal in electing their candidates. Studies of a campaign's personal contact with voters through phone calls, door-to-door solicitation and the like find that it does have some positive effect on turnout. But people vote for many reasons other than meeting a campaign worker, such as the issues, the closeness of the election and the candidates' likeability. Further, these studies focus on get-out-the-vote drives in low-turnout elections, when contacts from other campaigns and outside groups are minimal. We don't know what the effects of mobilization drives are in highly competitive races in which people are bombarded by media stories, television ads and direct mail. Republican get-out-the-vote efforts could make a difference in close elections if Democrats simply sat on the sidelines. But this year Democrats have vowed to match the GOP mobilization voter for voter. So it'll take more than just knowing whether a prospective voter owns a Volvo or a BMW for Republicans to eke out victory in a competitive race.
5. Making voter registration easier would dramatically increase turnout.
Well, yes and no.
In 1993, the Democratic government in Washington enacted "Motor Voter," a program that allowed people to register to vote when they received their driver's license or visited a welfare office. Democrats thought that if everyone were registered, turnout rates would increase -- by as much as 7 percentage points. But while many people registered to vote, turnout didn't go up much. Subsequent studies found only small increases in turnout attributable to Motor Voter, perhaps 2 percentage points. Sizable increases in turnout can be seen in states with Election Day registration, which allows people to register when they vote. This may be related to the fact that lots of people don't make up their minds to vote until Election Day, rather than months in advance when they get a license.
Author: Michael P. McDonald
Publication: The Washington Post
Super Tuesday Turned Into a Super Flop
Published On: Mon, 11 Feb 2008
Syndrome, the villain in the 2004 animated movie “The Incredibles,” is an ordinary guy who has a plan to put an end to superheroes by making everyone a superhero. Syndrome’s evil machinations came to fruition on Tuesday, Feb. 5, 2008. The political parties permit states to hold their presidential nominating contests as early as the first Tuesday in February, with familiar states such as Iowa and New Hampshire given exemptions. Other states, jealous of the attention lavished on those early states, plotted to move their primaries or caucuses earlier, sometimes even violating party rules and suffering a penalty as a consequence. To quote Syndrome: when everyone is a super, no one is a super. And so it was with the Super Tuesday states.
Although not intended, a national primary emerged as 24 states fell over one another in a Keystone Kops spectacle by moving up their primaries and caucuses to Feb. 5. Some argued that this would be good for the political parties in the general election, since only a candidate who could run a national campaign would win the nomination. Ironically, the candidates acted just as they do in a general election, where they concentrate on the competitive battleground states. On Super Tuesday they decided where they could be competitive, where they could pick up delegates, and targeted their scarce resources to those states. States that thought they would be relevant found themselves irrelevant safe states that the candidates passed by, simply helping run up delegate totals for their favored candidates.
A year ago, the campaigns were focused on building organizations and cultivating supporters in the early contest states of Iowa and New Hampshire. Some candidate strategies were solely focused on jump-starting their campaigns by winning these early states, and others hoped that decisive wins would quickly seal the nomination.
Some of the better-financed campaigns could be forward-looking, but they still would not want to spend time and money on Super Tuesday states unless they were sure they would need to. By the time the nomination process was whittled down to the remaining players and the campaigns could start their Super Tuesday planning, little time was left to advertise, send direct mail and build volunteer organizations. Even where the campaigns decided they could be competitive, too many states were in play for the campaigns to pour in the same resources they did in Iowa and New Hampshire.
The resulting dynamic had a twofold effect on voter participation in this year of high voter interest. Lack of competition drove down turnout in states such as New York, where Sens. Hillary Rodham Clinton (D-N.Y.) and John McCain (R-Ariz.) were expected to win big victories. Only 19 percent of eligible New Yorkers voted, compared with 53 percent in New Hampshire. Lack of organization and campaigning drove turnout down across the board, as all primary states combined averaged a turnout rate of 29 percent. Poor organization particularly afflicted the caucuses, which require campaign organizations to mobilize supporters to give up an entire evening. While 16 percent of eligible Iowans attended caucuses, the combined attendance rate for the four states holding caucuses for both political parties was a meager 6 percent.
The silver lining is that continued voter interest buoyed participation where competition and organization failed. Turnout likely would have been much worse if the nominees had already been decided. As we move forward from Super Tuesday, those states that did not crowd to the front of the line will now find themselves being courted a little more graciously and intensely by the campaigns. This should help increase voter participation.
However, the nomination battles are still coming fast and furious, so the campaigns cannot give later states the extended engagement they gave the early states. Some campaigns now face hard choices about where to spend their limited remaining resources. Except perhaps for a few remaining intensely fought competitive states, voter turnout has thus likely peaked in this election cycle. We expected Super Tuesday to soar into the stratosphere. Instead, it was more of a flop, a cheap imitation of Iowa and New Hampshire. When the dust settles after this primary season and we look back at how the parties nominate their candidates, we will still be searching for a way to give voters in all states more equitable involvement. Authors Michael P. McDonald Publication: Roll Call Full Article
y Why the Rules Mattered In the Nomination Race By webfeeds.brookings.edu Published On :: Wed, 04 Jun 2008 12:00:00 -0400 Hillary Clinton was not ready on day one. The autopsies of her defeat in the Democratic nomination contest all point to a series of early blunders by her campaign. Her campaign plan was simple: leverage her name recognition, early money lead, and organization to win the Super Tuesday contests, thereby wrapping up the Democratic nomination in early February. As the inevitable winner, she could be the centrist candidate on the Iraq war and tout her experience as a problem solver. But her overconfident and overpriced campaign consultants failed to recognize that in a “change” election, caucus attendees were not excited by an Iraq war centrist who also happened to be a Washington insider. Clinton’s lack of a plan to effectively contest the caucuses allowed Barack Obama to win what would be the all-important delegate race and, more important, gave him the mantle of momentum while she appeared mired in the mud at a crucial mid-February stage of the campaign. But she was ready on day two. She hit her stride late in the game by impressively winning a series of primary contests. All the more remarkable: she did so on a shoestring election-to-election budget while the media wrote her off as a spoiler. With a newfound voice that emphasized she was a populist who would fight for the people, her new message resonated particularly well as the economy continued to falter. Unfortunately, by the time she retooled her message and got rid of the people who had driven her campaign into the ditch (campaign manager Patti Solis Doyle and chief strategist Mark Penn), it was already too late. Obama had built a nearly insurmountable lead in the delegate count. It is here that the rules matter.
If states had not moved up or “frontloaded” the dates of their primaries and caucuses, under the misimpression that doing so would give them a greater voice in the 2008 nomination, Clinton might be the Democratic nominee. She would have received more delegates from Florida and Michigan, two states that she would likely have won had all the Democratic candidates campaigned there vigorously, but she was denied a full slate because these states violated party rules by holding their elections too early. Counting these contests was important both for her delegate count and for her argument that she had won more popular votes than Obama. If states had not frontloaded their primaries and caucuses, she would have recovered from her early stumbles before it was too late. She would have minimized the damage from her disastrous February, when Obama racked up an impressive string of victories even in Virginia, where she might have done better given her later strength. The irony is that Clinton was expected to benefit from frontloading. Only a candidate with name recognition, money, and organization could compete. Lesser candidates like Joe Biden, Chris Dodd, Mike Gravel, Dennis Kucinich, Bill Richardson and even John Edwards would be quickly weeded out of the field, leaving her with only one real opponent to dispense with. The lesson is that frontloading does not serve the nomination process well. Running for president is an unrehearsed drill. Mistakes will be made. Candidates become better as they learn how to campaign and how to craft messages that work. Democratic Party leaders will undoubtedly look hard over the next four years at what steps can be taken to even out the flow of the nomination contests. While these lessons may resound loudly for Democrats, they apply equally well to Republicans. Democrats permitted the process to play out over a longer time by awarding delegates proportionally; Republicans brought their nomination to a faster close by awarding delegates winner-take-all.
John McCain became the inevitable winner of his party’s nomination without winning a majority of the vote in any state before his opponents dropped like flies. While Republicans have delighted in the continued fight among the Democrats, McCain has been in a holding pattern since winning his nomination. Unable to use his time effectively to make headway with the American public, he has incurred problems in his own party. As evidence, 30 percent of South Dakota and Montana Republican primary voters registered a protest vote by voting for someone else. Perhaps McCain won his party’s nomination too soon. He lost to George Bush in 2000 and has yet to demonstrate that he can run an effective general election campaign. He would have benefited from being more strongly tested, making more mistakes, and learning from them in the primary season. Now, he and his campaign will have to learn on the job in the general election, while they face, in Obama, an opponent who has been tempered in his party’s nomination fire, stoked by Clinton. Plenty of time remains for McCain to make his mistakes and for Obama to make more, and for both to recover before November. Campaigns often become so knee-jerk reactive to criticism of any mistake that they fail to recognize the value of the lessons that may be learned. The primary election season is thus a valuable period for candidates to plumb their strengths and shore up their weaknesses, and we need to find a way to restore it as such. Authors Michael P. McDonald Full Article
y The Election of the Century By webfeeds.brookings.edu Published On :: Wed, 24 Sep 2008 12:00:00 -0400 The impending presidential election may be the election of a century. Record primary voting, floods of new registrations, more small campaign donors and highly rated political conventions show that people are intensely interested. These indicators augur a high turnout. Undoubtedly, more people will vote than the 60 percent who turned out four years ago, which was the highest rate since 1968. The question is, how many more? If participation tops the 1960 level of 64 percent, then we must go all the way back to 1908 — literally a century of American politics — to find the next highest rate: 66 percent. Lessons from the 1960 and 1908 elections explain why 2008 may see a historic election. Many people recall the 1960 election, which pitted two familiar names, Richard Nixon and John F. Kennedy, against each other. Kennedy won one of the closest presidential elections in American history. As in sports, people are interested when two contestants are evenly matched. Just as in 1960, pre-election polls today show a tight race between Barack Obama and John McCain. People perceive that their vote will help determine big issues of peace and prosperity. Further, an African-American or a woman will be elected, for the first time, to one of the country’s highest offices. Contrast this to 1996: People tuned out when pre-election polls showed President Bill Clinton cruising to reelection over Bob Dole. The 1908 election was not particularly close and did not involve big issues. Republican William Howard Taft won by a landslide over third-time Democratic candidate William Jennings Bryan, whose “free silver” platform had lost its luster. What is notable is that the 1908 election occurred in the twilight of the political machines that dominated American politics throughout the latter half of the 19th century. These machines were built from the bottom up.
Local ward bosses, who knew their neighbors intimately, dispensed jobs and favors for votes. (Ward bosses conjure images of big-city politics, but rural political machines existed, too.) Political machines even paid supporters’ taxes in states that disenfranchised tax delinquents. During the machine era, turnout rates routinely exceeded 80 percent. Paying people to vote, however, discomfited many. Progressive Era reforms near the turn of the 20th century rooted out the obvious corruption by creating a civil service to replace patronage jobs and adopting the secret ballot so that political machines could not monitor voting. The 1908 election was among the last in which machines could still turn out voters. There is mounting evidence that the political machines had something right: Face-to-face contact is among the most effective means of activating voters. Today’s high-tech campaigns recreate the mobilization capacity of the political machines. In place of ward bosses are local volunteers, and in place of the bosses’ neighborly knowledge are sophisticated microtargeted voter profiles that reveal which voters are persuadable and which are loyal party supporters. The glue is the Internet, which provides an information infrastructure for campaigns to recruit and communicate with their volunteers. It is tempting to give Democrats a mobilization edge. Obama’s efforts are highly visible, whereas McCain must rely on the tight-lipped Republican National Committee. Obama does not employ the Democratic National Committee for this expensive campaign operation because he opted out of public financing. Indeed, recent presidential candidates — McCain included — usually raise money for voter mobilization through their national parties. Before Obama is given an edge, we must caution that Republicans are better able to register themselves than are lower-income Democrats. Massive Democratic registration drives create a false impression that Democrats are out-hustling Republicans.
In 2004, Democratic-aligned organizations’ highly publicized efforts exceeded their voter turnout victory targets. These groups underestimated President Bush’s 72-hour voter mobilization efforts the weekend before the election, which effectively matched them voter for voter. Still, Obama’s organization should not be discounted. Just four years ago, Democrats were still playing catch-up to Republicans. Now they are just as sophisticated and have recruited a large cadre of volunteers, including typically apathetic youth. American campaigns have undergone a paradigm shift. They no longer consist primarily of mass appeals through television advertising; grass-roots organizing is now a critical component. If elections stay close and interesting, we will likely observe higher turnouts. No longer will we wonder why turnout is declining; rather, we will wonder why it is climbing. A revitalized ground game will likely emerge as one explanation in the decade to come. Authors Michael P. McDonald Publication: Politico Full Article
y Early Voters Deluge States By webfeeds.brookings.edu Published On :: Fri, 24 Oct 2008 12:00:00 -0400 Early voting has started in earnest in many states, marking a dramatic change in how Americans vote and how campaigns are run. Preliminary indications are that more people will cast their ballots prior to Election Day than in any campaign in the nation’s history. Already, well over ten million people have cast their ballots for this November’s much-anticipated presidential election. This statistic comes from just the few states and localities where early voting numbers are available. In Georgia, for instance, more people have already voted early than did so in the entire last presidential election. These early numbers are startling, far outpacing what would be expected at this stage of the election. In the past, early voting started as a trickle, with the spigot opening as the traditional Election Day approached. These numbers could portend a higher level of early voting, higher overall turnout, or – most likely – both. The apparent increase witnessed so far is part of the upward trend in early voting that has swept the country over the past two decades. In 1992, about 7 percent of all voters voted early; by 2004 that number exceeded 20 percent. The increase arises in states that have enacted early voting policies permitting people to vote absentee for any reason, to automatically receive an absentee ballot by mail, or to vote at a special early voting polling place in a high-traffic location. Those who vote early have also changed over the past 20 years. People who vote by traditional absentee ballot tend to be younger, single and highly educated; essentially students, military personnel and professionals traveling on business. Today, early voters look more like voters overall, though they are on average older. This age disparity is consistent with the type of person who is motivated to vote early: a strong partisan who is certain of their vote.
Early voters obviously do not show up to vote on Election Day, which causes problems for exit pollsters stationed outside polling places. In 2004, the media’s national exit poll organization conducted phone surveys of early voters to supplement its Election Day polling. These surveys found that in all states except Iowa, the early electorate was more Republican than the Election Day electorate, an expected pattern steeped in the campaign folklore that a Democrat will win if the candidates evenly split the early vote. The deviating case of Iowa makes sense. In 2004, the Iowa Democratic Party conducted an intense early vote drive, a move that may have cost John Kerry the state, since his Election Day ground game suffered. We are seeing indications that Barack Obama’s campaign is successfully turning out its supporters in Florida, Georgia and North Carolina, three states that provide demographic breakdowns of early voters. In Florida and North Carolina, registered Democrats outnumber Republicans two to one among early voters. In Georgia and North Carolina, African-Americans are a much greater share of the early electorate than of the overall 2004 electorate. What makes these numbers all the more impressive is not just their tilt toward the Democrats, but that we would normally expect Democrats to lag behind Republicans at this stage of the game. Do not expect the well-financed Obama campaign to skimp on its Election Day mobilization efforts, either. It is too soon to tell definitively whether these early vote numbers represent a coming flood of early voting and Election Day turnout, or pent-up demand from enthusiastic Democrats finally able to cast their ballots. But that this question can even be asked is not encouraging for John McCain. For McCain to win, he needs to turn the election around – now. The presidency is starting to slip from his grasp.
Pre-election polling currently indicates Obama will hold all the states won by Kerry in 2004, plus Iowa and New Mexico. Obama wins the Electoral College if he also wins Colorado, a state where he has held a small but consistent lead in the polls throughout the year. More than 60 percent of Coloradans will cast their ballots early. If McCain cannot change the campaign dynamic, it will soon be too late for him to shift enough votes into his column to win. He may be able to take one of the states currently favoring Obama, but that will be an increasingly difficult task as ballots pile up in high-early-vote battleground states like Florida, Iowa, Nevada, New Mexico, North Carolina, Oregon and Washington. It’s mid-October. Now is the time for an October surprise, before too many people can no longer be surprised. View 2008 Early Voting Statistics » Michael P. McDonald is an associate professor at George Mason University and a non-resident senior fellow at the Brookings Institution. He calculates national turnout rates for academics and the media and is co-editor of The Marketplace of Democracy: Electoral Competition in American Politics. Authors Michael P. McDonald Full Article
y Principles for Transparency and Public Participation in Redistricting By webfeeds.brookings.edu Published On :: Thu, 17 Jun 2010 14:21:00 -0400 Scholars from the Brookings Institution and the American Enterprise Institute are collaborating to promote transparency in redistricting. In January 2010, an advisory board of experts and representatives of good government groups was convened in order to articulate principles for transparent redistricting and to identify barriers to the public and communities who wish to create redistricting plans. This document summarizes the principles for transparency in redistricting that were identified during that meeting.

Benefits of a Transparent, Participative Redistricting Process

The drawing of electoral districts is among the most easily manipulated and least transparent systems in democratic governance. All too often, redistricting authorities maintain their monopoly by imposing high barriers to transparency and public participation. Increasing transparency and public participation can be a powerful counterbalance by providing the public with information similar to that which is typically only available to official decision makers, which can lead to different outcomes and better representation. Increasing transparency can empower the public to shape the representation of their communities, promote public commentary and discussion about redistricting, inform legislators and redistricting authorities which district configurations their constituents and the public support, and educate the public about the electoral process. Fostering public participation can enable the public to identify their neighborhoods and communities, promote the creation of alternative maps, and facilitate an exploration of a wide range of representational possibilities.
The existence of publicly-drawn maps can provide a measuring stick against which an official plan can be compared, and promote the creation of a “market” for plans that support political fairness and community representational goals.

Transparency Principles

All redistricting plans should include sufficient information so the public can verify, reproduce, and evaluate a plan. Transparency thus requires that:
- Redistricting plans must be available in non-proprietary formats.
- Redistricting plans must be available in a format allowing them to be easily read and analyzed with commonly-used geographic information software.
- The criteria used as a basis for creating plans and individual districts must be clearly documented.

Creating and evaluating redistricting plans and community boundaries requires access to demographic, geographic, community, and electoral data. Transparency thus requires that:
- All data necessary to create legal redistricting plans and define community boundaries must be publicly available, under a license allowing reuse of these data for non-commercial purposes.
- All data must be accompanied by clear documentation stating the original source, the chain of ownership (provenance), and all modifications made to it.

Software systems used to generate or analyze redistricting plans can be complex, impossible to reproduce, or impossible to correctly understand without documentation. Transparency thus requires that:
- Software used to automatically create or improve redistricting plans must be either open-source or provide documentation sufficient for the public to replicate the results using independent software.
- Software used to generate reports that analyze redistricting plans must be accompanied by documentation of data, methods, and procedures sufficient for the reports to be verified by the public.

Services offered to the public to create or evaluate redistricting plans and community boundaries are often opaque and subject to misinterpretation unless adequately documented. Transparency thus requires that:
- Software necessary to replicate the creation or analysis of redistricting plans and community boundaries produced by the service must be publicly available.
- The service must provide the public with the ability to make available all published redistricting plans and community boundaries in non-proprietary formats that are easily read and analyzed with commonly-used geographic information software.
- Services must provide documentation of any organizations providing significant contributions to their operation.

Promoting Public Participation

New technologies provide opportunities to broaden public participation in the redistricting process. These technologies should aim to realize the potential benefits described above and be consistent with the articulated transparency principles. Redistricting is a legally and technically complex process. District creation and analysis software can encourage broad participation by:
- being widely accessible and easy to use;
- providing mapping and evaluation tools that help the public create legal redistricting plans, as well as maps identifying local communities;
- being accompanied by training materials that assist the public in successfully creating and evaluating legal redistricting plans and defining community boundaries;
- having publication capabilities that allow the public to examine maps in situations where there is no access to the software; and
- promoting social networking, allowing the public to compare, exchange and comment on both official and community-produced maps.

Official Endorsement from Organizations – Americans for Redistricting Reform, Brennan Center for Justice at New York University, Campaign Legal Center, Center for Governmental Studies, Center for Voting and Democracy, Common Cause, Demos, and the League of Women Voters of the United States.

Attending board members – Nancy Bekavac, Director, Scientists and Engineers for America; Derek Cressman, Western Regional Director of State Operations, Common Cause; Anthony Fairfax, President, Census Channel; Representative Mike Fortner (R), Illinois General Assembly; Karin Mac Donald, Director, Statewide Database, Berkeley Law, University of California, Berkeley; Leah Rush, Executive Director, Midwest Democracy Network; Mary Wilson, President, League of Women Voters.

Editors – Micah Altman, Harvard University and the Brookings Institution; Thomas E. Mann, Brookings Institution; Michael P. McDonald, George Mason University and the Brookings Institution; Norman J. Ornstein, American Enterprise Institute.

This project is funded by a grant from the Sloan Foundation to the Brookings Institution and the American Enterprise Institute. Authors Micah Altman, Thomas E. Mann, Michael P. McDonald, Norman J.
Ornstein Publication: The Brookings Institution and The American Enterprise Institute Image Source: © Lucy Nicholson / Reuters Full Article
y Midterm Elections 2010: Driving Forces, Likely Outcomes, Possible Consequences By webfeeds.brookings.edu Published On :: Mon, 04 Oct 2010 09:30:00 -0400 Event Information October 4, 2010 9:30 AM - 11:30 AM EDT Falk Auditorium The Brookings Institution 1775 Massachusetts Ave., NW Washington, DC As the recent primary in Delaware attests, this year's midterm elections continue to offer unexpected twists and raise large questions. Will the Republicans take over the House and possibly the Senate? Or has the Republican wave ebbed? What role will President Obama play in rallying seemingly dispirited Democrats -- and what effect will reaction to the sluggish economy have in rallying Republicans? Is the Tea Party more an asset or a liability to the G.O.P.'s hopes? What effect will the inevitably narrowed partisan majorities have in the last two years of Obama's first term? And how will contests for governorships and state legislatures around the nation affect redistricting and the shape of politics to come? On October 4, a panel of Brookings Governance Studies scholars, moderated by Senior Fellow E.J. Dionne, Jr., attempted to answer these questions. Senior Fellow Thomas Mann provided an overview. Senior Fellow Sarah Binder discussed congressional dynamics under shrunken majorities or divided government. Senior Fellow William Galston offered his views on the administration’s policy prospects during the 112th Congress. Nonresident Senior Fellow Michael McDonald addressed electoral reapportionment and redistricting around the country. Video Partisan Gridlock post-Elections? GOP Influence over Redistricting, Reapportionment Working Within Divided Government Good Conditions for GOP in 2010 Midterms Audio Midterm Elections 2010: Driving Forces, Likely Outcomes, Possible Consequences Transcript Uncorrected Transcript (.pdf) Event Materials 20101004_midterm_elections Full Article
y Web Chat: Voter Enthusiasm, Early Voting and the Midterm Elections By webfeeds.brookings.edu Published On :: Wed, 20 Oct 2010 09:16:00 -0400 With little time remaining until the midterm elections, campaigning is intensifying and the outcome for control of Congress remains uncertain. Voter enthusiasm and turnout will be big factors in the elections, where Republicans have demonstrated a leg up in the party primaries. On October 20, Brookings expert Michael McDonald answered your questions about what the polls and early voting are telling us about the upcoming midterm elections, in a live web chat moderated by POLITICO Assistant Editor Seung Min Kim. McDonald, with Seth McKee, is author of "Revenge of the Moderates," in today's POLITICO. The transcript of this chat follows: 12:30 Seung Min Kim: Good afternoon, everyone! We have just under two weeks until the Nov. 2 midterm elections, and the Brookings Institution's Michael McDonald is here to answer your questions. Thanks and welcome, Michael. 12:30 [Comment From Dale Dean (Arlington): ] I was wondering from the historical record how closely early results mirror the actual results. Are there systemic distortions in early voting that are the same over many elections, or do they differ with each election? 12:30 Michael McDonald: Early voting does not necessarily correspond with Election Day voting. Several data sources suggest the following: Overall, prior to 2008, more Republicans tended to vote early. In 2008, it was Democrats who voted early. We will have to see whether 2010 is a continuation of 2008 or a reversion to previous elections. 12:30 Michael McDonald: Another important factor is the number of early votes. For high early voting states like Oregon and Washington, essentially ALL votes will be cast early. In other states that require an excuse to vote absentee, the early voting electorate will be much smaller and have a partisan character more similar to pre-2008.
12:31 [Comment From Katy Steinmetz: ] Are black voters going to turn out for Obama like they did in 2008? Why or why not? How big of a difference do you think this will make? 12:31 Michael McDonald: Since we started surveying, pollsters have found that midterm electorates -- compared to presidential electorates -- tend to be older, wealthier, better educated, and composed of fewer minorities. Sometimes Democrats can overcome this hurdle, as they did in 2006, of course. It would be highly unusual for African-Americans to vote at the same rate as they did in 2008. In some key races, in states with large minority populations, lowered levels of minority voting could be a critical determinant of the outcome. 12:32 [Comment From tim: ] Do the polls accurately reflect the relative turnout of Democrats, GOP and Independents? 12:33 Michael McDonald: Pollsters try as best they can. They try to forecast who is likely to vote by various methods that are not consistent across polling firms. So, this is as much an art as a science. There are a number of factors that may further affect the partisan composition of polls, such as whether people are interviewed by live interviewers or automatically, and whether cell phone users are interviewed. 12:34 [Comment From Katy Steinmetz: ] When Republican pundits like Karl Rove predict gains of 60 or so seats in the House, does that help or hurt them (in terms of making Republicans complacent and driving Democrats to the polls)? 12:36 Michael McDonald: One of the big questions in this election is the relative effect of enthusiasm versus voter mobilization. Republicans are hoping the enthusiasm gap will help them to victory, while Democrats are banking on their organization to GOTV. So far as I can tell, neither side has a distinct edge yet. 12:37 [Comment From Casey (DC): ] I have a question about the margin of error. Let's say candidate A has been consistently polling a point above candidate B, with a 3% margin of error.
Is the fact that A has beaten B in all recent polls statistically significant, even with a margin of error? That is, wouldn't it be misleading to claim that A and B are tied (due to the margin of error) since A has been beating B consistently in the same poll, even by just a point? If they're truly tied, wouldn't we see A beating B half the time and B beating A the other half? 12:41 Michael McDonald: To quickly review, the MoE is determined by the number of respondents to a survey, and it does not decline linearly as the number of respondents increases [it declines by a factor of 1/sqrt(# of respondents)]. Suppose you have two polls of 1,000 persons each; you may then treat them as one poll of 2,000. So, the MoE would decline, but it may not decline as much as you might think. Further, as I describe above, different pollsters use different techniques to create likely voter screens (and many other survey issues), so the polls themselves are not entirely comparable. 12:42 Michael McDonald: As a general rule, I like averaging polls and looking at trends among the same pollster. If all the polls are moving in the same direction, I tend to believe that a trend is real and not just statistical noise. 12:43 Michael McDonald: Finally (I know, a long answer!): never trust a single poll. Unfortunately, the media tend to report their own poll, or a surprising poll, and disregard others. 12:43 [Comment From Jazziette Devereaux (AZ): ] Do you think that early voting can prevent voters from learning facts about candidates that are presented in the feverish last two weeks of the election? 12:44 Michael McDonald: My favorite example is a John Edwards voter who was upset in 2008 that he had cast his ballot before Edwards dropped out of the race. 12:46 Michael McDonald: Early voting has certainly changed campaign dynamics. No longer can an opponent release an October surprise in the last week: the other side gets a chance to respond.
And it makes elections more expensive, since campaigns need to be active throughout the entire election period. So, there are pluses and minuses. 12:46 [Comment From Mark, Greenbelt: ] Is it your feeling that early voting favors one party over another generally, or is it all case-by-case? 12:48 Michael McDonald: Prior to 2008, more Republicans voted early. In 2008, more Democrats voted early. So far, more Democrats are voting early in 2010, so it may be that 2008 was a watershed election for early voting. Still, on a state-by-state basis, Republicans tend to do better among early voters in states that require an excuse to vote an absentee ballot (early voting rates are much lower there, too!). 12:48 [Comment From Rosemarie (NH): ] How do you think negative campaigning impacts turnout? 12:50 Michael McDonald: It used to be that people thought negative campaigning decreased turnout, but since then, numerous studies have shown it increases turnout. People are apt to slow down and watch the accident on the side of the road. The media certainly enjoy covering the most negative campaigns, too. 12:50 [Comment From Malcolm, DC: ] Do you have any stats about early voting so far, and can you draw any conclusions? 12:50 Michael McDonald: They are here. So far, over 2 million people have already voted! 12:52 [Comment From Borys Ortega: ] How do you see the Obama support base (liberals, young people, etc.) in terms of enthusiasm? 12:52 Seung Min Kim: And in addition to that, it seems like the White House and Democrats are doing a lot more outreach to young voters, with the MTV/BET town halls and the large rallies at universities. Do you think that will have any effect, considering young people have a low turnout rate for midterm elections?
12:53 Michael McDonald: Since we began surveying, polls consistently show that young people, minorities, the poor and uneducated tend to vote at lower rates -- perhaps the most ironic thing about this election is that the people most affected by the economic downturn are the least likely to vote. 12:55 Michael McDonald: The Democrats need to counter the Republican enthusiasm by expanding the electorate. Their strategy is to do voter mobilization targeted at low-propensity midterm voters, like the youth. We will again have to see how effective the Democrats' mobilization will be compared to the Republicans' enthusiasm. 12:55 [Comment From Rosemarie (NH): ] Has there been any correlation between the level of campaign spending (especially on advertising) and the results? 12:57 Michael McDonald: A funny statistic is that the more an incumbent spends, the worse they do. This is because they are spending to counter a threat from a viable challenger. This is why this is, surprisingly, one of the most difficult questions to answer: we do not know the marginal effect of another dollar spent because the other campaign is also spending money. 12:57 [Comment From Sally: ] There was a flap this week about Univision airing ads that seek to depress Hispanic voter turnout. How common is that practice? 12:59 Michael McDonald: Voter suppression targeted at minorities has a long and ignoble history in American politics. Generally, I think everyone should vote since democracy works best when its citizens are engaged. This particular episode may ultimately backfire since it may rile up Nevada Latinos in a campaign that has had many racial overtones. 1:00 [Comment From Drew C.: ] What's your evaluation of early vote-by-mail vs. in-person voting? Are both being done well? 1:00 Michael McDonald: In 2008, approximately 500,000 mail ballots were rejected. These were people who thought they voted, but their vote did not count. 1:02 Michael McDonald: Why does this happen? 
People do not follow the procedures properly -- they return the ballot in the wrong envelope, they do not sign the envelope, etc. I do like California's method of allowing voters to drop their ballots off on Election Day at their polling places. This allows poll workers to check that the voter followed procedures. 1:03 Michael McDonald: An advantage of in-person early voting is that these problems do not occur, and there is a chance for a voter and election administrators to fix any problems, such as a first-time voter forgetting to bring mandatory ID. 1:03 [Comment From Nick, DC: ] Along the lines of what Sally was asking about, we hear a lot about voter suppression, and we also hear a lot about alleged voter fraud. Are either of them really very common? And are voting machines more subject to tampering than the old paper ballots? 1:05 Michael McDonald: Vote fraud -- someone actually intentionally casting an illegal vote -- is extremely rare. When it happens, it tends to happen among mail ballots. Although there are potential security flaws with electronic machines, there is little evidence of tampering (of course, that may be because there is no way to check!). 1:06 [Comment From Peter G.: ] If you could make one voting reform nationwide to make the system work better, what would it be? 1:08 Michael McDonald: Universal voter registration. There is plenty of evidence that our system of requiring voters to register themselves does not work well. Just about every other advanced democracy registers its own voters. In states with Election Day registration, turnout is much higher (5 to 7 percentage points). So, not only would we increase turnout, but we would get third-party organizations like the now-defunct ACORN out of the business of registering voters. 1:09 [Comment From Ben Griffiths: ] You said incumbents fare worse when they spend more. Is the same true of challengers? I'm thinking this year of Sharron Angle's $14 million in Nevada. 
Is it even possible to spend that much in the time left? 1:10 Michael McDonald: The spending in Nevada is tremendous. Even though likely about half the voters will have already voted by Election Day -- Nevada is a high turnout state -- I think the campaigns will continue spending to the end since the election appears to be going down to the wire. 1:11 Michael McDonald: As for your first question, there is a point where a challenger spends enough money to become viable, which triggers a response in spending from an incumbent. 1:11 [Comment From Rosemarie (NH): ] Is overall turnout higher in states that allow early voting? 1:13 Michael McDonald: I testified to the U.S. Senate that I believe the answer is yes, though the turnout effects are a modest one to two points in presidential elections. There are studies that find big turnout increases in non-presidential elections. Indeed, the very first use of all-mail ballot elections was in local jurisdictions that needed to meet threshold turnout rates to pass local bond measures. 1:13 [Comment From Nancy: ] Which party gets the early bragging rights? 1:14 Michael McDonald: So far, Democrats have jackrabbited out of the starting line in most states where we have a clue of which party's registrants are voting early. Nevada is an interesting departure, where Democrats have a lead, but it is not as great as in 2008. 1:14 [Comment From Carson P.: ] One of your Brookings colleagues - Bill Galston - has proposed the idea of mandatory voting, like they do in Australia. Could that work here? Is it a good idea? 1:15 Michael McDonald: Good luck trying to convince Americans that they will be fined if they do not vote. I do not think this is practical for the U.S., though it obviously increases turnout. 1:15 [Comment From Don: ] What are the prospects for Lisa Murkowski come Election Day? Do you think she has a realistic shot at beating Joe Miller? 1:16 Michael McDonald: The polls are close. I think it is anyone's game in Alaska. 
In fact, I wrote an op-ed with my co-author Seth McKee, which was published at Politico today. 1:16 [Comment From Greg Dworkin: ] Thanks for all your hard work on this! How 'institutionalized' do you see the early vote by the parties? Are they incorporating early voting as part of GOTV or are they behind in realizing so many people vote early these days? 1:19 Michael McDonald: As I document with another co-author -- Tom Schaller -- the Democrats created a strong early voting GOTV organization in 2008, and Republicans only belatedly tried to mobilize their voters to vote early. We will have to see how well the Democrats carry this organization over to 2010. Eventually, I believe the Republicans will have to build an equally strong organization. Early voting allows a party to mobilize over a longer period of time. 1:19 [Comment From Mary H. Hager, PhD: ] Please clarify polling methodology. Who is reached; who is not. The role of technology (email, telephonic, etc.) in defining the subpopulation for polling data. 1:20 Michael McDonald: That is quite a tall order for a chat :) We discuss many of these issues on Pollster -- which now has a home in the politics section of Huffington Post (I also blog at Pollster). 1:21 [Comment From Don (Ossining, NY): ] Does Christine O'Donnell have a chance in Delaware? 1:21 Michael McDonald: No. 1:21 [Comment From Geoffrey V.: ] Over the years, I've gotten the sense that campaigns are moving faster, that there are more undecided voters and that many voters don't make up their minds until the last minute. Is that supported by the data? 1:23 Michael McDonald: Well, given the tremendous increase of early voting from 20% in 2004 to 30% in 2008, it appears that many voters are making up their minds sooner, not later. Still, in a midterm election, the rule has generally been that people tend to hold their ballots longer because they do not have as much information about the candidates. 
It appears that this election may break that previous pattern. 1:23 [Comment From Joan: ] Do you think compromise will come back to Congress after the midterms? 1:24 Michael McDonald: No. Historically, we still have a ways to go before we reach the highest levels of polarization in our politics observed in the late 19th century. 1:24 [Comment From Al Amundson, ND: ] It seems sometimes that pollsters are "surprised" by wins. Polling is so scientific these days, and there's so much money behind it -- how often does a real surprise actually occur? 1:25 Michael McDonald: Surprises more often occur in primary elections, where the electorate is difficult to predict and information is fluid. I do not expect we will be greatly surprised by the 2010 election outcomes. 1:25 [Comment From Rosemarie (NH): ] Do you think that even with early voting, people just want to get it over with, go in to vote and make up their minds while they read the ballot? 1:27 Michael McDonald: Want the campaigns to stop bugging you? Vote early if you can. Election officials track who has a mail ballot in hand and who has voted, and they share this information with the campaigns. 1:27 [Comment From Bert C.: ] How is Sharron Angle still holding on in Nevada even after her numerous public gaffes? 1:27 Michael McDonald: The economic crisis has hit Nevada VERY hard (and I don't often write in caps!). 1:28 [Comment From Peggy: ] What role do you think the Tea Party will play in future elections? Is this a one-off movement or something more serious in American politics? 1:30 Michael McDonald: Shameless plug: see my Politico op-ed. A conservative/populist movement is nothing new to American politics. At least in the short run, I expect the tea party to continue to be influential, especially if Republicans take the House -- I do not expect they will take the Senate as of today. Victories will further embolden the activists. 1:31 Michael McDonald: Thanks to everyone for your questions. 
Sorry I could not answer them all! 1:31 Seung Min Kim: And that's it for today. Thanks for all the great questions as we count down the days until Election Day. And thanks to Michael for his insightful answers! Authors Michael P. McDonald Image Source: © John Gress / Reuters Full Article
y Early Voting: A Live Web Chat with Michael McDonald By webfeeds.brookings.edu Published On :: Wed, 26 Sep 2012 12:30:00 -0400 Event Information: September 26, 2012, 12:30 PM - 1:00 PM EDT, Online Only, The Brookings Institution, 1775 Massachusetts Ave., NW, Washington, DC. Thousands of Americans are already casting their votes in the 2012 elections through a variety of vote-by-mail and in-person balloting that allows citizens to cast their votes well in advance of November 6. From military personnel posted overseas to absentee voters, these early voting opportunities give voters the chance to make their voices heard even when they can’t stand in line on Election Day. However, there are pitfalls in the process. Expert Michael McDonald says that while a great deal of attention has been focused on voter fraud, the untold story is that during the last presidential election, some 400,000 absentee ballots were discarded as improperly submitted. How can early voters make sure their voices are heard? What effect will absentee and other early voting programs have in this election year? On September 26, McDonald took your questions and comments in a live web chat moderated by Vivyan Tran of POLITICO. 12:30 Vivyan Tran: Welcome everyone, let's get started. 12:30 Michael McDonald: Early voting was 30% of all votes cast in the 2008 election. My expectation is that 35% of all votes in 2012 will be cast prior to Election Day. In some states, the volume will be much higher. In the battleground state of CO, about 85% of the votes will be cast early; 70% in FL; and 45% in Ohio. What does it all mean? Hopefully I will be able to answer that question in today's chat! 12:30 Comment from JMC: At what point do you think that the in-person early voters become less partisan types eager to cast their vote and more "regular folks" who would be more swayed by debate performances, TV ads, and the like? 12:30 Comment from Jason: 400,000 absentee ballots were discarded in 2008? 
How? 12:30 Michael McDonald: Reasons why election officials reject mail ballots: unsigned, envelope not sealed, multiple ballots in one envelope, etc. The 400K rejected in 2008 does not include the higher rate of spoiled ballots that typically occurs with paper mail ballots compared to the electronic recording devices used in polling places. Moral: make sure you closely follow the proper procedures to cast your mail ballot! 12:31 Michael McDonald: @JMC: If they are going to vote early, most people wait until the week prior to the election. Those voting now have already made up their minds. But the polls indicate many people have already made up their minds, so maybe we will see more early voting in 2012 as a consequence. 12:31 Comment from User: It was my understanding that absentee ballots are never counted unless the race is incredibly close in a particular state? Is that true - or do the rules for that vary by state? 12:32 Michael McDonald: No, all early votes are counted. What may not be counted, depending on state law and if the election is close enough for them to matter, are provisional ballots. 12:33 Comment from Damion: The blurb here says 400,000 early votes were discarded. Shouldn't the board of elections be reprimanded for that? Who was at fault and what consequences were there? 12:33 Michael McDonald: No, these are ballots "discarded" because people did not follow proper procedures and they must be rejected by law. 12:33 Comment from Shirley: Can you Facebook your vote in? 12:34 Michael McDonald: No. However, election officials are transmitting ballots electronically to overseas citizens and military voters. Voters must print the ballot, fill it out, sign it, scan it, and return it. There are ways for these voters to verify that their ballot was received. 12:35 Comment from Karen K: What kind of impact could these discards have on the 2012 election? 12:36 Michael McDonald: Difficult to say. More Republicans vote by mail (excluding all-mail ballot states). 
But, we don't know much about those who fail to follow the procedures. They might be less educated or elderly, and thus might counter the overall trend we see in mail balloting. Who knows? 12:37 Comment from User: This is the first I've heard of so many early votes getting discarded. Is this an issue people are addressing in a serious way? 12:38 Michael McDonald: Unfortunately, we are too focused on issues like voter fraud, which are low occurrence events, when there are many more important ways in which votes are lost in the system. Hopefully we can get the message out so fewer people disenfranchise themselves. 12:39 Comment from Anonymous: What do we know so far about absentee votes for 2012? Can we tell who they're leaning toward in specific states and how? 12:40 Michael McDonald: It's a little early :) yet. One of the major changes from 2008 is that the overseas civilian ballots -- a population that leans D -- was sent ballots much earlier this year than in 2008. We'll get a much better sense of the state of play in the two weeks prior to the election. 12:41 Michael McDonald: That said, the number of absentee ballot requests is running about the same as in 2008, if not a little higher, suggesting that the early vote will indeed be higher than in 2008, and perhaps that overall turnout will be on par with 2008, too. 12:41 Comment from Leslie: So, how can I ensure my early ballot is counted? There are so many rules and regulations, I'm never sure I've brought/filled out the paperwork. 12:42 Michael McDonald: Many states and localities allow people to check on-line the status of their ballot. Do a search for your local election official's webpage to see if that is available to you. 12:42 Comment from Daryyl: Can you define provisional ballots then? 12:44 Michael McDonald: Provisional ballots are required under federal law to allow people to vote if there is a problem with their voter registration. Election officials work after the election to resolve the situation. 
If you vote in-person early, then you can resolve provisional ballot situations much sooner, which is good. 12:45 Michael McDonald: Some states use provisional ballots for other purposes: e.g., for a person who does not have the required ID or to manage a change in voter registration address. One of the untold stories of this cycle is that FL will manage changes of registration address through provisional ballots. OH does so, and 200K provisionals were cast there in 2008. Expect 300K in FL, which may mean we will not know the outcome in FL until weeks after the election. Can you say 2000? 12:45 Comment from Mark, Greenbelt: Is early voting a new phenomenon, or is it increasing? It seems we should make it easier for people to vote when they can. 12:46 Michael McDonald: We are seeing more people vote early, particularly in states that offer the option. However, only MD changed its law from 2008 to allow in-person early voting. OH is sending absentee ballot requests to all registered voters, which is not a change in law, but a change in procedure that is expected to significantly increase early voting there. 12:47 Comment from Jennifer S. : Why do we vote on Tuesday? It seems inconvenient. Wouldn't more people vote if we did it on the weekend? Or over a period of days that offered both morning and evening hours? 12:48 Michael McDonald: We used to have early voting in the US! Back at the Founding, elections were held over several days to allow people living in remote areas to get to the courthouse (the polling place back in the day) to vote. In the mid-1840s, the federal gov't set the current single day for voting because of -- what else? -- claims of vote fraud: that people could vote more than once. 12:49 Comment from Winston: What percentage of the U.S. population votes? And, if you could make one change that would increase voting in the U.S., what would it be? 12:50 Michael McDonald: I also calculate turnout rates for the country for the media and academics. 
62.2% of eligible voters cast a ballot that counted in 2008. If I were to wave a magic wand, I would have Election Day registration. California just adopted it yesterday (but starting in 2015). States with EDR have turnout 5-7 percentage points higher. 12:50 Comment from Bernie S.: One of your colleagues at Brookings, Bill Galston, has suggested that we make voting mandatory, as they do in Australia. What do you think of that idea? Is it even possible here? 12:51 Michael McDonald: That will never happen in a country that values individual freedom as deeply as the US. Fun fact: a few years back, AZ voters rejected a ballot initiative that would have entered voters into a lottery. 12:51 Comment from James: If early voting becomes more and more common, shouldn't candidates start campaigning earlier? 12:53 Michael McDonald: They do. In fact, you will see the presidential candidates visit battleground states that have in-person early voting at the start of the period. In 2008, you could see how early voting increased in places where Obama held rallies. 12:53 Comment from Devi P. : What are the factors that drive turnout? How do we get people to the polls? And what can you say about the "microtargeting" strategies the political parties are using to get their voters out? 12:54 Michael McDonald: One of the major ways in which elections have changed in the past decade is that campaigns now place more effort into voter contacts. Over 50% of people reported a contact in 2008. These contacts are known to increase turnout rates by upwards of 10 percentage points. Even contacts from Facebook friends seem to matter! 12:54 Comment from Wendy P, Ohio: What's your position on electronic voting? Can't every voting machine be hacked? Isn't plain old paper balloting more secure? 12:56 Michael McDonald: I went to Caltech, so I am sensitive to the potential for hacking. That said, I encourage experimentation so that we can build a better system. 
There are counties that do hold electronic elections! 12:56 Comment from Leslie: 400,000 seems like a lot - does this actually have an impact on the electoral votes, and if so, should we be worried in this coming election that a lengthy recount may occur? 12:57 Michael McDonald: It could affect the outcome. So please spread the word through your networks. This is the #1 way in which votes are lost in the system! 12:57 Comment from JVotes: Perhaps we should microtarget with ballot issues. Many Americans seem disappointed with the two candidates we have to choose from. 12:58 Michael McDonald: Actually, ballot issues are known to increase turnout. But only by a small amount in a presidential election, about 1 percentage point. People vote in the main show: the presidential election. 12:58 Michael McDonald: Interesting aside on that: early voting seems to have a small turnout effect in presidential elections, but a larger effect in state and local elections. 12:58 Comment from Jaime Ravenet: Is there a reading of the new voter ID requirements (in at least the 9 most contested states) that does not constitute an "abridgment" of citizens' voting rights? 1:00 Michael McDonald: Perhaps under state constitutions. But the US Supreme Court has already ruled in favor of Indiana's ID law. Still, that will not stop lawyers from trying to find some way under federal law to overturn them. TX was blocked because its law was determined to be discriminatory, per Sec. 5 of the Voting Rights Act. 1:00 Vivyan Tran: Thanks for the questions everyone, see you next week! Full Article
y Using Crowd-Sourced Mapping to Improve Representation and Detect Gerrymanders in Ohio By webfeeds.brookings.edu Published On :: Wed, 18 Jun 2014 07:30:00 -0400 Analysis of dozens of publicly created redistricting plans shows that map-making technology can improve political representation and detect a gerrymander. In 2012, President Obama won the vote in Ohio by three percentage points, while Republicans held a 13-to-5 majority in Ohio’s delegation to the U.S. House. After redistricting in 2013, Republicans held 12 of Ohio’s House seats while Democrats held four. As is typical in these races, few were competitive; the average margin of victory was 32 points. Is this simply a result of demography, the need to create a majority-minority district, and the constraints traditional redistricting principles impose on election lines—or did the legislature intend to create a gerrymander? Crowd-Sourced Redistricting Maps In the Ohio elections, we have a new source of information that opens a window into the legislature’s choice: large numbers of publicly created redistricting plans. During the last round of redistricting, thousands of people across the country in over a dozen states created hundreds of legal redistricting plans. Advances in information technology and the engagement of grassroots reform groups made these changes possible. To promote these efforts we created the DistrictBuilder open redistricting platform, and many of these groups used this tool to create their plans. Over the last several years, we have used the trove of information produced by public redistricting to gain insight into the politics of representation. In previous work that analyzed public redistricting in Virginia[1] and in Florida[2], we discovered that members of the public are capable of creating legal redistricting plans that outperform those maps created by legislatures in a number of ways. Public redistricting in Ohio shows something new—the likely motives of the legislature. 
This can be seen by using information visualization methods to show the ways in which redistricting goals can be balanced (or traded off) in Ohio, revealing the particular trade-offs made by the legislature. The figure below, from our new research paper[3], shows 21 plots—each of which compares legislative and publicly created plans using a pair of scores—altogether covering seven different traditional and representational criteria. A tiny ‘A’ shows the adopted plan. The top-right corner of each mini-plot shows the best theoretically possible score. When examined by itself, the legislative plan meets a few criteria: it minimizes population deviation, creates an expected majority-minority seat, and creates a substantial majority of districts that would theoretically be competitive in an open-seat race in which the statewide vote was evenly split. 

Figure 1: Pairwise Congressional Score Comparisons (Scatterplots) - Standardized Scores 

In previous rounds of redistricting, empirical analysis would stop here—unless experts were called in to draw alternative plans in litigation. However, the large number of public plans now available allows us to see other options: plans the legislature could readily have created had it desired to do so. Comparison of the adopted plan and the public plans reveals the weakness of the legislature’s choice. Members of the public were able to find plans that soundly beat the legislative plan on almost every pair of criteria, including competitive districts. So why was the adopted plan chosen? Information visualization can help here as well, but we need to add another criterion—partisan advantage. 

Pareto Frontier: Standard Criteria vs. Democratic Surplus 

When we visualize the number of expected Democratic seats likely to result from each plan, and compare this to the other scores, we can see that the adopted plan is the best at something—producing Republican seats. Was Ohio gerrymandered? 
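The pairwise-dominance comparison behind these plots can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual scoring code: the plan names and standardized scores are invented, and higher is taken to mean better on each criterion:

```python
def dominates(a, b):
    """True if score vector a is at least as good as b on every criterion
    (higher = better) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(plans):
    """Plans not dominated by any other plan."""
    return {name: s for name, s in plans.items()
            if not any(dominates(t, s)
                       for other, t in plans.items() if other != name)}

# Hypothetical standardized scores: (competitiveness, county integrity).
plans = {
    "adopted":  (0.40, 0.50),
    "public_1": (0.70, 0.60),   # beats the adopted plan on both criteria
    "public_2": (0.55, 0.80),
}

print(sorted(pareto_frontier(plans)))  # ['public_1', 'public_2']
```

A plan that falls off this frontier on every good-government pairing, yet scores as an extreme outlier on partisan advantage, is exactly the signature the detection method looks for.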
Applying our proposed gerrymandering detection method, the adopted plan stands in high contrast to the public sample of plans: even where the overall competition scoring formula is slightly biased towards the Democrats, the adopted plan is strongly biased towards the Republicans on any measure of partisan fairness. Moreover, analyzing the tradeoffs among redistricting criteria empirically demonstrates what is often suspected, but is typically impossible to demonstrate—that had the legislature desired to improve any good-government criterion, it could have done so simply by sacrificing some partisan advantage. In light of this new body of evidence, the political intent of the legislature is clearly displayed. However, when politics and technology mix, beware of Kranzberg’s first law: “Technology is neither good nor bad; nor is it neutral.”[4] Indeed, there is an unexpected and hopeful lesson on reform revealed by the public participation that was enabled by new technology. The public plans show that, in Ohio, it is possible to improve expected competitiveness, and to improve compliance with traditional districting principles such as county integrity, without threatening majority-minority districts, simply by reducing partisan advantage—this is a tradeoff we should gladly accept. [1] Altman M, McDonald MP. A Half-Century of Virginia Redistricting Battles: Shifting from Rural Malapportionment to Voting Rights to Public Participation. Richmond Law Review. 2013;43(1):771-831. [2] Altman M, McDonald MP. Paradoxes of Political Reform: Congressional Redistricting in Florida. In: Jigsaw Puzzle Politics in the Sunshine State. University Press of Florida; 2014. [3] Altman M, McDonald MP. Redistricting by Formula: An Ohio Reform Experiment (June 3, 2014). Available at SSRN: http://ssrn.com/abstract=2450645 [4] Kranzberg M. Technology and History: “Kranzberg’s Laws”. Technology and Culture. 1986;27(3):544-560. 
Authors Micah Altman Michael P. McDonald Image Source: © Jonathan Ernst / Reuters Full Article
y Welfare Reform and Beyond By webfeeds.brookings.edu Published On :: The Brookings Institution's Welfare Reform & Beyond Initiative was created to inform the critical policy debates surrounding the upcoming congressional reauthorization of the Temporary Assistance for Needy Families (TANF) program and a number of related programs that were created or dramatically altered by the 1996 landmark welfare reform legislation. The goal of the project has… Full Article
y Social Security Smörgåsbord? Lessons from Sweden’s Individual Pension Accounts By webfeeds.brookings.edu Published On :: President Bush has proposed adding optional personal accounts as one of the central elements of a major Social Security reform proposal. Although many details remain to be worked out, the proposal would allow individuals who choose to do so to divert part of the money they currently pay in Social Security taxes into individual investment… Full Article
y Reviving Faith in Democracy By webfeeds.brookings.edu Published On :: In a new book, What Democracy is For: On Freedom and Moral Government (Princeton University Press, 2007), Stein Ringen points out the failure of the world's democracies, most specifically the United States and Britain, to live up to their own founding ideological values and expectations. Ringen, professor of Sociology and Social Policy at the University… Full Article
y Bridging the Social Security Divide: Lessons From Abroad By webfeeds.brookings.edu Published On :: Executive Summary Efforts by President George W. Bush to promote major reforms in the Social Security retirement program have not led to policy change, but rather to increased polarization between the two parties. And the longer we wait to address Social Security’s long-term funding problem, the bigger and more painful the changes will need to… Full Article
y Target Compliance: The Final Frontier of Policy Implementation By webfeeds.brookings.edu Published On :: Abstract Surprisingly little theoretical attention has been devoted to the final step of the public policy implementation chain: understanding why the targets of public policies do or do not “comply” — that is, behave in ways that are consistent with the objectives of the policy. This paper focuses on why program “targets” frequently fail to… Full Article
y But Will It Work?: Implementation Analysis to Improve Government Performance By webfeeds.brookings.edu Published On :: Executive Summary Problems that arise in the implementation process make it less likely that policy objectives will be achieved in many government programs. Implementation problems may also damage the morale and external reputations of the agencies in charge of implementation. Although many implementation problems occur repeatedly across programs and can be predicted in advance, legislators… Full Article
y Policy Leadership and the Blame Trap: Seven Strategies for Avoiding Policy Stalemate By webfeeds.brookings.edu Published On :: Editor’s Note: This paper is part of the Governance Studies Management and Leadership Initiative. Negative messages about political opponents increasingly dominate not just election campaigns in the United States, but the policymaking process as well. And politics dominated by negative messaging (also known as blame-generating) tends to result in policy stalemate. Negative messaging is attractive… Full Article
y Technology Transfer: Highly Dependent on University Resources By webfeeds.brookings.edu Published On :: Tue, 04 Mar 2014 07:30:00 -0500 Policy makers at all levels of government -- federal, state, and local -- are placing great faith in innovation as a driver of economic growth and job creation. In the knowledge economy, universities have been called on to play a central role as knowledge producers. Universities are actively seeking to accommodate those public demands, and many have engaged in an ongoing review of their educational programs and research portfolios to make them more attuned to industrial needs. Technology transfer is a function that universities are seeking to make more efficient in order to better engage with the economy. By law, universities can elect to take title to patents from federally funded research and then license them to the private sector. For years, the dominant model of technology transfer has been to market university patents with commercial promise to prospective partners in industry. Under this model, very few universities have been able to command high licensing fees, while the vast majority has never won the lottery of a “blockbuster” patent. Most technology transfer offices are cost centers for their universities. However, upon further inspection, the winners of this apparent lottery seem to be an exclusive club. Over the last decade only 37 universities have cycled through the top 20 of the licensing revenue ranking. What is more, 5 of the top 20 were barely covering the expenses of their tech transfer offices; the rest were not even making ends meet.[i] It may seem that the blockbuster patent lottery is rigged. See more detail in my Brookings report. That appearance is due to the fact that landing a patent of high commercial value is highly dependent on the resources available to universities. Federal research funding is a good proxy variable for those resources. 
Figure 1 below shows federal funding side by side with the net operating income of tech transfer offices. If high licensing revenue is a lottery, then it is one in which only universities with the highest federal funding can participate. Commercially valuable patents may require a critical mass of investment to build the capacity to produce breakthrough discoveries that are, at the same time, mature enough for private investors to take an interest. Figure 1. A rigged lottery? High federal research funding is the ticket to enter the blockbuster patent lottery. Source: Author elaboration with AUTM data (2013) [ii] Now let's turn to another view of the asymmetry of resources and licensing revenues among universities: the geographical dimension. In Figure 2 we can appreciate the degree of dispersion (or concentration) of both federal research investment and licensing revenue across the states. It is easy to recognize the well-funded universities on the East and West coasts receiving most federal funds, and it is easy to observe as well that it is around the same regions, albeit more scattered, that licensing revenues are high. If policymakers are serious about fostering innovation, it is time to discuss the asymmetries of resources among universities across the nation. Licensing revenue is a poor measure of technology transfer activity, because universities engage in a number of interactions with the private sector that do not involve patent licensing contracts. However, the data hint at a larger challenge: if universities are expected to be engines of growth for their regions, and if technology transfer is to be streamlined, federal support must be allocated through mechanisms that balance needs across states. This is not to suggest that research funding should be reallocated from top universities to the rest; that would be misguided policy. But it does suggest that without reform, the engines of growth will not roar throughout the nation, only in a few places. 
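The bookkeeping behind this comparison is simple: a tech transfer office's net operating income is its licensing revenue minus the cost of running the office itself. A minimal sketch of that calculation follows; the university names and dollar figures are invented for illustration, not drawn from the AUTM survey.

```python
# Hypothetical illustration of the net-operating-income calculation
# described in the text: licensing revenue minus the expenses of the
# technology transfer office (TTO). All figures are invented.

universities = [
    # (name, licensing_revenue, tto_expenses, federal_funding) in $ millions
    ("University A", 45.0, 6.0, 550.0),
    ("University B", 4.2, 5.1, 310.0),
    ("University C", 0.8, 2.3, 90.0),
]

def net_operating_income(revenue, expenses):
    """Licensing revenue left over after paying for the TTO itself."""
    return revenue - expenses

for name, revenue, expenses, funding in universities:
    noi = net_operating_income(revenue, expenses)
    status = "covers its TTO costs" if noi > 0 else "runs its TTO at a loss"
    print(f"{name}: net operating income ${noi:+.1f}M ({status})")
```

In this toy example only the heavily funded University A ends up in the black, mirroring the pattern the post describes: most offices do not make ends meet.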
Figure 2. Tech Transfer Activities Depend on Resources. Bubbles based on Metropolitan Statistical Areas and proportional to the size of the variable. [i] These figures are my calculations based on Association of University Technology Managers survey data (AUTM, 2013). In 2012, 155 universities reported data to the survey, a majority of the 207 universities classified by Carnegie as having high or very high research activity. [ii] Note that patenting data is reported by some universities at the state system level (e.g., the UC system). The corresponding federal funding was aggregated across the same reporting universe. Authors Walter D. Valdivia Image Source: © Ina Fassbender / Reuters Full Article
y The Study of the Distributional Outcomes of Innovation: A Book Review By webfeeds.brookings.edu Published On :: Mon, 05 Jan 2015 07:30:00 -0500 Editor's Note: This post is an extended version of a previous post. Cozzens, Susan and Dhanaraj Thakur (Eds). 2014. Innovation and Inequality: Emerging Technologies in an Unequal World. Northampton, Massachusetts: Edward Elgar. Historically, the debate on innovation has focused on the determinants of the pace of innovation, on the premise that innovation is the driver of long-term economic growth. Analysts and policymakers have taken less interest in how innovation-based growth affects income distribution. Even less attention has been paid to how innovation affects other forms of inequality: inequality in economic opportunity, social mobility, access to education, healthcare, and legal representation, and inequality in exposure to insalubrious environments, be these physical (exposure to polluted air, water, or food, or to harmful work conditions) or social (neighborhoods ridden with violence and crime). The relation between innovation, equal political representation, and the right of people to have a say in the collective decisions that affect their lives can also be added to the list of neglected questions. But the neglect has not been universal. A small but growing group of analysts has been working for at least three decades to produce a more careful picture of the relationship between innovation and the economy. A distinguished vanguard of this group has recently published a collection of case studies that illuminates our understanding of innovation and inequality, which is, in fact, the title of the book. The book is edited by Susan Cozzens and Dhanaraj Thakur. Cozzens is a professor in the School of Public Policy and Vice Provost of Academic Affairs at Georgia Tech. She studied innovation and inequality long before inequality became a hot topic and led the group that collaborated on this book. 
Thakur is a faculty member of the College of Public Service and Urban Affairs at Tennessee State University (while writing the book he taught at the University of the West Indies in Jamaica). He is an original and sensible voice in the study of the social dimensions of communication technologies. We'd like to highlight three aspects of the book: the research design, the empirical focus, and the conceptual framework developed from its case studies. Edited volumes are all too often a collection of disparate papers, but not in this case. This book is patently the product of a research design that probes the evolution of a set of technologies across a wide variety of national settings and, at the same time, examines the different reactions to new technologies within specific countries. The second part of the book devotes a chapter to each of five emerging technologies (recombinant insulin, genetically modified corn, mobile phones, open-source software, and tissue culture), observing the contrasts and similarities of their evolution in different national environments. In turn, part three considers the experience of eight countries: four high-income (Canada, Germany, Malta, and the U.S.) and four middle- or low-income (Argentina, Costa Rica, Jamaica, and Mozambique). The stories in part three tell how these countries assimilated the diverse technologies into their economies and policy environments. The second aspect to highlight is the deliberate choice of empirical focus. First, the object of inquiry is not technology in general but a discrete set of emerging technologies, a choice that gains a specificity that would be lost in handling the unwieldy concept of "technology" broadly construed. At the same time, this choice reveals the policy orientation of the book, because these new entrants have just begun to shape the socio-technical spaces they inhabit, while the spaces of older technologies have likely ossified. 
Second, the study offers ample variance in the jurisdictions under study, i.e., countries of all income levels, a decision that makes theory construction more difficult but the testing of general premises more robust.[i] We can add that the book avoids sweeping generalizations. Third, the authors focus on technological projects and their champions, a choice that increases the rigor of the empirical analysis. This choice naturally narrows the space of generality, but the lessons are more precise and the conjectures are presented with corresponding modesty. The combination of a solid design and a clear empirical focus allows the reader to draw a sense of general insight from the cases taken together that could not be derived from any individual case standing alone. Economic and technology historians have tackled the effects of technological advancement, from the steam engine to the Internet, but those lessons are not easily applicable to the present because emerging technologies intimate a different kind of reconfiguration of economic and social structures. It is still too early to know the long-term effects of new technologies like genetically modified crops or mobile-phone cash transfers, but this book does a good job of providing useful concepts that begin to form an analytical framework. In addition, the mix of country case studies subverts the disciplinary separation between the economics of innovation (devoted mostly to high-income countries) and development studies (interested in middle- and low-income economies). As a consequence of these selections, the reader can draw lessons that are likely to apply to technologies and countries other than the ones discussed in this book. The third aspect we would like to underscore in this review is the conceptual framework. Cozzens, Thakur, and their colleagues have done a service to anyone interested in pursuing the empirical and theoretical analysis of innovation and inequality. 
For these authors, income distribution is only one part of the puzzle. They observe that inequalities are also part of social, ethnic, and gender cleavages in society. Frances Stewart, of Oxford University, introduced the notion of horizontal inequalities, or inequalities at the social group level (for instance, across ethnic groups or genders). She developed the concept in contrast to vertical inequalities, or inequalities operating at the individual level (such as household income or wealth). The authors of this book borrow Stewart's concept, pay attention to horizontal inequalities in the technologies they examine, and observe that new technologies enter marketplaces that are already configured by historical forms of exclusion. A dramatic example is the lack of access to recombinant insulin in the U.S.: it is expensive, and minorities are less likely to have health insurance (see Table 3.1 on p. 80).[ii] Another example is how innovation opens opportunities for entrepreneurs but closes them for women in cultures that systematically exclude women from entrepreneurial activities. Another key concept is that of complementary assets. A poignant example is the failure of recombinant insulin to reach poor patients in Mozambique, who are sent home with the old medicine even though insulin is subsidized by the government. The reason doctors deny the poor the new treatment is that these patients lack the literacy and the household resources (e.g., a refrigerator, a clock) necessary to preserve the shots, inject themselves periodically, and read blood sugar levels. Technologies aimed at fighting poverty require complementary assets to be already in place; in their absence, such technologies fail to mitigate suffering and ultimately to ameliorate inequality. Another illustration of the importance of complementary assets is given by the case of open-source software. 
This technology has a nominal price of zero; however, the only individuals who benefit are those who have computers and the time, disposition, and resources to learn how to use open-source operating systems. Likewise, companies without the internal resources to adapt open-source software will not adopt it and remain economically tied to proprietary software. These observations lead to two critical concepts elaborated in the book: distributional boundaries and inequalities across technological transitions. Distributional boundaries refer to the reach of the benefits of new technologies; the boundaries can be geographic (as in urban/suburban or center/periphery) or drawn along social cleavages or income levels. Standard models of technological diffusion assume the entire population will gradually adopt a new technology, but in reality, the authors observe, several factors intervene to limit the scope of diffusion to certain groups. The most insidious factors are monopolies that exercise sufficient control over markets to levy high prices. In these markets, price becomes an exclusionary barrier to diffusion. This is quite evident in the case of mobile phones (see Table 5.1, p. 128), where monopolies (or oligopolies) have the market power to create and maintain a distributional boundary between post-pay, high-quality service for middle- and high-income clients and pre-pay, low-quality service for poor customers. This boundary renders pre-pay plans doubly regressive: per-minute rates are higher than post-pay rates, and phone expenses represent a far larger percentage of poor people's income. Another example of exclusion occurs with GMOs: in some countries subsistence farmers cannot afford the prices of engineered seeds, a disadvantage that compounds their cost and health problems because they must use more, and stronger, pesticides. A technological transition, as the term is used here, is an inflection point in the adoption of a technology that reshapes its distributional boundaries. 
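The "doubly regressive" point can be made concrete with a stylized calculation; the per-minute rates, usage, and incomes below are invented for illustration, not figures from the book.

```python
# Stylized illustration of why pre-pay plans are doubly regressive:
# (1) the per-minute rate is higher than post-pay, and (2) the resulting
# bill is a larger share of a poor household's income. Numbers are invented.

def monthly_phone_share(rate_per_min, minutes, monthly_income):
    """Phone spending as a fraction of monthly income."""
    return (rate_per_min * minutes) / monthly_income

# Hypothetical tariffs and incomes (same talk time for both customers):
prepay_share = monthly_phone_share(rate_per_min=0.25, minutes=100, monthly_income=300)
postpay_share = monthly_phone_share(rate_per_min=0.10, minutes=100, monthly_income=3000)

print(f"pre-pay customer:  {prepay_share:.1%} of income")   # 8.3%
print(f"post-pay customer: {postpay_share:.1%} of income")  # 0.3%
```

Under these assumed numbers, the poor customer pays more per minute and surrenders a share of income roughly 25 times larger, which is the double regressivity the authors describe.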
When smart phones were introduced, a new market for second-hand or hand-me-down phones was created in Maputo; people who could not access the top technology got stuck with a sub-par system. Looking at tissue culture, the authors find that "whether it provides benefits to small farmers as well as large ones depends crucially on public interventions in the lower-income countries in our study" (p. 190). In fact, farmers in Costa Rica enjoy much better protections compared to those in Jamaica and Mozambique, because the governmental program created to support banana tissue culture was designed and implemented as an extension program aimed at disseminating know-how among small farmers, not exclusively among large multinational-owned farms. When the same technology was introduced, because of this different policy environment, the distributional boundaries were made much more extensive in Costa Rica. This is a book devoted to presenting the complexity of the innovation-inequality link. The authors are generous in their descriptions, punctilious in the analysis of their case studies, and cautious and measured in their conclusions. Readers who seek an overarching theory of inequality, a simple story, or a test of causality are bound to be disappointed. But those readers may find the highest reward in carefully reading all the case studies presented in this book, not only for the edifying richness of the detail herein but also because they will be invited to rethink the proper way to understand and address the problem of inequality.[iii] [i] These assumptions are clearly spelled out: "we assumed that technologies, societies, and inequalities co-evolved; that technological projects are always inherently distributional; and that the distributional aspects of individual projects and portfolios of projects are open to choice." (p. 6) [ii] This problem has been somewhat mitigated since the Affordable Care Act entered into effect. [iii] Kevin Risser contributed to this posting. 
Authors Walter D. Valdivia Image Source: © Akhtar Soomro / Reuters Full Article
y Technology transfer in an open society By webfeeds.brookings.edu Published On :: Mon, 23 Mar 2015 07:30:00 -0400 Recently, the University of Massachusetts Amherst courted controversy when it announced that it would not admit Iranian students into some programs in the College of Engineering and the College of Natural Sciences. The rule sought to comply with sanctions on Iran but, facing strong criticism from faculty and students, the university reversed itself and replaced the ban with a more flexible policy that would craft a special curriculum for Iranian students in the fields relevant to the ban. It is not yet clear how that policy will be implemented, but what has become patently clear is that a blanket ban on students by national origin is a transgression of the principles of an open society, including academic freedom. Very rarely will the knowledge created and taught at universities present a security risk that justifies the outright exclusion of an entire nationality from the research and learning enterprise. A controversial ban Section 501 of the Iran Threat Reduction and Syria Human Rights Act of 2012 explicitly denies visas to Iranian nationals seeking to study in fields related to nuclear engineering or the energy sector. After the controversy, and in consultation with the State Department, the university replaced the ban with a policy of "individualized study plans" for Iranian students in the sanctioned fields. Questions remain as to the practicality of crafting study plans that exclude the kind of knowledge Iranians are not supposed to learn. One can imagine the inherent difficulty of asking some students to skip a few chapters of the textbook or to take a coffee break outside the lab while certain experiments are conducted. In a recent column, philosopher Behnam Taebi reminded us of a similar controversy when the Dutch government tried to restrict the admission of Iranian students. 
He offers a valuable lesson from both experiences: "the Iranian academic community has traditionally been a bastion of reformism—a tendency Western governments and universities have every interest in encouraging," and he correctly concludes that a ban on Iranian students is self-defeating. Universities export knowledge and values The costs of constraining technology transfer could indeed outweigh the benefits of study programs that entail technical and cultural exchange at the same time. American universities export knowledge and technology, but they also export American values. Surely, not all values for export are exactly the height of civilization. Skeptics may point out that conspicuous consumption and reality TV are not worth disseminating, but these critics would do well to recall that neither social posing nor voyeurism was invented in the U.S.; what we see here are just new bottles for very old wine. In contrast, the best values for export are those of the American political tradition. Living in the U.S. affords international students regular exposure to that tradition in informal settings, such as community life and churchgoing, and in more formal ones, through the stupendous collections of university libraries and the campus curriculum on American history and political thought. Aside from the lofty and the frivolous, however, there are a few values that are inherent to university life. Of course, the U.S. does not have a monopoly on those values (they are inherent to all universities in stable democracies), but they are certainly part of the experience of any international student. Consider these three: Stability: Students appreciate the relative quietude of university life. In the U.S., most campuses are physically designed as a refuge from the frantic pace of modern life and provide the peace and safety necessary to allow the mind to concentrate, grow, and discover. 
Students coming from countries troubled by political instability and conflict can stop worrying about questions of subsistence or survival and devote their attention to solving the puzzles of nature and society. Meritocracy: Another value characteristic of academia is meritocracy. The system has its flaws, but academia, more than other walks of life, assigns rewards based on clear standards of performance. There are systemic problems and no absence of prejudice, but hard work and talent tend to be given their due. Social awareness: A third value is a collective concern with public affairs in the local, national, and global spheres. Not everyone in the academic community is socially engaged, but on campus there is a steady supply of debate on contemporary issues and ample opportunity for voluntary work. Visitors will find it easy to engage friends and colleagues in relevant debates and to join them in meaningful action on and off campus. Technology transfer is good diplomacy Many international students remain in the U.S. after concluding their training, but they keep ties to their families and scientific communities in their countries of origin. Others return home and may seek to reproduce there the stability, meritocracy, and engagement with social issues that were constitutive of their time at an American university. Some will seek reform within their own universities, and a few will go further and press for reform of their country's political system. Spreading the values of academic life in democratic societies is a legitimate and powerful approach to spreading democratic values around the world. Technology transfer as a term of art has evolved to recognize the two-way exchange of knowledge between research and industrial organizations. Likewise, values move both ways, and international students enrich American life by injecting their own values for export into the spheres they inhabit. 
The policy of American universities to remain open to all nationalities is both an instrument and a symbol of an open society. Technology transfer by means of advanced training is indeed good diplomacy. Authors Walter D. Valdivia, Marga Gual Soler Image Source: © Christian Hartmann / Reuters Full Article
y University-industry partnerships can help tackle antibiotic resistant bacteria By webfeeds.brookings.edu Published On :: Wed, 25 Mar 2015 07:30:00 -0400 Last January, an academic-industrial partnership published in the prestigious journal Nature the results of the development of the antibiotic teixobactin. The reported work is still at an early preclinical stage, but it is nevertheless good news. Over the last decades, the introduction of new antibiotics has slowed nearly to a halt, and over the same period we have seen a dangerous increase in antibiotic-resistant bacteria. Such is the magnitude of the problem that it has attracted the attention of the U.S. government. Accepting several recommendations presented by the President's Council of Advisors on Science and Technology (PCAST) in their comprehensive report, the Obama administration issued last September an Executive Order establishing an interagency Task Force for combating antibiotic-resistant bacteria and directing the Secretary of Health and Human Services (HHS) to establish an Advisory Council on this matter. More recently, the White House issued a strategic plan to tackle the problem. Etiology of antibiotic resistance Infectious diseases have been a major cause of morbidity and mortality from time immemorial. The discovery of sulfa drugs in the 1930s and then antibiotics in the 1940s significantly aided the fight against these scourges. Following World War II, society experienced extraordinary gains in life expectancy and overall quality of life. During that period, marked by optimism, many people presumed victory over infectious diseases. However, the overuse of antibiotics and a slowdown in innovation allowed bacteria to develop resistance at such a pace that some experts now speak of a post-antibiotic era. The problem is manifold: overuse of antibiotics, slow innovation, and bacterial evolution. 
The overuse of antibiotics in both humans and livestock facilitated the emergence of antibiotic-resistant bacteria. Responsibility falls to health care providers who prescribed antibiotics liberally and to patients who did not complete their prescribed dosages. Acknowledging this problem, the medical community has been training physicians to resist pressure to prescribe antibiotics for children (and their parents) with infections that are likely viral in origin. Educational efforts are also underway to encourage patients to complete the full course of every prescribed antibiotic and not to halt treatment when symptoms ease. The excessive use of antibiotics in food-producing animals is perhaps less manageable because it affects the bottom line of farm operations. For instance, the FDA reported that even though farmers were aware of the risks, antibiotic use in feedstock increased by 16 percent from 2009 to 2012. The development of antibiotics (perhaps a more adequate term would be anti-bacterial agents) indirectly contributed to the problem by being incremental and by nearly stalling two decades ago. Many revolutionary antibiotics were introduced in a first period of development that started in the 1940s and lasted about two decades. Building upon the scaffolds and mechanisms discovered in that era, a second period of incremental development followed over the next three decades, through the 1990s, with roughly three new antibiotics introduced every year. High competition and little differentiation rendered antibiotics less and less profitable, and over a third period covering the last 20 years, pharmaceutical companies have cut the development of new antibiotics down to a trickle. The misguided overuse and misuse of antibiotics, together with the economics of antibiotic innovation, compounded the problem taking place in nature: bacteria evolve and adapt rapidly. 
Current policy initiatives The PCAST report recommended federal leadership and investment to combat antibiotic-resistant bacteria in three areas: improving surveillance, increasing the longevity of current antibiotics through moderated usage, and picking up the pace of development of new antibiotics and other effective interventions. To implement this strategy, PCAST suggested an oversight structure that includes a Director for National Antibiotic Resistance Policy, an interagency Task Force for Combating Antibiotic-Resistant Bacteria, and an Advisory Council to be established by the HHS Secretary. PCAST also recommended increasing federal support from $450 million to $900 million for core activities such as surveillance infrastructure and the development of transformative diagnostics and treatments. In addition, it proposed $800 million in funding for the Biomedical Advanced Research and Development Authority to support public-private partnerships for antibiotic development. The Obama administration took up many of these recommendations and directed their implementation with the aforementioned Executive Order. More recently, it announced a National Strategy for Combating Antibiotic-Resistant Bacteria to implement the recommendations of the PCAST report. 
The national strategy has five pillars. First, slow the emergence and spread of resistant bacteria by decreasing the abusive use of antibiotics in health care as well as in farm animals. Second, establish national surveillance efforts that build surveillance capability across human and animal environments. Third, advance the development and use of rapid and innovative diagnostics to provide more accurate care delivery and data collection. Fourth, accelerate the invention process for new antibiotics, other therapeutics, and vaccines across all stages, including basic and applied research and development. Finally, emphasize the importance of international collaboration and endorse the World Health Organization Action Plan to address antimicrobial resistance. University-industry partnerships An important cause of our antibiotic woes, then, seems to be economic logic. On one hand, pharmaceutical companies have by and large abandoned investment in antibiotic development; competition and high substitutability have led to low prices, and in their financial calculations, pharmaceutical companies cannot justify new development efforts. On the other hand, farmers have found the use of antibiotics highly profitable and thus have no financial incentive to halt it. There is nevertheless a mirror explanation of a political character. The federal government allocates about $30 billion for research in medicine and health through the National Institutes of Health. The government does not seek to crowd out private research investment; rather, the goal is to fund research the private sector would not conduct because its financial return is too uncertain. Economic theory prescribes government intervention to address this kind of market failure. However, it is also government policy to privatize patents on discoveries made with public monies in order to facilitate their transfer from public to private organizations. 
An unanticipated risk of this policy is the rebalancing of the public research portfolio to accommodate the growing demand for the kind of research that feeds into attractive market niches. The risk is that the more aligned public research and private demand become, the less research attention will be directed to medical needs without great market prospects. The development of new antibiotics seems to be just that kind of neglected public medical need. If antibiotics are unattractive to pharmaceutical companies, antibiotic development should be a research priority for the NIH. It is unlikely that Congress will increase public spending for antibiotic R&D in the proportion suggested by PCAST, but the NIH could step in and rebalance its own portfolio to increase antibiotic research. Either course, increasing NIH funding for antibiotics or rebalancing the NIH portfolio, is a political decision that is sure to meet organized resistance even stronger than antibiotic resistance. The second mirror explanation is that farmers have a well-organized lobby. It is no surprise that the Executive Order gingerly walks over recommendations for the farming sector and avoids any hint of an outright ban on antibiotic use, lest the administration be perceived as heavy-handed. Considering the huge magnitude of the problem, a political solution is warranted. Farmers' cooperation in addressing this national problem will have to be traded for subsidies and other extra-market incentives that compensate for lost revenues or higher costs. The administration would do well to work out the politics with farmer associations before they organize in strong opposition to any measure to curb antibiotic use in feedstock. Addressing this challenge adequately will thus require working out solutions to both the economic and the political dimensions of the problem. 
Public-private partnerships, including university-industry collaboration, could prove to be a useful mechanism to balance the two dimensions of the equation. The development of teixobactin mentioned above is a good example of this prescription, as it resulted from a collaboration between the University of Bonn in Germany, Northeastern University, and NovoBiotic Pharmaceuticals, a start-up in Cambridge, Massachusetts. If the NIH cannot secure an increase in research funding for antibiotic development and cannot substantially rebalance its portfolio, it can at least encourage Cooperative Research and Development Agreements as well as university start-ups devoted to developing new antibiotics. To promote public-private and university-industry partnerships, policy coordination is advised. The nascent enterprises will be greatly assisted if the government can help them raise capital by connecting them to venture funding networks or by implementing a loan guarantee program specific to antibiotics. It can also allow for expedited FDA approval, which would lessen the regulatory burden. Likewise, farmers may be convinced to discontinue the risky practice if innovation in animal husbandry can effectively replace antibiotic use. Public-private partnerships, particularly through university extension programs, could provide an adequate framework to test alternative methods, scale them up, and subsidize the transition to new sustainable practices that are not financially painful to farmers. Yikun Chi contributed to this post. Authors Walter D. Valdivia, Michael S. Kinch Image Source: © Reuters Staff / Reuters Full Article
y Responsible innovation: A primer for policymakers By webfeeds.brookings.edu Published On :: Tue, 05 May 2015 00:00:00 -0400 Technical change is advancing at breakneck speed while the institutions that govern innovative activity slog forward trying to keep pace. The lag has created a need for reform in the governance of innovation. Reformers who focus primarily on the social benefits of innovation propose to unmoor the innovative forces of the market. Conversely, those who deal mostly with innovation's social costs wish to constrain it by introducing regulations in advance of technological developments. In this paper, Walter Valdivia and David Guston argue for a different approach to reforming the governance of innovation, one they call "responsible innovation" because it seeks to imbue in the actors of the innovation system a more robust sense of individual and collective responsibility. Responsible innovation appreciates the power of free markets in organizing innovation and realizing social expectations, but it is self-conscious about the social costs that markets do not internalize. At the same time, the actions it recommends do not seek to slow down innovation: they do not constrain the set of options for researchers and businesses, they expand it. Responsible innovation is not a doctrine of regulation, much less an instantiation of the precautionary principle. Innovation and society can evolve down several paths, and the path forward is to some extent open to collective choice. The aim of a responsible governance of innovation is to make that choice more consonant with democratic principles. 
Valdivia and Guston illustrate how responsible innovation can be implemented with three practical initiatives:

Industry: Incorporating values and motivations into innovation decisions that go beyond the profit motive could help industry take a long view of those decisions and better manage its own costs associated with liability and regulation, while reducing the social cost of negative externalities. Consequently, responsible innovation should be an integral part of corporate social responsibility, considering that the latter has already become part of the language of business, from the classroom to the board room, and that it is effectively shaping, in some quarters, corporate policies and decisions.

Universities and National Laboratories: Centers for Responsible Innovation, fashioned after the institutional reform of Institutional Review Boards to protect human subjects in research and the Offices of Technology Transfer created to commercialize academic research, could organize existing responsible innovation efforts at university and laboratory campuses. These centers would formalize the consideration of research proposals' impacts on legal and regulatory frameworks, economic opportunity and inequality, sustainable development and the environment, as well as ethical questions beyond the integrity of research subjects.

Federal Government: Federal policy should improve its protection and support of scientific research while providing mechanisms of public accountability for research funding agencies and their contractors. Demanding a return on investment for every research grant is a misguided approach that devalues research and undermines trust between Congress and the scientific community. At the same time, scientific institutions and their advocates should improve public engagement and demonstrate their willingness and ability to be responsive to societal concerns and expectations about the public research agenda.
Second, if scientific research is a public good, markets are, by definition, not effective at commercializing it. New mechanisms to develop practical applications from federal research with little market appeal should be introduced to counterbalance the emphasis the current technology transfer system places on research that is ready for the market. Third, federal innovation policy needs to be better coordinated with other federal policy, including tax, industrial, and trade policy as well as regulatory regimes. It should also improve coordination with initiatives at the local and state level to improve the outcomes of innovation for each region, state, and metro area.

Authors: Walter D. Valdivia and David H. Guston
The politics of federal R&D: A punctuated equilibrium analysis (Published June 17, 2015)

The fiscal budget has become a casualty of political polarization, and even functions that had enjoyed bipartisan support, like research and development (R&D), are becoming divisive issues on Capitol Hill. As a result, federal R&D is likely to grow pegged to inflation or, worse, decline. With the size of the pie fixed or shrinking, requests for R&D funding increases will trigger an inter-agency zero-sum game that will play out as pointless comparisons of agencies' merit or, worse, as a contest to attract the favor of Congress or the White House. This insidious politics will be made even more so by the growing tendency to equate public accountability with the measurement of performance. Political polarization, tight budgets, and pressure for quantifiable results threaten to undermine the sustainability of public R&D. The situation raises the question: what can federal agencies do to deal with the changing politics of federal R&D?

In a new paper, Walter D. Valdivia and Benjamin Y. Clark apply punctuated equilibrium theory to examine the last four decades of federal R&D, both at the aggregate and the agency level. Valdivia and Clark observe a general upward trend driven by gradual increases. In turn, budget leaps or punctuations are few and far between and do not appear to have lasting effects. As the politics of R&D are stirred up, federal departments and agencies are sure to find that proposing punctuations is becoming more costly and risky. Consequently, agencies will be well advised to secure stable growth in their R&D budgets in the long run rather than pushing for short-term budget leaps.
While appropriations history suggests that the stability of R&D spending resulted from the character of budget politics, in the future stability will need the stewardship of R&D champions who work to institutionalize gradualism, this time in spite of the politics.

Authors: Walter D. Valdivia and Benjamin Y. Clark
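The notion of a budget punctuation can be made concrete with a toy sketch: scan an annual budget series and flag any year-over-year change that exceeds a threshold, distinguishing rare leaps from the gradual drift the authors observe. This is an illustration of the idea only, not the method used in the paper; the budget figures and the 15 percent threshold are hypothetical.

```python
# Toy illustration of a "punctuated equilibrium" reading of a budget
# series: gradual drift most years, with rare abrupt leaps (punctuations).
# The figures and the 15 percent threshold below are hypothetical.

def yoy_changes(series):
    """Year-over-year fractional changes of a budget series."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def find_punctuations(series, threshold=0.15):
    """Return the indices (of the later year) where the budget
    leaped by more than `threshold` in either direction."""
    return [i + 1 for i, change in enumerate(yoy_changes(series))
            if abs(change) > threshold]

# Hypothetical R&D budget levels, in billions of dollars:
budgets = [100, 103, 105, 108, 140, 143, 146]
print(find_punctuations(budgets))  # only the 108 -> 140 leap stands out
```

Under this toy definition, a series of small increases produces no punctuations at all, which mirrors the paper's finding that gradualism, not leaps, drives the overall upward trend.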
Federal R&D: Why is defense dominant yet less talked about? (Published June 25, 2015)

Federal departments and agencies received just above $133 billion in R&D funds in 2013. To put that figure in perspective, World Bank data for 2013 show that 130 countries had a GDP below that level; U.S. R&D is larger than the entire economy of 60 percent of all countries in the world. The chart below shows how those funds are allocated among the most important federal departments and agencies in terms of R&D.

Those looking at these figures for the first time may be surprised to see that the Department of Defense takes about half of the pie. It should be noted, however, that not all federal R&D is destined to preserve U.S. military preeminence in the world. Of non-defense research, 42 percent goes to the much-needed research conducted by the National Institutes of Health, 17 percent to the research of the Department of Energy—owner of 17 celebrated national laboratories—16 percent to space exploration, and 8 percent to understanding the natural and social worlds at a fundamental level. The balance category is lumped together only for visual display, not for its importance; it includes, for instance, the significant work of the National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology.

Despite the impressive size of defense R&D, we hear little about it. While much of defense research and development is classified, in time civilian applications find their way into mainstream commercial uses—the Internet and GPS emerged from research done at DARPA. Far more visible than defense R&D is biomedical research, clean energy research, or news about truly impressive discoveries either in distant galaxies or in the depths of our oceans. What produces this asymmetry in the visibility of federal R&D work?
In a recent Brookings paper, a colleague and I suggest that the answer lies in the prominence of R&D in the agencies' accounting books. In short: how visible R&D is, and how much an agency seeks to discuss it in public fora, depends not on its relative importance but on how large a portion of the agency's budget is dedicated to R&D.

From a budget perspective, we identified two types of agencies performing R&D: those whose main mission is to perform research and development, and those that perform many functions in addition to R&D. For the former, the share of R&D in the discretionary budget is consistently high, while for the latter group, R&D is only a small part of their total budget (see the chart below). This distinction influences how agencies argue for their R&D money, because they will make their case on the most important uses of their budget. If an agency has a low R&D share, it will keep R&D mixed with other functions and programs; for instance, research efforts will be justified only as supporting the main agency mission. In turn, agencies with a high R&D share must argue for their budgets by highlighting the social outcomes of their work. These include three agencies whose primary mission is research (NASA, NSF, NIH) and a fourth (DOE) where research is a significant element of its mission.

There is little question that the four agencies with a high R&D share produce greatly beneficial research for society. Their strategy of promoting their work publicly is not only smart budget politics but also civic and pedagogical, in the sense of helping taxpayers understand that their tax dollars are well spent. However, it is interesting to observe that other agencies may be producing research of equal social impact that flies under the public radar, mainly because those agencies prefer, as a matter of good budget policy, to keep a low profile for their R&D work.
One interesting conclusion for institutional design from this analysis is that promoting a research agency to the level of a government department, or elevating its director to a cabinet-rank position, may bring prominence to its research, not because more and better research will necessarily get done but simply because that agency will seek public recognition for its work in order to justify its budget. Likewise, placing a research agency within a larger department may help conceal and protect its R&D funding; the politics of the department will focus on its main goals, and R&D would recede to a concern of secondary interest in political battles.

In The Politics of Federal R&D, we discuss in more detail the changing politics of the budget and how R&D agencies can respond. The general strategies of concealment and self-promotion are likely to become more important for agencies seeking to protect steady growth of their research and development budgets.

Data sources: R&D data from the American Association for the Advancement of Science's historical trends in federal R&D. Total non-discretionary spending by federal agency from the Office of Management and Budget.

Authors: Walter D. Valdivia

Image Source: © Edgar Su / Reuters
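The high-share/low-share distinction drawn above is, at bottom, simple budget arithmetic. A minimal sketch of that classification follows; the agency figures and the 50 percent cutoff are my own illustrative assumptions, not numbers from the paper.

```python
# Sketch of the budget-share distinction: agencies whose R&D dominates
# their discretionary budget (high share) versus agencies where R&D is
# one function among many (low share).
# The figures and the 50 percent cutoff are hypothetical assumptions.

def rd_share(rd, total_discretionary):
    """Fraction of an agency's discretionary budget devoted to R&D."""
    return rd / total_discretionary

def classify_agencies(agencies, cutoff=0.5):
    """Map each agency to 'high R&D share' or 'low R&D share'."""
    return {name: ("high R&D share" if rd_share(rd, total) >= cutoff
                   else "low R&D share")
            for name, (rd, total) in agencies.items()}

# Hypothetical (R&D, total discretionary) budgets, in billions:
agencies = {"research agency": (5.5, 7.0),
            "multi-function agency": (1.0, 40.0)}
print(classify_agencies(agencies))
```

On this toy rule, the research-mission agency must promote its R&D publicly because R&D is nearly its whole budget case, while the multi-function agency can fold R&D into its broader mission, which is the asymmetry of visibility the post describes.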
Stuck in a patent policy rut: Considerations for trade agreements (Published December 17, 2015)

International development debates of the last four decades have ascribed ever greater importance to intellectual property rights (IPRs). There has also been a significant effort on the part of the U.S. to encourage its trade partners to introduce and enforce patent law modeled after American intellectual property law. Aside from the debate over the impact of patents on innovation, international harmonization has some important consequences for the obduracy of the terms of trade agreements.

The position of the State Department on patents when negotiating trade agreements has consistently been one of defending stronger patent protection. However, the high-tech sector is under reorganization, and the most innovative industries today disagree strongly about the value of patents for innovation. This situation raises the question of why the national posture on patent law is so consistently in favor of industries such as pharmaceuticals or biotech, to the detriment of software developers and Internet-based companies. The State Department defends this posture by arguing that the U.S. has a comparative advantage in sectors dependent on patent protection; therefore, to promote exports, national trade policy should create incentives for partners to come in line with national patent law. This posture will become problematic when America's competitive advantage shifts to sectors that find patents to be a hindrance to innovation, because too much effort will already have been invested in twisting the arm of our trade partners. It will be hard to undo those chapters in trade agreements, particularly after our trade partners have taken pains to pass laws aligned with American law.
Related to the previous concern, the policy inertia effect and inflexibility apply to domestic policy as much as they do to trade agreements. When other nations adopt policy regimes following the American model, advocates of stronger patent protection will use international adoption as an argument in favor of keeping the domestic policy status quo. The pressure we place on our trade partners to strengthen patent protection (via trade agreements and other mechanisms like the Special 301 Report) will be forgotten. Advocates will present those trade partners as having adopted the enlightened laws of the U.S. and ask why American lawmakers would wish to change law that inspires international emulation. Innovation scholar Timothy Simcoe has correctly suggested that harmonization creates inflexibility in domestic policy. Indeed, in a not-too-distant future, the rapid transformation of the economy, new big market players, and emerging business models may give policymakers the feeling that we are stuck in a patent policy rut whose usefulness has expired.

In addition, there are indirect economic effects from projecting national patent law onto trade agreements. If we assume that a club of economies (such as the OECD) generates most of the innovation worldwide while the rest of the countries simply adopt new technologies, the innovation club would have control over the global supply of high value-added goods and services and be able to preserve a terms-of-trade advantage. In this scenario, stronger patent protection may be in the interest of the innovation club to the extent that its competitive advantage remains in industries dependent on patent protection. But should the world economic order change, with the innovation club becoming specialized in digital services while the rest of the world takes on larger segments of manufactures, the advantage may shift outside the innovation club. This is not a far-fetched scenario.
Emerging economies have increased their service economies in addition to their manufacturing capacity; overall, they are better integrated into global supply chains. What is more, these emerging economies are growing consumption markets that will become increasingly relevant globally as they continue to grow faster than rich economies. Nor is the innovation club likely to retain a monopoly on global innovation for long. Within the emerging economies, another club is making great investments in developing innovative capacity. In particular, China, India, Brazil, Mexico, and South Africa (and possibly Russia) have strengthened their innovation systems by expanding public investments in R&D and introducing institutional reforms to foster entrepreneurship. In a world of harmonized patent law, the innovation of this second club may increase its competitive advantage by securing monopolistic control of key high-tech markets. As industries less reliant on patents flourish and the digital economy transforms U.S. markets, an inflexible patent policy regime may actually be detrimental to American terms of trade.

I should stress that these kinds of political and economic effects of America's posture on IPRs in trade policy are not merely speculative. Just as manufactures displaced the once dominant agricultural sector, and services in turn took over as the largest sector of the economy, we can fully expect that the digital economy—with its preference for limited use of patents—will become not only more economically relevant but also more politically influential. The tensions observed in international trade, and especially the considerations above, merit revisiting the rationale for America's posture on intellectual property policy in trade negotiations.

Elsie Bjarnason contributed to this post.

Authors: Walter D. Valdivia

Image Source: © Romeo Ranoco / Reuters
Why should I buy a new phone? Notes on the governance of innovation (Published January 22, 2016)

A review essay of Governance of Socio-technical Systems: Explaining Change, edited by Susana Borrás and Jakob Edler (Edward Elgar, 2014, 207 pages).

Phasing out a useful and profitable technology

I own a Nokia 2330; it's a small brick phone that fits comfortably in the palm of my hand. People have feelings about this: mostly, they marvel at my ability to survive without a smartphone. Concerns go beyond my well-being; once a friend protested that I should be aware of the costs I impose on my friends, for instance, by asking them for precise directions to their houses. Another suggested that I cease trying to be smarter than my phone. But my reason is simple: I don't need a smartphone. Most of the time, I don't even need a mobile phone. I can take and place calls from my home or my office. And who really needs a phone during their commute?

Still, my device will meet an untimely end. My service provider has informed me via text message that it will phase out all 2G service and has explicitly encouraged me to acquire a 3G or newer model. There is a correct if simplistic explanation for this announcement: my provider is not making enough money with my account, and should I switch to a newer device, it will be able to sell me a data plan. The more accurate and more complex explanation is that my mobile device is part of a communications system that is integrated with other economic and social systems. As those other systems evolve, my device is becoming incompatible with them; my carrier has determined that I should be integrated. The system integration is easy to understand from a business perspective.
My carrier may very well be able to make a profit keeping my account as is, along with the accounts of the legion of elderly and low-income customers who use similar devices, and still not find it advantageous in the long run to allow 2G devices on its network. To understand this business strategy, we need go back no further than the introduction of the iPhone, which, in addition to being the most marketable mobile phone, set a new standard platform for mobile devices. Its introduction accelerated a trend underway in the core business of carriers: the shift from voice communication to data streaming, because smartphones can support layers of overlapping services that depend on fast and reliable data transfer. These services include sophisticated log capabilities, web search, geo-location, connectivity to other devices, and, more recently, bio-monitoring. All those services are part of systems of their own, so it makes perfect business sense for carriers to seamlessly integrate mobile communications with all those other systems.

Still, the economic rationale explains only a fraction of the systems integration underway. The communication system of mobile telephony is also integrated with regulatory, social, and cultural systems. Consider the most mundane examples: it's hard to imagine anyone who, having shifted from a paper-and-pencil agenda to an electronic one, decided to switch back afterwards. We are increasingly dependent on GPS services; while GPS may once have served tourists who did not wish to learn how to navigate a new city, it is now a necessity for many people who, without it, are lost in their own hometown. Not needing to remember phone numbers, the time of our next appointment, or how to get back to that restaurant we really liked is a clear example of the integration of mobile devices into our value systems.
There are coordination efforts and mutual accommodation taking place: tech designers seek to adapt to changing values, and we update our values to suit the new conveniences of slick gadgets. Government officials are engaged in the same mutual accommodation. They are asking how many phone booths must be left in public places, how to reach more people with public service announcements, and how to provide transit information in real time, when commuters need it. At the same time, tech designers are considering all existing regulations so that their devices are compliant. Communication and regulatory systems are constantly being re-integrated.

The will behind systems integration

The integration of technical and social systems that results from innovation demands an enormous amount of planning, effort, and conflict resolution. The people involved in this process come from all quarters of the innovation ecology, including inventors, entrepreneurs, financiers, and government officials. Each of these agents may not be able to contemplate the totality of the system integration problem, but they more or less understand how their respective system must evolve to be compatible with interrelated systems that are themselves evolving. There is a visible willfulness in this integration task that scholars of innovation call the governance of socio-technical systems.

In introducing the term governance, I should emphasize that I do not mean merely the actions of governments or the actions of entrepreneurs. Rather, I mean the effort of all agents involved in the integration and re-integration of systems triggered by innovation; I mean all the coordination and mutual accommodation of agents from interrelated systems. And there is no single vehicle to transport all the relevant information for these agents. A classic representation of markets suggests that prices carry all the relevant information agents need to make optimal decisions.
But it is impossible to project this model onto innovation because, as I suggested above, innovation does not adhere exclusively to economic logic; cultural and political values are also at stake. The governance task is therefore fragmented into pieces and assigned to each of the participants in the socio-technical systems involved, and they cannot resolve it as a profit-maximization problem. Instead, the participants must approach governance as a problem of design, where the goal could be characterized as reflexive adaptation. By adaptation I mean seeking to achieve inter-system compatibility. By reflexive I mean that each actor must realize that their actions trigger adaptation measures in other systems. Thus, they cannot passively adapt; rather, they must anticipate the sequence of accommodations in the interaction with other agents. This is one of the most important aspects of the governance problem, because all too often neither technical nor economic criteria will suffice; quite regularly, coordination must be negotiated, which is to say, innovation entails politics.

The idea of governance of socio-technical systems is daunting. How do we even begin to understand it? What modes of governance exist? What are the key dimensions for understanding the integration of socio-technical systems? And perhaps most pressing, who prevails in disputes about coordination and accommodation? Fortunately, Susana Borrás, of the Copenhagen Business School, and Jakob Edler, of the University of Manchester, both distinguished professors of innovation, have collected a set of case studies that shed light on these problems in an edited volume entitled Governance of Socio-technical Systems: Explaining Change. What is more, they offer a very useful conceptual framework of governance that is worth reviewing here.
While this volume will be of great interest to scholars of innovation—and it is written in scholarly language—I think it has great value for policymakers, entrepreneurs, and all agents involved in a practical manner in the work of innovation.

Organizing our thinking on the governance of change

The first question that Borrás and Edler tackle is how to characterize the different modes of governance. They start out with a heuristic typology across two central categories: what kinds of agents drive innovation, and how the actions of these agents are coordinated. Agents can represent the state or civil society, and actions can be coordinated via dominant or non-dominant hierarchies.

                                           | Change led by state actors                  | Change led by societal actors
Coordination by dominant hierarchies       | Traditional deference to technocratic       | Monopolistic or oligopolistic
                                           | competence: command and control.            | industrial organization.
Coordination by non-dominant hierarchies   | State agents as primus inter pares.         | More competitive industries with
                                           |                                             | little government oversight.

Source: Adapted from Borrás and Edler (2015), Table 1.2, p. 13.

This typology is very useful for understanding why different innovative industries have different dynamics: they are governed differently. For instance, we can readily understand why consumer software and pharmaceuticals are so at odds regarding patent law. The strict (and very necessary) regulation of drug production and commercialization, coupled with the oligopolistic structure of that industry, creates the need and the opportunity to advocate for patent protection, which is equivalent to a government subsidy. In turn, the highly competitive environment of consumer software development and its low level of regulation foster an environment where patents hinder innovation. Government intervention is neither needed nor wanted; the industry wishes to regulate itself.
This typology is also useful for understanding why open source applications have gained currency much faster in the consumer segment than in the contractor segment of software producers. Examples of the latter are industry-specific software (e.g., programs to operate machinery, the stock exchange, or ATMs) and software to support national security agencies. These contractors demand proprietary software and depend on the secrecy of the source code. The software industry is not monolithic, and while it is highly innovative in all its segments, the innovation taking place varies greatly by its mode of governance.

Furthermore, we can understand the inherent conflicts in the governance of science. In principle, scientists are led by curiosity and organize their work in a decentralized and organic fashion. In practice, most science is driven by mission-oriented governmental agencies and is organized in a rigid hierarchical system. Consider the centrality of prestige in science and how it is awarded by peer review, a system controlled by the top brass of each discipline. There is a nearly irreconcilable contrast between the self-image of science and its actual governance. Using the Borrás-Edler typology, we could say that scientists imagine themselves as citizens of the south-east quadrant while they really inhabit the north-west quadrant.

There are practical lessons from applying this typology to current controversies. For instance, no policy instrument, patents included, can have the same effect on all innovation sectors, because the effect will depend on the mode of governance of each sector. This corollary may sound intuitive, yet it is really at variance with the current terms of the debate on patent protection, where assertions of its effect on innovation, in either direction, are rarely qualified.

The second question Borrás and Edler address is that of the key analytical dimensions for examining socio-technical change.
To this end, they draw from an ample selection of social theories of change. First, economists and sociologists fruitfully debate the advantages of social inquiry focused on agency versus institutions. Here, the synthesis offered is reminiscent of Herbert Simon's "bounded rationality," where the focus turns to agent decisions constrained by institutions. Second, policy scholars as well as sociologists emphasize the engineering of change. Change can be accomplished with discrete instruments such as laws and regulations, or with diffuse instruments such as deliberation, political participation, and techniques of conflict resolution. Third, political scientists underscore the centrality of power in the adjudication of disputes produced by systems' change and integration. Borrás and Edler have condensed these perspectives into an analytical framework that boils down to three clean questions: who drives change? (focus on agents bounded by institutions); how is change engineered? (focus on instrumentation); and why is it accepted by society? (focus on legitimacy). The case studies contained in this edited volume illustrate the deployment of this framework with empirical research.

Standards, sustainability, incremental innovation

Arthur Daemmrich (Chapter 3) tells the story of how the German chemical company BASF succeeded in marketing the biodegradable polymer Ecoflex. It is worth noting the dependence of BASF on government funding to develop Ecoflex, and on the German Institute for Standardization (DIN), which makes markets by setting standards. With this technology, BASF capitalized on the growing demand in Germany for biodegradables and, through its intense cooperation with DIN, helped establish a standard that differentiated Ecoflex from the competition. By focusing on the enterprise (the innovation agent) and its role in engineering the market for its product by setting standards that would favor it, this story reveals the process of legitimation of a new technology.
In effect, the certification of DIN was accepted by agribusinesses that sought to utilize biodegradable products.

If BASF is an example of innovation by standards, Allison Loconto and Marc Barbier (Chapter 4) show the strategies of governing by standards. They take the case of the International Social and Environmental Accreditation and Labelling alliance (ISEAL). ISEAL, an advocate of sustainability, positions itself as a coordinating broker among standard-developing organizations by offering "credibility tools" such as codes of conduct, best practices, impact assessment methods, and assurance codes. The organization advocates what is known as the tripartite standards regime (TSR), a system of checks and balances to increase the credibility of producers complying with standards. The TSR assigns standard-setting, certification, and accreditation of the certifiers to separate and independent bodies. The case illustrates how producers, their associations, and broker organizations work to bestow upon standards their most valuable attribute: credibility. The authors are cautious not to conflate credibility with legitimacy, but there is no question that credibility is part of the process of legitimizing technical change. In analyzing the construction of credibility, these authors focus on the third question of the framework, legitimizing innovation, and from that vantage point they illuminate the role of the actors and instruments that will guide innovations in sustainability markets.

While standards are instruments of non-dominant hierarchies, the classic instrument of dominant hierarchies is regulation. David Barberá-Tomás and Jordi Molas-Gallart tell of the tragic consequences of an innovation in hip-replacement prosthetics that went terribly wrong. It is estimated that about 30,000 replaced hips failed.
The FDA, under the 1976 Medical Device Amendments, allows incremental improvements to medical devices to go to market after only laboratory trials, on the assumption that any substantive innovations have already been tested in regular clinical trials. This policy was designed as an incentive for innovation, a relief from high regulatory costs. However, the authors argue, when products have been continually improved for a number of years after the original release, any marginal improvement comes at a higher cost or higher risk—a point they refer to as the late stage of the product life-cycle. This has tilted the balance in favor of risky improvements, as illustrated by the hip prosthesis case. The story speaks to the integration of technical and cultural systems: a policy that encourages incremental innovation may alter the way medical device companies assess the relative risk of their innovations, precisely because they focus on incremental improvements over radical ones. Returning to the analytical framework, the vantage point of regulation (instrumentation) elucidates the particular complexities and biases in agents' decisions.

Two additional case studies discuss the discontinuation of the incandescent light bulb (ILB) and the emergence of translational research, both in Western Europe. The first, authored by Peter Stegmaier, Stefan Kuhlmann, and Vincent R. Visser (Chapter 6), focuses on a relatively smooth transition. There was wide support for replacing ILBs, which translated into political will and a market willing to purchase new energy-efficient bulbs. In effect, the new technical system was relatively easy to re-integrate into a social system already in flux—public values had shifted in Europe to favor sustainable consumption—and the authors are thus able to emphasize how agents make sense of the transition.
Socio-technical change does not have a single meaning: for citizens it means living in congruence with their values; for policy makers, accruing political capital; for entrepreneurs, new business opportunities. The case by Etienne Vignola-Gagné, Peter Biegelbauer and Daniel Lehner (Chapter 7) offers a similar lesson about governance. My reading of their multi-site study of the implementation of translational research, a management movement that seeks to bridge laboratory and clinical work in medical research, is that it reveals how the different agents involved make sense of this organizational innovation. Entrepreneurs see a new market niche, researchers strive to increase the impact of their work, and public officials align their advocacy for translation with the now-regular calls to render publicly funded research more productive. Both chapters illuminate a lesson that is as old as it is useful to remember: technological innovation is interpreted in as many ways as there are agents participating in it.

Innovation for whom?

The framework and illustrations of this book are useful for those of us interested in the governance of system integration. The typology of modes of governance and the three vantage points from which empirical analysis can be deployed are particularly valuable. Further development of this framework should take up the question of how political power is redistributed as an effect of innovation and of the system integration and re-integration it triggers. The question is pressing because the outcomes of innovation vary as power structures are reinforced or weakened by the emergence of new technologies, not to mention ongoing destabilizing forces such as social movements. Put another way, the framework should be expanded to explain the circumstances in which innovation exacerbates inequality.
The expanded framework should probe whether mutual accommodation is asymmetric across socio-economic groups, which is the same as asking: are poor people asked to do more of the adapting to new technologies? These questions have great relevance to contemporary debates about economic and political inequality. I believe that Borrás and Edler and their colleagues have done us a great service in organizing a broad but dispersed literature and offering an intuitive and comprehensive framework for studying the governance of innovation. The conceptual and empirical parts of the book are instructive, and I look forward to the papers that will follow to test this framework. We need to better understand the governance of socio-technical change and the dynamics of systems integration; without a unified framework of comparison, the ongoing efforts in various disciplines will not add up to a greater understanding of the big picture.

I also have a selfish reason to like this book: it helps me make sense of my carrier's push to integrate my value system into their technical system. If I decide to adapt to a newer phone, I can readily do so because I have the time and other resources. But that may not be the case for many customers of 2G devices who have neither the resources nor the inclination to learn to use more complex devices. For that reason alone, I'd argue that this sort of innovation-led systems integration could be done more democratically. Still, I could meet my carrier's decision with indifference: when the service is disconnected, I could simply try to get by without the darn toy.

Note: Thanks to Joseph Schuman for an engaging discussion of this book with me.

Authors: Walter D. Valdivia

Image Source: © Dominic Ebenbichler / Reuters

Full Article
y What drove Biden’s big wins on Super Tuesday? By webfeeds.brookings.edu Published On :: Wed, 04 Mar 2020 22:59:24 +0000 Brookings Senior Fellow John Hudak looks at the results of the Super Tuesday presidential primaries and examines the factors that fueled former Vice President Joe Biden's dramatic comeback, why former Mayor Bloomberg's unlimited budget couldn't save his candidacy, and which upcoming states will be the true tests of Biden and Bernie Sanders's competing visions for… Full Article
y Why Bernie Sanders vastly underperformed in the 2020 primary By webfeeds.brookings.edu Published On :: Fri, 20 Mar 2020 16:43:18 +0000 Senator Bernie Sanders entered the 2020 Democratic primary race with a wind at his back. With a narrow loss to Hillary Clinton in 2016 and a massive political organization, Mr. Sanders set the tone for the policy conversation in the race. Soon after announcing, the Vermont senator began raising record amounts of money, largely online… Full Article