pr How to meet SDG and climate goals: Eight lessons for scaling up development programs By webfeeds.brookings.edu Published On :: Tue, 10 May 2016 09:30:00 -0400 To achieve the desired outcomes of the Sustainable Development Goals as well as the global targets from the Paris COP21 Climate Summit by 2030, governments will have to find ways to meet the top-down objectives with bottom-up approaches. A systematic focus on scaling up successful development interventions could serve to bridge this gap, or what’s been called the “missing middle.” However, the question remains how to actually address the challenge of scaling up.

When Arna Hartmann, adjunct professor of international development, and I first looked at the scaling up agenda in development work in the mid-2000s, we concluded that development agencies were insufficiently focused on supporting the scaling up of successful development interventions. The pervasive focus on one-off projects all too often resulted in what I’ve come to refer to as “pilots to nowhere.” As a first step to fix this, we recommended that each aid organization carry out a review to ensure that it focuses effectively on scaling up.

The institutional dimension is critical, given the role institutions play in developing and implementing scaling up pathways. Of course, individuals serve as champions, designers, and implementers, but experience illustrates that if individuals lack a strong link to a supportive institution, scaling up is most likely to be short-lived and unsustainable. “Institutions” include many different types of organizations, such as government ministries and departments, private firms and social enterprises, civil society organizations, and both public and private external donors and financiers. The Brookings book “Getting to Scale: How to Bring Development Solutions to Millions of Poor People” explores the opportunities and challenges that such organizations face, on their own or, better yet, partnering with each other, in scaling up the development impact of their successful interventions.

Eight lessons in scaling up

Over the past decade I have worked with 10 foreign aid institutions—multilateral and bilateral agencies, as well as big global non-governmental organizations—helping them to focus systematically on scaling up operational work and developing approaches to do so. There are common lessons that apply across the board to these agencies, with one salutary example being the International Fund for Agricultural Development (IFAD), which has tackled the scaling up agenda systematically and persistently. Following are eight takeaway lessons I gleaned from my work with IFAD:

Look into the “black box” of institutions. It is not enough to decide that an institution should focus on and support scaling up of successful development interventions. You actually need to look at how institutions function in terms of their mission statement and corporate strategy, their policies and processes, their operational instruments, their budgets, management and staff incentives, and their monitoring and evaluation practices. Check out the Brookings working paper that summarizes the results of a scaling up review of IFAD.

Scaling needs to be pursued institution-wide. Tasking one unit in an organization with innovation and scaling up, or creating special outside entities (like the Global Innovation Fund set up jointly by a number of donor agencies), is a good first step.
But ultimately, a comprehensive approach must be mainstreamed so that all operational activities are geared toward scaling up.

Scaling up must be championed from the top. The governing boards and leadership of the institutions need to commit to scaling up and persistently stay on message, since, like any fundamental institutional change, effectively scaling up takes time, perhaps a decade or more as with IFAD.

The scaling up process must be grown within the institution. External analysis and advice from consultants can play an important role in institutional reviews. But for lasting institutional change, the leadership must come from within and involve broad participation from managers and staff in developing operational policies and processes that are tailored to an institution’s specific culture, tasks, and organizational structure.

A well-articulated operational approach for scaling up needs to be put in place. For more on this, take a look at a recent paper by Larry Cooley and me that reviews two helpful operational approaches, which are also covered in Cooley’s blog. For the education sector, the Center for Universal Education at Brookings just published its report “Millions Learning,” which provides a useful scaling up approach specifically tailored to the education sector.

Operational staff need to receive practical guidance and training. It is not enough to tell staff that they have to focus on scaling up and then give them a general framework. They also need practical guidance and training, ideally tailored to the specific business lines they are engaged in. IFAD, for example, developed overall operational guidelines for scaling up, as well as guidance notes for specific areas of engagement, including livestock development, agricultural value chains, land tenure security, etc. This guidance and training ideally should also be extended to consultants working with the agency on project preparation, implementation, and evaluation, as well as to the agency’s local counterpart organizations.

New approaches to monitoring and evaluation (M&E) have to be crafted. Typically the M&E for development projects is backward looking and focused on accountability, narrow issues of implementation, and short-term results. Scaling up requires continuous learning, structured experimentation, and innovation based on evidence, including whether the enabling conditions for scaling up are being established. And it is important to monitor and evaluate the institutional mainstreaming process of scaling up to ensure that it is effectively pursued. I’d recommend looking at how the German Agency for International Development (GIZ) carried out a corporate-wide evaluation of its scaling up experience.

Scaling up helps aid organizations mobilize financial resources. Scaling up leverages limited institutional resources in two ways: First, an organization can multiply the impact of its own financial capacity by linking up with public and private agencies and building multi-stakeholder coalitions in support of scaling up. Second, when an organization demonstrates that it is pursuing not only one-off results but also scaled up impact, funders or shareholders of the organization tend to be more motivated to support the organization. This certainly was one of the drivers of IFAD’s successful financial replenishment consultation rounds over the last decade.

By adopting these lessons, development organizations can actually begin to scale up to the level necessary to bridge the missing middle.
The key will be to assure that a focus on scaling up is not the exception but instead becomes ingrained in the institutional DNA. Simply put, in designing and implementing development programs and projects, the question needs to be answered, “What’s next, if this intervention works?” Authors Johannes F. Linn Full Article
pr Scaling up social enterprise innovations: Approaches and lessons By webfeeds.brookings.edu Published On :: Thu, 07 Jul 2016 09:53:00 -0400 In 2015 the international community agreed on a set of ambitious sustainable development goals (SDGs) for the global society, to be achieved by 2030. One of the lessons that the implementation of the Millennium Development Goals (MDGs) has highlighted is the importance of a systematic approach to identify and sequence development interventions—policies, programs, and projects—to achieve such goals at a meaningful scale. The Chinese approach to development, which consists of identifying a problem and long-term goal, testing alternative solutions, and then implementing those that are promising in a sustained manner, learning and adapting as one proceeds—Deng Xiaoping’s “crossing the river by feeling the stones”—is an approach that holds promise for successful achievement of the SDGs. Having observed the Chinese way, then World Bank Group President James Wolfensohn in 2004, together with the Chinese government, convened a major international conference in Shanghai on scaling up successful development interventions, and in 2005 the World Bank Group (WBG) published the results of the conference, including an assessment of the Chinese approach (Moreno-Dodson 2005). Some ten years later, the WBG once again is addressing the question of how to support scaling up of successful development interventions, at a time when the challenge and opportunity of scaling up have become a widely recognized issue for many development institutions and experts.

In parallel with the recognition that scaling up matters, the development community is now also focusing on social enterprises (SEs), a new set of actors falling between the traditionally recognized public and private sectors. We adopt here the World Bank’s definition of a “social enterprise” as a social-mission-led organization that provides sustainable services to Base of the Pyramid (BoP) populations. This is broadly in line with other existing definitions for the sector and reflects the World Bank’s primary interest in social enterprises as a mechanism for supporting service delivery for the poor. Although social enterprises can adopt various organizational forms—business, nongovernmental organizations (NGOs), and community-based organizations are all forms commonly adopted by social enterprises—they differ from private providers principally by combining three features: operating with a social purpose, adhering to business principles, and aiming for financial sustainability. Since traditional private and public service providers frequently do not reach the poorest people in developing countries, social enterprises can play an important role in providing key services to those at the “base of the pyramid.” (Figure 1)

Figure 1. Role of SE sector in public service provision

Social enterprises often start at the initiative of a visionary entrepreneur who sees a significant social need, whether in education, health, sanitation, or microfinance, and who responds by developing an innovative way to address the perceived need, usually by setting up an NGO or a for-profit enterprise. Social enterprises and their innovations generally start small.
When successful, they face an important challenge: how to expand their operations and innovations to meet the social need at a larger scale. Development partner organizations—donors, for short—have recognized the contribution that social enterprises can make to find and implement innovative ways to meet the social service needs of people at the base of the pyramid, and they have started to explore how they can support social enterprises in responding to these needs at a meaningful scale. The purpose of this paper is to present a menu of approaches for addressing the challenge of scaling up social enterprise innovations, based on a review of the literature on scaling up and on social enterprises. The paper does not aim to offer specific recommendations for entrepreneurs or blueprints and guidelines for the development agencies. The range of settings, problems, and solutions is too wide to permit that. Rather, the paper provides an overview of ways to think about and approach the scaling up of social enterprise innovations. Where possible, the paper also refers to specific tools that can be helpful in implementing the proposed approaches. Note that we talk about scaling up social enterprise innovations, not about social enterprises. This is because it is the innovations and how they are scaled up that matter. An innovation may be scaled up by the social enterprise where it originated, by handoff to a public agency for implementation at a larger scale, or by other private enterprises, small or large. This paper is structured in three parts: Part I presents a general approach to scaling up development interventions. This helps establish basic definitions and concepts. Part II considers approaches for the scaling up of social enterprise innovations. Part III provides a summary of the main conclusions and lessons from experience. A postscript draws out implications for external aid donors. Examples from actual practice are used to exemplify the approaches and are summarized in Annex boxes. Downloads Download the full paper (PDF) Authors Natalia AgapitovaJohannes F. Linn Full Article
pr World Leadership for an International Problem By webfeeds.brookings.edu Published On :: Editor's Note: For Campaign 2012, Ted Gayer wrote a policy brief proposing ideas for the next president on climate change. The following paper is a response to Gayer’s piece from Katherine Sierra. Charles Ebinger and Govinda Avasarala also prepared a response identifying five critical challenges the next president must address to help secure the nation’s energy… Full Article
pr Web Chat: Climate Change and the Presidential Election By webfeeds.brookings.edu Published On :: As the nation’s economy continues a slow and difficult recovery, climate change has so far received little attention on the presidential campaign trail. With the world’s carbon footprint soaring and America approaching an energy crossroads, however, the next president will be forced to make critical decisions regarding clean energy and the future of fossil fuels… Full Article
pr The Green Climate Fund’s Private Sector Facility: The Case for Private Sector Participation on the Board By webfeeds.brookings.edu Published On :: Tue, 21 Aug 2012 18:10:00 +0000 EXECUTIVE SUMMARY The Green Climate Fund’s (GCF) Private Sector Facility can enhance the likelihood of achieving its goals of scale-up, transformation, and leverage by including individual voting members on its board who bring private sector skills and experience. This would build on growing precedent in the boards of other global funds, as well as in… Full Article
pr Helping close divisions in the US: Insights from the American Well-Being Project By webfeeds.brookings.edu Published On :: Tue, 12 Jun 2018 20:32:59 +0000 Issues of despair in the United States are diverse, widespread, and politically fueled, ranging from concentrated poverty and crime in cities to the opioid crisis plaguing poor rural towns. Local leaders and actors in disconnected communities need public policy resources and inputs beyond what has traditionally been available. Scholars at Brookings and Washington University in… Full Article
pr Do social protection programs improve life satisfaction? Lessons from Iraq By webfeeds.brookings.edu Published On :: Tue, 10 Jul 2018 14:54:06 +0000 There is much debate now—in both developed and developing economies—on the merits or de-merits of universal basic income (UBI), with strong opinions on either side. Advocates clash with those who see targeted transfers to the poor—such as the conditional cash transfers first pioneered in Latin America—as better at providing incentives for long-term investments in health,… Full Article
pr Progress paradoxes in China, India, and the US: A tale of growing but unhappy countries By webfeeds.brookings.edu Published On :: Sat, 20 Oct 2018 13:27:57 +0000 What we know depends on what we measure. Traditional income-based metrics, such as GDP and poverty headcounts, tell a story of unprecedented economic development, as seen by improvements in longevity, health, and literacy. Yet, well-being metrics, which are based on large-scale surveys of individuals around the world and assess their daily moods, satisfaction with life,… Full Article
pr Do social protection programs improve life satisfaction? By webfeeds.brookings.edu Published On :: Wed, 12 Dec 2018 17:14:15 +0000 An extensive literature examines the link between social protection-related public spending and objective outcomes of well-being such as income, employment, education, and health (see Department for International Development [DFID], 2011; ILO, 2010; World Bank, 2012). Much less attention has been given to how government social protection policies influence individuals’ own sense of well-being, particularly in… Full Article
pr Progress paradoxes and sustainable growth By webfeeds.brookings.edu Published On :: Wed, 19 Dec 2018 21:11:29 +0000 The past century is full of progress paradoxes, with unprecedented economic development, as evidenced by improvements in longevity, health, and literacy. At the same time, we face daunting challenges such as climate change, persistent poverty in poor and fragile states, and increasing income inequality and unhappiness in many of the richest countries. Remarkably, some of… Full Article
pr Why Bridgegate proves we need fewer hacks, machines, and back room deals, not more By webfeeds.brookings.edu Published On :: Wed, 06 May 2015 15:30:00 -0400 I had been mulling a rebuttal to my colleague and friend Jon Rauch’s interesting—but wrong—new Brookings paper praising the role of “hacks, machines, big money, and back room deals” in democracy. I thought the indictments of Chris Christie’s associates last week provided a perfect example of the dangers of all of that, and so of why Jon was incorrect. But in yesterday’s L.A. Times, he beat me to it, himself defending the political morality (if not the efficacy) of their actions, and in the process delivering a knockout blow to his own position. Bridgegate is a perfect example of why we need fewer "hacks, machines, big money, and back room deals" in our politics, not more. There is no justification whatsoever for government officials abusing their powers, stopping emergency vehicles and risking lives, making kids late for school and parents late for their jobs to retaliate against a mayor who withholds an election endorsement. We vote in our democracy to make government work, not break. We expect that officials will serve the public, not their personal interests. This conduct weakens our democracy, not strengthens it. It is also incorrect that, as Jon suggests, reformers and transparency advocates are, in part, to blame for the gridlock that sometimes afflicts our American government at every level. As my co-authors and I demonstrated at some length in our recent Brookings paper, “Why Critics of Transparency Are Wrong,” and in our follow-up Op-Ed in the Washington Post, reform and transparency efforts are no more responsible for the current dysfunction in our democracy than they were for the gridlock in Fort Lee. Indeed, in both cases, “hacks, machines, big money, and back room deals” are a major cause of the dysfunction. The vicious cycle of special interests, campaign contributions and secrecy too often freeze our system into stasis, both on a grand scale, when special interests block needed legislation, and on a petty scale, as in Fort Lee. The power of megadonors has, for example, made dysfunction within the House Republican Caucus worse, not better. Others will undoubtedly address Jon’s new paper at length. But one other point is worth noting now. As in foreign policy discussions, I don’t think Jon’s position merits the mantle of political “realism,” as if those who want democracy to be more democratic and less corrupt are fluffy-headed dreamers. It is the reformers who are the true realists. My co-authors and I in our paper stressed the importance of striking realistic, hard-headed balances, e.g. in discussing our non-absolutist approach to transparency; alas, Jon gives that the back of his hand, acknowledging our approach but discarding the substance to criticize our rhetoric as “radiat[ing] uncompromising moralism.” As Bridgegate shows, the reform movement’s “moralism" correctly recognizes the corrupting nature of power, and accordingly advocates reasonable checks and balances. That is what I call realism. So I will race Jon to the trademark office for who really deserves the title of realist! Authors Norman Eisen Image Source: © Andrew Kelly / Reuters Full Article
pr The Iran deal and the Prague Agenda By webfeeds.brookings.edu Published On :: Sun, 12 Jul 2015 17:15:00 -0400 Editor's Note: : We’re hosting a conversation on Markaz on the Iranian nuclear talks, debating the merits of a deal, as well as the broader issues at stake for the United States and the region. This piece originally appeared in The Huffington Post. As we near what may be the endgame of the current negotiations with Iran, I am reminded of the place where President Obama announced the overarching strategy that helped produce this moment: Prague. After stating his readiness to speak to Iran in a Democratic primary debate in 2007, and following that up postelection in 2009 with a series of initial statements directed to the Iranians, the president chose the Czech capital to lay out his vision of dealing with the dangers of nuclear weapons in April 2009. That included emphasizing that Iran would not be permitted to obtain a nuclear weapon on his watch: "Rules must be binding. Violations must be punished. Words must mean something. The world must stand together to prevent the spread of these weapons." As a result of that 2009 speech, the president's nuclear strategy became known as the Prague Agenda. I had the privilege to travel with President Obama back to Prague in April 2010 to witness the signing of a major accomplishment in another area under the Prague Agenda, namely the New START treaty. By the following year, April 2011, I was in Prague as U.S. ambassador. That year, and in the each year that followed, we held an annual Prague Agenda conference to assess the steps that had been taken and the challenges that lay ahead. In the years since, there has been steady progress in each of the four main areas the president laid out on that spring day in Prague in 2009. New START was a step forward on his first objective, to reduce the risks posed by existing nuclear weapons. Another goal, preventing nuclear terror by safeguarding materials and improving safety, has since been the subject of a series of successful Nuclear Security Summits in Washington, Seoul, and The Hague. I saw first-hand the president's personal commitment to a third objective articulated in Prague: to promote the peaceful use of nuclear energy. I was, for example, present in the Oval Office in October 2011 when the president and the Czech prime minister met. President Obama advocated for the use of civil nuclear power as a part of the Czech energy mix (and also to achieve energy independence from Russia). That approach has been replicated in administration policy supporting civil nuclear energy in the United States and around the world. Now, with the possible Iran deal, progress under the Prague Agenda's final prong is in reach: holding to account a state which had violated its nuclear obligations under international treaties. I am not of the school that believes the president needs to secure an Iran deal to build his legacy. That was never the case; having known him for almost a quarter of a century, since we were law students together, and having worked for him for six years, first in the White House and then as ambassador, I can attest that those kinds of considerations do not enter into critical decisions like this one. Even the president's strongest critics have to admit that legacy is, as a matter of logic, much less of a consideration after the recent breakthroughs on the Affordable Care Act and on Trade Promotion Authority. 
Instead, as the comprehensive nature of the Prague Agenda itself suggests, President Obama is pursuing a deal out of principle. He is acting from his conviction that a good agreement with Iran represents another step toward making the U.S., our allies, and the world safe from nuclear terror. It is that ambition that has driven the president's formulation and consistent pursuit of each of the four elements of the Prague Agenda, the obscure aspects just as much as the headline-making ones. Of course, as the president himself has repeatedly emphasized, the deal must be a good one. That is why I recently joined a bipartisan group of experts convened by the Washington Institute for Near East Policy in signing a statement laying out criteria for what any deal with Iran must at a minimum contain in five core areas: monitoring and verification; possible military dimensions; advanced centrifuges; sanctions relief; and consequences of violations. We also agreed on the importance of complementing any agreement with a strong deterrence policy and a comprehensive regional strategy. I have been encouraged by the warm reception for our statement from all corners, and by the strong tone struck by the American negotiators in Vienna this week. They recognize that willingness to walk away is the surest path to securing a good deal. If such a deal can be struck that meets the criteria in our bipartisan statement, that will be another stride forward under the Prague Agenda — perhaps the biggest yet. Authors Norman Eisen Image Source: © Petr Josek Snr / Reuters Full Article
pr Principles for Transparency and Public Participation in Redistricting By webfeeds.brookings.edu Published On :: Scholars from the Brookings Institution and the American Enterprise Institute are collaborating to promote transparency in redistricting. In January 2010, an advisory board of experts and representatives of good government groups was convened in order to articulate principles for transparent redistricting and to identify barriers to the public and communities who wish to create redistricting… Full Article
pr Using Crowd-Sourced Mapping to Improve Representation and Detect Gerrymanders in Ohio By webfeeds.brookings.edu Published On :: Analysis of dozens of publicly created redistricting plans shows that map-making technology can improve political representation and detect a gerrymander. In 2012, President Obama won the vote in Ohio by three percentage points, while Republicans held a 13-to-5 majority in Ohio’s delegation to the U.S. House. After redistricting in 2013, Republicans held 12 of Ohio’s… Full Article
pr The Impact of Domestic Drones on Privacy, Safety and National Security By webfeeds.brookings.edu Published On :: Legal and technology experts hosted a policy discussion on how drones and forthcoming Federal Aviation Agency regulations into unmanned aerial vehicles will affect Americans’ privacy, safety and the country’s overall security on April 4, 2012 at Brookings. The event followed a new aviation bill, signed in February, which will open domestic skies to “unmanned aircraft… Full Article
pr How Promise programs can help former industrial communities By webfeeds.brookings.edu Published On :: Wed, 17 Jul 2019 14:08:06 +0000 The nation is seeing accelerating gaps in economic opportunity and prosperity between more educated, tech-savvy, knowledge workers congregating in the nation’s “superstar” cities (and a few university-town hothouses) and residents of older industrial cities and the small towns of “flyover country.” These growing divides are shaping public discourse, as policymakers and thought leaders advance recipes… Full Article
pr When the champagne is finished: Why the post-Paris parade of climate euphoria is largely premature By webfeeds.brookings.edu Published On :: The new international climate change agreement has received largely positive reviews despite the fact that many years of hard work will be required to actually turn “Paris” into a success. As with all international agreements, the Paris agreement too will have to be tested and proven over time. The Eiffel Tower is engulfed in fog… Full Article
pr The presidential candidates’ views on energy and climate By webfeeds.brookings.edu Published On :: This election cycle, what will separate Democrats from Republicans on energy policy and their approach to climate change? Republicans tend to be fairly strong supporters of the fossil fuel industry, and to various degrees deny that climate change is occurring. Democratic candidates emphasize the importance of further expanding the share of renewable energy at the… Full Article
pr Implementing Common Core: The problem of instructional time By webfeeds.brookings.edu Published On :: Thu, 09 Jul 2015 00:00:00 -0400 This is part two of my analysis of instruction and Common Core’s implementation. I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.” Having discussed “the “good” in part one, I now turn to “the bad.” One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time. A Model of Time and Learning In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record. Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn. The numerator, time spent learning, has also been given the term opportunity to learn. The denominator, time needed to learn, is synonymous with student aptitude. By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] The source of that variation is largely irrelevant to the constraints placed on instructional decisions. Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator. Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like. There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time. The model has had a profound influence on educational thought. As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article. Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll. It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum. This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning. Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today. Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time. 
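Expressed compactly, Carroll's model is just a ratio. A minimal rendering of the verbal description above (the functional notation f(·) is mine, not Carroll's):

\[
\text{degree of learning} \;=\; f\!\left(\frac{\text{time spent on learning (opportunity to learn)}}{\text{time needed to learn (aptitude)}}\right)
\]

When the ratio falls below one, students are not given an adequate opportunity to learn; when it rises well above one, instructional time is being spent on content students have already mastered.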
Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator. When that happens, students don’t have an adequate opportunity to learn. They need more time. As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time. Time is a limited resource that teachers deploy in the production of learning. Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary. Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics. Excessive instructional time may also negatively affect student engagement. Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need. Standard Algorithms and Alternative Strategies Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M. A standard algorithm is a series of steps designed to compute accurately and quickly. In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers. Most readers of this post will recognize the standard algorithm for addition. It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed. The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards. Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade. This opens the door for a lot of wasted time. Garelick questioned the wisdom of teaching several alternative strategies for addition. He asked whether, under the Common Core, only the standard algorithm could be taught—or at least, could it be taught first. As he explains: Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 
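For concreteness, the standard addition algorithm at issue in this exchange is the right-to-left column procedure described above. A minimal sketch in Python, purely illustrative (the function name and representation are mine; nothing like this appears in the CCSS-M):

```python
def standard_addition(a: int, b: int) -> int:
    """Column addition for non-negative integers: align digits by place value,
    add columns right to left, and carry (regroup) as needed."""
    a_digits = [int(d) for d in str(a)][::-1]  # least significant digit first
    b_digits = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        column = carry
        column += a_digits[i] if i < len(a_digits) else 0
        column += b_digits[i] if i < len(b_digits) else 0
        result.append(column % 10)  # digit written beneath the column
        carry = column // 10        # amount carried to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

assert standard_addition(19, 6) == 25  # the sum Battista's student, cited below, got wrong
```

The same column-by-column structure, run with borrowing instead of carrying, gives the standard subtraction algorithm.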
Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations. He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers: In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions. Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades. Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade. It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards: 3.NBT.2 Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction. The term “strategies and algorithms” is curious. Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that despite these being standards, the CCSS-M will allow them great latitude. Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle. Why All the Fuss about Standard Algorithms? It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. Standard algorithms were a key point of contention in the “Math Wars” of the 1990s. The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii] The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade. These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists. In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 
Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade. That’s what reformers get in the compromise. They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic. But the standard algorithm is the only one students are required to learn. And that exclusivity is intended to please the traditionalists. I agree with Garelick that the compromise leads to problems. In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home. The students were being taught how to represent addition with drawings that clustered objects into groups of ten. The exercises were both time consuming and tedious. When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning. The parents withdrew their child from the school and enrolled him in private school. The value of standard algorithms is that they are efficient and packed with mathematics. Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers. Traditionalists and reformers have different goals. Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems. Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm. I have been a critic of the math reform movement since I taught in the 1980s. But some of their complaints have merit. All too often, instruction on standard algorithms has left out meaning. As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps that were to be memorized. Michael Battista’s research has provided many instances of students clinging to algorithms without understanding. He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet. On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15. In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25. Despite the obvious discrepancy—(25 is not 15, the student agrees)—he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii] Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that? Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions. Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding. Conclusion Let’s return to Carroll’s model of time and learning. 
I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences. Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade. Thus, the study of whole number arithmetic is completed by the end of fourth grade. Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics. Proficiency is sought along three dimensions: 1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years. Placing the standard for the division algorithm in sixth grade continues the two-year delay. For many fourth graders, time spent working on addition and subtraction will be wasted time. They already have a firm understanding of addition and subtraction. The same goes for many sixth graders—time devoted to the division algorithm will be wasted time that should be devoted to the study of rational numbers. The numerator in Carroll’s instructional time model will be greater than the denominator, indicating the inefficient allocation of time to instruction.

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures. Such decisions are made by local educators. Variation in these decisions will introduce variation in the implementation of the math standards. It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction. But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.

[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001). “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).

Authors Tom Loveless Full Article
pr No, the sky is not falling: Interpreting the latest SAT scores By webfeeds.brookings.edu Published On :: Thu, 01 Oct 2015 12:00:00 -0400 Earlier this month, the College Board released SAT scores for the high school graduating class of 2015. Both math and reading scores declined from 2014, continuing a steady downward trend that has been in place for the past decade. Pundits of contrasting political stripes seized on the scores to bolster their political agendas. Michael Petrilli of the Fordham Foundation argued that falling SAT scores show that high schools need more reform, presumably those his organization supports, in particular, charter schools and accountability.* For Carol Burris of the Network for Public Education, the declining scores were evidence of the failure of polices her organization opposes, namely, Common Core, No Child Left Behind, and accountability. Petrilli and Burris are both misusing SAT scores. The SAT is not designed to measure national achievement; the score losses from 2014 were miniscule; and most of the declines are probably the result of demographic changes in the SAT population. Let’s examine each of these points in greater detail. The SAT is not designed to measure national achievement It never was. The SAT was originally meant to measure a student’s aptitude for college independent of that student’s exposure to a particular curriculum. The test’s founders believed that gauging aptitude, rather than achievement, would serve the cause of fairness. A bright student from a high school in rural Nebraska or the mountains of West Virginia, they held, should have the same shot at attending elite universities as a student from an Eastern prep school, despite not having been exposed to the great literature and higher mathematics taught at prep schools. The SAT would measure reasoning and analytical skills, not the mastery of any particular body of knowledge. Its scores would level the playing field in terms of curricular exposure while providing a reasonable estimate of an individual’s probability of success in college. Note that even in this capacity, the scores never suffice alone; they are only used to make admissions decisions by colleges and universities, including such luminaries as Harvard and Stanford, in combination with a lot of other information—grade point averages, curricular resumes, essays, reference letters, extra-curricular activities—all of which constitute a student’s complete application. Today’s SAT has moved towards being a content-oriented test, but not entirely. Next year, the College Board will introduce a revised SAT to more closely reflect high school curricula. Even then, SAT scores should not be used to make judgements about U.S. high school performance, whether it’s a single high school, a state’s high schools, or all of the high schools in the country. The SAT sample is self-selected. In 2015, it only included about one-half of the nation’s high school graduates: 1.7 million out of approximately 3.3 million total. And that’s about one-ninth of approximately 16 million high school students. Generalizing SAT scores to these larger populations violates a basic rule of social science. 
The College Board issues a warning when it releases SAT scores: “Since the population of test takers is self-selected, using aggregate SAT scores to compare or evaluate teachers, schools, districts, states, or other educational units is not valid, and the College Board strongly discourages such uses.” TIME’s coverage of the SAT release included a statement by Andrew Ho of Harvard University, who succinctly makes the point: “I think SAT and ACT are tests with important purposes, but measuring overall national educational progress is not one of them.”

The score changes from 2014 were miniscule

SAT scores changed very little from 2014 to 2015. Reading scores dropped from 497 to 495. Math scores also fell two points, from 513 to 511. Both declines are equal to about 0.017 standard deviations (SD).[i] To illustrate how small these changes truly are, let’s examine a metric I have used previously in discussing test scores. The average American male is 5’10” in height with a SD of about 3 inches. A 0.017 SD change in height is equal to about 1/20 of an inch (0.051). Do you really think you’d notice a difference in the height of two men standing next to each other if they only differed by 1/20th of an inch? You wouldn’t. Similarly, the change in SAT scores from 2014 to 2015 is trivial.[ii]

A more serious concern is the SAT trend over the past decade. Since 2005, reading scores are down 13 points, from 508 to 495, and math scores are down nine points, from 520 to 511. These are equivalent to declines of 0.12 SD for reading and 0.08 SD for math.[iii] Representing changes that have accumulated over a decade, these losses are still quite small. In the Washington Post, Michael Petrilli asked “why is education reform hitting a brick wall in high school?” He also stated that “you see this in all kinds of evidence.”

You do not see a decline in the best evidence, the National Assessment of Educational Progress (NAEP). Contrary to the SAT, NAEP is designed to monitor national achievement. Its test scores are based on a random sampling design, meaning that the scores can be construed as representative of U.S. students. NAEP administers two different tests to high school age students, the long term trend (LTT NAEP), given to 17-year-olds, and the main NAEP, given to twelfth graders. Table 1 compares the past ten years’ change in test scores of the SAT with changes in NAEP.[iv] The long term trend NAEP was not administered in 2005 or 2015, so the closest years it was given are shown. The NAEP tests show high school students making small gains over the past decade. They do not confirm the losses on the SAT.

Table 1. Comparison of changes in SAT, Main NAEP (12th grade), and LTT NAEP (17-year-olds) scores. Changes expressed as SD units of base year.

              SAT 2005-2015    Main NAEP 2005-2015    LTT NAEP 2004-2012
Reading       -0.12*           +0.05*                 +0.09*
Math          -0.08*           +0.09*                 +0.03
* p < .05

Petrilli raised another concern related to NAEP scores by examining cohort trends in NAEP scores. The trend for the 17-year-old cohort of 2012, for example, can be constructed by using the scores of 13-year-olds in 2008 and 9-year-olds in 2004. By tracking NAEP changes over time in this manner, one can get a rough idea of a particular cohort’s achievement as students grow older and proceed through the school system. Examining three cohorts, Fordham’s analysis shows that the gains between ages 13 and 17 are about half as large as those registered between ages nine and 13. Kids gain more on NAEP when they are younger than when they are older.
There is nothing new here. NAEP scholars have been aware of this phenomenon for a long time. Fordham points to particular elements of education reform that it favors—charter schools, vouchers, and accountability—as the probable cause. It is true that those reforms more likely target elementary and middle schools than high schools. But the research literature on age discrepancies in NAEP gains (which is not cited in the Fordham analysis) renders doubtful the thesis that education policies are responsible for the phenomenon.[v] Whether high school age students try as hard as they could on NAEP has been pointed to as one explanation. A 1996 analysis of NAEP answer sheets found that 25-to-30 percent of twelfth graders displayed off-task test behaviors—doodling, leaving items blank—compared to 13 percent of eighth graders and six percent of fourth graders. A 2004 national commission on the twelfth grade NAEP recommended incentives (scholarships, certificates, letters of recognition from the President) to boost high school students’ motivation to do well on NAEP. Why would high school seniors or juniors take NAEP seriously when this low stakes test is taken in the midst of taking SAT or ACT tests for college admission, end of course exams that affect high school GPA, AP tests that can affect placement in college courses, state accountability tests that can lead to their schools being deemed a success or failure, and high school exit exams that must be passed to graduate?[vi] Other possible explanations for the phenomenon are: 1) differences in the scales between the ages tested on LTT NAEP (in other words, a one-point gain on the scale between ages nine and 13 may not represent the same amount of learning as a one-point gain between ages 13 and 17); 2) different rates of participation in NAEP among elementary, middle, and high schools;[vii] and 3) social trends that affect all high school students, not just those in public schools. The third possibility can be explored by analyzing trends for students attending private schools. If Fordham had disaggregated the NAEP data by public and private schools (the scores of Catholic school students are available), it would have found that the pattern among private school students is similar—younger students gain more than older students on NAEP. That similarity casts doubt on the notion that policies governing public schools are responsible for the smaller gains among older students.[viii] Changes in the SAT population Writing in the Washington Post, Carol Burris addresses the question of whether demographic changes have influenced the decline in SAT scores. She concludes that they have not, and in particular, she concludes that the growing proportion of students receiving exam fee waivers has probably not affected scores. She bases that conclusion on an analysis of SAT participation disaggregated by level of family income. Burris notes that the percentage of SAT takers has been stable across income groups in recent years. That criterion is not trustworthy. About 39 percent of students in 2015 declined to provide information on family income. The 61 percent that answered the family income question are probably skewed against low-income students who are on fee waivers (the assumption being that they may feel uncomfortable answering a question about family income).[ix] Don’t forget that the SAT population as a whole is a self-selected sample. 
A self-selected subsample from a self-selected sample tells us even less than the original sample, which told us almost nothing. The fee waiver share of SAT takers increased from 21 percent in 2011 to 25 percent in 2015. The simple fact that fee waivers serve low-income families, whose children tend to be lower-scoring SAT takers, is important, but not the whole story here. Students from disadvantaged families have always taken the SAT. But they paid for it themselves. If an additional increment of disadvantaged families take the SAT because they don’t have to pay for it, it is important to consider whether the new entrants to the pool of SAT test takers possess unmeasured characteristics that correlate with achievement—beyond the effect already attributed to socioeconomic status.

Robert Kelchen, an assistant professor of higher education at Seton Hall University, calculated the effect on national SAT scores of just three jurisdictions (Washington, DC, Delaware, and Idaho) adopting policies of mandatory SAT testing paid for by the state. He estimated that these policies explain about 21 percent of the nationwide decline in test scores between 2011 and 2015. He also notes that a more thorough analysis, incorporating fee waivers of other states and districts, would surely boost that figure. Fee waivers in two dozen Texas school districts, for example, are granted to all juniors and seniors in high school. And all students in those districts (including Dallas and Fort Worth) are required to take the SAT beginning in the junior year. Such universal testing policies can increase access and serve the cause of equity, but they will also, at least for a while, lead to a decline in SAT scores.

Here, I offer my own back of the envelope calculation of the relationship of demographic changes with SAT scores. The College Board reports test scores and participation rates for nine racial and ethnic groups.[x] These data are preferable to family income because a) almost all students answer the race/ethnicity question (only four percent are non-responses versus 39 percent for family income), and b) it seems a safe assumption that students are more likely to know their race or ethnicity compared to their family’s income. The question tackled in Table 2 is this: how much would the national SAT scores have changed from 2005 to 2015 if the scores of each racial/ethnic group stayed exactly the same as in 2005, but each group’s proportion of the total population were allowed to vary? In other words, the scores are fixed at the 2005 level for each group—no change. The SAT national scores are then recalculated using the 2015 proportions that each group represented in the national population.

Table 2. SAT Scores and Demographic Changes in the SAT Population (2005-2015)

              Projected Change Based on      Actual Change    Projected Change as
              Change in Proportions                           Percentage of Actual Change
Reading       -9                             -13              69%
Math          -7                             -9               78%

The data suggest that two-thirds to three-quarters of the SAT score decline from 2005 to 2015 is associated with demographic changes in the test-taking population. The analysis is admittedly crude. The relationships are correlational, not causal. The race/ethnicity categories are surely serving as proxies for a bundle of other characteristics affecting SAT scores, some unobserved and others (e.g., family income, parental education, language status, class rank) that are included in the SAT questionnaire but produce data difficult to interpret.
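The reweighting behind Table 2 is a simple counterfactual average. A sketch of the calculation in Python, using made-up group means and shares purely to show the mechanics (these are not the College Board's figures, which cover nine groups):

```python
# Hold each group's 2005 mean score fixed; swap in the 2015 population shares.
# All numbers below are placeholders for illustration only.
scores_2005 = {"group_a": 530, "group_b": 460, "group_c": 500}      # fixed 2005 group means
shares_2005 = {"group_a": 0.60, "group_b": 0.15, "group_c": 0.25}   # 2005 composition
shares_2015 = {"group_a": 0.50, "group_b": 0.25, "group_c": 0.25}   # 2015 composition

national_2005 = sum(scores_2005[g] * shares_2005[g] for g in scores_2005)
projected_2015 = sum(scores_2005[g] * shares_2015[g] for g in scores_2005)

# The projected change isolates the part of any decline that is associated
# with composition alone, since group-level performance is held constant.
print(round(projected_2015 - national_2005, 1))  # -7.0 with these placeholder numbers
```

Comparing that projected change with the actual change, as in Table 2, gives the share of the decline associated with shifts in who takes the test.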
Conclusion Using an annual decline in SAT scores to indict high schools is bogus. The SAT should not be used to measure national achievement. SAT changes from 2014-2015 are tiny. The downward trend over the past decade represents a larger decline in SAT scores, but one that is still small in magnitude and correlated with changes in the SAT test-taking population. In contrast to SAT scores, NAEP scores, which are designed to monitor national achievement, report slight gains for 17-year-olds over the past ten years. It is true that LTT NAEP gains are larger among students from ages nine to 13 than from ages 13 to 17, but research has uncovered several plausible explanations for why that occurs. The public should exercise great caution in accepting the findings of test score analyses. Test scores are often misinterpreted to promote political agendas, and much of the alarmist rhetoric provoked by small declines in scores is unjustified. * In fairness to Petrilli, he acknowledges in his post, “The SATs aren’t even the best gauge—not all students take them, and those who do are hardly representative.” [i] The 2014 SD for both SAT reading and math was 115. [ii] A substantively trivial change may nevertheless reach statistical significance with large samples. [iii] The 2005 SDs were 113 for reading and 115 for math. [iv] Throughout this post, SAT’s Critical Reading (formerly, the SAT-Verbal section) is referred to as “reading.” I only examine SAT reading and math scores to allow for comparisons to NAEP. Moreover, SAT’s writing section will be dropped in 2016. [v] The larger gains by younger vs. older students on NAEP is explored in greater detail in the 2006 Brown Center Report, pp. 10-11. [vi] If these influences have remained stable over time, they would not affect trends in NAEP. It is hard to believe, however, that high stakes tests carry the same importance today to high school students as they did in the past. [vii] The 2004 blue ribbon commission report on the twelfth grade NAEP reported that by 2002 participation rates had fallen to 55 percent. That compares to 76 percent at eighth grade and 80 percent at fourth grade. Participation rates refer to the originally drawn sample, before replacements are made. NAEP is conducted with two stage sampling—schools first, then students within schools—meaning that the low participation rate is a product of both depressed school (82 percent) and student (77 percent) participation. See page 8 of: http://www.nagb.org/content/nagb/assets/documents/publications/12_gr_commission_rpt.pdf [viii] Private school data are spotty on the LTT NAEP because of problems meeting reporting standards, but analyses identical to Fordham’s can be conducted on Catholic school students for the 2008 and 2012 cohorts of 17-year-olds. [ix] The non-response rate in 2005 was 33 percent. [x] The nine response categories are: American Indian or Alaska Native; Asian, Asian American, or Pacific Islander; Black or African American; Mexican or Mexican American; Puerto Rican; Other Hispanic, Latino, or Latin American; White; Other; and No Response. Authors Tom Loveless Full Article
pr Principals as instructional leaders: An international perspective By webfeeds.brookings.edu Published On :: Thu, 24 Mar 2016 00:00:00 -0400 Full Article
pr The NAEP proficiency myth By webfeeds.brookings.edu Published On :: Mon, 13 Jun 2016 07:00:00 -0400 On May 16, I got into a Twitter argument with Campbell Brown of The 74, an education website. She released a video on Slate giving advice to the next president. The video begins: “Without question, to me, the issue is education. Two out of three eighth graders in this country cannot read or do math at grade level.” I study student achievement and was curious. I know of no valid evidence to make the claim that two out of three eighth graders are below grade level in reading and math. No evidence was cited in the video. I asked Brown for the evidentiary basis of the assertion. She cited the National Assessment of Educational Progress (NAEP). NAEP does not report the percentage of students performing at grade level. NAEP reports the percentage of students reaching a “proficient” level of performance. Here’s the problem. That’s not grade level. In this post, I hope to convince readers of two things: 1. Proficient on NAEP does not mean grade level performance. It’s significantly above that. 2. Using NAEP’s proficient level as a basis for education policy is a bad idea. Before going any further, let’s look at some history. NAEP history NAEP was launched nearly five decades ago. The first NAEP test was given in science in 1969, followed by a reading test in 1971 and math in 1973. For the first time, Americans were able to track the academic progress of the nation’s students. That set of assessments, which periodically tests students 9, 13, and 17 years old and was last given in 2012, is now known as the Long Term Trend (LTT) NAEP. It was joined by another set of NAEP tests in the 1990s. The Main NAEP assesses students by grade level (fourth, eighth, and twelfth) and, unlike the LTT, produces not only national but also state scores. The two tests, LTT and main, continue on parallel tracks today, and they are often confounded by casual NAEP observers. The main NAEP, which was last administered in 2015, is the test relevant to this post and will be the only one discussed hereafter. The NAEP governing board was concerned that the conventional metric for reporting results (scale scores) was meaningless to the public, so achievement standards (also known as performance standards) were introduced. The percentage of students scoring at advanced, proficient, basic, and below basic levels are reported each time the main NAEP is given. Does NAEP proficient mean grade level? The National Center for Education Statistics (NCES) states emphatically, “Proficient is not synonymous with grade level performance.” The National Assessment Governing Board has a brochure with information on NAEP, including a section devoted to myths and facts. There, you will find this: Myth: The NAEP Proficient level is like being on grade level. Fact: Proficient on NAEP means competency over challenging subject matter. This is not the same thing as being “on grade level,” which refers to performance on local curriculum and standards. NAEP is a general assessment of knowledge and skills in a particular subject. Equating NAEP proficiency with grade level is bogus. Indeed, the validity of the achievement levels themselves is questionable. They immediately came under fire in reviews by the U.S. 
Government Accountability Office, the National Academy of Sciences, and the National Academy of Education.[1] The National Academy of Sciences report was particularly scathing, labeling NAEP’s achievement levels as “fundamentally flawed.” Despite warnings from NAEP authorities and critical reviews from scholars, some commentators, typically from advocacy groups, continue to confound NAEP proficient with grade level. Organizations that support school reform, such as Achieve Inc. and Students First, prominently misuse the term on their websites. Achieve presses states to adopt cut points aligned with NAEP proficient as part of new Common Core-based accountability systems. Achieve argues that this will inform parents whether children “can do grade level work.” No, it will not. That claim is misleading. How unrealistic is NAEP proficient? Shortly after NCLB was signed into law, Robert Linn, one of the most prominent psychometricians of the past several decades, called “the target of 100% proficient or above according to the NAEP standards more like wishful thinking than a realistic possibility.” History is on the side of that argument. When the first main NAEP in mathematics was given in 1990, only 13% of eighth graders scored proficient and 2% scored advanced. Imagine using “proficient” as synonymous with grade level—85% scored below grade level! The 1990 national average in eighth grade scale scores was 263 (see Table 1). In 2015, the average was 282, a gain of 19 scale score points.
Table 1. Main NAEP Eighth Grade Math Score, by achievement levels, 1990-2015
Year    Scale Score Average    Below Basic (%)    Basic    Proficient    Advanced    Proficient and Above
2015    282                    29                 38       25            8           33
2009    283                    27                 39       26            8           34
2003    278                    32                 39       23            5           28
1996    270                    39                 38       20            4           24
1990    263                    48                 37       13            2           15
That’s an impressive gain. Analysts who study NAEP often use 10 points on the NAEP scale as a back-of-the-envelope estimate of one year’s worth of learning. Eighth graders have gained almost two years. The percentage of students scoring below basic has dropped from 48% in 1990 to 29% in 2015. The percentage of students scoring proficient or above has more than doubled, from 15% to 33%. That’s not bad news; it’s good news. But the cut point for NAEP proficient is 299. By that standard, two-thirds of eighth graders are still falling short. Even students in private schools, despite hailing from more socioeconomically advantaged homes and in some cases being selectively admitted by schools, fail miserably at attaining NAEP proficiency. More than half (53 percent) are below proficient. Today’s eighth graders have made it about halfway to NAEP proficient in 25 years, but they still need to gain almost two more years of math learning (17 points) to reach that level. And, don’t forget, that’s just the national average, so even when that lofty goal is achieved, half of the nation’s students will still fall short of proficient. Advocates of the NAEP proficient standard want it to be for all students. That is ridiculous. Another way to think about it: proficient for today’s eighth graders reflects approximately what the average twelfth grader knew in mathematics in 1990. Someday the average eighth grader may be able to do that level of mathematics. But it won’t be soon, and it won’t be every student.
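A quick sketch of that back-of-the-envelope arithmetic, using the figures from Table 1 and the 10-points-per-year heuristic (an analyst convention, not an official NAEP conversion):

```python
# Converting NAEP eighth grade math scale scores into rough "years of learning,"
# using the heuristic of about 10 scale points per school year.

POINTS_PER_YEAR = 10   # heuristic only, not an official NAEP conversion
PROFICIENT_CUT = 299   # eighth grade math cut point for "proficient"

average_1990 = 263
average_2015 = 282

gain = average_2015 - average_1990            # 19 points since 1990
remaining = PROFICIENT_CUT - average_2015     # 17 points still to go

print(f"Gain since 1990: {gain} points, about {gain / POINTS_PER_YEAR:.1f} years of learning")
print(f"Still needed to reach proficient: {remaining} points, about {remaining / POINTS_PER_YEAR:.1f} years")
```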
In the 2007 Brown Center Report on American Education, I questioned whether NAEP proficient is a reasonable achievement standard.[2] That year, a study by Gary Phillips of American Institutes for Research was published that projected the 2007 TIMSS scores on the NAEP scale. Phillips posed the question: based on TIMSS, how many students in other countries would score proficient or better on NAEP? The study’s methodology only produces approximations, but they are eye-popping. Here are just a few countries:
Table 2. Projected Percent NAEP Proficient, Eighth Grade Math
Singapore             73
Hong Kong SAR         66
Korea, Rep. of        65
Chinese Taipei        61
Japan                 57
Belgium (Flemish)     40
United States         26
Israel                24
England               22
Italy                 17
Norway                 9
Singapore was the top-scoring nation on TIMSS that year, but even there, more than a quarter of students fail to reach NAEP proficient. Japan is not usually considered a slouch on international math assessments, but 43% of its eighth graders fall short. The U.S. looks weak, with only 26% of students proficient. But England, Israel, and Italy are even weaker. Norway, a wealthy nation with per capita GDP almost twice that of the U.S., can only get 9 out of 100 eighth graders to NAEP proficient. Finland isn’t shown in the table because it didn’t participate in the 2007 TIMSS. But it did in 2011, with Finland and the U.S. scoring about the same in eighth grade math. Had Finland’s eighth graders taken NAEP in 2011, it’s a good bet that the proportion scoring below NAEP proficient would have been similar to that in the U.S. And yet articles such as “Why Finland Has the Best Schools” appear regularly in the U.S. press.[3] Why it matters The National Center for Education Statistics warns that federal law requires that NAEP achievement levels be used on a trial basis until the Commissioner of Education Statistics determines that the achievement levels are “reasonable, valid, and informative to the public.” As the NCES website states, “So far, no Commissioner has made such a determination, and the achievement levels remain in a trial status. The achievement levels should continue to be interpreted and used with caution.” Confounding NAEP proficient with grade level is uninformed. Designating NAEP proficient as the achievement benchmark for accountability systems is certainly not cautious use. If high school students are required to meet NAEP proficient to graduate from high school, large numbers will fail. If middle and elementary school students are forced to repeat grades because they fall short of a standard anchored to NAEP proficient, vast numbers will repeat grades. On NAEP, students are asked the highest level math course they’ve taken. On the 2015 twelfth grade NAEP, 19% of students said they either were taking or had taken calculus. These are the nation’s best and the brightest, the crème de la crème of math students. Only one in five students work their way that high up the hierarchy of American math courses. If you are over 45 years old and reading this, the proportion who took calculus in high school is less than one out of ten. In the graduating class of 1990, for instance, only 7% of students had taken calculus.[4] Unsurprisingly, calculus students are also typically taught by the nation’s most knowledgeable math teachers. The nation’s elite math students paired with the nation’s elite math teachers: if any group can prove NAEP proficient a reasonable goal and succeed in getting all students over the NAEP proficiency bar, this is the group. But they don’t.
A whopping 30% score below proficient on NAEP. For black and Hispanic calculus students, the figures are staggering. Two-thirds of black calculus students score below NAEP proficient. For Hispanics, the figure is 52%. The nation’s pre-calculus students also fare poorly (69% below proficient). Then the success rate falls off a cliff. In the class of 2015, more than nine out of ten students whose highest math course was Trigonometry or Algebra II fail to meet the NAEP proficient standard.
Table 3. 2015 NAEP Twelfth Grade Math, Proficient by Highest Math Course Taken
Highest Math Course Taken    Percentage Below NAEP Proficient
Calculus                     30
Pre-calculus                 69
Trig/Algebra II              92
Source: NAEP Data Explorer
These data defy reason; they also refute common sense. For years, educators have urged students to take the toughest courses they can possibly take. Taken at face value, the data in Table 3 rip the heart out of that advice. These are the toughest courses, and yet huge numbers of the nation’s star students, by any standard aligned with NAEP proficient, would be told that they have failed. Some parents, misled by the confounding of proficient with grade level, might even mistakenly believe that their kids don’t know grade level math. Conclusion NAEP proficient is not synonymous with grade level. NAEP officials urge that proficient not be interpreted as reflecting grade level work. It is a standard set much higher than that. Scholarly panels have reviewed the NAEP achievement standards and found them flawed. The highest-scoring nations of the world would appear to be mediocre or poor performers if judged by the NAEP proficient standard. Even large numbers of U.S. calculus students fall short. As states consider building benchmarks for student performance into accountability systems, they should not use NAEP proficient—or any standard aligned with NAEP proficient—as a benchmark. It is an unreasonable expectation, one that ill serves America’s students, parents, and teachers—and the effort to improve America’s schools. [1] Shepard, L. A., Glaser, R., Linn, R., & Bohrnstedt, G. (1993). Setting Performance Standards For Student Achievement: Background Studies. Report of the NAE Panel on the Evaluation of the NAEP Trial State Assessment: An Evaluation of the 1992 Achievement Levels. National Academy of Education. [2] Loveless, Tom. The 2007 Brown Center Report, pages 10-13. [3] William Doyle, “Why Finland Has The Best Schools,” Los Angeles Times, March 18, 2016. [4] NCES, America’s High School Graduates: Results of the 2009 NAEP High School Transcript Study. See Table 8, p. 49. Authors Tom Loveless Image Source: © Brian Snyder / Reuters Full Article
pr COVID-19 outbreak highlights critical gaps in school emergency preparedness By webfeeds.brookings.edu Published On :: Wed, 11 Mar 2020 13:49:02 +0000 The COVID-19 epidemic sweeping the globe has affected millions of students, whose school closures have more often than not caught them, their teachers, and families by surprise. For some, it means missing class altogether, while others are trialing online learning—often facing difficulties with online connections, as well as motivational and psychosocial well-being challenges. These problems… Full Article
pr Obama’s exit calculus on the peace process By webfeeds.brookings.edu Published On :: Wed, 27 Jul 2016 17:29:00 +0000 One issue that has traditionally shared bipartisan support is how the United States should approach the Israeli-Palestinian conflict, write Sarah Yerkes and Ariella Platcha. However, this year both parties have shifted their positions farther from the center and from past Democratic and Republican platforms. How will that affect Obama’s strategy? Full Article Uncategorized
pr Taking the off-ramp: A path to preventing terrorism By webfeeds.brookings.edu Published On :: Tue, 02 Aug 2016 21:28:37 +0000 Full Article
pr The U.S. needs a national prevention network to defeat ISIS By webfeeds.brookings.edu Published On :: Wed, 03 Aug 2016 15:40:11 +0000 The recent release of a Congressional report highlighting that the United States is the “top target” of the Islamic State coincided with yet another gathering of members of the global coalition to counter ISIL to take stock of the effort. There, Defense Secretary Carter echoed the sentiments of an increasing number of political and military leaders when he said that military […] Full Article
pr Minding the gap: A multi-layered approach to tackling violent extremism By webfeeds.brookings.edu Published On :: Wed, 03 Aug 2016 16:20:33 +0000 Full Article
pr An agenda for reducing poverty and improving opportunity By webfeeds.brookings.edu Published On :: Wed, 18 Nov 2015 00:00:00 -0500 SUMMARY:With the U.S. poverty rate stuck at around 15 percent for years, it’s clear that something needs to change, and candidates need to focus on three pillars of economic advancement-- education, work, family -- to increase economic mobility, according to Brookings Senior Fellow Isabel Sawhill and Senior Research Assistant Edward Rodrigue. “Economic success requires people’s initiative, but it also requires us, as a society, to untangle the web of disadvantages that make following the sequence difficult for some Americans. There are no silver bullets. Government cannot do this alone. But government has a role to play in motivating individuals and facilitating their climb up the economic ladder,” they write. The pillar of work is the most urgent, they assert, with every candidate needing to have concrete jobs proposals. Closing the jobs gap (the difference in work rates between lower and higher income households) has a huge effect on the number of people in poverty, even if the new workers hold low-wage jobs. Work connects people to mainstream institutions, helps them learn new skills, provides structure to their lives, and provides a sense of self-sufficiency and self-respect, while at the aggregate level, it is one of the most important engines of economic growth. Specifically, the authors advocate for making work pay (EITC), a second-earner deduction, childcare assistance and paid leave, and transitional job programs. On the education front, they suggest investment in children at all stages of life: home visiting, early childhood education, new efforts in the primary grades, new kinds of high schools, and fresh policies aimed at helping students from poor families attend and graduate from post-secondary institutions. And for the third prong, stable families, Sawhill and Rodrique suggest changing social norms around the importance of responsible, two-person parenthood, as well as making the most effective forms of birth control (IUDs and implants) more widely available at no cost to women. “Many of our proposals would not only improve the life prospects of less advantaged children; they would pay for themselves in higher taxes and less social spending. The candidates may have their own blend of responses, but we need to hear less rhetoric and more substantive proposals from all of them,” they conclude. Downloads Download the paper Authors Isabel V. SawhillEdward Rodrigue Full Article
pr Campaign 2016: Ideas for reducing poverty and improving economic mobility By webfeeds.brookings.edu Published On :: Wed, 18 Nov 2015 16:35:00 -0500 We can be sure that the 2016 presidential candidates, whoever they are, will be in favor of promoting opportunity and cutting poverty. The question is: how? In our contribution to a new volume published today, “Campaign 2016: Eight big issues the presidential candidates should address,” we show that people who clear three hurdles—graduating high school, working full-time, and delaying parenthood until they are in a stable, two-parent family—are very much more likely to climb to the middle class than fall into poverty. But what specific policies would help people achieve these three benchmarks of success? Our paper contains a number of ideas that candidates might want to adopt. Here are a few examples: 1. To improve high school graduation rates, expand “Small Schools of Choice,” a program in New York City, which replaced large, existing schools with more numerous, smaller schools that had a theme or focus (like STEM or the arts). The program increased graduation rates by about 10 percentage points and also led to higher college enrollment with no increase in costs. 2. To support work, make the Child and Dependent Care Tax Credit (CDCTC) refundable and cap it at $100,000 in household income. Because the credit is currently non-refundable, low-income families receive little or no benefit, while those with incomes above $100,000 receive generous tax deductions. This proposal would make the program more equitable and facilitate low-income parents’ labor force participation, at no additional cost. 3. To strengthen families, make the most effective forms of birth control (IUDs and implants) more widely available at no cost to women, along with good counseling and a choice of all FDA-approved methods. Programs that have done this in selected cities and states have reduced unplanned pregnancies, saved money, and given women better ability to delay parenthood until they and their partners are ready to be parents. Delayed childbearing reduces poverty rates and leads to better prospects for the children in these families. These are just a few examples of good ideas, based on the evidence, of what a candidate might want to propose and implement if elected. Additional ideas and analysis will be found in our longer paper on this topic. Authors Isabel V. Sawhill, Edward Rodrigue Image Source: © Darren Hauck / Reuters Full Article
pr The District’s proposed law shows the wrong way to provide paid leave By webfeeds.brookings.edu Published On :: Tue, 19 Jan 2016 15:03:00 -0500 The issue of paid leave is heating up in 2016. At least two presidential candidates — Democrat Hillary Clinton and Republican Sen. Marco Rubio (Fla.) — have proposed new federal policies. Several states and large cities have begun providing paid leave to workers when they are ill or have to care for a newborn child or other family member. This forward movement on paid-leave policy makes sense. The United States is the only advanced country without a paid-leave policy. While some private and public employers already provide paid leave to their workers, the workers least likely to get paid leave are low-wage and low-income workers who need it most. They also cannot afford to take unpaid leave, which the federal government mandates for larger companies. Paid leave is good for the health and development of children; it supports work, enabling employees to remain attached to the labor force when they must take leave; and it can lower costly worker turnover for employers. Given the economic and social benefits it provides and given that the private market will not generate as much as needed, public policies should ensure that such leave is available to all. But it is important to do so efficiently, so as not to burden employers with high costs that could lead them to substantially lower wages or create fewer jobs. States and cities that require employers to provide paid sick days mandate just a small number, usually three to seven days. Family or temporary disability leaves that must be longer are usually financed through small increases in payroll taxes paid by workers and employers, rather than by employer mandates or general revenue. Policy choices could limit costs while expanding benefits. For instance, states should limit eligibility to workers with experience, such as a year, and it might make sense to increase the benefit with years of accrued service to encourage labor force attachment. Some states provide four to six weeks of family leave, though somewhat larger amounts of time may be warranted, especially for the care of newborns, where three months seems reasonable. Paid leave need not mean full replacement of existing wages. Replacing two-thirds of weekly earnings up to a set limit is reasonable. The caps and partial wage replacement give workers some incentive to limit their use of paid leave without imposing large financial burdens on those who need it most. While many states and localities have made sensible choices in these areas, some have not. For instance, the D.C. Council has proposed paid-leave legislation for all but federal workers that violates virtually all of these rules. It would require up to 16 weeks of temporary disability leave and up to 16 weeks of paid family leave; almost all workers would be eligible for coverage, without major experience requirements; and the proposed law would require 100 percent replacement of wages up to $1,000 per week, and 50 percent coverage up to $3,000. It would be financed through a progressive payroll tax on employers only, which would increase to 1 percent for higher-paid employees. Our analysis suggests that this level of leave would be badly underfunded by the proposed tax, perhaps by as much as two-thirds. Economists believe that payroll taxes on employers are mostly paid through lower worker wages, so the higher taxes needed to fully fund such generous leave would burden workers. 
The costly policy might cause employers to discriminate against women. The disruptions and burdens of such lengthy leaves could cause employers to hire fewer workers or shift operations elsewhere over time. This is particularly true here, considering that the D.C. Council already has imposed costly burdens on employers, such as high minimum wages (rising to $11.50 per hour this year), paid sick leave (although smaller amounts than now proposed) and restrictions on screening candidates. The minimum wage in Arlington is $7.25 with no other mandates. Employers will be tempted to move operations across the river or to replace workers with technology wherever possible. Cities, states and the federal government should provide paid sick and family leave for all workers. But it can and should be done in a fiscally responsible manner that does not place undue burdens on the workers themselves or on their employers. Editor's note: this piece originally appeared in The Washington Post. Authors Harry J. HolzerIsabel V. Sawhill Publication: The Washington Post Image Source: © Charles Platiau / Reuters Full Article
pr Does pre-K work—or not? By webfeeds.brookings.edu Published On :: Tue, 19 Apr 2016 09:02:00 -0400 In this tumultuous election year one wonders whether reasoned debate about education or other policies is still possible. That said, research has a role to play in helping policymakers make good decisions – if not before than after they are in office. So what do we know about the ability of early education to change children’s lives? At the moment, scholars are divided. One camp argues that pre-k doesn’t work, suggesting that it would be a mistake to expand it. Another camp believes that it is one of the most cost-effective things we could do to improve children’s lifetime prospects, especially if they come from disadvantaged homes. The pre-k advocates cite several earlier demonstrations, such as the Perry Preschool and Abecedarian programs. These have been rigorously evaluated and found to improve children’s long-term success, including less use of special education, increases in high school graduation, reduced crime, and higher earnings. Participants in the Abecedarian program, for example, earned 60 percent more than controls by age 30. Mothers benefit as well since more of them are able to work. The Abecedarian project increased maternal earnings by $90,000 over the course of the mother’s career. Finally, by reducing crime, improving health, and decreasing the need for government assistance, these programs also reduce the burden on taxpayers. According to one estimate, the programs even increase GDP to the tune of $30 to $80 billion (in 2015 dollars) once the children have moved into and through their working lives. A careful summary of all this research can be found in this year’s Economic Report of the President. The Report notes, and I would emphasize, that no one study can do justice to this issue, and not every program has been successful, but the weight of the evidence points strongly to the overall success of high-quality programs. This includes not just the small, very intensive model programs, but importantly the large, publically-funded pre-school programs such as those in Boston, Tulsa, Georgia, North Carolina, and New Jersey. Some estimates put the ratio of benefits to costs at $7 to $1. Very few investments promise such a large return. Pre-k advocates admit that any gains in IQ may fade but that boosts to nonacademic skills such as self-control, motivation, and planning have long-term effects that have been documented in studies of siblings exposed to differing amounts of early education. The pre-k critics point to findings from rigorous evaluations of the national Head Start program and of a state-wide program in Tennessee. These studies found that any gains from pre-k at the end of the program had faded by the time the children were in elementary school. They argue that the positive results from earlier model programs, such as Perry and Abecedarian, may have been the result of their small scale, their intensity, and the fact that the children involved had few alternative sources of care or early education. Children with more than adequate home environments or good substitute child care do not benefit as much, or at all, from participating in a pre-k program. In my view, this is an argument for targeted programs or for a universal program with a sliding scale fee for those who participate. In the meantime, it is too early to know what the longer-term effects of current programs will be. 
Despite their current popularity among scholars, one big problem with randomized controlled trials (RCTs) is that it takes a generation to get the answers you need. And, as is the case with Perry and Abecedarian, by the time you get them, they may no longer be relevant to contemporary environments in which mothers are better educated and more children have access to out-of-home care. In the end, you can’t make public policy with RCTs alone. We need to incorporate lessons from neuroscience about the critical changes to the brain that occur in early childhood and the insights of specialists in child development. We need to consider what happens to non-cognitive skills over the longer term. We need to worry about the plight of working mothers, especially single parents, who cannot work without some form of out-of-home care. Providing that care on the cheap may turn out to be penny wise and pound foolish. (A universal child care program in Quebec funded at $5 a day led to worse behavior among the kids in the program.) Of course we need to continuously improve the effectiveness of pre-k through ongoing evaluation. That means weeding out ineffective programs along with improving curriculum, teacher preparation and pay, and better follow-up in the early grades. Good quality pre-k works; bad-quality does not. For the most disadvantaged children, it may require intervening much earlier than age 3 or 4 as the Abecedarian program did -- with strikingly good results. Our society is coming apart. Scholars from AEI’s Charles Murray to Harvard’s Robert Putnam agree on that point. Anything that can improve the lives of the next generation should command our attention. The evidence will never be air-tight. But once one adds it all up, investing in high quality pre-k looks like a good bet to me. Editor's note: This piece originally appeared in Real Clear Markets. Authors Isabel V. Sawhill Publication: Real Clear Markets Image Source: © Carlos Garcia Rawlins / Reute Full Article
pr In Daniel Patrick Moynihan Prize speech, Ron Haskins and Isabel Sawhill stress importance of evidence-based policy By webfeeds.brookings.edu Published On :: Mon, 23 May 2016 16:33:00 -0400 Senior Fellows Ron Haskins and Isabel Sawhill are the first joint recipients of the Daniel Patrick Moynihan Prize from the American Academy of Political and Social Science (AAPSS). The prize is awarded each year to a leading policymaker, social scientist, or public intellectual whose career focuses on advancing the public good through social science. It was named after the late senator from New York and renowned sociologist Daniel Patrick Moynihan. The pair accepted the award May 12 at a ceremony in Washington, DC. In their joint lecture delivered at the ceremony, Haskins and Sawhill emphasized the importance of evidence-based public policy, highlighting Sawhill’s latest work in her book, Generation Unbound (Brookings, 2014). Watch their entire speech here: “Marriage is disappearing and more and more babies are born outside marriage,” Sawhill said during the lecture. “Right now, the proportion born outside of marriage is about 40 percent. It’s higher than that among African Americans and lower than that among the well-educated. But it’s no longer an issue that just affects the poor or minority groups.” Download Sawhill's slides » | Download Ron Haskins' slides » The power of evidence-based policy is finally being recognized, Haskins added. “One of the prime motivating factors of the current evidence-based movement,” he said, “is the understanding, now widespread, that most social programs either have not been well evaluated or they don’t work.” Haskins continued: Perhaps the most important social function of social science is to find and test programs that will reduce the nation’s social problems. The exploding movement of evidence-based policy and the many roots the movement is now planting, offer the best chance of fulfilling this vital mission of social science, of achieving, in other words, exactly the outcomes Moynihan had hoped for. He pointed toward the executive branch, state governments, and non-profits implementing policies that could make substantial progress against the nation’s social problems. Richard Reeves, a senior fellow at Brookings and co-director, with Haskins, of the Center on Children and Families (CCF), acknowledged Haskins and Sawhill’s “powerful and unique intellectual partnership” and their world-class work on families, poverty, opportunity, evidence, parenting, work, and education. Haskins and Sawhill were the first to be awarded jointly by the AAPSS, which recognizes their 15-year collaboration at Brookings and the Center on Children and Families, which they established. In addition to their work at CCF, the two co-wrote Creating an Opportunity Society (Brookings 2009) and serve as co-editors of The Future of Children, a policy journal that tackles issues that have an impact on children and families. Haskins and Sawhill join the ranks of both current and past Brookings scholars who have received the Moynihan Prize, including Alice Rivlin (recipient of the inaugural prize), Rebecca Blank, and William Julius Wilson along with other distinguished scholars and public servants. Want to learn more about the award’s namesake? Read Governance Studies Senior Fellow and historian Steve Hess’s account of Daniel Patrick Moynihan’s time in the Nixon White House in his book The Professor and the President (Brookings, 2014). Authors James King Full Article
pr Social mobility: A promise that could still be kept By webfeeds.brookings.edu Published On :: Fri, 29 Jul 2016 10:47:00 -0400 As a rhetorical ideal, greater opportunity is hard to beat. Just about all candidates for high elected office declare their commitments to promoting opportunity – who, after all, could be against it? But opportunity is, to borrow a term from the philosopher and political theorist Isaiah Berlin, a "protean" word, with different meanings for different people at different times. Typically, opportunity is closely entwined with an idea of upward mobility, especially between generations. The American Dream is couched in terms of a daughter or son of bartenders or farm workers becoming a lawyer, or perhaps even a U.S. senator. But even here, there are competing definitions of upward mobility. It might mean being better off than your parents were at a similar age. This is what researchers call "absolute mobility," and largely relies on economic growth – the proverbial rising tide that raises most boats. Or it could mean moving to a higher rung of the ladder within society, and so ending up in a better relative position than one's parents. Scholars label this movement "relative mobility." And while there are many ways to think about status or standard of living – education, wealth, health, occupation – the most common yardstick is household income at or near middle age (which, somewhat depressingly, tends to be defined as 40). As a basic principle, we ought to care about both kinds of mobility as proxies for opportunity. We want children to have the chance to do absolutely and relatively well in comparison to their parents. On the One Hand… So how are we doing? The good news is that economic standards of living have improved over time. Most children are therefore better off than their parents. Among children born in the 1970s and 1980s, 84 percent had higher incomes (even after adjusting for inflation) than their parents did at a similar age, according to a Pew study. Absolute upward income mobility, then, has been strong, and has helped children from every income class, especially those nearer the bottom of the ladder. More than 9 in 10 of those born into families in the bottom fifth of the income distribution have been upwardly mobile in this absolute sense. There's a catch, though. Strong absolute mobility goes hand in hand with strong economic growth. So it is quite likely that these rates of generational progress will slow, since the potential growth rate of the economy has probably diminished. This risk is heightened by an increasingly unequal division of the proceeds of growth in recent years. Today's parents are certainly worried. Surveys show that they are far less certain than earlier cohorts that their children will be better off than they are. If the story on absolute mobility may be about to turn for the worse, the picture for relative mobility is already pretty bad. The basic message here: pick your parents carefully. If you are born to parents in the poorest fifth of the income distribution, your chance of remaining stuck in that income group is around 35 to 40 percent. If you manage to be born into a higher-income family, the chances are similarly good that you will remain there in adulthood. It would be wrong, however, to say that class positions are fixed. There is still a fair amount of fluidity or social mobility in America – just not as much as most people seem to believe or want. 
Relative mobility is especially sticky in the tails at the high and low ends of the distribution. Mobility is also considerably lower for blacks than for whites, with blacks much less likely to escape from the bottom rungs of the ladder. Equally ominously, they are much more likely to fall down from the middle quintile. Relative mobility rates in the United States are lower than the rhetoric about equal opportunity might suggest and lower than people believe. But are they getting worse? Current evidence suggests not. In fact, the trend line for relative mobility has been quite flat for the past few decades, according to work by Raj Chetty of Stanford and his co-researchers. It is simply not the case that the amount of intergenerational relative mobility has declined over time. Whether this will remain the case as the generations of children exposed to growing income inequality mature is not yet clear, though. As one of us (Sawhill) has noted, when the rungs on the ladder of opportunity grow further apart, it becomes more difficult to climb the ladder. To the same point, in his latest book, Our Kids – The American Dream in Crisis, Robert Putnam of Harvard argues that the growing gaps not just in income but also in neighborhood conditions, family structure, parenting styles and educational opportunities will almost inevitably lead to less social mobility in the future. Indeed, these multiple disadvantages or advantages are increasingly clustered, making it harder for children growing up in disadvantaged circumstances to achieve the dream of becoming middle class. The Geography of Opportunity Another way to assess the amount of mobility in the United States is to compare it to that found in other high-income nations. Mobility rates are highest in Scandinavia and lowest in the United States, Britain and Italy, with Australia, Western Europe and Canada lying somewhere in between, according to analyses by Jo Blanden of the University of Surrey and Miles Corak of the University of Ottawa. Interestingly, the most recent research suggests that the United States stands out most for its lack of downward mobility from the top. Or, to paraphrase Billie Holiday, God blesses the child that’s got his own. Any differences among countries, while notable, are more than matched by differences within the United States. Pioneering work (again by Raj Chetty and his colleagues) shows that some cities have much higher rates of upward mobility than others. From a mobility perspective, it is better to grow up in San Francisco, Seattle or Boston than in Atlanta, Baltimore or Detroit. Families that move to these high-mobility communities when their children are still relatively young enhance the chances that the children will have more education and higher incomes in early adulthood. Greater mobility can be found in places with better schools, fewer single parents, greater social capital, lower income inequality and less residential segregation. However, the extent to which these factors are causes rather than simply correlates of higher or lower mobility is not yet known. Scholarly efforts to establish why it is that some children move up the ladder and others don’t are still in their infancy. Models of Mobility What is it about their families, their communities and their own characteristics that determines why they do or do not achieve some measure of success later in life?
To help get at this vital question, the Brookings Institution has created a life-cycle model of children’s trajectories, using data from the National Longitudinal Survey of Youth on about 5,000 children from birth to age 40. (The resulting Social Genome Model is now a partnership among three institutions: Brookings, the Urban Institute and Child Trends.) Our model tracks children’s progress through multiple life stages with a corresponding set of success measures at the end of each. For example, children are considered successful at the end of elementary school if they have mastered basic reading and math skills and have acquired the behavioral or non-cognitive competencies that have been shown to predict later success. At the end of adolescence, success is measured by whether the young person has completed high school with a GPA of 2.5 or better and has not been convicted of a crime or had a baby as a teenager. These metrics capture common-sense intuition about what drives success. But they are also aligned with the empirical evidence on life trajectories. Educational achievement, for example, has a strong effect on later earnings and income, and this well-known linkage is reflected in the model. We have worked hard to adjust for confounding variables but cannot be sure that all such effects are truly causal. We do know that the model does a good job of predicting or projecting later outcomes. Three findings from the model stand out. First, it’s clear that success is a cumulative process. According to our measures, a child who is ready for school at age 5 is almost twice as likely to be successful at the end of elementary school as one who is not. This doesn’t mean that a life course is set in stone this early, however. Children who get off track at an early age frequently get back on track at a later age; it’s just that their chances are not nearly as good. So this is a powerful argument for intervening early in life. But it is not an argument for giving up on older youth. Second, the chances of clearing our last hurdle – being middle class by middle age (specifically, having an income of around $68,000 for a family of four by age 40) – vary quite significantly. A little over half of all children born in the 1980s and 1990s achieved this goal. But those who are black or born into low-income families were very much less likely than others to achieve this benchmark. Third, the effect of a child’s circumstances at birth is strong. We use a multidimensional measure here, including not just the family’s income but also the mother’s education, the marital status of the parents and the birth weight of the child. Together, these factors have substantial effects on a child’s subsequent success. Maternal education seems especially important. The Social Genome Model, then, is a useful tool for looking under the hood at why some children succeed and others don’t. But it can also be used to assess the likely impact of a variety of interventions designed to improve upward mobility. For one illustrative simulation, we hand-picked a battery of programs shown to be effective at different life stages – a parenting program, a high-quality early-education program, a reading and socio-emotional learning program in elementary school, a comprehensive high school reform model – and assessed the possible impact for low-income children benefiting from each of them, or all of them. No single program does very much to close the gap between children from lower- and higher-income families.
But the combined effects of multiple programs – that is, from intervening early and often in a child’s life – have a surprisingly big impact. The gap of almost 20 percentage points in the chances of low-income and high-income children reaching the middle class shrinks to six percentage points. In other words, we are able to close about two-thirds of the initial gap in the life chances of these two groups of children. The black-white gap narrows, too. Looking at the cumulative impact on adult incomes over a working life (all appropriately discounted with time) and comparing these lifetime income benefits to the costs of the programs, we believe that such investments would pass a cost-benefit test from the perspective of society as a whole and even from the narrower perspective of the taxpayers who fund the programs. What Now? Understanding the processes that lie beneath the patterns of social mobility is critical. It is not enough to know how good the odds of escaping are for a child born into poverty. We want to know why. We can never eliminate the effects of family background on an individual’s life chances. But the wide variation among countries and among cities in the U.S. suggests that we could do better – and that public policy may have an important role to play. Models like the Social Genome are intended to assist in that endeavor, in part by allowing policymakers to bench-test competing initiatives based on the statistical evidence. America’s presumed exceptionalism is rooted in part in a belief that class-based distinctions are less important here than in Western Europe. From this perspective, it is distressing to learn that American children do not have exceptional opportunities to get ahead – and that the consequences of gaps in children’s initial circumstances might embed themselves in the social fabric over time, leading to even less social mobility in the future. But there is also some cause for optimism. Programs that compensate at least to some degree for disadvantages earlier in life really can close opportunity gaps and increase rates of social mobility. Moreover, by most any reasonable reckoning, the return on the public investment is high. Editor's note: This piece originally appeared in the Milken Institute Review. Authors Richard V. Reeves, Isabel V. Sawhill Publication: Milken Institute Review Image Source: Eric Audras Full Article
pr On North Korea, press for complete denuclearization, but have a plan B By webfeeds.brookings.edu Published On :: The goal President Trump will try to advance in Vietnam – the complete denuclearization of North Korea – is a goal genuinely shared by the ROK, China, Japan, Russia, and many other countries. For the ROK, it would remove a major asymmetry with its northern neighbor and a barrier to North-South reconciliation. For China, it… Full Article
pr US-DPRK negotiations: Time to pivot to an interim agreement By webfeeds.brookings.edu Published On :: Fri, 09 Aug 2019 14:11:22 +0000 Executive Summary: If and when U.S.-North Korea working-level talks resume, as agreed by U.S. President Donald Trump and Chairman Kim Jong Un at their brief June 30 meeting at the Demilitarized Zone, prospects for overcoming the current impasse will depend heavily on whether the Trump administration is now prepared to recognize that the North is… Full Article
pr Experts assess the nuclear Non-Proliferation Treaty, 50 years after it went into effect By webfeeds.brookings.edu Published On :: Tue, 03 Mar 2020 20:51:09 +0000 March 5, 2020 marks the 50th anniversary of the entry into effect of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Five decades on, is the treaty achieving what was originally envisioned? Where is it succeeding in curbing the spread of nuclear weapons, and where might it be falling short? Four Brookings experts on defense… Full Article
pr Facebook, Google, and the Future of Privacy and Free Speech By webfeeds.brookings.edu Published On :: Mon, 02 May 2011 00:00:00 -0400 IntroductionIt was 2025 when Facebook decided to post live feeds from public and private surveillance cameras, so they could be searched online. The decision hardly came as a surprise. Ever since Facebook passed the 500 million-member mark in 2010, it found increasing consumer demand for applications that allowed users to access surveillance cameras with publicly accessible IP addresses. (Initially, live feeds to cameras on Mexican beaches were especially popular.) But in the mid-2020s, popular demand for live surveillance camera feeds were joined by demands from the U.S. government that an open circuit television network would be invaluable in tracking potential terrorists. As a result, Facebook decided to link the public and private camera networks, post them live online, and store the video feeds without restrictions on distributed servers in the digital cloud. Once the new open circuit system went live, anyone in the world could log onto the Internet, select a particular street view on Facebook maps and zoom in on a particular individual. Anyone could then back click on that individual to retrace her steps since she left the house in the morning or forward click on her to see where she was headed in the future. Using Facebook’s integrated face recognition app, users could click on a stranger walking down any street in the world, plug her image into the Facebook database to identify her by name, and then follow her movements from door-to-door. Since cameras were virtually ubiquitous in public and commercial spaces, the result was the possibility of ubiquitous identification and surveillance of all citizens virtually anywhere in the world—and by anyone. In an enthusiastic launch, Mark Zuckerberg dubbed the new 24/7 ubiquitous surveillance system “Open Planet.” Open Planet is not a technological fantasy. Most of the architecture for implementing it already exists, and it would be a simple enough task for Facebook or Google, if the companies chose, to get the system up and running: face recognition is already plausible, storage is increasing exponentially; and the only limitation is the coverage and scope of the existing cameras, which are growing by the day. Indeed, at a legal Futures Conference at Stanford in 2007, Andrew McLaughlin, then the head of public policy at Google, said he expected Google to get requests to put linked surveillance networks live and online within the decade. How, he, asked the audience of scholars and technologists, should Google respond? If “Open Planet” went live, would it violate the Constitution? The answer is that it might not under Supreme Court doctrine as it now exists—at least not if it were a purely-private affair, run by private companies alone and without government involvement. Both the First Amendment, which protects free speech, and the Fourth Amendment, which prohibits unreasonable searches and seizures, only restrict actions by the government. On the other hand, if the government directed Open Planet’s creation or used it to track citizens on government-owned, as well as private-sector, cameras, perhaps Facebook might be viewed as the equivalent of a state actor, and therefore restricted by the Constitution. 
At the time of the framing of the Constitution, a far less intrusive invasion of privacy – namely, the warrantless search of private homes and desk drawers for seditious papers – was considered the paradigmatic case of an unreasonable and unconstitutional invasion of privacy. The fact that 24/7 ubiquitous surveillance may not violate the Constitution today suggests the challenge of translating the framers’ values into a world in which Google and Facebook now have far more power over the privacy and free speech of most citizens than any King, president, or Supreme Court justice. In this essay, I will examine four different areas where the era of Facebook and Google will challenge our existing ideas about constitutional protections for free speech and privacy: ubiquitous surveillance with GPS devices and online surveillance cameras; airport body scanners; embarrassing Facebook photos and the problem of digital forgetting; and controversial YouTube videos. In each area, I will suggest, preserving constitutional values requires a different balance of legal and technological solutions, combined with political mobilization that leads to changes in social norms. Let’s start with Open Planet, and imagine sufficient government involvement to make the courts plausibly consider Facebook’s program the equivalent of state action. Imagine also that the Supreme Court in 2025 were unsettled by Open Planet and inclined to strike it down. A series of other doctrines might bar judicial intervention. The Court has come close to saying that we have no legitimate expectations of privacy in public places, at least when the surveillance technologies in question are in general public use by ordinary members of the public.[1] As mobile camera technology becomes ubiquitous, the Court might hold that the government is entitled to have access to the same linked camera system that ordinary members of the public have become accustomed to browsing. Moreover, the Court has said that we have no expectation of privacy in data that we voluntarily surrender to third parties.[2] In cases where digital images are captured on cameras owned by third parties and stored in the digital cloud—that is, on distributed third party servers--we have less privacy than citizens took for granted at the time of the American founding. And although the founders expected a degree of anonymity in public, that expectation would be defeated by the possibility of 24/7 surveillance on Facebook. The doctrinal seeds of a judicial response to Open Planet, however, do exist. A Supreme Court inclined to strike down ubiquitous surveillance might draw on recent cases involving decisions by the police to place a GPS tracking device on the car of a suspect without a warrant, tracking his movements 24/7. The Supreme Court has not yet decided whether prolonged surveillance, in the form of “dragnet-type law enforcement practices” violates the Constitution.[3] Three federal circuits have held that the use of a GPS tracking device to monitor someone’s movements in a car over a prolonged period is not a search because we have no expectations of privacy in our public movements.[4] But in a visionary opinion in 2010, Judge Douglas Ginsburg of the U.S. Court of Appeals disagreed. Prolonged surveillance is a search, he recognized, because no reasonable person expects that his movements will be continuously monitored from door to door; all of us have a reasonable expectation of privacy in the “whole” of our movements in public. 
[5] Ginsburg and his colleagues struck down the warrantless GPS surveillance of a suspect that lasted 24 hours a day for nearly a month on the grounds that prolonged, ubiquitous tracking of citizens’ movements in public is constitutionally unreasonable. “Unlike one’s movements during a single journey, the whole of one’s movements over the course of a month is not actually exposed to the public because the likelihood anyone will observe all those movements is effectively nil,” Ginsburg wrote. Moreover, “That whole reveals more – sometimes a great deal more – than does the sum of its parts.”[6] Echoing the “mosaic theory” invoked by the government in national security cases, Ginsburg concluded that “Prolonged surveillance reveals types of information not revealed by short-term surveillance, such as what a person does repeatedly, what he does not do, and what he does ensemble. These types of information can each reveal more about a person than does any individual trip viewed in isolation.”[7] Ginsburg understood that 24/7 ubiquitous surveillance differs from more limited tracking not just in degree but in kind – it looks more like virtual stalking than a legitimate investigation – and therefore is an unreasonable search of the person. Because prolonged surveillance on “Open Planet” potentially reveals far more about each of us than 24/7 GPS tracking does, providing real-time images of all our actions, rather than simply tracking the movements of our cars, it could also be struck down as an unreasonable search of our persons. And if the Supreme Court struck down Open Planet on Fourth Amendment grounds, it might be influenced by the state regulations of GPS surveillance that Ginsburg found persuasive, or by congressional attempts to regulate Facebook or other forms of 24/7 surveillance, such as the Geolocational Privacy and Surveillance Act proposed by Sen. Ron Wyden (D-OR), which would require officers to get a warrant before electronically tracking cell phones or cars.[8] The Supreme Court in 2025 might also conceivably choose to strike down Open Planet on more expansive grounds, relying not just on the Fourth Amendment, but on the right to autonomy recognized in cases like Planned Parenthood v. Casey and Lawrence v. Texas. The right to privacy cases, beginning with Griswold v. Connecticut and culminating in Roe v. Wade and Lawrence, are often viewed as cases about sexual autonomy, but in Casey and Lawrence, Justice Anthony Kennedy recognized a far more sweeping principle of personal autonomy that might well protect individuals from totalizing forms of ubiquitous surveillance. Imagine an opinion written in 2025 by Justice Kennedy, still ruling the Court and the country at the age of 89. “In our tradition the State is not omnipresent in the home. And there are other spheres of our lives and existence, outside the home, where the State should not be a dominant presence,” Kennedy wrote in Lawrence. “Freedom extends beyond spatial bounds. Liberty presumes an autonomy of self that includes freedom of thought, belief, expression, and certain intimate conduct.”[9] Kennedy’s vision of an “autonomy of self” that depends on preventing the state from becoming a “dominant presence” in public as well as private places might well be invoked to prevent the state from participating in a ubiquitous surveillance system that prevents citizens from defining themselves and expressing their individual identities. 
Just as citizens in the Soviet Union were inhibited from expressing and defining themselves by ubiquitous KGB surveillance, Kennedy might hold, the possibility of ubiquitous surveillance on “Open Planet” also violates the right to autonomy, even if the cameras in question are owned by the private sector, as well as the state, and a private corporation provides the platform for their monitoring. Nevertheless, the fact that the system is administered by Facebook, rather than the Government, might be an obstacle to a constitutional ruling along these lines. And if Kennedy (or his successor) struck down “Open Planet” with a sweeping vision of personal autonomy that didn’t coincide with the actual values of a majority of citizens in 2025, the decision could be the Roe of virtual surveillance, provoking backlashes from those who don’t want the Supreme Court imposing its values on a divided nation. Would the Supreme Court, in fact, strike down “Open Planet” in 2025? If the past is any guide, the answer may depend on whether the public, in 2025, views 24/7 ubiquitous surveillance as invasive and unreasonable, or whether citizens have become so used to ubiquitous surveillance on and off the web, in virtual space and real space, that the public demands “Open Planet” rather than protesting against it. I don’t mean to suggest that the Court actually reads the polls. But in the age of Google and Facebook, technologies that thoughtfully balance privacy with free expression and other values have tended to be adopted only when companies see their markets as demanding some kind of privacy protection, or when engaged constituencies have mobilized in protest against poorly designed architectures and demanded better ones, helping to create a social consensus that the invasive designs are unreasonable. The paradigmatic case of the kind of political mobilization on behalf of constitutional values that I have in mind is presented by my second case: the choice between the naked machine and the blob machine in airport security screening. In 2002, officials at Orlando International airport first began testing the millimeter wave body scanners that are currently at the center of a national uproar. The designers of the scanners at Pacific Northwest Laboratories offered U.S. officials a choice: naked machines or blob machines? The same researchers had developed both technologies, and both were equally effective at identifying contraband. But, as their nicknames suggest, the former displays graphic images of the human body, while the latter scrambles the images into a non-humiliating blob.[10] Since both versions of the scanners promise the same degree of security, any sane attempt to balance privacy and safety would seem to favor the blob machines over the naked machines. And that’s what European governments chose. Most European airport authorities have declined to adopt body scanners at all, because of persuasive evidence that they’re not effective at detecting low-density contraband such as the chemical powder PETN that the trouser bomber concealed in his underwear on Christmas day, 2009. But the handful of European airports that have adopted body scanners, such as Schiphol airport in Amsterdam, have opted for a version of the blob machine. This is in part due to the efforts of European privacy commissioners, such as Germany’s Peter Schaar, who have emphasized the importance of designing body scanners in ways that protect privacy. The U.S. Department of Homeland Security made a very different choice. 
It deployed the naked body scanners without any opportunity for public comment—then appeared surprised by the backlash. Remarkably, however, the backlash was effective. After a nationwide protest inspired by the Patrick Henry of the anti-Naked Machines movement, a traveler who memorably exclaimed “Don’t Touch my Junk,” President Obama called on the TSA to go back to the drawing board. And a few months after authorizing the intrusive pat-downs, in February 2011, the TSA announced that it would begin testing, on a pilot basis, versions of the very same blob machines that the agency had rejected nearly a decade earlier. Under the latest version, to be tested in Las Vegas and Washington, D.C., the TSA will install software filters on its body scanner machines that detect potential threat items and indicate their location on a generic, blob-like outline of each passenger that will appear on a monitor attached to the machine. Passengers without suspicious items will be cleared as “OK”; those with suspicious items will be taken aside for additional screening. The remote rooms in which TSA agents view images of the naked body will be eliminated. According to news reports, TSA began testing the filtering software in the fall of 2010 – precisely when the protests against the naked machines went viral. If the filtering software is implemented across the country, converting naked machines into blob machines, the political victory for privacy will be striking. Of course, it’s possible that courts might strike down the naked machines as unreasonable and unconstitutional, even without the political protests. In a 1983 opinion upholding searches by drug-sniffing dogs, Justice Sandra Day O’Connor recognized that a search is most likely to be considered constitutionally reasonable if it is very effective at discovering contraband without revealing innocent but embarrassing information.[11] The backscatter machines seem, under O'Connor's view, to be the antithesis of a reasonable search: They reveal a great deal of innocent but embarrassing information and are remarkably ineffective at revealing low-density contraband. It’s true that the government gets great deference in airports and at the borders, where routine border searches don’t require heightened suspicion. But the Court has held that non-routine border searches, such as body cavity or strip searches, do require a degree of individual suspicion. And although the Supreme Court hasn't evaluated airport screening technology, lower courts have emphasized, as the U.S. Court of Appeals for the 9th Circuit ruled in 2007, that "a particular airport security screening search is constitutionally reasonable provided that it 'is no more extensive nor intensive than necessary, in the light of current technology, to detect the presence of weapons or explosives.'"[12] It’s arguable that since the naked machines are neither effective nor minimally intrusive – that is, because they might be designed with blob-machine-like filters that promise just as much security while also protecting privacy – courts might strike them down. As a practical matter, however, both lower courts and the Supreme Court seem far more likely to strike down strip searches that have inspired widespread public opposition – such as the strip search of a high school girl wrongly accused of carrying drugs, which the Supreme Court invalidated by a vote of 8-1[13] – than they are to strike down searches that, despite the protests of a mobilized minority, the majority of the public appears to accept. 
The tentative victory of the blob machines over the naked machines, if it materializes, provides a model for successful attempts to balance privacy and security: government can be pressured into striking a reasonable balance between privacy and security by a mobilized minority of the public when the privacy costs of a particular technology are dramatic, visible, widely distributed, and people experience the invasions personally as a kind of loss of control over the conditions of their own exposure. But can we be mobilized to demand a similarly reasonable balance when the threats to privacy come not from the government but from private corporations and when those responsible for exposing too much personal information about us are none other than ourselves? When it comes to invasions of privacy by fellow citizens, rather than by the government, we are in the realm not of autonomy but of dignity and decency. (Autonomy preserves a sphere of immunity from government intrusion in our lives; dignity protects the norms of social respect that we accord to each other.) And since dignity is a socially constructed value, it’s unlikely to be preserved by judges--or by private corporations--in the face of the expressed preferences of citizens who are less concerned about dignity than exposure. This is the subject of our third case, which involves a challenge that, in big and small ways, is confronting millions of people around the globe: how best to live our lives in a world where the Internet records everything and forgets nothing—where every online photo, status update, Twitter post and blog entry by and about us can be stored forever.[14] Consider the case of Stacy Snyder. Four years ago, Snyder, then a 25-year-old teacher in training at Conestoga Valley High School in Lancaster, Pa., posted a photo on her MySpace page that showed her at a party wearing a pirate hat and drinking from a plastic cup, with the caption “Drunken Pirate.” After discovering the page, her supervisor at the high school told her the photo was “unprofessional,” and the dean of Millersville University School of Education, where Snyder was enrolled, said she was promoting drinking in virtual view of her under-age students. As a result, days before Snyder’s scheduled graduation, the university denied her a teaching degree. Snyder sued, arguing that the university had violated her First Amendment rights by penalizing her for her (perfectly legal) after-hours behavior. But in 2008, a federal district judge rejected the claim, saying that because Snyder was a public employee whose photo didn’t relate to matters of public concern, her “Drunken Pirate” post was not protected speech.[15] When historians of the future look back on the perils of the early digital age, Stacy Snyder may well be an icon. With Web sites like LOL Facebook Moments, which collects and shares embarrassing personal revelations from Facebook users, ill-advised photos and online chatter are coming back to haunt people months or years after the fact. Technological advances, of course, have often presented new threats to privacy. 
In 1890, in perhaps the most famous article on privacy ever written, Samuel Warren and Louis Brandeis complained that because of new technology — like the Kodak camera and the tabloid press — “gossip is no longer the resource of the idle and of the vicious but has become a trade.”[16] But the mild society gossip of the Gilded Age pales before the volume of revelations contained in the photos, video and chatter on social-media sites and elsewhere across the Internet. Facebook, which surpassed MySpace in 2008 as the largest social-networking site, now has more than 500 million members, or 22 percent of all Internet users, who spend more than 500 billion minutes a month on the site. Facebook users share more than 25 billion pieces of content each month (including news stories, blog posts and photos), and the average user creates 70 pieces of content a month. Today, as in Brandeis’s day, the value threatened by gossip on the Internet – whether posted by us or by others – is dignity. (Brandeis called it an offense against honor.) But American law has never been good at regulating offenses against dignity – especially when regulations would clash with other values, such as protections for free speech. And indeed, the most ambitious proposals in Europe to create new legal rights to escape your past on the Internet are very hard to reconcile with the American free speech tradition. The cautionary tale here is Argentina, which has dramatically expanded the liability of search engines like Google and Yahoo for offensive photographs that harm someone’s reputation. Recently, an Argentinean judge held Google and Yahoo liable for causing “moral harm” and violating the privacy of Virginia Da Cunha, a pop star, by indexing pictures of her that were linked to erotic content. The ruling against Google and Yahoo was overturned on appeal in August, but there are at least 130 similar cases pending in Argentina to force search engines to remove or block offensive content. In the U.S., search engines are protected by the Communications Decency Act, which immunizes Internet service providers from liability for content posted by third parties. But as liability against search engines expands abroad, it will seriously curtail free speech: Yahoo says that the only way to comply with the injunctions is to block all sites that refer to a particular plaintiff.[17] In Europe, recent proposals to create a legally enforceable right to escape your past have come from the French. The French data commissioner, Alex Türk, has proposed a right to oblivion – namely, a right to escape your past on the Internet. The details are fuzzy, but it appears that the proposal would rely on an international body – say a commission of forgetfulness – to evaluate particular take-down requests and order Google and Facebook to remove content that, in the view of commissioners, violated an individual’s dignitary rights. From an American perspective, the very intrusiveness of this proposal is enough to make it implausible: how could we rely on bureaucrats to protect our dignity in cases where we have failed to protect it on our own? Europeans, who have less of a free speech tradition and far more of a tradition of allowing people to remove photographs taken and posted against their will, will be more sympathetic to the proposal. But from the perspective of most American courts and companies, giving people the right selectively to delete their pasts from public discourse would pose unacceptably great threats to free speech. 
A far more promising solution to the problem of forgetting on the Internet is technological. And there are already small-scale privacy apps that offer disappearing data. An app called TigerText allows text-message senders to set a time limit from one minute to 30 days, after which the text disappears from the company’s servers, on which it is stored, and, therefore, from the senders’ and recipients’ phones. (The founder of TigerText, Jeffrey Evans, has said he chose the name before the scandal involving Tiger Woods’s supposed texts to a mistress.)[18] Expiration dates could be implemented more broadly in various ways. Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read. The technology doesn’t promise perfect control — you can’t stop someone from copying your photos or Facebook chats during the period in which they are not encrypted. But as Vanish improves, it could bring us much closer to a world where our data don’t linger forever. Facebook, if it wanted to, could implement expiration dates on its own platform, making our data disappear after, say, three days or three months unless a user specified that he wanted it to linger forever. A more welcome option might be for Facebook to encourage the development of Vanish-style apps that would allow individual users who are concerned about privacy to make their own data disappear without imposing the default on all Facebook users. So far, however, Zuckerberg, Facebook’s C.E.O., has been moving in the opposite direction — toward transparency, rather than privacy. In defending Facebook’s recent decision to make the default for profile information about friends and relationship status public, Zuckerberg told the founder of the publication TechCrunch that Facebook had an obligation to reflect “current social norms” that favored exposure over privacy. “People have really gotten comfortable not only sharing more information and different kinds but more openly and with more people, and that social norm is just something that has evolved over time,” [19] he said. It’s true that a German company, X-Pire, recently announced the launch of a Facebook app that will allow users automatically to erase designated photos. Using electronic keys that expire after short periods of time and that are obtained by solving a Captcha, or graphic that requires users to type in a fixed combination of numbers, the application ensures that once the time stamp on the photo has expired, the key disappears.[20] X-Pire is a model for a sensible, blob-machine-like solution to the problem of digital forgetting. But unless Facebook builds X-Pire-like apps into its platform – an unlikely outcome given its commercial interests – a majority of Facebook users are unlikely to seek out disappearing data options until it’s too late. X-Pire, therefore, may remain for the foreseeable future a technological solution to a grave privacy problem—but a solution that doesn’t have an obvious market. 
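To make the key-shattering idea behind Vanish concrete, here is a minimal sketch of how self-expiring data could work in principle. It is an illustration under stated assumptions, not the actual Vanish implementation: the toy XOR cipher, the all-or-nothing key split, and the simulated "erosion" of key shares are simplifications of a real system, which would use standard encryption and threshold secret sharing spread across a peer-to-peer network whose nodes naturally come and go.

import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(data: bytes, key: bytes) -> bytes:
    # Toy repeating-key XOR cipher, for illustration only; a real system would use AES.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

def split_key(key: bytes, n: int) -> list:
    # All-or-nothing split: every share is needed to rebuild the key, so losing any
    # single share makes the ciphertext permanently unreadable. (Vanish-style systems
    # use k-of-n threshold sharing, so the data survives some churn before it "rusts.")
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def recombine(shares: list) -> bytes:
    key = shares[0]
    for s in shares[1:]:
        key = xor_bytes(key, s)
    return key

message = b"ill-advised party photo"
key = os.urandom(16)
ciphertext = encrypt(message, key)
shares = split_key(key, n=10)   # in a real system, scattered across many servers

# While every share survives, anyone holding them can rebuild the key and read the data.
assert encrypt(ciphertext, recombine(shares)) == message

# Once a share expires or "erodes" -- here, one share is simply lost -- the key can no
# longer be reconstructed and the ciphertext has effectively self-destructed.
assert recombine(shares[1:]) != key

The point the sketch tries to capture is the one emphasized in the essay: expiration is enforced by the gradual loss of the key itself, not by trusting any company to honor a deletion request.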
The courts, in my view, are better equipped to regulate offenses against autonomy, such as 24/7 surveillance on Facebook, than offenses against dignity, such as drunken Facebook pictures that never go away. But that regulation in both cases will likely turn on evolving social norms whose contours in twenty years are hard to predict. Finally, let’s consider one last example of the challenge of preserving constitutional values in the age of Facebook and Google, an example that concerns not privacy but free speech.[21] At the moment, the person who arguably has more power than any other to determine who may speak and who may be heard around the globe isn’t a king, president or Supreme Court justice. She is Nicole Wong, the deputy general counsel of Google, and her colleagues call her “The Decider.” It is Wong who decides what controversial user-generated content goes down or stays up on YouTube and other applications owned by Google, including Blogger, the blog site; Picasa, the photo-sharing site; and Orkut, the social networking site. Wong and her colleagues also oversee Google’s search engine: they decide what controversial material does and doesn’t appear on the local search engines that Google maintains in many countries in the world, as well as on Google.com. As a result, Wong and her colleagues arguably have more influence over the contours of online expression than anyone else on the planet. At the moment, Wong seems to be exercising that responsibility with sensitivity to the values of free speech. Google and Yahoo can be held liable outside the United States for indexing or directing users to content after having been notified that it was illegal in a foreign country. In the United States, by contrast, Internet service providers are protected from most lawsuits involving having hosted or linked to illegal user-generated content. As a consequence of these differing standards, Google has considerably less flexibility overseas than it does in the United States about content on its sites, and its “information must be free” ethos is being tested abroad. For example, on the German and French default Google search engines, Google.de and Google.fr, you can’t find Holocaust-denial sites that can be found on Google.com, because Holocaust denial is illegal in Germany and France. Broadly, Google has decided to comply with governmental requests to take down links on its national search engines to material that clearly violates national laws. But not every overseas case presents a clear violation of national law. In 2006, for example, protesters at a Google office in India demanded the removal of content on Orkut, the social networking site, that criticized Shiv Sena, a hard-line Hindu political party popular in Mumbai. Wong eventually decided to take down an Orkut group dedicated to attacking Shivaji, revered as a deity by the Shiv Sena Party, because it violated Orkut terms of service by criticizing a religion, but she decided not to take down another group because it merely criticized a political party. “If stuff is clearly illegal, we take that down, but if it’s on the edge, you might push a country a little bit,” Wong told me. “Free-speech law is always built on the edge, and in each country, the question is: Can you define what the edge is?” Over the past couple of years, Google and its various applications have been blocked, to different degrees, by 24 countries. Blogger is blocked in Pakistan, for example, and Orkut in Saudi Arabia. 
Meanwhile, governments are increasingly pressuring telecom companies like Comcast and Verizon to block controversial speech at the network level. Europe and the U.S. recently agreed to require Internet service providers to identify and block child pornography, and in Europe there are growing demands for network-wide blocking of terrorist-incitement videos. As a result, Wong and her colleagues worry that Google’s ability to make case-by-case decisions about what links and videos are accessible through Google’s sites may be slowly circumvented, as countries are requiring the companies that give us access to the Internet to build top-down censorship into the network pipes. It is not only foreign countries that are eager to restrict speech on Google and YouTube. In May 2008, Joseph Lieberman, who has become the A. Mitchell Palmer of the digital age, had his staff contact Google and demand that the company remove from YouTube dozens of what he described as jihadist videos. After viewing the videos one by one, Wong and her colleagues removed some of the videos but refused to remove those that they decided didn’t violate YouTube guidelines. Lieberman wasn’t satisfied. In an angry follow-up letter to Eric Schmidt, the C.E.O. of Google, Lieberman demanded that all content he characterized as being “produced by Islamist terrorist organizations” be immediately removed from YouTube as a matter of corporate judgment — even videos that didn’t feature hate speech or violent content or violate U.S. law. Wong and her colleagues responded by saying, “YouTube encourages free speech and defends everyone’s right to express unpopular points of view.” Recently, Google and YouTube announced new guidelines prohibiting videos “intended to incite violence.” That category scrupulously tracks the Supreme Court’s rigorous First Amendment doctrine, which says that speech can be banned only when it poses an imminent threat of producing serious lawless action. Unfortunately, Wong and her colleagues recently retreated from that bright line under further pressure from Lieberman. In November 2010, YouTube added a new category that viewers can click to flag videos for removal: “promotes terrorism.” There are 24 hours of video uploaded to YouTube every minute, and a series of categories viewers can use to request removal, including “violent or repulsive content” or inappropriate sexual content. Although hailed by Senator Lieberman, the new “promotes terrorism” category is potentially troubling because it goes beyond the narrow test of incitement to violence that YouTube had previously used to flag terrorism-related videos for removal. YouTube’s capitulation to Lieberman shows that a user-generated system for enforcing community standards will never protect speech as scrupulously as unelected judges enforcing strict rules about when speech can be viewed as a form of dangerous conduct. Google remains a better guardian for free speech than Internet companies like Facebook and Twitter, which have refused to join the Global Network Initiative, an industry-wide coalition committed to upholding free speech and privacy. But the recent capitulation of YouTube shows that Google’s “trust us” model may not be a stable way of protecting free speech in the twenty-first century, even though the alternatives to trusting Google – such as authorizing national regulatory bodies around the globe to request the removal of controversial videos – might protect less speech than Google’s “Decider” model currently does. 
I’d like to conclude by stressing the complexity of protecting constitutional values like privacy and free speech in the age of Google and Facebook, which are not formally constrained by the Constitution. In each of my examples – 24/7 Facebook surveillance, blob machines, escaping your Facebook past, and promoting free speech on YouTube and Google – it’s possible to imagine a rule or technology that would protect free speech and privacy, while also preserving security: a blob-machine-like solution. But in some areas, those blob-machine-like solutions are more likely, in practice, to be adopted than others. Engaged minorities may demand blob machines when they personally experience their own privacy being violated; but they may be less likely to rise up against the slow expansion of surveillance cameras, which transform expectations of privacy in public. Judges in the American system may be more likely to resist ubiquitous surveillance in the name of Roe v. Wade-style autonomy than they are to create a legal right to allow people to edit their Internet pasts, which relies on ideas of dignity that in turn require a social consensus that in America, at least, does not exist. As for free speech, it is being anxiously guarded for the moment by Google, but the tremendous pressures from consumers and governments are already making it hard to hold the line at removing only speech that threatens imminent lawless action. In translating constitutional values in light of new technologies, it’s always useful to ask: What would Brandeis do? Brandeis would never have tolerated unpragmatic abstractions, which have the effect of giving citizens less privacy in the age of cloud computing than they had during the founding era. In translating the Constitution to meet the challenges of our time, Brandeis would have considered it a duty actively to engage in the project of constitutional translation in order to preserve the Framers’ values in a startlingly different technological world. But the task of translating constitutional values can’t be left to judges alone: it also falls to regulators, legislators, technologists, and, ultimately, to politically engaged citizens. As Brandeis put it, “If we would guide by the light of reason, we must let our minds be bold.” [1] See Florida v. Riley, 488 U.S. 445 (1989) (O’Connor, J., concurring). [2] See United States v. Miller, 425 U.S. 435 (1976). [3] See United States v. Knotts, 460 U.S. 276, 283-4 (1983). [4] See United States v. Pineda-Moreno, 591 F.3d 1212 (9th Cir. 2010); United States v. Garcia, 474 F.3d 994 (7th Cir. 2007); United States v. Marquez, 605 F.3d 604 (8th Cir. 2010). [5] See United States v. Maynard, 615 F.3d 544 (D.C. Cir. 2010). [6] 615 F.3d at 558. [7] Id. at 562. [8] See Declan McCullagh, “Senator Pushes for Mobile Privacy Reform,” CNet News, March 22, 2011, available at http://m.news.com/2166-12_3-20045723-281.html [9] Lawrence v. Texas, 539 U.S. 558, 562 (2003). [10] The discussion of the blob machines is adapted from “Nude Breach,” New Republic, December 13, 2010. [11] United States v. Place, 462 U.S. 696 (1983). [12] U.S. v. Davis, 482 F.2d 893, 913 (9th Cir. 1973). [13] Safford Unified School District v. Redding, 557 U.S. ___ (2009). [14] The discussion of digital forgetting is adapted from “The End of Forgetting,” New York Times Magazine, July 25, 2010. [15] Snyder v. Millersville University, No. 07-1660 (E.D. Pa. Dec. 3, 2008). [16] Brandeis and Warren, “The Right to Privacy,” 4 Harv. L. Rev. 
193 (1890). [17] Vinod Sreeharsha, “Google and Yahoo Win Appeal in Argentine Case,” N.Y. Times, August 20, 2010, B4. [18] See Belinda Luscombe, “Tiger Text: An iPhone App for Cheating Spouses?”, Time.com, Feb. 26, 2010, available at http://www.time.com/time/business/article/0,8599,1968233,00.html [19] Marshall Kirkpatrick, “Facebook’s Zuckerberg Says the Age of Privacy Is Over,” ReadWriteWeb.com, January 9, 2010, available at http://www.readwriteweb.com/archives/facebooks_zuckerberg_says_the_age_of_privacy_is_ov.php [20] Aemon Malone, “X-Pire Aims to Cut down on Photo D-Tagging on Facebook,” Digital Trends.com, January 17, 2011, available at http://www.digitaltrends.com/social-media/x-pire-adds-expiration-date-to-digital-photos/ [21] The discussion of free speech that follows is adapted from “Google’s Gatekeepers,” New York Times Magazine, November 30, 2008. Authors: Jeffrey Rosen
pr Interpreting the Constitution in the Digital Era By webfeeds.brookings.edu Published On :: Wed, 30 Nov 2011 11:23:00 -0500 In an interview on NPR's Fresh Air, Jeffrey Rosen discusses how technological changes are challenging basic constitutional principles of freedom of speech and our own individual autonomy. TERRY GROSS, HOST: This is FRESH AIR. I'm Terry Gross. The digital world that we've come to rely on — the Internet, social networks, GPS's, street maps—also creates opportunities to collect information about us, track our movements and invade our privacy. Add to that brain scans that might reveal criminal tendencies and new developments in genetic medicine and biotechnology, and you have a lot of potential challenges to basic constitutional principles that our founding fathers couldn't possibly have imagined. My guest, Jeffrey Rosen, has put together a new book that explores those challenges. Along with Benjamin Wittes, he co-edited Constitution 3.0: Freedom and Technological Change. It's a publication of the Brookings Institution's Project on Technology and the Constitution, which Rosen directs. He's also a law professor at George Washington University and legal editor for The New Republic. His new book is a collection of essays in which a diverse group of legal scholars imagine plausible technological developments in or near the year 2025 that would stress current constitutional law, and they propose possible solutions. Jeffrey Rosen, welcome back to FRESH AIR. So what are the particular parts of the Constitution that you think really come into play here with new technologies? JEFFREY ROSEN: Well, what's so striking is that none of the existing amendments give clear answers to the most basic questions we're having today. So, for example, think about global positioning system technologies, which the Supreme Court is now considering. Can the police, without a warrant, put a secret GPS device on the bottom of someone's car and track him 24/7 for a month? Well, the relevant constitutional text is the Fourth Amendment, which says the right of the people to be secure in their persons, houses, papers and effects against unreasonable searches and seizures, shall not be violated. But that doesn't answer the question: Is it an unreasonable search of our persons or effects to be monitored in public spaces? Some courts have said no. Several lower court judges and the Obama administration argue that we have no expectation of privacy in public, because it's theoretically possible for our neighbors to put a tail on us or for the police to track us for 100 miles, as the court has said. Therefore, we have to assume the risk that we're being monitored, ubiquitously, 24/7 for a month. But not everyone agrees. In a visionary opinion, Judge Douglas Ginsburg on the U.S. Court of Appeals for the D.C. Circuit said there's a tremendous difference between short-term and long-term surveillance. We may expect that our neighbors are watching when we walk on the street for a few blocks, but no one in practice expects to be tailed or surveilled for a month. Ginsburg said we do have an expectation of privacy in the whole of our movements, and therefore when the police are going to engage in long-term surveillance, because they can learn so much more about us, they should have a warrant. There was a remarkable moment in the oral argument for the global positioning system case. 
Chief Justice John Roberts, who asked the first question, said: Isn't there a difference between a 100-mile search of the kind we've approved in the past and watching someone for a month? The government's lawyer resisted, and Roberts said: Is it the U.S. government's position that the police could put GPS devices inside the clothes of the members of this court, of these justices, or under our cars and track us for a month? And when the government's lawyer said yes, I think he may have lost the case. Authors: Jeffrey Rosen Publication: NPR
pr Walk this Way: The Economic Promise of Walkable Places in Metropolitan Washington, D.C. By webfeeds.brookings.edu Published On :: An economic analysis of a sample of neighborhoods in the Washington, D.C. metropolitan area using walkability measures finds that: More walkable places perform better economically. For neighborhoods within metropolitan Washington, as the number of environmental features that facilitate walkability and attract pedestrians increases, so do office, residential, and retail rents, retail revenues, and for-sale…
pr Taxing capital income: Mark-to-market and other approaches By webfeeds.brookings.edu Published On :: Mon, 28 Oct 2019 16:13:03 +0000 Given increased income and wealth inequality, much recent attention has been devoted to proposals to increase taxes on the wealthy (such as imposing a tax on accumulated wealth). Since capital income is highly skewed toward the ultra-wealthy, methods of increasing taxes on capital income provide alternative approaches for addressing inequality through the tax system. Marking…
pr Mexico is a prop in President Trump’s political narrative By webfeeds.brookings.edu Published On :: Thu, 20 Jun 2019 13:00:20 +0000 When it comes to his country’s relationship with Mexico, U.S. President Donald Trump has decided to take a position that is at once reckless and suicidal. Reckless, because he is single-handedly scuttling a bilateral relationship with a nation that is vital to the prosperity, security, and well-being of the U.S. Suicidal, because the punitive tariffs…
pr Obama in China: Preserving the Rebalance By webfeeds.brookings.edu Published On :: This November, after focusing on foreign policy concerns around the globe and congressional midterm elections at home, President Barack Obama will travel to Beijing to attend the APEC Economic Leaders’ Meeting in hopes of preserving and enhancing one of his key foreign policy achievements—the rebalance to Asia. Obama’s trip to China will be his first…
pr Previewing this Week’s Public Forum on Immigration Reform at Claremont McKenna College By webfeeds.brookings.edu Published On :: Today at Claremont McKenna College, a new bipartisan public forum—the Dreier Roundtable—will convene leaders in politics, business, journalism and academia to hold constructive, substantive discussions about immigration reform. Just days after the midterm elections of 2014, the panel of experts will examine the strengths and weaknesses of current immigration policy and debate the economic and…
pr The U.S. and China’s Great Leap Forward … For Climate Protection By webfeeds.brookings.edu Published On :: It’s rare in international diplomacy today that dramatic agreements come entirely by surprise. And that’s particularly the case in economic negotiations, where corporate, labor, and environmental organizations intensely monitor the actions of governments – creating a rugby scrum around the ball of the negotiation that seems to grind everything to incremental measures. That’s what makes…
pr Presidential leadership in the first year By webfeeds.brookings.edu Published On :: Fri, 02 Dec 2016 19:12:23 +0000 The first year in office presents a unique window of opportunity for a new president to advance his agenda and pass signature legislation. President Obama’s first year, for instance, saw the passage of the economic stimulus, Dodd-Frank, and the Lilly Ledbetter Fair Pay Act, along with new ethics guidelines designed to curtail the influence of…
pr The Summit of the Americas and prospects for inter-American relations By webfeeds.brookings.edu Published On :: Fri, 03 Apr 2015 09:00:00 -0400 Event Information: April 3, 2015, 9:00 AM - 10:15 AM EDT, Saul/Zilkha Rooms, Brookings Institution, 1775 Massachusetts Avenue NW, Washington, DC 20036. On April 10 and 11, 2015, the Seventh Summit of the Americas will bring together the heads of state and government of every country in the Western Hemisphere for the first time. Recent efforts by the United States to reform immigration policy, re-establish diplomatic relations with Cuba, and reform our approach to drug policies at home and abroad have generated greater optimism about the future of inter-American relations. This Summit provides an opportunity to spark greater collaboration on development, social inclusion, democracy, education, and energy security. However, this Summit of the Americas is also convening at a time when the hemisphere is characterized by competing visions for economic development, democracy and human rights, and regional cooperation through various institutions such as the Organization of American States, the Union of South American Nations, and the Community of Latin American and Caribbean States. On Friday, April 3, the Latin America Initiative at Brookings hosted Assistant Secretary of State Roberta S. Jacobson for a discussion on the Seventh Summit of the Americas and what it portends for the future of hemispheric relations.
pr Reconciling U.S. property claims in Cuba By webfeeds.brookings.edu Published On :: Fri, 04 Dec 2015 17:33:00 -0500 As the United States and Cuba rebuild formal relations, certain challenging topics remain to be addressed. Among these are outstanding U.S. property claims in Cuba. In this report, Richard E. Feinberg argues that it is in both countries’ interests to tackle this thorny issue expeditiously, and that the trauma of property seizures in the twentieth century could be transformed into an economic opportunity now. The report looks closely at the nearly 6,000 certified U.S. claims, disaggregating them by corporate and individual, large and small. To settle the U.S. claims, Feinberg suggests a hybrid formula, whereby smaller claimants receive financial compensation while larger corporate claimants can select an “opt-out” option, pursuing their claims directly with Cuban authorities, perhaps facilitated by an umbrella bilateral claims resolution committee. In this scenario, the larger corporate claimants (which account for nearly $1.7 billion of the $1.9 billion in total U.S. claims, excluding interest) could select from a menu of business development rights, including vouchers applicable to tax liabilities or equity investments, and preferred acquisition rights. Participating U.S. firms could also agree to inject additional capital and modern technology, to ensure benefits to the Cuban economy. Though it is often argued that Cuba is too poor to pay some $2 billion of claims, the paper finds that Cuba can in fact manage payments if they are stretched out over a reasonable period of time and exclude interest. The paper also suggests a number of mechanisms whereby the Cuban government could secure funds to pay compensation, including revenues from normalization-related activities. The Cuban government does not dispute the principle of compensation for properties nationalized in the public interest; the two governments agree on this. Cuba also asserts a set of counter-claims that allege damages from the embargo and other punitive actions against it. But a grand bargain with claims settlement as the centerpiece would require important changes in U.S. sanctions laws and regulations that restrict U.S. investments in Cuba. The United States could also offer to work with Cuba and other creditors to renegotiate Cuba’s outstanding official and commercial debts, taking into account Cuba’s capacity to pay, and allow Cuba to enter the international financial institutions. Feinberg ultimately argues that both nations should make claims resolution the centerpiece of a grand bargain that would advance the resolution of a number of other remaining points of tension between the two nations. This paves the way for Cuba to embrace an ambitious, forward-looking development strategy and for real, notable progress in normalizing relations with the United States. Authors: Richard E. Feinberg
pr A preview of President Obama's upcoming trip to Cuba and Argentina By webfeeds.brookings.edu Published On :: Tue, 15 Mar 2016 00:00:00 -0400 In advance of President Obama’s historic trip to Cuba and Argentina, three Brookings scholars participated in a media roundtable to offer context and outline their expectations for the outcomes of the trip. Richard Feinberg and Ted Piccone discussed Cuba–including developments in the U.S.-Cuba relationship, the Cuban economy, and human rights on the island–and Harold Trinkunas offered insight on Argentina, inter-American relations, and the timing of the visit. Richard Feinberg: The idea is to promote a gradual incremental transition to a more open, pluralistic and prosperous Cuba integrated into global markets of goods, capital, and ideas. It is a long-term strategy. It cannot be measured by quarterly reports. Ted Piccone: ...the key [is] to unlock a whole set of future changes that I think will be net positive for the United States, but it is going to take time, and it is not going to happen overnight. Harold Trinkunas: Cuba is really about moving, among other things, a stumbling block to better relations with Latin America, and Argentina is about restoring a positive relationship with a key swing state in the region that was once one of our most important allies in the region. Authors: Richard E. Feinberg, Ted Piccone, Harold Trinkunas
pr Will the Al-Bashir Regime in Sudan Survive Recent Pressures? By webfeeds.brookings.edu Published On :: Fri, 01 Nov 2013 11:26:00 -0400 It appears that Omar al-Bashir’s regime in Khartoum may be counting down to its demise as internal and external pressures seem poised to boil over and finally wrest the country out of his control. While the international community has imposed painful trade sanctions and the International Criminal Court has sought to bring al-Bashir to justice for his role in the Darfur conflict, Sudan’s own citizens have been increasingly demonstrative of their dissatisfaction and desire for change. During the last few years, al-Bashir has faced growing opposition from restless urban youth who are no longer willing to live with the status quo. There have also been fears within the old guard—the military and hardcore Islamists—that Sudan could fall victim to uprisings like those in Egypt and Tunisia. In addition to the significant dislocations to the Sudanese economy caused by trade sanctions by Western countries, Khartoum has also lost significant revenues from the sale of oil produced in South Sudan’s oil fields due to ongoing disputes. To deal with these large shortfalls, al-Bashir’s government has imposed severe austerity measures on the economy, including major reductions in government subsidies, most notably on food and fuel. In response, a broad cross-section of the population took to the streets in protest. In September of this year, like their counterparts in Egypt and Tunisia before them, large numbers of unemployed and restless Sudanese youth took to the streets to demand the ouster of al-Bashir and his government. Government security forces responded with a vengeance, arresting large numbers of protesters and either killing or causing the deaths of many of them. Within the military, which, together with Islamists, has been the base of al-Bashir’s support since the 1989 coup, there is significant discontent. In addition, there is evidence that some members of al-Bashir’s party—the National Congress Party—are not happy with the president for his failure to deal effectively with the country’s multifarious problems. Today, Sudan’s economy is falling apart—there is galloping inflation, high unemployment, especially among urban youth, and many Sudanese live below the poverty level. In addition, Khartoum is still unable to deal properly with the demands of various ethnic minorities, which are waging violent protests to force the government to allow them to rule themselves. Many groups want genuine institutional reforms and a governing process that is truly democratic and characterized by the rule of law. Added to the litany of problems Sudan faces is the fact that it remains embroiled in conflict with South Sudan over the future of the Abyei region and its rich oil reserves. The scheduled 2011 vote for Abyei citizens to decide between South Sudan and Sudan did not occur and just recently opposing stakeholders in the region have argued over when and how to hold the referendum with one group boycotting the other’s efforts. Thus, the region remains in limbo. Hardcore Islamists, long in the president’s corner, are now warning al-Bashir that he is not likely to successfully solve Sudan’s complex problems by simply cracking down on protesters. 
The question now is: Will al-Bashir give in to the demands of his protesting fellow citizens and initiate the necessary democratic reforms, or will he continue to resist and eventually suffer a fate similar to the one that befell his counterparts in Egypt and Tunisia? On January 30, 2011, al-Bashir’s vulnerability to a similar uprising was first made apparent when protesters took to the streets of Khartoum and Al-Ubayyid after using online social networking sites to coordinate demonstrations. The government response was swift and extremely brutal—several students were arrested and one was killed. Sporadic and uncoordinated protests, particularly among university students, were also witnessed in the coming months. Then, on September 23, 2013, riots broke out in response to the removal of state subsidies on fuel and cooking gas in Khartoum. The violence spread first across Khartoum and Omdurman in the heart of the regime’s power base, and then to other cities in the days that followed. Protesters, calling for the removal of al-Bashir, blocked roads and set government buildings on fire. As usual, the regime responded brutally, killing more than 50 protesters according to some witnesses and arresting thousands of Sudanese citizens. Although the Sudanese situation in 2013 is similar to the 2011 situations of its North African neighbors in terms of social frustration over incumbent regimes, Sudan differs from them in three main ways. First, the majority of Sudanese do not use social media; hence, it is much more difficult to coordinate protests using tools like Facebook and Twitter. Second, the government has cracked down on the press and blocked the free flow of information, further disconnecting citizens from potentially valuable information. Third, al-Bashir’s regime is much less tolerant of protests and demonstrations and has demonstrated a proclivity for using as much force as quickly as possible to snuff out public uprisings. Such efforts are likely to buy only a temporary reprieve for the dying regime as it clings to power, and such responses cannot force the people to give up their demands for improvements in their standard of living, as well as for respect for their fundamental rights. Without a credible opposition party to coordinate and peacefully channel the frustrations of restless youth into a peaceful revolution, current events in Sudan are likely to force the country into another bloody civil war. There is, of course, a possibility that, because Sudan lacks the type of institutional arrangements (a free and independent press; an independent judiciary; regular, free, credible and fair elections) that provide citizens with the tools either to change their government or to petition it for relief from tyranny, the country could soon become another failed state, such as Somalia. It would then become, like Somalia, a magnet for terrorists and extremist groups seeking to destabilize the region. In addition, another civil war in Sudan would flood the region with refugees and exacerbate the problems now facing many countries in East Africa. Of course, unrest in Sudan could easily spill over into neighboring countries such as Eritrea, Ethiopia, South Sudan and Uganda and create a serious humanitarian crisis in the Horn of Africa. Dictators and autocrats can be undone by their inability or unwillingness to learn from history, even if that history is not theirs. 
Given the fact that Sudan’s neighbors have been embroiled in revolutions initiated and carried out by young people frustrated by their countries’ failure to provide them with jobs and protect their fundamental rights, one wonders why al-Bashir thinks the same fate would not befall him and his regime. It has become apparent that al-Bashir is unwilling or unable to recognize the fact that the world is no longer willing to tolerate his regime’s disregard for basic human rights and that Sudan, if it hopes to regain its standing as an accepted member of the international community, cannot afford to serve as a hiding place for dictators. At the moment, al-Bashir faces many problems emanating from inside and outside the country. It would be wise for al-Bashir and his government to start a constructive dialogue with the people of Sudan in an effort to develop the modalities to peacefully transition to democracy. A credible first step would be for al-Bashir to form a transitional government that includes opposition parties. One of the most important functions of such a government should be to engage all relevant stakeholder groups in democratic constitution making to develop and adopt institutional arrangements that guarantee the rule of law and, hence, provide citizens with a governing process that protects their fundamental rights and provides them with the tools for self-actualization. Of course, while institutional reforms are a long-term project, in the short term, the transitional government must put into place mechanisms to protect the fundamental rights of citizens, as well as improve relations with South Sudan in order to secure the peaceful coexistence that is critical for investment and economic growth. Authors: Temesgen Deressa, John Mukum Mbaku, Bryce Campbell
pr Burkina Faso Protests Extending Presidential Term Limits By webfeeds.brookings.edu Published On :: Thu, 30 Oct 2014 13:08:00 -0400 On Tuesday, October 28, 2014, tens of thousands of citizens of Burkina Faso gathered in its capital city, Ouagadougou, and its second biggest city, Bobo Dioulasso, to protest proposed changes to its constitution regarding term limits. A vote was planned for Thursday on whether to extend the current limit of two terms to three. This vote is extremely controversial: Current President Blaise Compaoré, who came to power in a coup in 1987, has ruled the country for 27 years. Allowing him to run for election in November 2015 could extend his reign for another five years. In Ouagadougou on Wednesday, citizens angry over the possibility that parliament might make it possible for Compaoré to stay in power indefinitely set fire to the parliament and forced legislators to postpone the vote that had been set for Thursday, October 30, 2014, to decide the constitutional issue. A History of Autocracy in Burkina Faso The West African country has been plagued by dictators, autocracies and coups in the past. At independence on August 5, 1960, Maurice Yaméogo, leader of the Voltaic Democratic Union (Union démocratique voltaïque), became the country’s first president. Shortly after assuming power, Yaméogo banned all political opposition, provoking mass riots and demonstrations that came to an end only after the military intervened in 1966. Lt. Col. Sangoulé Lamizana and a collection of military elites took control of the government and subsequently dissolved the National Assembly as well as suspended the constitution. Lamizana stayed in power until November 1980, when the military overthrew the government and installed Col. Saye Zerbo as the new president. Two years later, Col. Zerbo’s government was overthrown by Maj. Dr. Jean-Baptiste Ouédraogo and the Council of Popular Salvation (CSP—Conseil du salut du peuple). Although it promised to transition the country to civilian rule and provide a new constitution, the Ouédraogo regime banned all political organizations, including opposition parties. There soon arose a political struggle within the CSP. The radicals, led by Captain Thomas Sankara, eventually overthrew the government in August 1983, and Capt. Sankara emerged as the country’s new leader. In 1984, the Sankara government changed the country’s name from Upper Volta to Burkina Faso and introduced many institutional reforms that effectively aligned the country with Marxist ideals. On October 15, 1987, Capt. Blaise Compaoré, a former colleague of Sankara’s, killed Sankara and several of his confidants in a successful coup d’état. In 1991, Compaoré was elected president in an election in which only 25 percent of the electorate participated because of a boycott movement organized and carried out by opposition parties. In 1998, he won reelection for another seven-year term. As president, Compaoré reversed all the progressive policies that Sankara had implemented. President Blaise Compaoré’s Time in Power In 2000, the country’s post-Cold War 1991 constitution was amended to impose a limit of two five-year consecutive terms on the presidency. However, Compaoré’s supporters argued that because he was in office when the amendments went into effect, they did not apply to him and, hence, he was qualified to run for re-election in 2005. Despite the fact that the opposition fielded several candidates, Compaoré won 80.35 percent of the votes cast in the 2005 presidential election. 
And, in the presidential elections held in November 2010, he captured 80.2 percent of the votes. Over more than a quarter century in power, Compaoré has used an unusual formula to achieve relative stability in Burkina Faso—authoritarianism mixed with traces of democracy. The complex governance system has relied primarily on Compaoré’s dominant and charismatic political power and has failed to build sustainable institutions—specifically, those capable of maintaining the rule of law and enhancing peaceful coexistence in his absence. Constitutionally mandated presidential term limits strengthen the rule of law and provide a significant level of stability and predictability to the country’s governance institutions. In response to the efforts by Burkinabé members of parliament to change the constitution to enable Compaoré to secure another term in office, U.S. government officials have recently stated that “democratic institutions are strengthened when established rules are adhered to with consistency.” For his part, Compaoré has proclaimed that his main and immediate concern “is not to build a future for myself—but to see how the future of this country will take shape.” If this is indeed true, then he should exit gracefully from the Burkinabé political scene and henceforth serve as an elder statesman, providing his country’s new leadership with the advice and support that it needs to deepen and institutionalize democracy, as well as enhance economic, social, political and human development. Insisting, as President Compaoré has done, that the constitution be changed so that he can seek an additional term in power not only destroys the country’s fragile stability but also sends the wrong message to citizens about the rule of law—while citizens must be law-abiding, the president does not have to abide by the country’s settled law; if the law stands in the way of the president’s personal ambitions, he can simply change the law to provide him with the wherewithal to achieve those objectives. Such behavior from the country’s chief executive does not augur well for deepening the country’s democracy, an objective that is dear to many Burkinabé. The question to ask President Compaoré is: How do you want history to remember you? As a self-serving political opportunist who used his public position to accumulate personal power and wealth, at the expense of fellow citizens, or as a public servant who led and directed his country’s transformation into a peaceful, safe and productive society? Authors: John Mukum Mbaku