Refugees: Why Seeking Asylum is Legal and Australia’s Policies are Not
Radio Australia – Sep 6, 2014
Australia’s Asylum Bill is High-Handed and Cambodia Deal Just a Quick Fix
ABC News Australia – Dec 2, 2014
Australia’s Obligations Still Apply Despite High Court Win
Terrorists and Detainees: Do We Need a New National Security Court?
In the wake of the 9/11 attacks and the capture of hundreds of suspected al Qaeda and Taliban fighters, we have been engaged in a national debate as to the proper standards and procedures for detaining “enemy combatants” and prosecuting them for war crimes. Dissatisfaction with the procedures established at Guantanamo for detention decisions and…
The Impact of Domestic Drones on Privacy, Safety and National Security
Legal and technology experts hosted a policy discussion at Brookings on April 4, 2012, on how drones and forthcoming Federal Aviation Administration regulations on unmanned aerial vehicles will affect Americans’ privacy, safety, and the country’s overall security. The event followed a new aviation bill, signed in February, which will open domestic skies to “unmanned aircraft…
As the venture capital game gets bigger, the Midwest keeps missing out (Published On: Thu, 06 Jun 2019)
Those working to accelerate economic growth in the Heartland must face some stark realities. The Great Lakes region continues to export wealth to coastal economies, even as investment leaders try to equalize growth between the coasts and the Heartland. The region sees only a tiny fraction of venture capital (VC) deals, despite producing one quarter…

How Promise programs can help former industrial communities (Published On: Wed, 17 Jul 2019)
The nation is seeing accelerating gaps in economic opportunity and prosperity between more educated, tech-savvy, knowledge workers congregating in the nation’s “superstar” cities (and a few university-town hothouses) and residents of older industrial cities and the small towns of “flyover country.” These growing divides are shaping public discourse, as policymakers and thought leaders advance recipes…

Does decarbonization mean de-coalification? Discussing carbon reduction policies
In September, the Energy Security and Climate Initiative (ESCI) at Brookings held the third meeting of its Coal Task Force (CTF), during which participants discussed the dynamics of three carbon policy instruments: performance standards, cap and trade, and a carbon tax. The dialogue revolved around lessons learned from implementing these policy mechanisms, especially as they…

The halfway point of the U.S. Arctic Council chairmanship
On April 24, 2015, the United States assumed chairmanship of the Arctic Council for a two-year term. Over the course of the last year, the United States has outlined plans within three central priorities: improving economic and living conditions for Arctic communities; Arctic Ocean safety, security, and stewardship; and addressing the impacts of climate change.…
India’s energy and climate policy: Can India meet the challenge of industrialization and climate change?
In Paris this past December, 195 nations came to a historic agreement to reduce carbon emissions and limit the devastating impacts of climate change. While it was indeed a triumphant event worthy of great praise, these nations are now faced with the daunting task of having to achieve their intended climate goals. For many developing…
The presidential candidates’ views on energy and climate
This election cycle, what will separate Democrats from Republicans on energy policy and their approach to climate change? Republicans tend to be fairly strong supporters of the fossil fuel industry, and to various degrees deny that climate change is occurring. Democratic candidates emphasize the importance of further expanding the share of renewable energy at the…
Implementing Common Core: The problem of instructional time
Published On: Thu, 09 Jul 2015

This is part two of my analysis of instruction and Common Core’s implementation. I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.” Having discussed “the good” in part one, I now turn to “the bad.” One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time.

A Model of Time and Learning

In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning,” in Teachers College Record. Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn. The numerator, time spent learning, has also been given the term opportunity to learn. The denominator, time needed to learn, is synonymous with student aptitude.

By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] The source of that variation is largely irrelevant to the constraints placed on instructional decisions.

Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator. Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like. There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time.

The model has had a profound influence on educational thought. As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article. Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll. It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum. This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time.

David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning. Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today. Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time.
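Carroll’s ratio can be made concrete with a small sketch. This is my own illustration, not Carroll’s notation: treating the function as a simple ratio capped at 1.0, and the example numbers, are assumptions chosen only to show how the model behaves.

```python
def degree_of_learning(time_spent_hours: float, time_needed_hours: float) -> float:
    """Carroll's model: achievement as a function of time spent relative to time needed.
    Illustrative assumption: the function is the raw ratio, capped at 1.0 (full mastery)."""
    if time_needed_hours <= 0:
        raise ValueError("time needed to learn must be positive")
    return min(time_spent_hours / time_needed_hours, 1.0)

# Two students receive the same 10 hours of instruction (the opportunity to learn):
print(degree_of_learning(10, 8))   # needs only 8 hours  -> 1.0, with 2 hours effectively unneeded
print(degree_of_learning(10, 20))  # needs 20 hours      -> 0.5, an inadequate opportunity to learn
```

The two cases mirror the two problems discussed in this post: a numerator that falls short of the denominator (insufficient opportunity to learn) and a numerator that exceeds it (instructional time spent on content already mastered).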
Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator. When that happens, students don’t have an adequate opportunity to learn. They need more time.

As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time. Time is a limited resource that teachers deploy in the production of learning. Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary. Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics. Excessive instructional time may also negatively affect student engagement. Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need.

Standard Algorithms and Alternative Strategies

Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M. A standard algorithm is a series of steps designed to compute accurately and quickly. In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers. Most readers of this post will recognize the standard algorithm for addition. It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed.

The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards. Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade. This opens the door for a lot of wasted time.

Garelick questioned the wisdom of teaching several alternative strategies for addition. He asked whether, under the Common Core, only the standard algorithm could be taught—or, at least, whether it could be taught first. As he explains:

Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten.
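Since the discussion keeps returning to “the standard algorithm” for addition, here is a minimal sketch of the procedure just described—right-to-left, column-by-column addition with carrying. The function name and the digit-list representation are my own illustration; nothing in CCSS-M or in the exchange above specifies code.

```python
def standard_addition(a: int, b: int) -> int:
    """Add two whole numbers the way the standard algorithm does:
    line digits up by place value and work the columns right to left, carrying as needed."""
    xs = [int(d) for d in reversed(str(a))]   # ones digit first
    ys = [int(d) for d in reversed(str(b))]
    digits, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        column = (xs[i] if i < len(xs) else 0) + (ys[i] if i < len(ys) else 0) + carry
        digits.append(column % 10)            # the digit written beneath the column
        carry = column // 10                  # the amount "carried" (regrouped) to the next column
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in reversed(digits)))

print(standard_addition(478, 365))  # 843
```

The same steps scale to numbers of any length once single-digit facts and place value are in hand, which is why the procedure is taught as a general-purpose tool.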
Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations. He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers:

In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions.

Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades. Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade.

It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards:

3.NBT.2 Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction.

The term “strategies and algorithms” is curious. Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!”

I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that despite these being standards, the CCSS-M will allow them great latitude. Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle.

Why All the Fuss about Standard Algorithms?

It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. Standard algorithms were a key point of contention in the “Math Wars” of the 1990s. The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii] The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade. These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists. In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged.
Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade. That’s what reformers get in the compromise. They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic. But the standard algorithm is the only one students are required to learn. And that exclusivity is intended to please the traditionalists.

I agree with Garelick that the compromise leads to problems. In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home. The students were being taught how to represent addition with drawings that clustered objects into groups of ten. The exercises were both time consuming and tedious. When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning. The parents withdrew their child from the school and enrolled him in private school.

The value of standard algorithms is that they are efficient and packed with mathematics. Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers.

Traditionalists and reformers have different goals. Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems. Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm.

I have been a critic of the math reform movement since I taught in the 1980s. But some of their complaints have merit. All too often, instruction on standard algorithms has left out meaning. As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps that were to be memorized.

Michael Battista’s research has provided many instances of students clinging to algorithms without understanding. He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet. On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15. In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25. Despite the obvious discrepancy (25 is not 15, the student agrees), he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii]

Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that? Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions. Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding.

Conclusion

Let’s return to Carroll’s model of time and learning.
I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences. Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade. Thus, the study of whole number arithmetic is completed by the end of fourth grade. Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics. Proficiency is sought along three dimensions: 1) fluency with calculations, 2) conceptual understanding, and 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years. Placing the standard for the division algorithm in sixth grade continues the two-year delay. For many fourth graders, time spent working on addition and subtraction will be wasted time. They already have a firm understanding of addition and subtraction. The same is true for many sixth graders—time devoted to the division algorithm will be wasted time that should be devoted to the study of rational numbers. The numerator in Carroll’s instructional time model will be greater than the denominator, indicating the inefficient allocation of time to instruction.

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures. Such decisions are made by local educators. Variation in these decisions will introduce variation in the implementation of the math standards. It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction. But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.

[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001). “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).

Authors: Tom Loveless
No, the sky is not falling: Interpreting the latest SAT scores
Published On: Thu, 01 Oct 2015

Earlier this month, the College Board released SAT scores for the high school graduating class of 2015. Both math and reading scores declined from 2014, continuing a steady downward trend that has been in place for the past decade. Pundits of contrasting political stripes seized on the scores to bolster their political agendas. Michael Petrilli of the Fordham Foundation argued that falling SAT scores show that high schools need more reform, presumably those his organization supports, in particular, charter schools and accountability.* For Carol Burris of the Network for Public Education, the declining scores were evidence of the failure of policies her organization opposes, namely, Common Core, No Child Left Behind, and accountability.

Petrilli and Burris are both misusing SAT scores. The SAT is not designed to measure national achievement; the score losses from 2014 were minuscule; and most of the declines are probably the result of demographic changes in the SAT population. Let’s examine each of these points in greater detail.

The SAT is not designed to measure national achievement

It never was. The SAT was originally meant to measure a student’s aptitude for college independent of that student’s exposure to a particular curriculum. The test’s founders believed that gauging aptitude, rather than achievement, would serve the cause of fairness. A bright student from a high school in rural Nebraska or the mountains of West Virginia, they held, should have the same shot at attending elite universities as a student from an Eastern prep school, despite not having been exposed to the great literature and higher mathematics taught at prep schools. The SAT would measure reasoning and analytical skills, not the mastery of any particular body of knowledge. Its scores would level the playing field in terms of curricular exposure while providing a reasonable estimate of an individual’s probability of success in college.

Note that even in this capacity, the scores never suffice alone; they are only used to make admissions decisions by colleges and universities, including such luminaries as Harvard and Stanford, in combination with a lot of other information—grade point averages, curricular resumes, essays, reference letters, extra-curricular activities—all of which constitute a student’s complete application.

Today’s SAT has moved towards being a content-oriented test, but not entirely. Next year, the College Board will introduce a revised SAT to more closely reflect high school curricula. Even then, SAT scores should not be used to make judgments about U.S. high school performance, whether it’s a single high school, a state’s high schools, or all of the high schools in the country. The SAT sample is self-selected. In 2015, it only included about one-half of the nation’s high school graduates: 1.7 million out of approximately 3.3 million total. And that’s about one-ninth of approximately 16 million high school students. Generalizing SAT scores to these larger populations violates a basic rule of social science.
The College Board issues a warning when it releases SAT scores: “Since the population of test takers is self-selected, using aggregate SAT scores to compare or evaluate teachers, schools, districts, states, or other educational units is not valid, and the College Board strongly discourages such uses.” TIME’s coverage of the SAT release included a statement by Andrew Ho of Harvard University, who succinctly makes the point: “I think SAT and ACT are tests with important purposes, but measuring overall national educational progress is not one of them.”

The score changes from 2014 were minuscule

SAT scores changed very little from 2014 to 2015. Reading scores dropped from 497 to 495. Math scores also fell two points, from 513 to 511. Both declines are equal to about 0.017 standard deviations (SD).[i] To illustrate how small these changes truly are, let’s examine a metric I have used previously in discussing test scores. The average American male is 5’10” in height with a SD of about 3 inches. A 0.017 SD change in height is equal to about 1/20 of an inch (0.051). Do you really think you’d notice a difference in the height of two men standing next to each other if they only differed by 1/20th of an inch? You wouldn’t. Similarly, the change in SAT scores from 2014 to 2015 is trivial.[ii]

A more serious concern is the SAT trend over the past decade. Since 2005, reading scores are down 13 points, from 508 to 495, and math scores are down nine points, from 520 to 511. These are equivalent to declines of 0.12 SD for reading and 0.08 SD for math.[iii] Representing changes that have accumulated over a decade, these losses are still quite small.

In the Washington Post, Michael Petrilli asked “why is education reform hitting a brick wall in high school?” He also stated that “you see this in all kinds of evidence.” You do not see a decline in the best evidence, the National Assessment of Educational Progress (NAEP). Contrary to the SAT, NAEP is designed to monitor national achievement. Its test scores are based on a random sampling design, meaning that the scores can be construed as representative of U.S. students. NAEP administers two different tests to high school age students, the long term trend (LTT NAEP), given to 17-year-olds, and the main NAEP, given to twelfth graders.

Table 1 compares the past ten years’ change in test scores of the SAT with changes in NAEP.[iv] The long term trend NAEP was not administered in 2005 or 2015, so the closest years it was given are shown. The NAEP tests show high school students making small gains over the past decade. They do not confirm the losses on the SAT.

Table 1. Comparison of changes in SAT, Main NAEP (12th grade), and LTT NAEP (17-year-olds) scores. Changes expressed as SD units of base year.

             SAT 2005-2015    Main NAEP 2005-2015    LTT NAEP 2004-2012
Reading      -0.12*           +.05*                  +.09*
Math         -0.08*           +.09*                  +.03
*p<.05

Petrilli raised another concern by examining cohort trends in NAEP scores. The trend for the 17-year-old cohort of 2012, for example, can be constructed by using the scores of 13-year-olds in 2008 and 9-year-olds in 2004. By tracking NAEP changes over time in this manner, one can get a rough idea of a particular cohort’s achievement as students grow older and proceed through the school system. Examining three cohorts, Fordham’s analysis shows that the gains between ages 13 and 17 are about half as large as those registered between ages nine and 13. Kids gain more on NAEP when they are younger than when they are older.
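The cohort construction just described can be sketched in a few lines. The scores below are placeholder values chosen only to reproduce the qualitative pattern noted above (the older-age gain roughly half the younger-age gain); they are not actual NAEP scale scores.

```python
# Assemble a birth cohort's trajectory from three cross-sectional LTT NAEP administrations.
# Placeholder scores for illustration only -- not actual NAEP results.
ltt_scores = {
    (2004, 9): 220,    # 9-year-olds tested in 2004
    (2008, 13): 260,   # the same birth cohort tested at age 13 in 2008
    (2012, 17): 280,   # and again at age 17 in 2012
}

gain_9_to_13 = ltt_scores[(2008, 13)] - ltt_scores[(2004, 9)]    # 40 points
gain_13_to_17 = ltt_scores[(2012, 17)] - ltt_scores[(2008, 13)]  # 20 points

print(gain_9_to_13, gain_13_to_17)  # the older-age gain is about half the younger-age gain
```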
There is nothing new here. NAEP scholars have been aware of this phenomenon for a long time. Fordham points to particular elements of education reform that it favors—charter schools, vouchers, and accountability—as the probable cause. It is true that those reforms more likely target elementary and middle schools than high schools. But the research literature on age discrepancies in NAEP gains (which is not cited in the Fordham analysis) renders doubtful the thesis that education policies are responsible for the phenomenon.[v]

Whether high school age students try as hard as they could on NAEP has been pointed to as one explanation. A 1996 analysis of NAEP answer sheets found that 25-to-30 percent of twelfth graders displayed off-task test behaviors—doodling, leaving items blank—compared to 13 percent of eighth graders and six percent of fourth graders. A 2004 national commission on the twelfth grade NAEP recommended incentives (scholarships, certificates, letters of recognition from the President) to boost high school students’ motivation to do well on NAEP. Why would high school seniors or juniors take NAEP seriously when this low stakes test is taken in the midst of taking SAT or ACT tests for college admission, end of course exams that affect high school GPA, AP tests that can affect placement in college courses, state accountability tests that can lead to their schools being deemed a success or failure, and high school exit exams that must be passed to graduate?[vi]

Other possible explanations for the phenomenon are: 1) differences in the scales between the ages tested on LTT NAEP (in other words, a one-point gain on the scale between ages nine and 13 may not represent the same amount of learning as a one-point gain between ages 13 and 17); 2) different rates of participation in NAEP among elementary, middle, and high schools;[vii] and 3) social trends that affect all high school students, not just those in public schools.

The third possibility can be explored by analyzing trends for students attending private schools. If Fordham had disaggregated the NAEP data by public and private schools (the scores of Catholic school students are available), it would have found that the pattern among private school students is similar—younger students gain more than older students on NAEP. That similarity casts doubt on the notion that policies governing public schools are responsible for the smaller gains among older students.[viii]

Changes in the SAT population

Writing in the Washington Post, Carol Burris addresses the question of whether demographic changes have influenced the decline in SAT scores. She concludes that they have not, and in particular, she concludes that the growing proportion of students receiving exam fee waivers has probably not affected scores. She bases that conclusion on an analysis of SAT participation disaggregated by level of family income. Burris notes that the percentage of SAT takers has been stable across income groups in recent years.

That criterion is not trustworthy. About 39 percent of students in 2015 declined to provide information on family income. The 61 percent that answered the family income question are probably skewed against low-income students who are on fee waivers (the assumption being that they may feel uncomfortable answering a question about family income).[ix] Don’t forget that the SAT population as a whole is a self-selected sample.
A self-selected subsample from a self-selected sample tells us even less than the original sample, which told us almost nothing.

The fee waiver share of SAT takers increased from 21 percent in 2011 to 25 percent in 2015. The simple fact that fee waivers serve low-income families, whose children tend to be lower-scoring SAT takers, is important, but not the whole story here. Students from disadvantaged families have always taken the SAT. But they paid for it themselves. If an additional increment of disadvantaged families take the SAT because they don’t have to pay for it, it is important to consider whether the new entrants to the pool of SAT test takers possess unmeasured characteristics that correlate with achievement—beyond the effect already attributed to socioeconomic status.

Robert Kelchen, an assistant professor of higher education at Seton Hall University, calculated the effect on national SAT scores of just three jurisdictions (Washington, DC, Delaware, and Idaho) adopting policies of mandatory SAT testing paid for by the state. He estimated that these policies explain about 21 percent of the nationwide decline in test scores between 2011 and 2015. He also notes that a more thorough analysis, incorporating fee waivers of other states and districts, would surely boost that figure. Fee waivers in two dozen Texas school districts, for example, are granted to all juniors and seniors in high school. And all students in those districts (including Dallas and Fort Worth) are required to take the SAT beginning in the junior year. Such universal testing policies can increase access and serve the cause of equity, but they will also, at least for a while, lead to a decline in SAT scores.

Here, I offer my own back of the envelope calculation of the relationship of demographic changes with SAT scores. The College Board reports test scores and participation rates for nine racial and ethnic groups.[x] These data are preferable to family income because a) almost all students answer the race/ethnicity question (only four percent are non-responses versus 39 percent for family income), and b) it seems a safe assumption that students are more likely to know their race or ethnicity compared to their family’s income.

The question tackled in Table 2 is this: how much would the national SAT scores have changed from 2005 to 2015 if the scores of each racial/ethnic group stayed exactly the same as in 2005, but each group’s proportion of the total population were allowed to vary? In other words, the scores are fixed at the 2005 level for each group—no change. The SAT national scores are then recalculated using the 2015 proportions that each group represented in the national population.

Table 2. SAT Scores and Demographic Changes in the SAT Population (2005-2015)

             Projected Change Based on Change in Proportions    Actual Change    Projected Change as Percentage of Actual Change
Reading      -9                                                 -13              69%
Math         -7                                                 -9               78%

The data suggest that two-thirds to three-quarters of the SAT score decline from 2005 to 2015 is associated with demographic changes in the test-taking population. The analysis is admittedly crude. The relationships are correlational, not causal. The race/ethnicity categories are surely serving as proxies for a bundle of other characteristics affecting SAT scores, some unobserved and others (e.g., family income, parental education, language status, class rank) that are included in the SAT questionnaire but produce data difficult to interpret.
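The reweighting exercise behind Table 2 can be written down directly. The sketch below shows the method only; the group names, scores, and shares are placeholders, not the College Board figures used in the actual calculation.

```python
# Counterfactual: hold each group's 2005 average SAT score fixed, but apply 2015 population shares.
# Placeholder values for illustration only -- not College Board data.
scores_2005 = {"group_a": 530, "group_b": 460, "group_c": 440}
shares_2005 = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
shares_2015 = {"group_a": 0.50, "group_b": 0.28, "group_c": 0.22}

def weighted_mean(scores, shares):
    """National average as a share-weighted mean of group averages."""
    return sum(scores[g] * shares[g] for g in scores) / sum(shares.values())

actual_2005 = weighted_mean(scores_2005, shares_2005)
counterfactual_2015 = weighted_mean(scores_2005, shares_2015)  # scores frozen, shares updated

projected_change = counterfactual_2015 - actual_2005
print(round(projected_change, 1))  # the portion of the score change attributable to shifting demographics
```

Dividing that projected change by the actual 2005-2015 change gives the percentages reported in the last column of Table 2.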
Conclusion

Using an annual decline in SAT scores to indict high schools is bogus. The SAT should not be used to measure national achievement. SAT changes from 2014-2015 are tiny. The downward trend over the past decade represents a larger decline in SAT scores, but one that is still small in magnitude and correlated with changes in the SAT test-taking population. In contrast to SAT scores, NAEP scores, which are designed to monitor national achievement, report slight gains for 17-year-olds over the past ten years. It is true that LTT NAEP gains are larger among students from ages nine to 13 than from ages 13 to 17, but research has uncovered several plausible explanations for why that occurs. The public should exercise great caution in accepting the findings of test score analyses. Test scores are often misinterpreted to promote political agendas, and much of the alarmist rhetoric provoked by small declines in scores is unjustified.

* In fairness to Petrilli, he acknowledges in his post, “The SATs aren’t even the best gauge—not all students take them, and those who do are hardly representative.”

[i] The 2014 SD for both SAT reading and math was 115.

[ii] A substantively trivial change may nevertheless reach statistical significance with large samples.

[iii] The 2005 SDs were 113 for reading and 115 for math.

[iv] Throughout this post, SAT’s Critical Reading (formerly, the SAT-Verbal section) is referred to as “reading.” I only examine SAT reading and math scores to allow for comparisons to NAEP. Moreover, SAT’s writing section will be dropped in 2016.

[v] The larger gains by younger vs. older students on NAEP is explored in greater detail in the 2006 Brown Center Report, pp. 10-11.

[vi] If these influences have remained stable over time, they would not affect trends in NAEP. It is hard to believe, however, that high stakes tests carry the same importance today to high school students as they did in the past.

[vii] The 2004 blue ribbon commission report on the twelfth grade NAEP reported that by 2002 participation rates had fallen to 55 percent. That compares to 76 percent at eighth grade and 80 percent at fourth grade. Participation rates refer to the originally drawn sample, before replacements are made. NAEP is conducted with two stage sampling—schools first, then students within schools—meaning that the low participation rate is a product of both depressed school (82 percent) and student (77 percent) participation. See page 8 of: http://www.nagb.org/content/nagb/assets/documents/publications/12_gr_commission_rpt.pdf

[viii] Private school data are spotty on the LTT NAEP because of problems meeting reporting standards, but analyses identical to Fordham’s can be conducted on Catholic school students for the 2008 and 2012 cohorts of 17-year-olds.

[ix] The non-response rate in 2005 was 33 percent.

[x] The nine response categories are: American Indian or Alaska Native; Asian, Asian American, or Pacific Islander; Black or African American; Mexican or Mexican American; Puerto Rican; Other Hispanic, Latino, or Latin American; White; Other; and No Response.

Authors: Tom Loveless
Principals as instructional leaders: An international perspective (Published On: Thu, 24 Mar 2016)
Common Core’s major political challenges for the remainder of 2016
Published On: Wed, 30 Mar 2016

The 2016 Brown Center Report (BCR), which was published last week, presented a study of Common Core State Standards (CCSS). In this post, I’d like to elaborate on a topic touched upon but deserving further attention: what to expect in Common Core’s immediate political future. I discuss four key challenges that CCSS will face between now and the end of the year.

Let’s set the stage for the discussion. The BCR study produced two major findings. First, several changes that CCSS promotes in curriculum and instruction appear to be taking place at the school level. Second, states that adopted CCSS and have been implementing the standards have registered about the same gains and losses on NAEP as states that either adopted and rescinded CCSS or never adopted CCSS in the first place. These are merely associations and cannot be interpreted as saying anything about CCSS’s causal impact. Politically, that doesn’t really matter. The big story is that NAEP scores have been flat for six years, an unprecedented stagnation in national achievement that states have experienced regardless of their stance on CCSS. Yes, it’s unfair, but CCSS is paying a political price for those disappointing NAEP scores. No clear NAEP differences have emerged between CCSS adopters and non-adopters to reverse that political dynamic.

TIMSS and PISA scores in November-December

NAEP has two separate test programs. The scores released in 2015 were for the main NAEP, which began in 1990. The long term trend (LTT) NAEP, a different test that was first given in 1969, has not been administered since 2012. It was scheduled to be given in 2016, but was cancelled due to budgetary constraints. It was next scheduled for 2020, but last fall officials cancelled that round of testing as well, meaning that the LTT NAEP won’t be given again until 2024.

With the LTT NAEP on hold, only two international assessments will soon offer estimates of U.S. achievement that, like the two NAEP tests, are based on scientific sampling: PISA and TIMSS. Both tests were administered in 2015, and the new scores will be released around the Thanksgiving-Christmas period of 2016. If PISA and TIMSS confirm the stagnant trend in U.S. achievement, expect CCSS to take another political hit. America’s performance on international tests engenders a lot of hand wringing anyway, so the reaction to disappointing PISA or TIMSS scores may be even more pronounced than what the disappointing NAEP scores generated.

Is teacher support still declining?

Watch Education Next’s survey on Common Core (usually released in August/September) and pay close attention to teacher support for CCSS. The trend line has been heading steadily south. In 2013, 76 percent of teachers said they supported CCSS and only 12 percent were opposed. In 2014, teacher support fell to 43 percent and opposition grew to 37 percent. In 2015, opponents outnumbered supporters for the first time, 50 percent to 37 percent. Further erosion of teacher support will indicate that Common Core’s implementation is in trouble at the ground level. Don’t forget: teachers are the final implementers of standards.
An effort by Common Core supporters to change NAEP

The 2015 NAEP math scores were disappointing. Watch for an attempt by Common Core supporters to change the NAEP math tests. Michael Cohen, President of Achieve, a prominent pro-CCSS organization, released a statement about the 2015 NAEP scores that included the following: “The National Assessment Governing Board, which oversees NAEP, should carefully review its frameworks and assessments in order to ensure that NAEP is in step with the leadership of the states. It appears that there is a mismatch between NAEP and all states' math standards, no matter if they are common standards or not.”

Reviewing and potentially revising the NAEP math framework is long overdue. The last adoption was in 2004. The argument for changing NAEP to place greater emphasis on number and operations, revisions that would bring NAEP into closer alignment with Common Core, also has merit. I have a longstanding position on the NAEP math framework. In 2001, I urged the National Assessment Governing Board (NAGB) to reject the draft 2004 framework because it was weak on numbers and operations—and especially weak on assessing student proficiency with whole numbers, fractions, decimals, and percentages. Common Core’s math standards are right in line with my 2001 complaint.

Despite my sympathy for Common Core advocates’ position, a change in NAEP should not be made because of Common Core. In that 2001 testimony, I urged NAGB to end the marriage of NAEP with the 1989 standards of the National Council of Teachers of Mathematics, the math reform document that had guided the main NAEP since its inception. Reform movements come and go, I argued. NAGB’s job is to keep NAEP rigorously neutral. The assessment’s integrity depends upon it. NAEP was originally intended to function as a measuring stick, not as a PR device for one reform or another. If NAEP is changed it must be done very carefully and should be rooted in the mathematics children must learn. The political consequences of it appearing that powerful groups in Washington, DC are changing “The Nation’s Report Card” in order for Common Core to look better will hurt both Common Core and NAEP.

Will Opt Out grow?

Watch the Opt Out movement. In 2015, several organized groups of parents refused to allow their children to take Common Core tests. In New York state alone, about 60,000 opted out in 2014, skyrocketing to 200,000 in 2015. Common Core testing for 2016 begins now and goes through May. It will be important to see whether Opt Out can expand to other states, grow in numbers, and branch out beyond middle- and upper-income neighborhoods.

Conclusion

Common Core is now several years into implementation. Supporters have had a difficult time persuading skeptics that any positive results have occurred. The best evidence has been mixed on that question. CCSS advocates say it is too early to tell, and we’ll just have to wait to see the benefits. That defense won’t work much longer. Time is running out. The political challenges that Common Core faces during the remainder of this year may determine whether it survives.

Authors: Tom Loveless
Image Source: Jim Young / Reuters
Government spending: yes, it really can cut the U.S. deficit
Published On: Fri, 03 Apr 2015

Hypocrisy is not scarce in the world of politics. But the current House and Senate budget resolutions set new lows. Each proposes to cut about $5 trillion from government spending over the next decade in pursuit of a balanced budget. Whatever one may think of putting the goal of reducing spending above investing in the nation’s future when the debt-to-GDP ratio is projected to be stable, you would think that deficit-reduction hawks wouldn’t cut spending that has been proven to lower the deficit.

Yes, there are expenditures that actually lower the deficit, typically by many dollars for each dollar spent. In this category are outlays on ‘program integrity’ to find and punish fraud, tax evasion, and plain old bureaucratic mistakes. You might suppose that those outlays would be spared. Guess again. Consider the following:

Medicare. Roughly 10% of Medicare’s $600 billion budget goes for what officials delicately call ‘improper payments,’ according to the 2014 financial report of the Department of Health and Human Services. Some are improper merely because providers ‘up-code’ legitimate services to boost their incomes. Some payments go for services that serve no valid purpose. And some go for phantom services that were never provided. Whatever the cause, approximately $60 billion of improper payments is not ‘chump change.’

Medicare tries to root out these improper payments, but it lacks sufficient staff to do the job. What it does spend on ‘program integrity’ yields an estimated $14.40 for each dollar spent, about $10 billion a year in total. That number counts only directly measurable savings, such as recoveries and claim denials. A full reckoning of savings would add in the hard-to-measure ‘policeman on the beat’ effect that discourages violations by would-be cheats.

Fat targets remain. A recent report from the Institute of Medicine presented findings that veritably scream ‘fraud.’ Per person spending on durable medical equipment and home health care is ten times higher in Miami-Dade County, Florida, than the national average. Such equipment and home health accounts for nearly three-quarters of the geographical variation in per person Medicare spending. Yet only 4% of current recoveries of improper payments come from audits of these two items, and little comes from the highest spending locations.

Why doesn’t Medicare spend more and go after the remaining overpayments, you may wonder? The simple answer is that Congress gives Medicare too little money for administration. Direct overhead expenses of Medicare amount to only about 1.5% of program outlays—6% if one includes the internal administrative costs of private health plans that serve Medicare enrollees. Medicare doesn’t need to spend as much on administration as the average of 19% spent by private insurers because, for example, Medicare need not pay dividends to private shareholders or advertise. But spending more on Medicare administration would both pay for itself—$2 for each added dollar spent, according to the conservative estimate in the President’s most recent budget—and improve the quality of care. With more staff, Medicare could stop more improper payments and reduce the use of approved therapies in unapproved ways that do no good and may cause harm.

Taxes. Compare two numbers: $540 billion and $468 billion. The first number is the amount of taxes owed but not paid.
The second number is the projected federal budget deficit for 2015, according to the Congressional Budget Office. Collecting all taxes legally owed but not paid is an impossibility. It just isn’t worth going after every violation. But current enforcement falls far short of practical limits. Expenditures on enforcement directly yield $4 to $6 for each dollar spent. Indirect savings are many times larger—the cop-on-the-beat effect again.

So, in an era of ostentatious concern about budget deficits, you would expect fiscal fretting in Congress to lead to increased efforts to collect what the law says people owe in taxes. Wrong again. Between 2010 and 2014, the IRS budget was cut in real terms by 20%. At the same time, the agency had to shoulder new tasks under health reform, as well as process an avalanche of applications for tax exemptions unleashed by the 2010 Supreme Court decision in the Citizens United case. With less money to spend and more to do, enforcement staff dropped by 15% and inflation-adjusted collections dropped 13%.

One should acknowledge that enforcement will not do away with most avoidance and evasion. Needlessly complex tax laws are the root cause of most tax underpayment. Tax reform would do even more than improved administration to increase the ratio of taxes paid to taxes due. But until that glorious day when Congress finds the wit and will to make the tax system simpler and fairer, it would behoove a nation trying to make ends meet to spend $2 billion to $3 billion more each year to directly collect $10 billion to $15 billion a year more of legally owed taxes and, almost certainly, raise far more than that by frightening borderline scoff-laws.

Disability Insurance. Thirteen million people with disabling conditions who are judged incapable of engaging in substantial gainful activity received $161 billion in disability insurance in 2013. If the disabling conditions improve enough so that beneficiaries can return to work, benefits are supposed to be stopped. Such improvement is rare. But when administrators believe that there is some chance, the law requires them to check. They may ask beneficiaries to fill out a questionnaire or, in some cases, undergo a new medical exam at government expense. Each dollar spent in these ways generated an estimated $16 in savings in 2013. Still, the Social Security Administration is so understaffed that SSA has a backlog of 1.3 million disability reviews. Current estimates indicate that spending a little over $1 billion a year more on such reviews over the next decade would save $43 billion. Rather than giving Social Security the staff and spending authority to work down this backlog and realize those savings, Congress has been cutting the agency’s administrative budget, and sequestration threatens further cuts.

Claiming that better administration will balance the budget would be wrong. But it would help. And it would stop some people from shirking their legal responsibilities and lighten the burdens of those who shoulder theirs. The failure of Congress to provide enough staff to run programs costing hundreds of billions of dollars a year as efficiently and honestly as possible is about as good a definition of criminal negligence as one can find.

Authors: Henry J. Aaron
Eurozone desperately needs a fiscal transfer mechanism to soften the effects of competitiveness imbalances
Published On: Thu, 18 Jun 2015

The eurozone has three problems: national debt obligations that cannot be met, medium-term imbalances in trade competitiveness, and long-term structural flaws.

The short-run problem requires more of the monetary easing that Germany has, with appalling shortsightedness, been resisting, and less of the near-term fiscal restraint that Germany has, with equally appalling shortsightedness, been seeking. To insist that Greece meet all of its near-term current debt service obligations makes about as much sense as did French and British insistence that Germany honor its reparations obligations after World War I. The latter could not be and were not honored. The former cannot and will not be honored either.

The medium-term problem is that, given a single currency, labor costs are too high in Greece and too low in Germany and some other northern European countries. Because adjustments in currency values cannot correct these imbalances, differences in growth of wages must do the job—either wage deflation and continued depression in Greece and other peripheral countries, wage inflation in Germany, or both. The former is a recipe for intense and sustained misery. The latter, however politically improbable it may now seem, is the better alternative.

The long-term problem is that the eurozone lacks the fiscal transfer mechanisms necessary to soften the effects of competitiveness imbalances while other forms of adjustment take effect. This lack places extraordinary demands on the willingness of individual nations to undertake internal policies to reduce such imbalances. Until such fiscal transfer mechanisms are created, crises such as the current one are bound to recur.

Present circumstances call for a combination of short-term expansionary policies that have to be led or accepted by the surplus nations, notably Germany, who will also have to recognize and accept that not all Greek debts will be paid or that debt service payments will not be made on time and at originally negotiated interest rates. The price for those concessions will be a current and credible commitment eventually to restore and maintain fiscal balance by the peripheral countries, notably Greece.

Authors: Henry J. Aaron
Publication: The International Economy
Image Source: © Vincent Kessler / Reuters
King v. Burwell: Chalk one up for common sense
Published On: Thu, 25 Jun 2015

The Supreme Court today decided that Congress meant what it said when it enacted the Affordable Care Act (ACA). The ACA requires people in all 50 states to carry health insurance and provides tax credits to help them afford it. To have offered such credits only in the dozen states that set up their own exchanges would have been cruel and unsustainable because premiums for many people would have been unaffordable. But the law said that such credits could be paid in exchanges ‘established by a state,’ which led some to claim that the credits could not be paid to people enrolled by the federally operated exchange. In his opinion, Chief Justice Roberts euphemistically calls that wording ‘inartful.’ Six Supreme Court justices decided that, read in its entirety, the law provides tax credits in every state, whether the state manages the exchange itself or lets the federal government do it for them.

That decision is unsurprising. More surprising is that the Court agreed to hear the case. When it did so, cases on the same issue were making their way through four federal circuits. In only one of the four circuits was there a standing decision, and it found that tax credits were available everywhere. It is customary for the Supreme Court to wait to take a case until action in lower courts is complete or two circuits have disagreed. In this situation, the justices, eyeing the electoral calendar, may have preferred to hear the case sooner rather than later to avoid confronting it in the middle of a presidential election.

Whatever the Court’s motives for taking the case, their willingness to hear the case caused supporters of the Affordable Care Act enormous unease. Were the more conservative members of the Court poised to accept an interpretation of the law that ACA supporters found ridiculous but that inartful legislative drafting gave the gloss of plausibility? Judicial demeanor at oral argument was not comforting. A 5-4 decision disallowing payment of tax credits seemed ominously plausible.

Future Challenges for the ACA

The Court’s 6-3 decision ended those fears. The existential threat to health reform from litigation is over. But efforts to undo the Affordable Care Act are not at an end. They will continue in the political sphere. And that is where they should be. ACA opponents know that there is little chance for them to roll back the Affordable Care Act in any fundamental way as long as a Democrat is in the White House. To dismantle the law, they must win the presidency in 2016.

But winning the presidency will not be enough. It would be mid-2017 before ACA opponents could draft and enact legislation to curb the Affordable Care Act and months more before it could take effect. To borrow a metaphor from the military, even if those opposed to the ACA win the presidency, they will have to deal with ‘facts on the ground.’ Well over 30 million Americans will be receiving health insurance under the Affordable Care Act. That will include people who can afford health insurance because of the tax credits the Supreme Court affirmed today. It will include millions more insured through Medicaid in the steadily growing number of states that have agreed to extend Medicaid coverage. It will include the young adult children covered under parental plans because the ACA requires this option. Insurance companies will have millions more customers because of the ACA.
Hospitals will fill more beds because previously uninsured people will be able to afford care and will have fewer unpaid bills generated by people who were uninsured but the hospitals had to admit under previous law. Drug companies and device manufacturers will be enjoying increased sales because of the ACA. The elderly will have better drug coverage because the ACA has eliminated the notorious ‘donut hole’—the drug expenditures that Medicare previously did not cover. Those facts will discourage any frontal assault on the ACA, particularly if the rate of increase of health spending remains as well controlled as it has been for the past seven years. Of course, differences between supporters and opponents of the ACA will not vanish. But those differences will not preclude constructive legislation. Beginning in 2017, the ACA gives states, an opening to propose alternative ways of achieving the goals of the Affordable Care Act, alone on in groups, by alternative means. The law authorizes the president to approve such waivers if they serve the goals of the law. The United States is large and diverse. Use of this authority may help diffuse the bitter acrimony surrounding Obamacare, as my colleague, Stuart Butler, has suggested. At the same time, Obamacare supporters have their own list of changes that they believe would improve the law. At the top of the list is fixing the ‘family glitch,’ a drafting error that unintentionally deprives many families of access to the insurance exchanges and to tax credits that would make insurance affordable. As Chief Justice Roberts wrote near the end of his opinion of the Court, “In a democracy, the power to make the law rests with those chosen by the people....Congress passed the Affordable Care Act to improve health insurance markets, not to destroy them.” The Supreme Court decision assuring that tax credits are available in all states spares the nation chaos and turmoil. It returns the debate about health care policy to the political arena where it belongs. In so doing, it brings a bit closer the time when the two parties may find it in their interest to sit down and deal with the twin realities of the Affordable Care Act: it is imperfect legislation that needs fixing, and it is decidedly here to stay. Authors Henry J. Aaron Image Source: © Jim Tanner / Reuters Full Article
al Can taxing the rich reduce inequality? You bet it can! By webfeeds.brookings.edu Published On :: Tue, 27 Oct 2015 00:00:00 -0400
Two recently posted papers by Brookings colleagues purport to show that “even a large increase in the top marginal rate would barely reduce inequality.”[1] This conclusion, based on one commonly used measure of inequality, is an incomplete and misleading answer to the question posed: would a stand-alone increase in the top income tax bracket materially reduce inequality? More importantly, it is the wrong question to pose, as a stand-alone increase in the top bracket rate would be bad tax policy that would exacerbate tax avoidance incentives. Sensible tax policy would package that change with at least one other tax modification, and such a package would have an even more striking effect on income inequality. In brief:
A stand-alone increase in the top tax bracket would be bad tax policy, but it would meaningfully increase the degree to which the tax system reduces economic inequality. It would have this effect even though it would fall on just ½ of 1 percent of all taxpayers and barely half of their income.
Tax policy significantly reduces inequality. But transfer payments and other spending reduce it far more. In combination, taxes and public spending materially offset the inequality generated by market income.
The revenue from a well-crafted increase in taxes on upper-income Americans, dedicated to a prudent expansion of public spending, would go far to counter the powerful forces that have made income inequality more extreme in the United States than in any other major developed economy.
[1] The quotation is from Peter R. Orszag, “Education and Taxes Can’t Reduce Inequality,” Bloomberg View, September 28, 2015 (at http://bv.ms/1KPJXtx). The two papers are William G. Gale, Melissa S. Kearney, and Peter R. Orszag, “Would a significant increase in the top income tax rate substantially alter income inequality?” September 28, 2015 (at http://brook.gs/1KK40IX) and “Raising the top tax rate would not do much to reduce overall income inequality–additional observations,” October 12, 2015 (at http://brook.gs/1WfXR2G).
Downloads Download the paper Authors Henry J. Aaron Image Source: © Jonathan Ernst / Reuters Full Article
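A toy calculation can make the point concrete. The sketch below applies a stylized top-rate increase and a flat transfer to five hypothetical households and recomputes the Gini coefficient, the kind of "commonly used measure of inequality" at issue; every number in it is invented for illustration, not taken from the papers.

```python
# Illustrative only: a stylized tax-and-transfer applied to five hypothetical
# households. Incomes, the rate, and the threshold are invented for the example.

def gini(incomes):
    """Gini coefficient computed from the mean absolute difference."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

market = [20_000, 40_000, 60_000, 100_000, 500_000]

# Tax income above $250,000 at an extra 45 percent, then return the revenue
# as an equal per-household transfer (the "tax plus spending" package).
extra_tax = 0.45 * (market[-1] - 250_000)
after = [y + extra_tax / len(market) for y in market]
after[-1] -= extra_tax

print(f"Gini on market income:       {gini(market):.3f}")  # about 0.57
print(f"Gini after tax and transfer: {gini(after):.3f}")   # about 0.44
```

Even this crude package cuts the toy Gini by roughly a quarter, which is the qualitative point: taxing at the top matters some, and pairing the tax with spending matters much more.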
al The impossible (pipe) dream—single-payer health reform By webfeeds.brookings.edu Published On :: Tue, 26 Jan 2016 08:38:00 -0500
Led by presidential candidate Bernie Sanders, one-time supporters of ‘single-payer’ health reform are rekindling their romance with a health reform idea that was, is, and will remain a dream. Single-payer health reform is a dream because, as the old joke goes, ‘you can’t get there from here.’
Let’s be clear: opposing a proposal only because one believes it cannot be passed is usually a dodge. One should judge the merits. Strong leaders prove their skill by persuading people to embrace their visions. But single-payer is different. It is radical in a way that no legislation has ever been in the United States.
Not so, you may be thinking. Remember such transformative laws as the Social Security Act, Medicare, the Homestead Act, and the Interstate Highway Act. And, yes, remember the Affordable Care Act. Those and many other inspired legislative acts seemed revolutionary enough at the time. But none really was. None overturned entrenched and valued contractual and legislative arrangements. None reshuffled trillions—or in less inflated days, billions—of dollars devoted to the same general purpose as the new legislation. All either extended services previously available to only a few, or created wholly new arrangements.
To understand the difference between those past achievements and the idea of replacing current health insurance arrangements with a single-payer system, compare the Affordable Care Act with Sanders’ single-payer proposal. Criticized by some for alleged radicalism, the ACA is actually stunningly incremental. Most of the ACA’s expanded coverage comes through extension of Medicaid, an existing public program that serves more than 60 million people. The rest comes through purchase of private insurance in “exchanges,” which embody the conservative ideal of a market that promotes competition among private venders, or through regulations that extended the ability of adult offspring to remain covered under parental plans. The ACA minimally altered insurance coverage for the 170 million people covered through employment-based health insurance. The ACA added a few small benefits to Medicare but left it otherwise untouched. It left unaltered the tax breaks that support group insurance coverage for most working-age Americans and their families. It also left alone the military health programs serving 14 million people. Private nonprofit and for-profit hospitals, other vendors, and privately employed professionals continue to deliver most care.
In contrast, Senator Sanders’ plan, like the earlier proposal sponsored by Representative John Conyers (D-Michigan) which Sanders co-sponsored, would scrap all of those arrangements. Instead, people would simply go to the medical care provider of their choice and bills would be paid from a national trust fund. That sounds simple and attractive, but it raises vexatious questions. How much would it cost the federal government? Where would the money to cover the costs come from? What would happen to the $700 billion that employers now spend on health insurance? How would the $600 billion a year in reductions in total health spending that Sanders says his plan would generate be achieved? What would happen to special facilities for veterans and families of members of the armed services? Sanders has answers for some of these questions, but not for others.
Both the answers and non-answers show why single-payer is unlike past major social legislation. The answer to the question of how much single-payer would cost the federal government is simple: $4.1 trillion a year, or $1.4 trillion more than the federal government now spends on programs that the Sanders plan would replace. The money would come from new taxes. Half the added revenue would come from doubling the payroll tax that employers now pay for Social Security. This tax approximates what employers now collectively spend on health insurance for their employees... if they provide health insurance. But many don’t. Some employers would face large tax increases. Others would reap windfall gains.
The cost question is particularly knotty, as Sanders assumes a 20 percent cut in spending averaged over ten years, even as roughly 30 million currently uninsured people would gain coverage. Those savings, even if actually realized, would start slowly, which means cuts of 30 percent or more by Year 10. Where would they come from? Savings from reduced red tape associated with individual insurance would cover a small fraction of this target. The major source would have to be fewer services or reduced prices. Who would determine which of the services that physicians regard as desirable, and that patients have come to expect, are no longer ‘needed’? How would those cuts be achieved without the massive bankruptcies among hospitals that, as columnist Ezra Klein has suggested, would follow them? What would be the reaction to the prospect of drastic cuts in salaries of health care personnel – would we have a shortage of doctors and nurses? Would patients tolerate a reduction in services? If people thought that services under the Sanders plan were inadequate, would they be allowed to ‘top up’ with private insurance? If so, what happens to simplicity? If not, why not?
Let me be clear: we know that high-quality health care can be delivered at much lower cost than is the U.S. norm. We know because other countries do it. In fact, some of them have plans not unlike the one Senator Sanders is proposing. We know that single-payer mechanisms work in some countries. But those systems evolved over decades, based on gradual and incremental change from what existed before. That is the way that public policy is made in democracies. Radical change may occur after a catastrophic economic collapse or a major war. But in normal times, democracies do not tolerate radical discontinuity. If you doubt me, consider the tumult precipitated by the really quite conservative Affordable Care Act.
Editor's note: This piece originally appeared in Newsweek. Authors Henry J. Aaron Publication: Newsweek Image Source: © Jim Young / Reuters Full Article
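A quick check of the savings arithmetic in the piece above: if cuts start slowly, the Year-10 cut must sit well above the decade average. The sketch below assumes, purely for illustration, a linear phase-in; the article does not specify a path.

```python
# Back-of-the-envelope check: a 20 percent cut "averaged over ten years"
# implies a much deeper cut by Year 10 if savings phase in gradually.
# The linear ramp is our assumption, used only for illustration.

target_average = 0.20

# With a linear ramp, the cut in year t is (t/10) * final_cut, so the
# ten-year average is final_cut * (1 + 2 + ... + 10) / 100 = 0.55 * final_cut.
final_cut = target_average / 0.55
print(f"Implied Year-10 cut: {final_cut:.0%}")  # about 36%, i.e. "30 percent or more"
```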
al What America’s retirees really deserve By webfeeds.brookings.edu Published On :: Thu, 18 Feb 2016 12:11:00 -0500
Social Security faces a financial shortfall. If Congress does nothing about it, current projections indicate that benefits will be cut automatically by 21 percent in 2034. Congress could close the gap by raising revenues, lowering benefits, or doing some of both. If benefits seem generous, Congress is likely to lean toward benefit cuts more than revenue increases. If they seem stingy, then the reverse. Given the split between the two parties on whether to cut benefits or to raise them, evidence on the adequacy of benefits is central to this key policy debate. Those perceptions will help determine whether Social Security continues to provide basic retirement income for workers with comparatively low earnings histories and a foundation of retirement income for most others, or becomes just a minimal safety-net backstop against extreme destitution.
Down-in-the-weeds disagreements among analysts often seem too arcane for anyone other than specialists. But sometimes they are too important to ignore. A current debate about the adequacy of Social Security benefits is an example. The not-so-simple question is this: are Social Security benefits ‘generous’ or ‘stingy’?
To answer this question, people long looked to the Office of the Social Security Actuary. For many years that office published estimates of something called the ‘replacement rate’—that is, how high the benefits paid to retirees and the disabled are relative to what they earned during their working years. A 2014 retiree with median earnings had average lifetime earnings of about $46,000. That worker qualified for a benefit at age 66 of about $19,000, a replacement rate of about 41%. Replacement rates vary with earnings. Dollar benefits rise with earnings, but they rise less than proportionately. As a result, replacement rates of low earners are higher than replacement rates of high earners.
As you might suppose, there are many ways in which to compute such ‘replacement rates.’ Because of analytical disputes on which method is best, the Social Security trustees in 2014 decided to stop including replacement rate estimates in their annual reports. In December 2015, the Congressional Budget Office (CBO) offered what it considered a better measure of the generosity of Social Security. It estimated that replacement rates for middle-income recipients were about 60%–dramatically higher than the 41% that the Social Security Trustees had estimated.
The gap between the estimates of CBO and those of Social Security is even larger than it seems. To see why, one needs to recognize that to sustain living standards retirees on average need only about 75% to 80% as much income as they did when working. Retirees need less income because they are spared some work-related expenses, such as transportation to and from work. Those are only averages, of course; some need more, some less. If one believed the SSA actuaries, Social Security provides median earners barely more than half of what they need to be as well off as they were when working. Benefit cuts from that modest level would threaten the well-being of the majority of retirees who are entirely or mostly dependent on Social Security benefits—and especially of those with large medical expenses uncovered by Medicare. On the other hand, if one accepted CBO’s estimates, Social Security provides more than three-quarters of the retirement income target.
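The adequacy arithmetic can be reconstructed directly from the article's own figures; the only added assumption in the sketch below is using the midpoint of the 75 to 80 percent needs benchmark.

```python
# Reconstructing the adequacy arithmetic from the figures cited in the text.
earnings = 46_000      # median lifetime earnings (from the article)
benefit = 19_000       # benefit at age 66 (from the article)
needs_share = 0.775    # midpoint of the 75-80% of working income retirees need

target_income = needs_share * earnings
print(f"SSA-style replacement rate: {benefit / earnings:.0%}")        # about 41%
print(f"Share of needs met (SSA):   {benefit / target_income:.0%}")   # about 53%, "barely more than half"

cbo_benefit = 0.60 * earnings  # CBO's initial (later retracted) 60% replacement rate
print(f"Share of needs met (CBO):   {cbo_benefit / target_income:.0%}")  # about 77%, "more than three-quarters"
```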
Against that baseline, benefit cuts would still sting, but they would pose less of a threat, and not much of a threat at all for most retirees who have some income from private pensions or personal savings.
When the CBO estimates came out, conservative commentators welcomed the findings and cited CBO’s well-established and well-earned reputation for objectivity. They correctly noted that many retirees have additional income from private pensions, 401ks, or other personal savings, and asserted that there was no general retirement income shortage. By inference, cutting benefits a bit to help close the long-term funding gap would be no big deal. Social Security advocates were put on the defensive, hard-pressed to challenge the estimates of the widely respected Congressional Budget Office.
But earlier this year, CBO acknowledged that it had made mistakes in its December estimates and revised them. The new CBO estimate put the replacement rate for middle-level earners at around 42%, almost the same as the estimate of the Social Security actuaries, not the much higher level that had sent ripples through the policy community. One conservative analyst, Andrew Biggs, who had trumpeted the initial CBO finding in The Wall Street Journal, promptly and honorably retracted his article.
Two aspects of this green-eyeshade kerfuffle stand out. The first is that policy debates often depend on obscure technical analyses that are, in turn, remarkably sensitive to ‘black-box’ methods to which few or no outsiders have ready access. The second is that CBO burnished its reputation for honesty by owning up to its own mistakes — in this case, a whopping overestimate of a key number. Such candor is all too rare; it merits notice and praise.
But there is a broader lesson as well. Technical issues of comparable complexity surround numerous current political disputes. Is Bernie Sanders’ single-payer plan affordable? Will Marco Rubio’s tax plan cause deficits to balloon? To vote rationally, people must struggle to see through the rhetorical chaff that surrounds candidates’ favorite claims. There is, alas, no substitute for paying close attention to the data, even if they are ‘down in the weeds.’
Editor's note: This piece originally appeared in Fortune. Authors Henry J. Aaron Publication: Fortune Image Source: Ho New Full Article
al The stunning ignorance of Trump's health care plan By webfeeds.brookings.edu Published On :: Mon, 07 Mar 2016 16:32:00 -0500
One cannot help feeling a bit silly taking seriously the policy proposals of a person who seems not to take policy seriously himself. Donald Trump's policy positions have evolved faster over the years than a teenager's moods. He was for a woman's right to choose; now he is against it. He was for a wealth tax to pay off the national debt before proposing a tax plan that would enrich the wealthy and balloon the national debt. He was for universal health care but opposed to any practical way to achieve it. Based on his previous flexibility, Trump's here-today proposals may well be gone tomorrow.
As a sometime-Democrat, sometime-Republican, sometime-independent, who is now the leading candidate for the Republican presidential nomination, Trump has just issued his latest pronouncements on health care policy. So, what the hell, let's give them more respect than he has given his own past policy statements. Perhaps unsurprisingly, those pronouncements are notable for their detachment from fact and lack of internal logic.
The one-time supporter of universal health care now joins other candidates in his newly embraced party in calling for repeal of the only serious legislative attempt in American history to move toward universal coverage, the Affordable Care Act. Among his stated reasons for repeal, he alleges that the act has "resulted in runaway costs," promoted health care rationing, reduced competition and narrowed choice. Each of these statements is clearly and demonstrably false.
Health care spending per person has grown less rapidly in the six years since the Affordable Care Act was enacted than in any corresponding period in the last four decades. There is now less health care rationing than at any time in living memory, if the term rationing includes denial of care because it is unaffordable. Rationing because of unaffordability is certainly down for the more than 20 million people who are newly insured because of the Affordable Care Act. Hospital re-admissions, a standard indicator of low quality, are down, and the health care exchanges that Trump now says he would abolish, but that resemble the "health marts" he once espoused, have brought more choice to individual shoppers than private employers now offer or ever offered their workers.
Trump's proposed alternative to the Affordable Care Act is even worse than his criticism of it. He would retain the highly popular provision in the act that bars insurance companies from denying people coverage because of preexisting conditions, a practice all too common in the years before the health care law. But he would do away with two other provisions of the Affordable Care Act that are essential to make that reform sustainable: the mandate that people carry insurance and the financial assistance to make that requirement feasible for people of modest means.
Without those last two provisions, barring insurers from using preexisting conditions to jack up premiums or deny coverage would destroy the insurance market. Why? Because without the mandate and the financial aid, people would have powerful financial incentives to wait until they were seriously ill to buy insurance. They could safely do so, confident that some insurer would have to sell them coverage as soon as they became ill. Insurers that set affordable prices would go broke. If insurers set prices high enough to cover costs, few customers could afford them.
In simple terms, Trump's promise to bar insurers from using preexisting conditions to screen customers but simultaneously to scrap the companion provisions that make the bar feasible is either the fraudulent offer of a huckster who takes voters for fools, or clear evidence of stunning ignorance about how insurance works. Take your pick. Unfortunately, none of the other Republican candidates offers a plan demonstrably superior to Trump's. All begin by calling for repeal and replacement of the Affordable Care Act. But none has yet advanced a well-crafted replacement. It is not that the Affordable Care Act is perfect legislation. It isn't. But, as the old saying goes, you can't beat something with nothing. And so far as health care reform is concerned, nothing is what the Republican candidates now have on offer. Editor's note: This piece originally appeared in U.S. News and World Report. Authors Henry J. Aaron Publication: U.S. News and World Report Image Source: © Lucy Nicholson / Reuters Full Article
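The unraveling described in the piece above, guaranteed issue without a mandate or subsidies, can be seen in a toy simulation. Everything in the sketch below, from the cost distribution to the drop-out rule, is invented to illustrate the mechanism.

```python
# Toy adverse-selection spiral: guaranteed issue, no mandate, no subsidies.
# All numbers are hypothetical, chosen only to show the mechanism.

# Annual expected health costs for ten prospective enrollees.
costs = [500, 800, 1_000, 1_500, 2_000, 3_000, 5_000, 8_000, 15_000, 40_000]

premium = sum(costs) / len(costs)  # insurer prices at the pool's average cost
print(f"Year 0: {len(costs)} enrollees, premium ${premium:,.0f}")

for year in range(1, 4):
    # Without a mandate, the healthiest drop out; here, anyone whose expected
    # costs fall below half the premium stops buying coverage.
    costs = [c for c in costs if c >= premium / 2]
    premium = sum(costs) / len(costs)  # re-price at the smaller pool's average
    print(f"Year {year}: {len(costs)} enrollees, premium ${premium:,.0f}")
# The pool shrinks from ten enrollees to two while the premium more than triples.
```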
al Recent Social Security blogs—some corrections By webfeeds.brookings.edu Published On :: Fri, 15 Apr 2016 12:00:00 -0400
Recently, Brookings has posted two articles commenting on proposals to raise the full retirement age for Social Security retirement benefits from 67 to 70. One revealed a fundamental misunderstanding of how the program actually works and what the effects of the policy change would be. The other proposed changes to the system that would subvert the fundamental purpose of Social Security in the name of ‘reforming’ it.
A number of Republican presidential candidates and others have proposed raising the full retirement age. In a recent blog, Robert Shapiro, a Democrat, opposed this move, a position I applaud. But he did so based on alleged effects the proposal would in fact not have, and on misunderstandings about how the program actually works. In another blog, Stuart Butler, a conservative, noted correctly that increasing the full benefit age would ‘bolster the system’s finances,’ but misunderstood this proposal’s effects. He proposed instead to end Social Security as a universal pension based on past earnings and to replace it with income-related welfare for the elderly and disabled (which he calls insurance).
Let’s start with the misunderstandings common to both authors and to many others. Each writes as if raising the ‘full retirement age’ from 67 to 70 would fall more heavily on those with comparatively low incomes and short life expectancies. In fact, raising the ‘full retirement age’ would cut Social Security Old-Age Insurance benefits by the same proportion for rich and poor alike, and for people whose life expectancies are long or short. To see why, one needs to understand how Social Security works and what ‘raising the full retirement age’ means.
People may claim Social Security retirement benefits starting at age 62. If they wait, they get larger benefits—about 6-8 percent more for each year they delay claiming, up to age 70. Those who don’t claim their benefits until age 70 qualify for benefits 77 percent higher than those of people with the same earnings history who claim at age 62. The increments approximately compensate the average person for waiting, so that the lifetime value of benefits is independent of the age at which they claim. Mechanically, the computation pivots on the benefit payable at the ‘full retirement age,’ now age 66, but set to increase to age 67 under current law.
Raising the full retirement age still more, from 67 to 70, would mean that people age 70 would get the same benefit payable under current law at age 67. That is a benefit cut of 24 percent. Because the annual percentage adjustment for waiting to claim would be unchanged, people who claim benefits at any age, down to age 62, would also receive benefits reduced by 24 percent. In plain English, ‘raising the full benefit age from 67 to 70’ is simply a 24 percent across-the-board cut in benefits for all new claimants, whatever their incomes and whatever their life expectancies.
Thus, Robert Shapiro mistakenly writes that boosting the full-benefit age would ‘effectively nullify Social Security for millions of Americans’ with comparatively low life expectancies. It wouldn’t. Anyone who wanted to claim benefits at age 62 still could. Their benefits would be reduced. But so would the benefits of people who retire at older ages.
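The 77 percent figure can be checked against the published actuarial adjustments (a reduction of 5/9 of 1 percent per month for the first 36 months claimed early, 5/12 of 1 percent per month beyond that, and delayed credits of 8 percent per year). The sketch below is our illustration, assuming a full retirement age of 67:

```python
# Checking the "77 percent higher at 70 than at 62" figure, using the
# standard actuarial adjustments and a full retirement age (FRA) of 67.

def benefit_multiplier(claim_age, fra=67):
    """Fraction of the full (FRA) benefit payable at a given claiming age."""
    months = int((claim_age - fra) * 12)
    if months >= 0:
        return 1 + (0.08 / 12) * months              # delayed credits: 8% per year
    early = -months
    first_36 = min(early, 36) * (5 / 9) / 100        # 5/9 of 1% per month
    beyond_36 = max(early - 36, 0) * (5 / 12) / 100  # 5/12 of 1% per month
    return 1 - first_36 - beyond_36

at_62 = benefit_multiplier(62)  # 0.70 of the full benefit
at_70 = benefit_multiplier(70)  # 1.24 of the full benefit
print(f"Age-70 benefit vs. age-62 benefit: {at_70 / at_62 - 1:.0%} higher")  # 77%
```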
Equally mistaken is Stuart Butler’s comment that increasing the full-benefit age from 67 to 70 would ‘cut total lifetime retirement benefits proportionately more for those on the bottom rungs of the income ladder.’ It wouldn’t. The cut would be proportionately the same for everyone, regardless of past earnings or life expectancy.
Both Shapiro and Butler, along with many others including my colleagues Barry Bosworth and Gary Burtless, have noted correctly that life expectancies of high earners have risen considerably, while those of low earners have risen little or not at all. As a result, the lifetime value of Social Security Old-Age Insurance benefits has grown more for high earners than for low earners. That development has been at least partly offset by trends in Social Security Disability Insurance, which goes disproportionately to those with comparatively low earnings and life expectancies and which has been growing far faster than Old-Age Insurance, the largest component of Social Security.
But even if the lifetime value of all Social Security benefits has risen faster for high earners than for low earners, an across-the-board cut in benefits does nothing to offset that trend. In the name of lowering overall Social Security spending, it would cut benefits by the same proportion for those whose life expectancies have not risen at all, simply because the life expectancy of others has. Such ‘evenhandedness’ calls to mind Anatole France’s comment that French law ‘in its majestic equality, ...forbids rich and poor alike to sleep under bridges, beg in streets, or steal loaves of bread.’
Faulty analyses, such as those of Shapiro and Butler, cannot conceal a genuine challenge to policy makers. Social Security does face a projected, long-term funding shortfall. Trends in life expectancies may well have made the system less progressive overall than it was in the past. What should be done?
For starters, one needs to recognize that for those in successive age cohorts who retire at any given age, rising life expectancy does not lower, but rather increases, their need for Social Security retirement benefits, because whatever personal savings they may have accumulated gets stretched more thinly to cover more retirement years. For those who remain healthy, the best response to rising longevity may be to retire later. Later retirement means more time to save and fewer years to depend on savings.
Here is where the wrong-headedness of Butler’s proposal, to phase down benefits for those with current incomes of $25,000 or more and eliminate them for those with incomes over $100,000, becomes apparent. Apart from Social Security, the only sources of income for full retirees are personal savings and, to an ever-diminishing degree, employer-financed pensions. Converting Social Security from a program whose benefits are based on past earnings to one whose benefits are based on current income from savings would impose a tax-like penalty on such savings, just as would a direct tax on those savings. Conservatives and liberals alike should understand that taxing something is not the way to encourage it.
Still, working longer by definition lowers retirement income needs. That is why some analysts have proposed raising the age at which retirement benefits may first be claimed from age 62 to some later age.
But this proposal, like across-the-board benefit cuts, falls alike on those who can work longer without undue hardship and on those in physically demanding jobs they can no longer perform, those whose abilities are reduced, and those who have low life expectancies. This group includes not only blue-collar workers, but also many white-collar employees, as indicated by a recent study from the Boston College Retirement Center. If entitlement to Social Security retirement benefits is delayed, it is incumbent on policymakers to link that change to other ‘backstop’ policies that protect those for whom continued work poses a serious burden. It is also incumbent on private employers to design ways to make workplaces friendlier to an aging workforce.
The challenge of adjusting Social Security in the face of unevenly distributed increases in longevity, growing income inequality, and the prospective shortfall in Social Security financing is real. The issues are difficult. But solutions are unlikely to emerge from confusion about the way Social Security operates and the actual effects of proposed changes to the program. And they will not be advanced by proposals that would bring to Social Security the failed Vietnam War strategy of destroying a village in order to save it.
Authors Henry J. Aaron Image Source: © Sam Mircovich / Reuters Full Article
al The next stage in health reform By webfeeds.brookings.edu Published On :: Thu, 26 May 2016 10:40:00 -0400
Health reform (aka Obamacare) is entering a new stage. The recent announcement by United Health Care that it will stop selling insurance to individuals and families through most health insurance exchanges marks the transition. In the next stage, federal and state policy makers must decide how to use the broad regulatory powers they have under the Affordable Care Act (ACA) to stabilize, expand, and diversify risk pools, improve local market competition, encourage insurers to compete on product quality rather than premium alone, and promote effective risk management. In addition, insurance companies must master rate setting, plan design, and network management and effectively manage the health risk of their enrollees in order to stay profitable, and consumers must learn how to choose and use the best plan for their circumstances.
Six months ago, United Health Care (UHC) announced that it was thinking about pulling out of the ACA exchanges. Now, it is pulling out of all but a “handful” of marketplaces. UHC is the largest private vendor of health insurance in the nation. Nonetheless, the impact on people who buy insurance through the ACA exchanges will be modest, according to careful analyses from the Kaiser Family Foundation and the Urban Institute.
The effect is modest for three reasons. First, in some states UHC focuses on group insurance, not on insurance sold to individuals, where it is not always a major presence. Second, premiums of UHC products in individual markets are relatively high. Third, in most states and counties ACA purchasers will still have a choice of two or more other options. In addition, UHC’s departure may coincide with or actually cause the entry of other insurers, as seems to be happening in Iowa.
The announcement by UHC is noteworthy, however. It signals the beginning of a new stage in the development of the ACA exchanges, with challenges and opportunities different from and in many ways more important than those they faced during the first three years of operation, when the challenge was just to get up and running.
From the time when HealthCare.gov and the various state exchanges opened their doors until now, administrators grappled non-stop with administrative challenges—enrolling people, helping them make an informed choice among insurance offerings, computing the right amount of assistance each individual or family should receive, modifying plans when income or family circumstances change, and performing various ‘back office’ tasks such as transferring data to and from insurance companies. The chaotic first weeks after the exchanges opened on October 1, 2013 have been well documented, not least by critics of the ACA. Less well known are the countless behind-the-scenes crises, patches, and work-arounds that harried exchange administrators used for years afterwards to keep the exchanges open and functioning.
The ACA forced not just exchange administrators but also insurers to cope with a new system and with new enrollees. Many new exchange customers were uninsured prior to signing up for marketplace coverage. Insurers had little or no information on what their use of health care would be. That meant that insurers could not be sure where to set premiums or how aggressively to try to control costs, for example by limiting networks of physicians and hospitals enrollees could use. Some did the job well or got lucky. Some didn’t.
United seems to have fallen into the second category. It could have stayed in the 30 or so state markets it is leaving and tried to figure out ways to compete more effectively, but since its marketplace premiums were often not competitive and most of its business was with large groups, management decided to focus on that highly profitable segment of the insurance market. Some insurers are seeking sizeable premium increases for insurance year 2017, in part because of unexpectedly high usage of health care by new exchange enrollees.
United is not alone in having a rough time in the exchanges. So did most of the cooperative plans that were set up under the ACA. Of the 23 cooperative plans that were established, more than half have gone out of business and more may follow.
These developments do not signal the end of the ACA or even indicate a crisis. They do mark the end of an initial period when exchanges were learning how best to cope with clerical challenges posed by a quite complicated law and when insurance companies were breaking into new markets. In the next phase of ACA implementation, federal and state policy makers will face different challenges: how to stabilize, expand, and diversify marketplace risk pools, promote local market competition, and encourage insurers to compete on product quality rather than premium alone. Insurance company executives will have to figure out how to master rate setting, plan design, and network management and manage risk for customers with different characteristics than those to which they have become accustomed.
Achieving these goals will require state and federal authorities to go beyond the core implementation decisions that have absorbed most of their attention to date and to exercise powers the ACA gives them. For example, section 1332 of the ACA authorizes states to apply for waivers starting in 2017 under which they can seek to achieve the goals of the 2010 law in ways different from those specified in the original legislation. Along quite different lines, efforts are already underway in many state-based marketplaces, such as the District of Columbia, to expand and diversify the individual market risk pool by expanding marketing efforts to enroll new consumers, especially young adults. Minnesota’s Health Care Task Force recently recommended options to stabilize marketplace premiums, including reinsurance, maximum limits on the excess capital reserves or surpluses of health plans, and the merger of individual and small group markets, as Massachusetts and Vermont have done.
In normal markets, prices must cover costs, and while some companies prosper, some do not. In that respect, ACA markets are quite normal. Some regional and national insurers, along with a number of new entrants, have experienced losses in their marketplace business in 2016. One reason seems to be that insurers priced their plans aggressively in 2014 and 2015 to gain customers and then held steady in 2016. Now, many are proposing significant premium hikes for 2017. Others, like United, are withdrawing from some states. ACA exchange administrators and state insurance officials must now take steps to encourage continued or new insurer participation, including by new entrants such as Medicaid managed care organizations (MCOs).
For example, in New Mexico, where in 2016 Blue Cross Blue Shield withdrew from the state exchange, state officials now need to work with that insurer to ensure a smooth transition as it re-enters the New Mexico marketplace and to encourage other insurers to join it. In addition, state insurance regulators can use their rate review authority to benefit enrollees by promoting fair and competitive pricing among marketplace insurers. During the rate review process, which sometimes evolves into a bargaining process, insurance regulators often have the ability to put downward pressure on rates, although they must be careful to avoid the risk of underpricing of marketplace plans, which could compromise the financial viability of insurers and cause them to withdraw from the market. Exchanges have an important role in the affordability of marketplace plans too. For example, ACA marketplace officials in the District of Columbia and Connecticut work closely with state regulators during the rate review process in an effort to keep rates affordable and adequate to assure insurers a fair rate of return.
Several studies now indicate that in selecting among health insurance plans people tend to give disproportionate weight to premium price, and insufficient attention to other cost provisions—deductibles and cost sharing—and to quality of service and care. A core objective of the ACA is to encourage insurance customers to evaluate plans comprehensively. This objective will be hard to achieve, as health insurance is perhaps the most complicated product most people buy. But it will be next to impossible unless customers have tools that help them take account of the cost implications of all plan features and that report accurately and understandably on plan quality and service. HealthCare.gov and state-based marketplaces, to varying degrees, are already offering consumers access to a number of decision support tools, such as total cost calculators, integrated provider directories, and formulary look-ups, along with tools that indicate provider network size. These should be refined over time.
In addition, efforts are now underway at the federal and state level to provide more data to consumers so that they can make quality-driven plan choices. In 2018, the marketplaces will be required to display federally developed quality ratings and enrollee satisfaction information. The District of Columbia is examining the possibility of adding additional measures. California has proposed that starting in 2018 plans may only contract with providers and hospitals that have met state-specified metrics of quality care and that promote the safety of enrollees at a reasonable price. Such efforts will proliferate, even if not all succeed.
Beyond the regulatory efforts noted above, insurance companies themselves have a critical role to play in contributing to the continued success of the ACA. As insurers come to understand the risk profiles of marketplace enrollees, they will be better able to set rates, design plans, and manage networks and thereby stay profitable. In addition, insurers are best positioned to maintain the stability of their individual market risk pools by developing and financing marketing plans to increase the volume and diversity of their exchange enrollments. It is important, in addition, that insurers, such as UHC, stop creaming off good risks from the ACA marketplaces by marketing limited-coverage insurance products, such as dread-disease policies and short-term plans.
If they do not do so voluntarily, state insurance regulators and the exchanges should join in stopping them from doing so.
Most of the attention paid to the ACA to date has focused on efforts to extend health coverage to the previously uninsured and on the administrative stumbles associated with that effort. While insurance coverage will broaden further, the period of rapid growth in coverage is at an end. And while administrative challenges remain, the basics are now in place. Now, the exchanges face the hard work of promoting vigorous and sustainable competition among insurers and of providing their customers with information so that insurers compete on what matters: cost, service, and quality of health care.
Editor's note: This piece originally appeared in Real Clear Markets. Kevin Lucia and Justin Giovannelli contributed to this article with generous support from The Commonwealth Fund. Authors Henry J. Aaron, Justin Giovannelli, Kevin Lucia Image Source: © Brian Snyder / Reuters Full Article
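The "total cost calculators" mentioned in the piece above combine premiums with expected out-of-pocket spending under a plan's cost-sharing rules. A minimal sketch of the idea, with entirely hypothetical plan parameters, shows why premium-only comparisons can mislead:

```python
# Minimal sketch of a marketplace "total cost" estimate for one year.
# Plan parameters and the spending scenario are hypothetical.

def annual_total_cost(monthly_premium, deductible, coinsurance, oop_max, expected_spending):
    """Premiums plus the enrollee's expected share of covered spending."""
    if expected_spending <= deductible:
        out_of_pocket = expected_spending
    else:
        out_of_pocket = deductible + coinsurance * (expected_spending - deductible)
    return 12 * monthly_premium + min(out_of_pocket, oop_max)

# A low-premium, high-deductible plan vs. a higher-premium plan, same care use.
print(annual_total_cost(250, 6_000, 0.30, 7_500, 8_000))  # 9600.0: cheaper premium...
print(annual_total_cost(400, 1_500, 0.20, 5_000, 8_000))  # 7600.0: ...but costlier overall
```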
al Iraqi Shia leaders split over loyalty to Iran By webfeeds.brookings.edu Published On :: Sun, 05 Apr 2020 09:07:25 +0000 Full Article
al Not just a typographical change: Why Brookings is capitalizing Black By webfeeds.brookings.edu Published On :: Wed, 18 Sep 2019 15:25:45 +0000 Brookings is adopting a long-overdue policy to properly recognize the identity of Black Americans and other people of ethnic and indigenous descent in our research and writings. This update comes just as the 1619 Project is re-educating Americans about the foundational role that Black laborers played in making American capitalism and prosperity possible. Without Black… Full Article
al Federal fiscal aid to cities and states must be massive and immediate By webfeeds.brookings.edu Published On :: Tue, 24 Mar 2020 13:39:35 +0000 And why “relief” and “bailout” are two very different things There is a glaring shortfall in the ongoing negotiations between Congress and the White House to design the next emergency relief package to stave off a coronavirus-triggered economic crisis: Relief to close the massive resource gap confronting state and local governments as they tackle safety… Full Article
al COVID-19 outbreak highlights critical gaps in school emergency preparedness By webfeeds.brookings.edu Published On :: Wed, 11 Mar 2020 13:49:02 +0000 The COVID-19 epidemic sweeping the globe has affected millions of students, whose school closures have more often than not caught them, their teachers, and families by surprise. For some, it means missing class altogether, while others are trialing online learning—often facing difficulties with online connections, as well as motivational and psychosocial well-being challenges. These problems… Full Article
al The polarizing effect of Islamic State aggression on the global jihadi movement By webfeeds.brookings.edu Published On :: Wed, 27 Jul 2016 17:26:41 +0000 Full Article
al Obama’s exit calculus on the peace process By webfeeds.brookings.edu Published On :: Wed, 27 Jul 2016 17:29:00 +0000 One issue that has traditionally shared bipartisan support is how the United States should approach the Israeli-Palestinian conflict, write Sarah Yerkes and Ariella Platcha. However, this year both parties have shifted their positions farther from the center and from past Democratic and Republican platforms. How will that affect Obama’s strategy? Full Article
al The U.S. needs a national prevention network to defeat ISIS By webfeeds.brookings.edu Published On :: Wed, 03 Aug 2016 15:40:11 +0000 The recent release of a Congressional report highlighting that the United States is the “top target” of the Islamic State coincided with yet another gathering of members of the global coalition to counter ISIL to take stock of the effort. There, Defense Secretary Carter echoed the sentiments of an increasing number of political and military leaders when he said that military […] Full Article
al Taking the long view: Budgeting for investments in human capital By webfeeds.brookings.edu Published On :: Mon, 08 Feb 2016 13:42:00 -0500
Tomorrow, President Obama unveils his last budget, and we’re sure to see plenty of proposals for spending on education and skills. In the past, the Administration has focused on investments in early childhood education, community colleges, and infrastructure and research. From a budgetary standpoint, the problem with these investments is how to capture their benefits as well as their costs.
Show me the evidence
First step: find out what works. The Obama Administration has been emphatic about the need for solid evidence in deciding what to fund. The good news is that we now have quite a lot of it, showing that investing in human capital from early education through college can make a difference. Not all programs are successful, of course, and we are still learning what works and what doesn’t. But we know enough to conclude that investing in a variety of health, education, and mobility programs can positively affect education, employment, and earnings in adulthood.
Solid investments in human capital
For example:
1. Young, low-income children whose families move to better neighborhoods using housing vouchers see a 31 percent increase in earnings;
2. Quality early childhood and school reform programs can raise lifetime income per child by an average of about $200,000, at an upfront cost of about $20,000;
3. Boosting college completion rates, for instance via the Accelerated Study in Associate Programs (ASAP) in the City University of New York, leads to higher earnings.
Underinvesting in human capital?
If such estimates are correct (and we recognize there are uncertainties), policymakers are probably underinvesting in such programs because they are looking at the short-term costs but not at longer-term benefits and budget savings. First, the CBO’s standard practice is to use a 10-year budget window, which means long-range effects are often ignored. Second, although the CBO does try to take into account behavioral responses, such as increased take-up rates of a program, or improved productivity and earnings, it often lacks the research needed to make such estimates. Third, the usual assumption is that the rate of return on public investments in human capital is less than that for private investment. This is now questionable, especially given low interest rates.
Dynamic scoring for human capital investments?
A hot topic in budget politics right now is so-called “dynamic scoring.” This means incorporating macroeconomic effects, such as an increase in the labor force or productivity gains, into cost estimates. In 2015, the House adopted a rule requiring such scoring, when practicable, for major legislation. But appropriations bills are excluded, and quantitative analyses are restricted to the existing 10-year budget window. The interest in dynamic scoring is currently strongest among politicians pushing major tax bills, on the grounds that tax cuts could boost growth. But the principles behind dynamic scoring apply equally to improvements in productivity that could result from proposals to subsidize college education, for example—as proposed by both Senator Sanders and Secretary Clinton.
Of course, it is tough to estimate the value of these potential benefits. But it is worth asking whether current budget rules lead to myopia in our assessments of what such investments might accomplish, and thus to an overstatement of their ‘true’ cost.
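The budget-window point lends itself to a small present-value illustration. The $20,000 cost and $200,000 lifetime gain echo the figures above; the timing of the gains and the discount rate are our assumptions, chosen only for the example.

```python
# Illustrative only: why a 10-year scoring window can miss the entire payoff
# of an early-childhood investment. Timing and discount rate are assumptions.

cost = 20_000        # upfront cost per child (figure cited above)
annual_gain = 5_000  # assumed: a $200,000 lifetime gain spread over 40 years
start_year = 15      # assumed: gains begin when the child reaches working age
discount = 0.03

def present_value(first_year, last_year):
    return sum(annual_gain / (1 + discount) ** t
               for t in range(first_year, last_year + 1))

pv_in_window = present_value(start_year, 10)  # empty range: nothing in the window
pv_lifetime = present_value(start_year, start_year + 39)
print(f"Benefits scored inside the 10-year window: ${pv_in_window:,.0f}")  # $0
print(f"Lifetime present value of benefits: ${pv_lifetime:,.0f}")  # about $76,000, vs. $20,000 cost
```

Under these assumptions the investment more than triples its cost in present-value terms, yet a 10-year window scores none of the benefit, which is the myopia the piece describes.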
Authors Beth Akers, Isabel V. Sawhill Image Source: © Jonathan Ernst / Reuters Full Article
al The gender pay gap: To equality and beyond By webfeeds.brookings.edu Published On :: Tue, 12 Apr 2016 00:00:00 -0400
Today marks Equal Pay Day. How are we doing? We have come a long way since I wrote my doctoral dissertation on the pay gap back in the late 1960s. From earning 59 percent of what men made in 1974 to earning 79 percent in 2015 (among year-round, full-time workers), women have broken a lot of barriers. There is no reason why the remaining gap can’t be closed.
The gap could easily move in favor of women. After all, they are now better educated than men. They earn 60 percent of all bachelor’s degrees and the majority of graduate degrees. Adjusting for educational attainment, the current earnings gap widens, with the biggest relative gaps at the highest levels of education. If we want to encourage people to get more education, we can't discriminate against the best educated just because they are women.
What’s behind the pay gap?
One source of the current gap is the fact that women still take more time off from work to care for their families. These family responsibilities may also affect the kinds of work they choose. Harvard professor Claudia Goldin notes that they are more likely to work in occupations where it is easier to combine work and family life. These divided work-family loyalties are holding women back more than pay discrimination per se. This should change when men are more willing to share equally on the home front, as Richard Reeves and I have argued elsewhere.
Pay gap policies: Paid leave, child care, early education
But there is much to be done while waiting for this more egalitarian world to arrive. Paid family leave and more support for early child care and education would go a long way toward relieving families, and women in particular, of the dual burden they now face. In the process, the pay gap should shrink or even move in favor of women.
The Economic Policy Institute (EPI) has just released a very informative report on these issues. Its authors call for an aggressive expansion of both early childhood education and child care subsidies for low- and moderate-income families. Specifically, they propose to cap child care expenses at 10 percent of income, which would provide an average subsidy of $3,272 to working families with children and much more than this to lower-income families. The EPI authors argue that child care subsidies would provide needed in-kind benefits to lower-income families (check!), boost women’s labor force participation in a way that would benefit the overall economy (check!), and reduce the gender pay gap (check!). In short, child care subsidies are a win-win-win.
Paid leave and the pay gap
For present purposes I want to focus on the likely effects on the pay gap. In the mid-1990s, the U.S. had a higher rate of female labor force participation than Germany, Canada, or Japan. Now it has the lowest of the four. One reason is that other advanced countries have expanded paid leave and child care support for employed mothers while the U.S. has not.
Getting to and past parity
If we want to eliminate the pay gap and perhaps even reverse it, the primary focus must be on women’s continuing difficulties in balancing work and family life. We should certainly attend to any remaining instances of pay discrimination in the workplace, as called for in the Paycheck Fairness Act. But the biggest source of the problem is not employer discrimination; it is women’s continued double burden.
Authors Isabel V. Sawhill Image Source: © Brendan McDermid / Reuters Full Article
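The proposed cap described above works like a simple formula: the subsidy covers whatever child care costs exceed one-tenth of family income. In the sketch below, the 10 percent cap comes from the EPI proposal, while the family incomes and the $9,000 cost are invented for illustration.

```python
# Sketch of a child care cost cap: families pay at most 10 percent of income.
# The incomes and the child care cost below are hypothetical.

def subsidy(income, child_care_cost, cap=0.10):
    return max(0.0, child_care_cost - cap * income)

for income in (25_000, 60_000, 150_000):
    print(f"income ${income:,}: subsidy ${subsidy(income, 9_000):,.0f}")
# income $25,000: subsidy $6,500   (lower-income families get far more)
# income $60,000: subsidy $3,000
# income $150,000: subsidy $0
```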
al Modeling equal opportunity By webfeeds.brookings.edu Published On :: Mon, 13 Jun 2016 13:09:00 -0400
The Horatio Alger ideal of upward mobility has a strong grip on the American imagination (Reeves 2014). But recent years have seen growing concern about the distance between the rhetoric of opportunity and the reality of intergenerational mobility trends and patterns. The related issues of equal opportunity, intergenerational mobility, and inequality have all risen up the agenda, for both scholars and policymakers. A growing literature suggests that the United States has fairly low rates of relative income mobility, by comparison to other countries, but also wide variation within the country. President Barack Obama has described the lack of upward mobility, along with income inequality, as “the defining challenge of our time.” Speaker Paul Ryan believes that “the engines of upward mobility have stalled.”
But political debates about equality of opportunity and social and economic mobility often provide as much heat as light. Vitally important questions of definition and motivation are often left unanswered. To what extent can “equality of opportunity” be read across from patterns of intergenerational mobility, which measure only outcomes? Is the main concern with absolute mobility (how people fare compared to their parents)—or with relative mobility (how people fare with regard to their peers)? Should the metric for mobility be earnings, income, education, well-being, or some other yardstick? Is the primary concern with upward mobility from the bottom, or with mobility across the spectrum?
In this paper, we discuss the normative and definitional questions that guide the selection of measures intended to capture “equality of opportunity”; briefly summarize the state of knowledge on intergenerational mobility in the United States; describe a new microsimulation model designed to examine the process of mobility—the Social Genome Model (SGM); and show how the model can be used to frame and measure the process, presenting some preliminary estimates of the simulated impact of policy interventions at different life stages on rates of mobility.
The three steps being taken in mobility research can be described as the what, the why, and the how. First, it is important to establish what the existing patterns and trends in mobility are. Second, to understand why they exist—in other words, to uncover and describe the “transmission mechanisms” between the outcomes of one generation and the next. Third, to consider how to weaken those mechanisms—or, put differently, how to break the cycles of advantage and disadvantage.
Downloads Download "Modeling Equal Opportunity" Authors Isabel V. Sawhill, Richard V. Reeves Publication: Russell Sage Foundation Journal of Social Sciences Full Article
al Money for nothing: Why a universal basic income is a step too far By webfeeds.brookings.edu Published On :: Wed, 15 Jun 2016 12:00:00 -0400
The idea of a universal basic income (UBI) is certainly an intriguing one, and has been gaining traction. Swiss voters just turned it down. But it is still alive in Finland, in the Netherlands, in Alaska, in Oakland, CA, and in parts of Canada. Advocates of a UBI include Charles Murray on the right and Anthony Atkinson on the left. This surprising alliance alone makes it interesting, and it is a reasonable response to a growing pool of Americans made jobless by the march of technology and a safety net that is overly complex and bureaucratic. A comprehensive and excellent analysis in The Economist points out that while fears about technological unemployment have previously proved misleading, “the past is not always a good guide to the future.”
Hurting the poor
Robert Greenstein argues, however, that a UBI would actually hurt the poor by reallocating support up the income scale. His logic is inescapable: either we have to spend additional trillions providing income grants to all Americans or we have to limit assistance to those who need it most. One option is to provide unconditional payments along the lines of a UBI, but to phase it out as income rises. Libertarians like this approach since it gets rid of bureaucracies and leaves the poor free to spend the money on whatever they choose, rather than providing specific funds for particular needs. Liberals fear that such unconditional assistance would be unpopular and would be an easy target for elimination in the face of budget pressures. Right now most of our social programs are conditional. With the exception of the aged and the disabled, assistance is tied to work or to the consumption of necessities such as food, housing, or medical care, and our two largest means-tested programs are Food Stamps and the Earned Income Tax Credit.
The case for paternalism
Liberals have been less willing to openly acknowledge that a little paternalism in social policy may not be such a bad thing. In fact, progressives and libertarians alike are loath to admit that many of the poor and jobless are lacking more than just cash. They may be addicted to drugs or alcohol, suffer from mental health issues, have criminal records, or have difficulty functioning in a complex society. Money may be needed but money by itself does not cure such ills. A humane and wealthy society should provide the disadvantaged with adequate services and support. But there is nothing wrong with making assistance conditional on individuals fulfilling some obligation, whether it is work, training, getting treatment, or living in a supportive but supervised environment. In the end, the biggest problem with a universal basic income may not be its costs or its distributive implications, but the flawed assumption that money cures all ills.
Authors Isabel V. Sawhill Image Source: © Tom Polansek / Reuters Full Article
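Greenstein's point about "additional trillions" in the piece above is easy to make concrete with rough arithmetic; the $10,000 grant level below is our illustrative assumption, not a figure from his analysis.

```python
# Rough gross cost of a universal grant, to make "additional trillions" concrete.
# The grant level is an illustrative assumption.

population = 320_000_000  # approximate U.S. population
grant = 10_000            # hypothetical UBI of $10,000 per person per year

print(f"Annual gross cost: ${population * grant / 1e12:.1f} trillion")  # $3.2 trillion
```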
al Social mobility: A promise that could still be kept By webfeeds.brookings.edu Published On :: Fri, 29 Jul 2016 10:47:00 -0400
As a rhetorical ideal, greater opportunity is hard to beat. Just about all candidates for high elected office declare their commitments to promoting opportunity – who, after all, could be against it? But opportunity is, to borrow a term from the philosopher and political theorist Isaiah Berlin, a "protean" word, with different meanings for different people at different times.
Typically, opportunity is closely entwined with an idea of upward mobility, especially between generations. The American Dream is couched in terms of a daughter or son of bartenders or farm workers becoming a lawyer, or perhaps even a U.S. senator. But even here, there are competing definitions of upward mobility. It might mean being better off than your parents were at a similar age. This is what researchers call "absolute mobility," and largely relies on economic growth – the proverbial rising tide that raises most boats. Or it could mean moving to a higher rung of the ladder within society, and so ending up in a better relative position than one's parents. Scholars label this movement "relative mobility." And while there are many ways to think about status or standard of living – education, wealth, health, occupation – the most common yardstick is household income at or near middle age (which, somewhat depressingly, tends to be defined as 40). As a basic principle, we ought to care about both kinds of mobility as proxies for opportunity. We want children to have the chance to do absolutely and relatively well in comparison to their parents.
On the One Hand…
So how are we doing? The good news is that economic standards of living have improved over time. Most children are therefore better off than their parents. Among children born in the 1970s and 1980s, 84 percent had higher incomes (even after adjusting for inflation) than their parents did at a similar age, according to a Pew study. Absolute upward income mobility, then, has been strong, and has helped children from every income class, especially those nearer the bottom of the ladder. More than 9 in 10 of those born into families in the bottom fifth of the income distribution have been upwardly mobile in this absolute sense.
There's a catch, though. Strong absolute mobility goes hand in hand with strong economic growth. So it is quite likely that these rates of generational progress will slow, since the potential growth rate of the economy has probably diminished. This risk is heightened by an increasingly unequal division of the proceeds of growth in recent years. Today's parents are certainly worried. Surveys show that they are far less certain than earlier cohorts that their children will be better off than they are.
While the story on absolute mobility may be about to turn for the worse, the picture for relative mobility is already pretty bad. The basic message here: pick your parents carefully. If you are born to parents in the poorest fifth of the income distribution, your chance of remaining stuck in that income group is around 35 to 40 percent. If you manage to be born into a higher-income family, the chances are similarly good that you will remain there in adulthood. It would be wrong, however, to say that class positions are fixed. There is still a fair amount of fluidity or social mobility in America – just not as much as most people seem to believe or want.
Relative mobility is especially sticky in the tails at the high and low end of the distribution. Mobility is also considerably lower for blacks than for whites, with blacks much less likely to escape from the bottom rungs of the ladder. Equally ominously, they are much more likely to fall down from the middle quintile.

Relative mobility rates in the United States are lower than the rhetoric about equal opportunity might suggest and lower than people believe. But are they getting worse? Current evidence suggests not. In fact, the trend line for relative mobility has been quite flat for the past few decades, according to work by Raj Chetty of Stanford and his co-researchers. It is simply not the case that the amount of intergenerational relative mobility has declined over time. Whether this will remain the case as the generations of children exposed to growing income inequality mature is not yet clear, though.

As one of us (Sawhill) has noted, when the rungs on the ladder of opportunity grow further apart, it becomes more difficult to climb the ladder. To the same point, in his latest book, Our Kids: The American Dream in Crisis, Robert Putnam of Harvard argues that the growing gaps not just in income but also in neighborhood conditions, family structure, parenting styles and educational opportunities will almost inevitably lead to less social mobility in the future. Indeed, these multiple disadvantages or advantages are increasingly clustered, making it harder for children growing up in disadvantaged circumstances to achieve the dream of becoming middle class.

The Geography of Opportunity

Another way to assess the amount of mobility in the United States is to compare it to that found in other high-income nations. Mobility rates are highest in Scandinavia and lowest in the United States, Britain and Italy, with Australia, Western Europe and Canada lying somewhere in between, according to analyses by Jo Blanden of the University of Surrey and Miles Corak of the University of Ottawa. Interestingly, the most recent research suggests that the United States stands out most for its lack of downward mobility from the top. Or, to paraphrase Billie Holiday, God blesses the child that's got his own.

Any differences among countries, while notable, are more than matched by differences within them. Pioneering work (again by Raj Chetty and his colleagues) shows that some cities have much higher rates of upward mobility than others. From a mobility perspective, it is better to grow up in San Francisco, Seattle or Boston than in Atlanta, Baltimore or Detroit. Families that move to these high-mobility communities when their children are still relatively young enhance the chances that the children will have more education and higher incomes in early adulthood. Greater mobility can be found in places with better schools, fewer single parents, greater social capital, lower income inequality and less residential segregation. However, the extent to which these factors are causes rather than simply correlates of higher or lower mobility is not yet known. Scholarly efforts to establish why it is that some children move up the ladder and others don't are still in their infancy.

Models of Mobility

What is it about their families, their communities and their own characteristics that determines why some children do or do not achieve some measure of success later in life?
To help get at this vital question, the Brookings Institution has created a life-cycle model of children's trajectories, using data from the National Longitudinal Survey of Youth on about 5,000 children from birth to age 40. (The resulting Social Genome Model is now a partnership among three institutions: Brookings, the Urban Institute and Child Trends.)

Our model tracks children's progress through multiple life stages with a corresponding set of success measures at the end of each. For example, children are considered successful at the end of elementary school if they have mastered basic reading and math skills and have acquired the behavioral or non-cognitive competencies that have been shown to predict later success. At the end of adolescence, success is measured by whether the young person has completed high school with a GPA of 2.5 or better and has not been convicted of a crime or had a baby as a teenager. These metrics capture common-sense intuition about what drives success. But they are also aligned with the empirical evidence on life trajectories. Educational achievement, for example, has a strong effect on later earnings and income, and this well-known linkage is reflected in the model. We have worked hard to adjust for confounding variables but cannot be sure that all such effects are truly causal. We do know that the model does a good job of predicting or projecting later outcomes.

Three findings from the model stand out. First, it's clear that success is a cumulative process. According to our measures, a child who is ready for school at age 5 is almost twice as likely to be successful at the end of elementary school as one who is not. This doesn't mean that a life course is set in stone this early, however. Children who get off track at an early age frequently get back on track at a later age; it's just that their chances are not nearly as good. So this is a powerful argument for intervening early in life. But it is not an argument for giving up on older youth.

Second, the chances of clearing our last hurdle – being middle class by middle age (specifically, having an income of around $68,000 for a family of four by age 40) – vary quite significantly. A little over half of all children born in the 1980s and 1990s achieved this goal. But those who were black or born into low-income families were much less likely than others to achieve this benchmark.

Third, the effect of a child's circumstances at birth is strong. We use a multidimensional measure here, including not just the family's income but also the mother's education, the marital status of the parents and the birth weight of the child. Together, these factors have substantial effects on a child's subsequent success. Maternal education seems especially important.

The Social Genome Model, then, is a useful tool for looking under the hood at why some children succeed and others don't. But it can also be used to assess the likely impact of a variety of interventions designed to improve upward mobility. For one illustrative simulation, we hand-picked a battery of programs shown to be effective at different life stages – a parenting program, a high-quality early-education program, a reading and socio-emotional learning program in elementary school, a comprehensive high school reform model – and assessed the possible impact for low-income children benefiting from each of them, or all of them. No single program does very much to close the gap between children from lower- and higher-income families.
But the combined effects of multiple programs – that is, of intervening early and often in a child's life – have a surprisingly big impact. The gap of almost 20 percentage points in the chances of low-income and high-income children reaching the middle class shrinks to six percentage points. In other words, we are able to close about two-thirds of the initial gap in the life chances of these two groups of children. The black-white gap narrows, too. Looking at the cumulative impact on adult incomes over a working life (all appropriately discounted with time) and comparing these lifetime income benefits to the costs of the programs, we believe that such investments would pass a cost-benefit test from the perspective of society as a whole and even from the narrower perspective of the taxpayers who fund the programs.

What Now?

Understanding the processes that lie beneath the patterns of social mobility is critical. It is not enough to know how good the odds of escaping are for a child born into poverty. We want to know why. We can never eliminate the effects of family background on an individual's life chances. But the wide variation among countries and among cities in the U.S. suggests that we could do better – and that public policy may have an important role to play. Models like the Social Genome are intended to assist in that endeavor, in part by allowing policymakers to bench-test competing initiatives based on the statistical evidence.

America's presumed exceptionalism is rooted in part in a belief that class-based distinctions are less important than in Western Europe. From this perspective, it is distressing to learn that American children do not have exceptional opportunities to get ahead – and that the consequences of gaps in children's initial circumstances might embed themselves in the social fabric over time, leading to even less social mobility in the future. But there is also some cause for optimism. Programs that compensate at least to some degree for disadvantages earlier in life really can close opportunity gaps and increase rates of social mobility. Moreover, by most any reasonable reckoning, the return on the public investment is high.

Editor's note: This piece originally appeared in the Milken Institute Review.

Authors Richard V. Reeves and Isabel V. Sawhill Publication: Milken Institute Review Full Article
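The cost-benefit claim in the closing paragraphs rests on a simple piece of arithmetic: up-front program costs are weighed against earnings gains spread over a working life and discounted back to the present. The sketch below shows that calculation in its most stripped-down form. Every number in it — the per-child program cost, the annual earnings gain, the 40-year horizon, the 3 percent discount rate — is an illustrative assumption, not an output of the Social Genome Model.

```python
def present_value(annual_gain: float, years: int, discount_rate: float) -> float:
    """Present value of a constant annual benefit received for `years` years."""
    return sum(annual_gain / (1 + discount_rate) ** t for t in range(1, years + 1))

# Illustrative assumptions only (not Social Genome Model estimates).
program_cost = 20_000         # per-child cost of the stacked interventions
annual_earnings_gain = 1_500  # extra adult earnings per year attributed to them
working_years = 40            # roughly ages 25 to 65
discount_rate = 0.03

lifetime_benefit = present_value(annual_earnings_gain, working_years, discount_rate)
print(f"discounted lifetime benefit: ${lifetime_benefit:,.0f}")
print(f"benefit-cost ratio:          {lifetime_benefit / program_cost:.2f}")
```

Under these made-up numbers the investment clears the hurdle comfortably; the article's actual claim depends on the model's estimated earnings effects, which this toy calculation does not attempt to reproduce.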
al Israel’s Netanyahu is indicted amid political gridlock By webfeeds.brookings.edu Published On :: Thu, 21 Nov 2019 22:29:37 +0000 Israeli Attorney General Avichai Mandelblit ended months of speculation today in announcing his decision to indict Prime Minister Benjamin Netanyahu on charges of bribery, fraud, and breach of trust. The move caps a dramatic and tumultuous year in Israeli politics. If convicted, Netanyahu could face prison time, potentially making him the second consecutive Israeli prime… Full Article
al Around the halls: Experts discuss the recent US airstrikes in Iraq and the fallout By webfeeds.brookings.edu Published On :: Thu, 02 Jan 2020 19:53:38 +0000 U.S. airstrikes in Iraq on December 29 — in response to the killing of an American contractor two days prior — killed two dozen members of the Iranian-backed militia Kata'ib Hezbollah. In the days since, thousands of pro-Iranian demonstrators gathered outside the U.S. embassy in Baghdad, with some forcing their way into the embassy compound… Full Article
al Around the halls: What Brookings experts hope to hear in the Iowa debate By webfeeds.brookings.edu Published On :: Tue, 14 Jan 2020 01:55:34 +0000 Iran and the recent U.S. strike that killed Quds Force commander Qasem Soleimani will loom large for the Democratic candidates participating in the debate in Iowa. It may be tempting for the candidates to use this issue primarily as an opportunity to criticize the current administration and issue vague appeals for a return to… Full Article
al Around the halls: Brookings experts on the Middle East react to the White House’s peace plan By webfeeds.brookings.edu Published On :: Wed, 29 Jan 2020 16:33:09 +0000 On January 28 at the White House, President Trump unveiled his plan for Middle East peace alongside Israeli Prime Minister Benjamin Netanyahu. Below, Brookings experts on the peace process and the region more broadly offer their initial takes on the announcement. Natan Sachs (@natansachs), Director of the Center for Middle East Policy: This is a… Full Article
al In Israel, Benny Gantz decides to join with rival Netanyahu By webfeeds.brookings.edu Published On :: Fri, 27 Mar 2020 21:09:18 +0000 After three national elections, a worldwide pandemic, months of a government operating with no new budget, a prime minister indicted in three criminal cases, and a genuine constitutional crisis between the parliament and the supreme court, Israel has landed bruised and damaged where it could have been a year ago. This week, Israeli opposition leader… Full Article
al What does the Gantz-Netanyahu coalition government mean for Israel? By webfeeds.brookings.edu Published On :: Tue, 21 Apr 2020 21:02:27 +0000 After three inconclusive elections over the last year, Israel at last has a new government, in the form of a coalition deal between political rivals Benjamin Netanyahu and Benny Gantz. Director of the Center for Middle East Policy Natan Sachs examines the terms of the power-sharing deal, what it means for Israel's domestic priorities as… Full Article
al Universal Service Fund Reform: Expanding Broadband Internet Access in the United States By webfeeds.brookings.edu Published On :: Tue, 05 Apr 2011 10:51:00 -0400

Executive Summary

Two-thirds of Americans have broadband Internet access in their homes.[1] But because of poor infrastructure or high prices, the remaining third of Americans do not. In some areas, broadband Internet is plainly unavailable because of inadequate infrastructure: More than 14 million Americans – approximately 5 percent of the total population – live in areas where terrestrial (as opposed to mobile) fixed broadband connectivity is unavailable.[2] The effects of insufficient infrastructure development have contributed to racial and cultural disparities in broadband access; for example, terrestrial broadband is available to only 10 percent of residents on tribal lands.[3]

Even where terrestrial broadband connectivity is available, however, the high price of broadband service can be prohibitive, especially to lower income Americans. While 93 percent of adults earning more than $75,000 per year are wired for broadband at home, the terrestrial broadband adoption rate is only 40 percent among adults earning less than $20,000 annually.[4] These costs also contribute to racial disparities; almost 70 percent of whites have adopted terrestrial broadband at home, but only 59 percent of blacks and 49 percent of Hispanics have done the same.[5]

America's wireless infrastructure is better developed, but many Americans still lack wireless broadband coverage. According to a recent study, 3G wireless networks cover a good portion of the country, including 98 percent of the United States population,[6] but certain states have dramatically lower coverage rates than others. For example, only 71 percent of West Virginia's population is covered by a 3G network.[7] Wireless providers will likely use existing 3G infrastructure to enable the impending transition to 4G networks.[8] Unless wireless infrastructure expands quickly, those Americans that remain unconnected may be left behind.

Though America is responsible for the invention and development of Internet technology, the United States has fallen behind competing nations on a variety of important indicators, including broadband adoption rate and price. According to the Organization for Economic Cooperation and Development's survey of 31 developed nations, the United States is ranked fourteenth in broadband penetration rate (i.e., the number of subscribers per 100 inhabitants); only 27.1 percent of Americans have adopted wired broadband subscriptions, compared to 37.8 percent of residents of the Netherlands.[9] America also trails in ensuring the affordability of broadband service. The average price for a medium-speed (2.5Mbps-10Mbps) Internet plan in America is the seventeenth lowest among its competitor nations.
For a medium-speed plan, the average American must pay $38 per month, while an average subscriber in Japan (ranked first) pays only $22 for a connection of the same quality.[10]

The National Broadband Plan (NBP), drafted by the Federal Communications Commission and released in 2010, seeks to provide all Americans with affordable broadband Internet access.[11] Doing so will not be cheap; analysts project that developing the infrastructure necessary for full broadband penetration will require $24 billion in subsidies and spending.[12] President Obama’s stimulus package has already set aside $4.9 billion to develop broadband infrastructure,[13] and some small ongoing federal programs receive an annual appropriation to promote broadband penetration.[14] However, these funding streams will only account for one-third of the $24 billion necessary to achieve the FCC's goal of full broadband penetration.[15] Moreover, developing infrastructure alone is not enough; many low-income Americans are unable to afford Internet access, even if it is offered in their locality. To close this funding gap and to make broadband more accessible, the National Broadband Plan proposes to transform the Universal Service Fund – a subsidy program that spends $8.7 billion every year to develop infrastructure and improve affordability for telephone service – into a program that would do the same for broadband Internet.

[1] Federal Communications Commission, Connecting America: The National Broadband Plan 23 (2010) [hereinafter National Broadband Plan].
[2] Id. at 10.
[3] Id. at 23.
[4] Id.
[5] Id.
[6] Id. at 146.
[7] Id.
[8] Id.
[9] Organization for Economic Cooperation and Development, OECD Broadband Portal, OECD.org (table 1d(1)) (last accessed Jan. 28, 2011).
[10] Id. (table 4m) (last accessed Jan. 28, 2011).
[11] National Broadband Plan, supra note 1, at 9-10.
[12] Id. at 136.
[13] Id. at 139.
[14] Id.
[15] Id.

Authors Jeffrey Rosen Full Article
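The funding-gap arithmetic in this summary can be laid out in a few lines: the plan needs roughly $24 billion, and the text states that existing streams (the $4.9 billion stimulus set-aside plus small ongoing annual programs) cover only about one-third of it. The sketch below simply works through those quoted figures; the amount attributed to the ongoing programs is an implication of the one-third statement, not a number the article provides directly.

```python
# Figures quoted in the executive summary above (circa 2010-2011).
total_needed = 24e9        # estimated cost of full broadband penetration
stimulus_setaside = 4.9e9  # broadband funds already set aside in the stimulus
covered_share = 1 / 3      # share the article says existing streams will cover

covered = total_needed * covered_share
remaining_gap = total_needed - covered
implied_ongoing = covered - stimulus_setaside  # implied by the one-third figure

print(f"covered by existing streams: ${covered / 1e9:.1f} billion")
print(f"  of which stimulus funds:   ${stimulus_setaside / 1e9:.1f} billion")
print(f"  implied ongoing programs:  ${implied_ongoing / 1e9:.1f} billion")
print(f"remaining funding gap:       ${remaining_gap / 1e9:.1f} billion")
```

That residual gap of roughly $16 billion is what the proposed Universal Service Fund transformation is meant to help close.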
al Interpreting the Constitution in the Digital Era By webfeeds.brookings.edu Published On :: Wed, 30 Nov 2011 11:23:00 -0500

In an interview on NPR's Fresh Air, Jeffrey Rosen discusses how technological changes are challenging basic constitutional principles of freedom of speech and our own individual autonomy.

TERRY GROSS, HOST: This is FRESH AIR. I'm Terry Gross. The digital world that we've come to rely on - the Internet, social networks, GPS's, street maps - also creates opportunities to collect information about us, track our movements and invade our privacy. Add to that brain scans that might reveal criminal tendencies and new developments in genetic medicine and biotechnology, and you have a lot of potential challenges to basic constitutional principles that our founding fathers couldn't possibly have imagined. My guest, Jeffrey Rosen, has put together a new book that explores those challenges. Along with Benjamin Wittes, he co-edited Constitution 3.0: Freedom and Technological Change. It's a publication of the Brookings Institution's Project on Technology and the Constitution, which Rosen directs. He's also a law professor at George Washington University and legal editor for The New Republic. His new book is a collection of essays in which a diverse group of legal scholars imagine plausible technological developments in or near the year 2025 that would stress current constitutional law, and they propose possible solutions. Jeffrey Rosen, welcome back to FRESH AIR. So what are the particular parts of the Constitution that you think really come into play here with new technologies?

JEFFREY ROSEN: Well, what's so striking is that none of the existing amendments give clear answers to the most basic questions we're having today. So, for example, think about global positioning system technologies, which the Supreme Court is now considering. Can the police, without a warrant, put a secret GPS device on the bottom of someone's car and track him 24/7 for a month? Well, the relevant constitutional text is the Fourth Amendment, which says the right of the people to be secure in their persons, houses, papers and effects against unreasonable searches and seizures shall not be violated. But that doesn't answer the question: Is it an unreasonable search of our persons or effects to be monitored in public spaces? Some courts have said no. Several lower court judges and the Obama administration argue that we have no expectation of privacy in public, because it's theoretically possible for our neighbors to put a tail on us or for the police to track us for 100 miles, as the court has said. Therefore, we have to assume the risk that we're being monitored, ubiquitously, 24/7 for a month. But not everyone agrees. In a visionary opinion, Judge Douglas Ginsburg on the U.S. Court of Appeals for the D.C. Circuit said there's a tremendous difference between short-term and long-term surveillance. We may expect that our neighbors are watching when we walk on the street for a few blocks, but no one in practice expects to be tailed or surveilled for a month. Ginsburg said we do have an expectation of privacy in the whole of our movements, and therefore when the police are going to engage in long-term surveillance, because they can learn so much more about us, they should have a warrant. There was a remarkable moment in the oral argument for the global positioning system case.
Chief Justice John Roberts, who asked the first question, said: Isn't there a difference between a 100-mile search of the kind we've approved in the past and watching someone for a month? The government's lawyer resisted, and Roberts said: Is it the U.S. government's position that the police could put GPS devices inside the clothes of the members of this court, of these justices, or under our cars and track us for a month? And when the government's lawyer said yes, I think he may have lost the case.

Authors Jeffrey Rosen Publication: NPR Full Article
al Constitution 3.0: Freedom, Technological Change and the Law By webfeeds.brookings.edu Published On :: Tue, 13 Dec 2011 10:00:00 -0500

Event Information: December 13, 2011, 10:00 AM - 11:30 AM EST, Saul/Zilkha Rooms, The Brookings Institution, 1775 Massachusetts Avenue NW, Washington, DC 20036

Technology unimaginable at the time of the nation’s founding now poses stark challenges to America’s core constitutional principles. Policymakers and legal scholars are closely examining how constitutional law is tested by technological change and how to preserve constitutional principles without hindering progress. In Constitution 3.0: Freedom and Technological Change (Brookings Institution Press, 2011), Governance Studies Senior Fellow Benjamin Wittes and Nonresident Senior Fellow Jeffrey Rosen asked a diverse group of leading scholars to imagine how technological developments plausible by the year 2025 could stress current constitutional law. The resulting essays explore scenarios involving information technology, genetic engineering, security, privacy and beyond.

On December 13, the Governance Studies program at Brookings hosted a Judicial Issues Forum examining the scenarios posed in Constitution 3.0 and the challenge of adapting our constitutional values to the technology of the near future. Wittes and Rosen offered key highlights and insights from the book and were joined by two key contributors, O. Carter Snead and Timothy Wu, who discussed their essays. After the program, panelists took audience questions.

Full Article
al Constitution 3.0: Freedom and Technological Change By webfeeds.brookings.edu Published On :: Tue, 13 Dec 2011 00:00:00 -0500

Brookings Institution Press, 2011, 271pp.

Technological changes are posing stark challenges to America’s core values. Basic constitutional principles find themselves under stress from stunning advances that were unimaginable even a few decades ago, much less during the Founders’ era. Policymakers and scholars must begin thinking about how constitutional principles are being tested by technological change and how to ensure that those principles can be preserved without hindering technological progress.

Constitution 3.0, a product of the Brookings Institution’s landmark Future of the Constitution program, presents an invaluable roadmap for responding to the challenge of adapting our constitutional values to future technological developments. Renowned legal analysts Jeffrey Rosen and Benjamin Wittes asked a diverse group of leading scholars to imagine plausible technological developments in or near the year 2025 that would stress current constitutional law and to propose possible solutions. Some tackled issues certain to arise in the very near future, while others addressed more speculative or hypothetical questions. Some favor judicial responses to the scenarios they pose; others prefer legislative or regulatory responses.

Here is a sampling of the questions raised and answered in Constitution 3.0:

• How do we ensure our security in the face of the biotechnology revolution and our overwhelming dependence on internationally networked computers?
• How do we protect free speech and privacy in a world in which Google and Facebook have more control than any government or judge?
• How will advances in brain scan technologies affect the constitutional right against self-incrimination?
• Are Fourth Amendment protections against unreasonable search and seizure obsolete in an age of ubiquitous video and unlimited data storage and processing?
• How vigorously should society and the law respect the autonomy of individuals to manipulate their genes and design their own babies?

Individually and collectively, the deeply thoughtful analyses in Constitution 3.0 present an innovative roadmap for adapting our core legal values, in the interest of keeping the Constitution relevant through the 21st century. Contributors include: Jamie Boyle, Erich Cohen, Robert George, Jack Goldsmith, Orin Kerr, Lawrence Lessig, Stephen Morse, John Robertson, Jeffrey Rosen, Christopher Slobogin, O. Carter Snead, Benjamin Wittes, Tim Wu, and Jonathan Zittrain.

ABOUT THE EDITORS

Jeffrey Rosen is a non-resident senior fellow in Governance Studies at the Brookings Institution and a professor of law at the George Washington University in Washington, D.C. He also serves as legal editor for the New Republic and is the author of several books, including The Supreme Court: The Personalities and Rivalries that Defined America (Times Books, 2007) and The Naked Crowd: Reclaiming Security and Freedom in an Anxious Age (Random House, 2005).

Benjamin Wittes is a senior fellow in Governance Studies at the Brookings Institution and served nine years as an editorial writer with the Washington Post. His previous books include Detention and Denial: The Case for Candor after Guantánamo (Brookings, 2010) and Law and the Long War: The Future of Justice in the Age of Terror (Penguin, 2008), and he is cofounder of the Lawfare blog.
Full Article
al Walk, Don’t Drive, to the Real Estate Recovery By webfeeds.brookings.edu Published On :: The front-page and lead home-page New York Times story this past Saturday had the startling headline: “Bad Times Linger in Homebuilding.” The Times concludes that “A long term shift in behavior seems to be underway. Instead of wanting the biggest and newest, even if it requires a long commute, buyers now demand something… Full Article Uncategorized
al Are the Millennials Driving Downtown Corporate Relocations? By webfeeds.brookings.edu Published On :: In spite of the U.S. Census data for the past decade showing continued job de-centralization, there is now much anecdotal evidence for just the opposite. Crain’s Chicago Business reports that companies such as Allstate, Motorola, AT&T, GE Capital, and even Sears are re-considering their fringe suburban locations, generally in stand-alone campuses,… Full Article Uncategorized