the Common Core and classroom instruction: The good, the bad, and the ugly By webfeeds.brookings.edu Published On :: Thu, 14 May 2015 00:00:00 -0400 This post continues a series begun in 2014 on implementing the Common Core State Standards (CCSS). The first installment introduced an analytical scheme investigating CCSS implementation along four dimensions: curriculum, instruction, assessment, and accountability. Three posts focused on curriculum. This post turns to instruction. Although the impact of CCSS on how teachers teach is discussed, the post is also concerned with the inverse relationship, how decisions that teachers make about instruction shape the implementation of CCSS. A couple of points before we get started. The previous posts on curriculum led readers from the upper levels of the educational system—federal and state policies—down to curricular decisions made “in the trenches”—in districts, schools, and classrooms. Standards emanate from the top of the system and are produced by politicians, policymakers, and experts. Curricular decisions are shared across education’s systemic levels. Instruction, on the other hand, is dominated by practitioners. The daily decisions that teachers make about how to teach under CCSS—and not the idealizations of instruction embraced by upper-level authorities—will ultimately determine what “CCSS instruction” really means. I ended the last post on CCSS by describing how curriculum and instruction can be so closely intertwined that the boundary between them is blurred. Sometimes stating a precise curricular objective dictates, or at least constrains, the range of instructional strategies that teachers may consider. That post focused on English-Language Arts. The current post focuses on mathematics in the elementary grades and describes examples of how CCSS will shape math instruction. As a former elementary school teacher, I offer my own personal opinion on these effects. The Good Certain aspects of the Common Core, when implemented, are likely to have a positive impact on the instruction of mathematics. For example, Common Core stresses that students recognize fractions as numbers on a number line. The emphasis begins in third grade: CCSS.MATH.CONTENT.3.NF.A.2 Understand a fraction as a number on the number line; represent fractions on a number line diagram. CCSS.MATH.CONTENT.3.NF.A.2.A Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line. CCSS.MATH.CONTENT.3.NF.A.2.B Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line. When I first read this section of the Common Core standards, I stood up and cheered. Berkeley mathematician Hung-Hsi Wu has been working with teachers for years to get them to understand the importance of using number lines in teaching fractions.[1] American textbooks rely heavily on part-whole representations to introduce fractions. Typically, students see pizzas and apples and other objects—typically other foods or money—that are divided up into equal parts. Such models are limited. They work okay with simple addition and subtraction. Common denominators present a bit of a challenge, but ½ pizza can be shown to be also 2/4, a half dollar equal to two quarters, and so on. 
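Treating fractions as numbers makes equivalences like these mechanical to check. As a purely illustrative aside (the code and function name below are mine, not part of the standards or the original post), exact rational arithmetic in Python shows that 1/2 and 2/4 name the same point on the number line:

```python
from fractions import Fraction

def position_on_number_line(a: int, b: int) -> Fraction:
    """Return the point a/b, i.e., a lengths of 1/b marked off from 0 (cf. CCSS 3.NF.A.2.B)."""
    return a * Fraction(1, b)

# 1/2 and 2/4 are not two different part-whole pictures; they are the same number.
assert position_on_number_line(1, 2) == position_on_number_line(2, 4) == Fraction(1, 2)

# As points between the whole numbers 0 and 1:
for a, b in [(1, 4), (1, 2), (2, 4), (3, 4)]:
    print(f"{a}/{b} sits at {float(position_on_number_line(a, b))} on the number line")
```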
With multiplication and division, all the little tricks students learned with whole number arithmetic suddenly go haywire. Students are accustomed to the fact that multiplying two whole numbers yields a product that is larger than either number being multiplied: 4 X 5 = 20 and 20 is larger than both 4 and 5.[2] How in the world can ¼ X 1/5 = 1/20, a number much smaller than either 1/4 or 1/5? The part-whole representation has convinced many students that fractions are not numbers. Instead, they are seen as strange expressions comprising two numbers with a small horizontal bar separating them.

I taught sixth grade but occasionally visited my colleagues’ classes in the lower grades. I recall one exchange with second or third graders that went something like this:

“Give me a number between seven and nine.” Giggles. “Eight!” they shouted.

“Give me a number between two and three.” Giggles. “There isn’t one!” they shouted.

“Really?” I’d ask and draw a number line. After spending some time placing whole numbers on the number line, I’d observe, “There’s a lot of space between two and three. Is it just empty?”

Silence. Puzzled little faces. Then a quiet voice. “Two and a half?”

You have no idea how many children do not make the transition to understanding fractions as numbers and, because of stumbling at this crucial stage, spend the rest of their careers as students of mathematics convinced that fractions are an impenetrable mystery. And that’s not true of just students. California adopted a test for teachers in the 1980s, the California Basic Educational Skills Test (CBEST). Beginning in 1982, even teachers already in the classroom had to pass it. I made a nice after-school and summer income tutoring colleagues who didn’t know fractions from Fermat’s Last Theorem. To be fair, primary teachers, teaching kindergarten or grades 1-2, would not teach fractions as part of their math curriculum and probably hadn’t worked with a fraction in decades. So they are no different from non-literary types who think Hamlet is just a play about a young guy who can’t make up his mind, has a weird relationship with his mother, and winds up dying at the end.

Division is the most difficult operation to grasp for those arrested at the part-whole stage of understanding fractions. A problem that Liping Ma posed to teachers is now legendary.[3] She asked small groups of American and Chinese elementary teachers to divide 1 ¾ by ½ and to create a word problem that illustrates the calculation. All 72 Chinese teachers gave the correct answer and 65 developed an appropriate word problem. Only nine of the 23 American teachers solved the problem correctly. A single American teacher was able to devise an appropriate word problem. Granted, the American sample was not selected to be representative of American teachers as a whole, but the stark findings of the exercise did not shock anyone who has worked closely with elementary teachers in the U.S. They are often weak at math. Many of the teachers in Ma’s study had vague ideas of an “invert and multiply” rule but lacked a conceptual understanding of why it worked.

A linguistic convention exacerbates the difficulty. Students may cling to the mistaken notion that “dividing in half” means “dividing by one-half.” It does not. Dividing in half means dividing by two. The number line can help clear up such confusion. Consider a basic, whole-number division problem for which third graders will already know the answer: 8 divided by 2 equals 4.
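The original post illustrated this example with a number line figure. As a stand-in, here is a minimal Python sketch (my own illustration, not the post's figure) of the measurement view of division used below: count how many divisor-length segments, laid end to end from 0, cover the dividend.

```python
from fractions import Fraction

def measurement_divide(dividend: Fraction, divisor: Fraction) -> Fraction:
    """How many segments of length `divisor` fit into a segment of length `dividend`,
    laid end to end starting at 0 on the number line."""
    count = 0
    position = Fraction(0)
    while position + divisor <= dividend:
        position += divisor
        count += 1
    leftover = dividend - position          # the part of the dividend not yet covered
    return count + leftover / divisor       # e.g., half of a divisor-length segment left over

print(measurement_divide(Fraction(8), Fraction(2)))   # 4
print(measurement_divide(Fraction(12), Fraction(2)))  # 6
# The same function works unchanged when the divisor is a fraction such as 1/2.
```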
It is evident that a segment 8 units in length (measured from 0 to 8) is divided by a segment 2 units in length (measured from 0 to 2) exactly 4 times. Modeling 12 divided by 2 and other basic facts with 2 as a divisor will convince students that whole number division works quite well on a number line. Now consider the number ½ as a divisor. It will become clear to students that 8 divided by ½ equals 16, and they can illustrate that fact on a number line by showing how a segment ½ units in length divides a segment 8 units in length exactly 16 times; it divides a segment 12 units in length 24 times; and so on. Students will be relieved to discover that on a number line division with fractions works the same as division with whole numbers. Now, let’s return to Liping Ma’s problem: 1 ¾ divided by ½. This problem would not be presented in third grade, but it might be in fifth or sixth grades. Students who have been working with fractions on a number line for two or three years will have little trouble solving it. They will see that the problem simply asks them to divide a line segment of 1 3/4 units by a segment of ½ units. The answer is 3 ½ . Some students might estimate that the solution is between 3 and 4 because 1 ¾ lies between 1 ½ and 2, which on the number line are the points at which the ½ unit segment, laid end on end, falls exactly three and four times. Other students will have learned about reciprocals and that multiplication and division are inverse operations. They will immediately grasp that dividing by ½ is the same as multiplying by 2—and since 1 ¾ x 2 = 3 ½, that is the answer. Creating a word problem involving string or rope or some other linearly measured object is also surely within their grasp. Conclusion I applaud the CCSS for introducing number lines and fractions in third grade. I believe it will instill in children an important idea: fractions are numbers. That foundational understanding will aid them as they work with more abstract representations of fractions in later grades. Fractions are a monumental barrier for kids who struggle with math, so the significance of this contribution should not be underestimated. I mentioned above that instruction and curriculum are often intertwined. I began this series of posts by defining curriculum as the “stuff” of learning—the content of what is taught in school, especially as embodied in the materials used in instruction. Instruction refers to the “how” of teaching—how teachers organize, present, and explain those materials. It’s each teacher’s repertoire of instructional strategies and techniques that differentiates one teacher from another even as they teach the same content. Choosing to use a number line to teach fractions is obviously an instructional decision, but it also involves curriculum. The number line is mathematical content, not just a teaching tool. Guiding third grade teachers towards using a number line does not guarantee effective instruction. In fact, it is reasonable to expect variation in how teachers will implement the CCSS standards listed above. A small body of research exists to guide practice. 
One of the best resources for teachers to consult is a practice guide published by the What Works Clearinghouse: Developing Effective Fractions Instruction for Kindergarten Through Eighth Grade (see full disclosure below).[4] The guide recommends the use of number lines as its second recommendation, but it also states that the evidence supporting the effectiveness of number lines in teaching fractions is inferred from studies involving whole numbers and decimals. We need much more research on how and when number lines should be used in teaching fractions. Professor Wu states the following, “The shift of emphasis from models of a fraction in the initial stage to an almost exclusive model of a fraction as a point on the number line can be done gradually and gracefully beginning somewhere in grade four. This shift is implicit in the Common Core Standards.”[5] I agree, but the shift is also subtle. CCSS standards include the use of other representations—fraction strips, fraction bars, rectangles (which are excellent for showing multiplication of two fractions) and other graphical means of modeling fractions. Some teachers will manage the shift to number lines adroitly—and others will not. As a consequence, the quality of implementation will vary from classroom to classroom based on the instructional decisions that teachers make. The current post has focused on what I believe to be a positive aspect of CCSS based on the implementation of the standards through instruction. Future posts in the series—covering the “bad” and the “ugly”—will describe aspects of instruction on which I am less optimistic. [1] See H. Wu (2014). “Teaching Fractions According to the Common Core Standards,” https://math.berkeley.edu/~wu/CCSS-Fractions_1.pdf. Also see "What's Sophisticated about Elementary Mathematics?" http://www.aft.org/sites/default/files/periodicals/wu_0.pdf [2] Students learn that 0 and 1 are exceptions and have their own special rules in multiplication. [3] Liping Ma, Knowing and Teaching Elementary Mathematics. [4] The practice guide can be found at: http://ies.ed.gov/ncee/wwc/pdf/practice_guides/fractions_pg_093010.pdf I serve as a content expert in elementary mathematics for the What Works Clearinghouse. I had nothing to do, however, with the publication cited. [5] Wu, page 3. Authors Tom Loveless Full Article
the Implementing Common Core: The problem of instructional time By webfeeds.brookings.edu Published On :: Thu, 09 Jul 2015 00:00:00 -0400 This is part two of my analysis of instruction and Common Core’s implementation. I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.” Having discussed “the “good” in part one, I now turn to “the bad.” One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time. A Model of Time and Learning In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record. Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn. The numerator, time spent learning, has also been given the term opportunity to learn. The denominator, time needed to learn, is synonymous with student aptitude. By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] The source of that variation is largely irrelevant to the constraints placed on instructional decisions. Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator. Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like. There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time. The model has had a profound influence on educational thought. As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article. Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll. It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum. This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning. Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today. Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time. 
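Carroll's model is compact enough to restate in a few lines of code. The sketch below is a paraphrase, not Carroll's own notation; in particular, capping the ratio at 1.0 is a simplifying assumption that encodes the idea that time beyond what a student needs adds nothing.

```python
def degree_of_learning(time_spent: float, time_needed: float) -> float:
    """Carroll (1963), paraphrased: learning is a function of time spent / time needed.
    Capping at 1.0 is a simplification: extra time beyond what is needed adds nothing."""
    if time_needed <= 0:
        raise ValueError("time_needed must be positive")
    return min(1.0, time_spent / time_needed)

# Two students taught the same lesson for 10 hours:
print(degree_of_learning(10, 8))   # 1.0 -> had enough time; the extra 2 hours are, at best, idle
print(degree_of_learning(10, 20))  # 0.5 -> insufficient opportunity to learn
```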
Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator. When that happens, students don’t have an adequate opportunity to learn. They need more time. As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time. Time is a limited resource that teachers deploy in the production of learning. Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary. Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics. Excessive instructional time may also negatively affect student engagement. Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need. Standard Algorithms and Alternative Strategies Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M. A standard algorithm is a series of steps designed to compute accurately and quickly. In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers. Most readers of this post will recognize the standard algorithm for addition. It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed. The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards. Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade. This opens the door for a lot of wasted time. Garelick questioned the wisdom of teaching several alternative strategies for addition. He asked whether, under the Common Core, only the standard algorithm could be taught—or at least, could it be taught first. As he explains: Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 
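Setting the dispute aside for a moment, it may help to be concrete about what the standard addition algorithm described above actually does. The sketch below is my own illustration (not drawn from either Garelick or Zimba): line the addends up by place value and add column by column from right to left, carrying as needed.

```python
def standard_addition(a: int, b: int) -> int:
    """Column addition with carrying (regrouping), digit by digit from right to left."""
    digits_a = [int(d) for d in str(a)][::-1]   # ones digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        column = carry
        column += digits_a[i] if i < len(digits_a) else 0
        column += digits_b[i] if i < len(digits_b) else 0
        result.append(column % 10)   # digit written below the column
        carry = column // 10         # digit carried to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

assert standard_addition(19, 6) == 25    # the ones column makes 15: write 5, carry 1
assert standard_addition(457, 86) == 543
```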
Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations. He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers: In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions. Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades. Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade. It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards: 3.NBT.2 Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction. The term “strategies and algorithms” is curious. Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that despite these being standards, the CCSS-M will allow them great latitude. Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle. Why All the Fuss about Standard Algorithms? It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. Standard algorithms were a key point of contention in the “Math Wars” of the 1990s. The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii] The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade. These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists. In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 
Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade. That’s what reformers get in the compromise. They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic. But the standard algorithm is the only one students are required to learn. And that exclusivity is intended to please the traditionalists. I agree with Garelick that the compromise leads to problems. In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home. The students were being taught how to represent addition with drawings that clustered objects into groups of ten. The exercises were both time consuming and tedious. When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning. The parents withdrew their child from the school and enrolled him in private school. The value of standard algorithms is that they are efficient and packed with mathematics. Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers. Traditionalists and reformers have different goals. Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems. Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm. I have been a critic of the math reform movement since I taught in the 1980s. But some of their complaints have merit. All too often, instruction on standard algorithms has left out meaning. As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps that were to be memorized. Michael Battista’s research has provided many instances of students clinging to algorithms without understanding. He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet. On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15. In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25. Despite the obvious discrepancy—(25 is not 15, the student agrees)—he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii] Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that? Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions. Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding. Conclusion Let’s return to Carroll’s model of time and learning. 
I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences. Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade. Thus, the study of whole number arithmetic is completed by the end of fourth grade. Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics. Proficiency is sought along three dimensions: 1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years. Placing the standard for the division algorithm in sixth grade continues the two-year delay. For many fourth graders, time spent working on addition and subtraction will be wasted time. They already have a firm understanding of addition and subtraction. The same is true for many sixth graders—time devoted to the division algorithm will be wasted time that should be devoted to the study of rational numbers. The numerator in Carroll’s instructional time model will be greater than the denominator, indicating the inefficient allocation of time to instruction.

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures. Such decisions are made by local educators. Variation in these decisions will introduce variation in the implementation of the math standards. It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction. But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.

[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001). “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).

Authors Tom Loveless Full Article
the No, the sky is not falling: Interpreting the latest SAT scores By webfeeds.brookings.edu Published On :: Thu, 01 Oct 2015 12:00:00 -0400

Earlier this month, the College Board released SAT scores for the high school graduating class of 2015. Both math and reading scores declined from 2014, continuing a steady downward trend that has been in place for the past decade. Pundits of contrasting political stripes seized on the scores to bolster their political agendas. Michael Petrilli of the Fordham Foundation argued that falling SAT scores show that high schools need more reform, presumably those his organization supports, in particular, charter schools and accountability.* For Carol Burris of the Network for Public Education, the declining scores were evidence of the failure of policies her organization opposes, namely, Common Core, No Child Left Behind, and accountability.

Petrilli and Burris are both misusing SAT scores. The SAT is not designed to measure national achievement; the score losses from 2014 were minuscule; and most of the declines are probably the result of demographic changes in the SAT population. Let’s examine each of these points in greater detail.

The SAT is not designed to measure national achievement

It never was. The SAT was originally meant to measure a student’s aptitude for college independent of that student’s exposure to a particular curriculum. The test’s founders believed that gauging aptitude, rather than achievement, would serve the cause of fairness. A bright student from a high school in rural Nebraska or the mountains of West Virginia, they held, should have the same shot at attending elite universities as a student from an Eastern prep school, despite not having been exposed to the great literature and higher mathematics taught at prep schools. The SAT would measure reasoning and analytical skills, not the mastery of any particular body of knowledge. Its scores would level the playing field in terms of curricular exposure while providing a reasonable estimate of an individual’s probability of success in college. Note that even in this capacity, the scores never suffice alone; they are only used to make admissions decisions by colleges and universities, including such luminaries as Harvard and Stanford, in combination with a lot of other information—grade point averages, curricular resumes, essays, reference letters, extra-curricular activities—all of which constitute a student’s complete application.

Today’s SAT has moved towards being a content-oriented test, but not entirely. Next year, the College Board will introduce a revised SAT to more closely reflect high school curricula. Even then, SAT scores should not be used to make judgments about U.S. high school performance, whether it’s a single high school, a state’s high schools, or all of the high schools in the country. The SAT sample is self-selected. In 2015, it only included about one-half of the nation’s high school graduates: 1.7 million out of approximately 3.3 million total. And that’s about one-ninth of approximately 16 million high school students. Generalizing SAT scores to these larger populations violates a basic rule of social science.
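That rule can be illustrated with a toy simulation. Everything in the sketch below is invented for illustration; none of it is SAT data. Even when the underlying population of students does not change at all, changing who opts into the test moves the average score of the test takers.

```python
import random

random.seed(1)

# A fixed, unchanging population of "true" scores (invented for illustration only).
population = [random.gauss(500, 100) for _ in range(100_000)]

def observed_mean(participation_rate_low_scorers: float) -> float:
    """Mean score among self-selected test takers.
    Higher scorers always take the test; lower scorers take it with the given probability."""
    takers = [s for s in population
              if s >= 500 or random.random() < participation_rate_low_scorers]
    return sum(takers) / len(takers)

print(round(observed_mean(0.40)))  # smaller, more selective pool -> higher observed mean
print(round(observed_mean(0.70)))  # broader participation -> lower observed mean,
                                   # even though the population itself never changed
```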
The College Board issues a warning when it releases SAT scores: “Since the population of test takers is self-selected, using aggregate SAT scores to compare or evaluate teachers, schools, districts, states, or other educational units is not valid, and the College Board strongly discourages such uses.” TIME’s coverage of the SAT release included a statement by Andrew Ho of Harvard University, who succinctly makes the point: “I think SAT and ACT are tests with important purposes, but measuring overall national educational progress is not one of them.”

The score changes from 2014 were minuscule

SAT scores changed very little from 2014 to 2015. Reading scores dropped from 497 to 495. Math scores also fell two points, from 513 to 511. Both declines are equal to about 0.017 standard deviations (SD).[i] To illustrate how small these changes truly are, let’s examine a metric I have used previously in discussing test scores. The average American male is 5’10” in height with a SD of about 3 inches. A 0.017 SD change in height is equal to about 1/20 of an inch (0.051). Do you really think you’d notice a difference in the height of two men standing next to each other if they only differed by 1/20th of an inch? You wouldn’t. Similarly, the change in SAT scores from 2014 to 2015 is trivial.[ii]

A more serious concern is the SAT trend over the past decade. Since 2005, reading scores are down 13 points, from 508 to 495, and math scores are down nine points, from 520 to 511. These are equivalent to declines of 0.12 SD for reading and 0.08 SD for math.[iii] Representing changes that have accumulated over a decade, these losses are still quite small.

In the Washington Post, Michael Petrilli asked “why is education reform hitting a brick wall in high school?” He also stated that “you see this in all kinds of evidence.” You do not see a decline in the best evidence, the National Assessment of Educational Progress (NAEP). Unlike the SAT, NAEP is designed to monitor national achievement. Its test scores are based on a random sampling design, meaning that the scores can be construed as representative of U.S. students. NAEP administers two different tests to high school age students: the long term trend (LTT) NAEP, given to 17-year-olds, and the main NAEP, given to twelfth graders. Table 1 compares the past ten years’ change in test scores of the SAT with changes in NAEP.[iv] The long term trend NAEP was not administered in 2005 or 2015, so the closest years it was given are shown. The NAEP tests show high school students making small gains over the past decade. They do not confirm the losses on the SAT.

Table 1. Comparison of changes in SAT, Main NAEP (12th grade), and LTT NAEP (17-year-olds) scores. Changes expressed as SD units of base year.

           SAT 2005-2015    Main NAEP 2005-2015    LTT NAEP 2004-2012
Reading    -0.12*           +.05*                  +.09*
Math       -0.08*           +.09*                  +.03
*p<.05

Petrilli raised another concern by examining cohort trends in NAEP scores. The trend for the 17-year-old cohort of 2012, for example, can be constructed by using the scores of 13-year-olds in 2008 and 9-year-olds in 2004. By tracking NAEP changes over time in this manner, one can get a rough idea of a particular cohort’s achievement as students grow older and proceed through the school system. Examining three cohorts, Fordham’s analysis shows that the gains between ages 13 and 17 are about half as large as those registered between ages nine and 13. Kids gain more on NAEP when they are younger than when they are older.
There is nothing new here. NAEP scholars have been aware of this phenomenon for a long time. Fordham points to particular elements of education reform that it favors—charter schools, vouchers, and accountability—as the probable cause. It is true that those reforms more likely target elementary and middle schools than high schools. But the research literature on age discrepancies in NAEP gains (which is not cited in the Fordham analysis) renders doubtful the thesis that education policies are responsible for the phenomenon.[v] Whether high school age students try as hard as they could on NAEP has been pointed to as one explanation. A 1996 analysis of NAEP answer sheets found that 25-to-30 percent of twelfth graders displayed off-task test behaviors—doodling, leaving items blank—compared to 13 percent of eighth graders and six percent of fourth graders. A 2004 national commission on the twelfth grade NAEP recommended incentives (scholarships, certificates, letters of recognition from the President) to boost high school students’ motivation to do well on NAEP. Why would high school seniors or juniors take NAEP seriously when this low stakes test is taken in the midst of taking SAT or ACT tests for college admission, end of course exams that affect high school GPA, AP tests that can affect placement in college courses, state accountability tests that can lead to their schools being deemed a success or failure, and high school exit exams that must be passed to graduate?[vi] Other possible explanations for the phenomenon are: 1) differences in the scales between the ages tested on LTT NAEP (in other words, a one-point gain on the scale between ages nine and 13 may not represent the same amount of learning as a one-point gain between ages 13 and 17); 2) different rates of participation in NAEP among elementary, middle, and high schools;[vii] and 3) social trends that affect all high school students, not just those in public schools. The third possibility can be explored by analyzing trends for students attending private schools. If Fordham had disaggregated the NAEP data by public and private schools (the scores of Catholic school students are available), it would have found that the pattern among private school students is similar—younger students gain more than older students on NAEP. That similarity casts doubt on the notion that policies governing public schools are responsible for the smaller gains among older students.[viii] Changes in the SAT population Writing in the Washington Post, Carol Burris addresses the question of whether demographic changes have influenced the decline in SAT scores. She concludes that they have not, and in particular, she concludes that the growing proportion of students receiving exam fee waivers has probably not affected scores. She bases that conclusion on an analysis of SAT participation disaggregated by level of family income. Burris notes that the percentage of SAT takers has been stable across income groups in recent years. That criterion is not trustworthy. About 39 percent of students in 2015 declined to provide information on family income. The 61 percent that answered the family income question are probably skewed against low-income students who are on fee waivers (the assumption being that they may feel uncomfortable answering a question about family income).[ix] Don’t forget that the SAT population as a whole is a self-selected sample. 
A self-selected subsample from a self-selected sample tells us even less than the original sample, which told us almost nothing. The fee waiver share of SAT takers increased from 21 percent in 2011 to 25 percent in 2015. The simple fact that fee waivers serve low-income families, whose children tend to be lower-scoring SAT takers, is important, but not the whole story here. Students from disadvantaged families have always taken the SAT. But they paid for it themselves. If an additional increment of disadvantaged families takes the SAT because they don’t have to pay for it, it is important to consider whether the new entrants to the pool of SAT test takers possess unmeasured characteristics that correlate with achievement—beyond the effect already attributed to socioeconomic status.

Robert Kelchen, an assistant professor of higher education at Seton Hall University, calculated the effect on national SAT scores of just three jurisdictions (Washington, DC, Delaware, and Idaho) adopting policies of mandatory SAT testing paid for by the state. He estimated that these policies explain about 21 percent of the nationwide decline in test scores between 2011 and 2015. He also notes that a more thorough analysis, incorporating fee waivers of other states and districts, would surely boost that figure. Fee waivers in two dozen Texas school districts, for example, are granted to all juniors and seniors in high school. And all students in those districts (including Dallas and Fort Worth) are required to take the SAT beginning in the junior year. Such universal testing policies can increase access and serve the cause of equity, but they will also, at least for a while, lead to a decline in SAT scores.

Here, I offer my own back-of-the-envelope calculation of the relationship between demographic changes and SAT scores. The College Board reports test scores and participation rates for nine racial and ethnic groups.[x] These data are preferable to family income because a) almost all students answer the race/ethnicity question (only four percent are non-responses versus 39 percent for family income), and b) it seems a safe assumption that students are more likely to know their race or ethnicity compared to their family’s income. The question tackled in Table 2 is this: how much would the national SAT scores have changed from 2005 to 2015 if the scores of each racial/ethnic group stayed exactly the same as in 2005, but each group’s proportion of the total population were allowed to vary? In other words, the scores are fixed at the 2005 level for each group—no change. The SAT national scores are then recalculated using the 2015 proportions that each group represented in the national population.

Table 2. SAT Scores and Demographic Changes in the SAT Population (2005-2015)

           Projected Change Based on    Actual Change    Projected Change as Percentage
           Change in Proportions                         of Actual Change
Reading    -9                           -13              69%
Math       -7                           -9               78%

The data suggest that two-thirds to three-quarters of the SAT score decline from 2005 to 2015 is associated with demographic changes in the test-taking population. The analysis is admittedly crude. The relationships are correlational, not causal. The race/ethnicity categories are surely serving as proxies for a bundle of other characteristics affecting SAT scores, some unobserved and others (e.g., family income, parental education, language status, class rank) that are included in the SAT questionnaire but produce data difficult to interpret.
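For readers who want the mechanics, the calculation just described amounts to reweighting fixed 2005 group means by 2015 group shares. A minimal sketch follows; the two groups and all numbers in it are invented placeholders, not College Board figures. The real calculation would plug in the nine reported groups' means and proportions.

```python
def reweighted_mean(group_means: dict, group_shares: dict) -> float:
    """Weighted average of fixed group means under a given set of population shares."""
    assert abs(sum(group_shares.values()) - 1.0) < 1e-9
    return sum(group_means[g] * group_shares[g] for g in group_means)

# Toy illustration with two invented groups (NOT College Board data):
means_2005  = {"Group A": 520, "Group B": 470}       # group means held fixed at their 2005 values
shares_2005 = {"Group A": 0.70, "Group B": 0.30}
shares_2015 = {"Group A": 0.55, "Group B": 0.45}     # Group B is a larger share of test takers in 2015

baseline  = reweighted_mean(means_2005, shares_2005)  # the 2005 average
projected = reweighted_mean(means_2005, shares_2015)  # what 2015 would look like from composition alone
print(round(projected - baseline, 1))                 # projected change attributable to demographic shift
```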
Conclusion Using an annual decline in SAT scores to indict high schools is bogus. The SAT should not be used to measure national achievement. SAT changes from 2014-2015 are tiny. The downward trend over the past decade represents a larger decline in SAT scores, but one that is still small in magnitude and correlated with changes in the SAT test-taking population. In contrast to SAT scores, NAEP scores, which are designed to monitor national achievement, report slight gains for 17-year-olds over the past ten years. It is true that LTT NAEP gains are larger among students from ages nine to 13 than from ages 13 to 17, but research has uncovered several plausible explanations for why that occurs. The public should exercise great caution in accepting the findings of test score analyses. Test scores are often misinterpreted to promote political agendas, and much of the alarmist rhetoric provoked by small declines in scores is unjustified. * In fairness to Petrilli, he acknowledges in his post, “The SATs aren’t even the best gauge—not all students take them, and those who do are hardly representative.” [i] The 2014 SD for both SAT reading and math was 115. [ii] A substantively trivial change may nevertheless reach statistical significance with large samples. [iii] The 2005 SDs were 113 for reading and 115 for math. [iv] Throughout this post, SAT’s Critical Reading (formerly, the SAT-Verbal section) is referred to as “reading.” I only examine SAT reading and math scores to allow for comparisons to NAEP. Moreover, SAT’s writing section will be dropped in 2016. [v] The larger gains by younger vs. older students on NAEP is explored in greater detail in the 2006 Brown Center Report, pp. 10-11. [vi] If these influences have remained stable over time, they would not affect trends in NAEP. It is hard to believe, however, that high stakes tests carry the same importance today to high school students as they did in the past. [vii] The 2004 blue ribbon commission report on the twelfth grade NAEP reported that by 2002 participation rates had fallen to 55 percent. That compares to 76 percent at eighth grade and 80 percent at fourth grade. Participation rates refer to the originally drawn sample, before replacements are made. NAEP is conducted with two stage sampling—schools first, then students within schools—meaning that the low participation rate is a product of both depressed school (82 percent) and student (77 percent) participation. See page 8 of: http://www.nagb.org/content/nagb/assets/documents/publications/12_gr_commission_rpt.pdf [viii] Private school data are spotty on the LTT NAEP because of problems meeting reporting standards, but analyses identical to Fordham’s can be conducted on Catholic school students for the 2008 and 2012 cohorts of 17-year-olds. [ix] The non-response rate in 2005 was 33 percent. [x] The nine response categories are: American Indian or Alaska Native; Asian, Asian American, or Pacific Islander; Black or African American; Mexican or Mexican American; Puerto Rican; Other Hispanic, Latino, or Latin American; White; Other; and No Response. Authors Tom Loveless Full Article
the Brookings Live: Reading and math in the Common Core era By webfeeds.brookings.edu Published On :: Mon, 28 Mar 2016 16:00:00 -0400

Event Information: March 28, 2016, 4:00 PM - 4:30 PM EDT, Online Only (Live Webcast)

And more from the Brown Center Report on American Education

The Common Core State Standards have been adopted as the reading and math standards in more than forty states, but are the frontline implementers—teachers and principals—enacting them? As part of the 2016 Brown Center Report on American Education, Tom Loveless examines the degree to which CCSS recommendations have penetrated schools and classrooms. He specifically looks at the impact the standards have had on the emphasis of non-fiction vs. fiction texts in reading, and on enrollment in advanced courses in mathematics. On March 28, the Brown Center hosted an online discussion of Loveless's findings, moderated by the Urban Institute's Matthew Chingos. In addition to the Common Core, Loveless and Chingos also discussed the other sections of the three-part Brown Center Report, including a study of the relationship between ability group tracking in eighth grade and AP performance in high school. Full Article
the Common Core’s major political challenges for the remainder of 2016 By webfeeds.brookings.edu Published On :: Wed, 30 Mar 2016 07:00:00 -0400 The 2016 Brown Center Report (BCR), which was published last week, presented a study of Common Core State Standards (CCSS). In this post, I’d like to elaborate on a topic touched upon but deserving further attention: what to expect in Common Core’s immediate political future. I discuss four key challenges that CCSS will face between now and the end of the year. Let’s set the stage for the discussion. The BCR study produced two major findings. First, several changes that CCSS promotes in curriculum and instruction appear to be taking place at the school level. Second, states that adopted CCSS and have been implementing the standards have registered about the same gains and losses on NAEP as states that either adopted and rescinded CCSS or never adopted CCSS in the first place. These are merely associations and cannot be interpreted as saying anything about CCSS’s causal impact. Politically, that doesn’t really matter. The big story is that NAEP scores have been flat for six years, an unprecedented stagnation in national achievement that states have experienced regardless of their stance on CCSS. Yes, it’s unfair, but CCSS is paying a political price for those disappointing NAEP scores. No clear NAEP differences have emerged between CCSS adopters and non-adopters to reverse that political dynamic. "Yes, it’s unfair, but CCSS is paying a political price for those disappointing NAEP scores. No clear NAEP differences have emerged between CCSS adopters and non-adopters to reverse that political dynamic." TIMSS and PISA scores in November-December NAEP has two separate test programs. The scores released in 2015 were for the main NAEP, which began in 1990. The long term trend (LTT) NAEP, a different test that was first given in 1969, has not been administered since 2012. It was scheduled to be given in 2016, but was cancelled due to budgetary constraints. It was next scheduled for 2020, but last fall officials cancelled that round of testing as well, meaning that the LTT NAEP won’t be given again until 2024. With the LTT NAEP on hold, only two international assessments will soon offer estimates of U.S. achievement that, like the two NAEP tests, are based on scientific sampling: PISA and TIMSS. Both tests were administered in 2015, and the new scores will be released around the Thanksgiving-Christmas period of 2016. If PISA and TIMSS confirm the stagnant trend in U.S. achievement, expect CCSS to take another political hit. America’s performance on international tests engenders a lot of hand wringing anyway, so the reaction to disappointing PISA or TIMSS scores may be even more pronounced than what the disappointing NAEP scores generated. Is teacher support still declining? Watch Education Next’s survey on Common Core (usually released in August/September) and pay close attention to teacher support for CCSS. The trend line has been heading steadily south. In 2013, 76 percent of teachers said they supported CCSS and only 12 percent were opposed. In 2014, teacher support fell to 43 percent and opposition grew to 37 percent. In 2015, opponents outnumbered supporters for the first time, 50 percent to 37 percent. Further erosion of teacher support will indicate that Common Core’s implementation is in trouble at the ground level. Don’t forget: teachers are the final implementers of standards. 
An effort by Common Core supporters to change NAEP The 2015 NAEP math scores were disappointing. Watch for an attempt by Common Core supporters to change the NAEP math tests. Michael Cohen, President of Achieve, a prominent pro-CCSS organization, released a statement about the 2015 NAEP scores that included the following: "The National Assessment Governing Board, which oversees NAEP, should carefully review its frameworks and assessments in order to ensure that NAEP is in step with the leadership of the states. It appears that there is a mismatch between NAEP and all states' math standards, no matter if they are common standards or not.” Reviewing and potentially revising the NAEP math framework is long overdue. The last adoption was in 2004. The argument for changing NAEP to place greater emphasis on number and operations, revisions that would bring NAEP into closer alignment with Common Core, also has merit. I have a longstanding position on the NAEP math framework. In 2001, I urged the National Assessment Governing Board (NAGB) to reject the draft 2004 framework because it was weak on numbers and operations—and especially weak on assessing student proficiency with whole numbers, fractions, decimals, and percentages. Common Core’s math standards are right in line with my 2001 complaint. Despite my sympathy for Common Core advocates’ position, a change in NAEP should not be made because of Common Core. In that 2001 testimony, I urged NAGB to end the marriage of NAEP with the 1989 standards of the National Council of Teachers of Mathematics, the math reform document that had guided the main NAEP since its inception. Reform movements come and go, I argued. NAGB’s job is to keep NAEP rigorously neutral. The assessment’s integrity depends upon it. NAEP was originally intended to function as a measuring stick, not as a PR device for one reform or another. If NAEP is changed it must be done very carefully and should be rooted in the mathematics children must learn. The political consequences of it appearing that powerful groups in Washington, DC are changing “The Nation’s Report Card” in order for Common Core to look better will hurt both Common Core and NAEP. Will Opt Out grow? Watch the Opt Out movement. In 2015, several organized groups of parents refused to allow their children to take Common Core tests. In New York state alone, about 60,000 opted out in 2014, skyrocketing to 200,000 in 2015. Common Core testing for 2016 begins now and goes through May. It will be important to see whether Opt Out can expand to other states, grow in numbers, and branch out beyond middle- and upper-income neighborhoods. Conclusion Common Core is now several years into implementation. Supporters have had a difficult time persuading skeptics that any positive results have occurred. The best evidence has been mixed on that question. CCSS advocates say it is too early to tell, and we’ll just have to wait to see the benefits. That defense won’t work much longer. Time is running out. The political challenges that Common Core faces the remainder of this year may determine whether it survives. Authors Tom Loveless Image Source: Jim Young / Reuters Full Article
the The NAEP proficiency myth By webfeeds.brookings.edu Published On :: Mon, 13 Jun 2016 07:00:00 -0400 On May 16, I got into a Twitter argument with Campbell Brown of The 74, an education website. She released a video on Slate giving advice to the next president. The video begins: “Without question, to me, the issue is education. Two out of three eighth graders in this country cannot read or do math at grade level.” I study student achievement and was curious. I know of no valid evidence to make the claim that two out of three eighth graders are below grade level in reading and math. No evidence was cited in the video. I asked Brown for the evidentiary basis of the assertion. She cited the National Assessment of Educational Progress (NAEP). NAEP does not report the percentage of students performing at grade level. NAEP reports the percentage of students reaching a “proficient” level of performance. Here’s the problem. That’s not grade level. In this post, I hope to convince readers of two things: 1. Proficient on NAEP does not mean grade level performance. It’s significantly above that. 2. Using NAEP’s proficient level as a basis for education policy is a bad idea. Before going any further, let’s look at some history. NAEP history NAEP was launched nearly five decades ago. The first NAEP test was given in science in 1969, followed by a reading test in 1971 and math in 1973. For the first time, Americans were able to track the academic progress of the nation’s students. That set of assessments, which periodically tests students 9, 13, and 17 years old and was last given in 2012, is now known as the Long Term Trend (LTT) NAEP. It was joined by another set of NAEP tests in the 1990s. The Main NAEP assesses students by grade level (fourth, eighth, and twelfth) and, unlike the LTT, produces not only national but also state scores. The two tests, LTT and main, continue on parallel tracks today, and they are often confounded by casual NAEP observers. The main NAEP, which was last administered in 2015, is the test relevant to this post and will be the only one discussed hereafter. The NAEP governing board was concerned that the conventional metric for reporting results (scale scores) was meaningless to the public, so achievement standards (also known as performance standards) were introduced. The percentage of students scoring at advanced, proficient, basic, and below basic levels are reported each time the main NAEP is given. Does NAEP proficient mean grade level? The National Center for Education Statistics (NCES) states emphatically, “Proficient is not synonymous with grade level performance.” The National Assessment Governing Board has a brochure with information on NAEP, including a section devoted to myths and facts. There, you will find this: Myth: The NAEP Proficient level is like being on grade level. Fact: Proficient on NAEP means competency over challenging subject matter. This is not the same thing as being “on grade level,” which refers to performance on local curriculum and standards. NAEP is a general assessment of knowledge and skills in a particular subject. Equating NAEP proficiency with grade level is bogus. Indeed, the validity of the achievement levels themselves is questionable. They immediately came under fire in reviews by the U.S. 
The National Academy of Sciences report was particularly scathing, labeling NAEP’s achievement levels as “fundamentally flawed.” Despite warnings from NAEP authorities and critical reviews from scholars, some commentators, typically from advocacy groups, continue to confound NAEP proficient with grade level. Organizations that support school reform, such as Achieve Inc. and Students First, prominently misuse the term on their websites. Achieve presses states to adopt cut points aligned with NAEP proficient as part of new Common Core-based accountability systems. Achieve argues that this will inform parents whether children “can do grade level work.” No, it will not. That claim is misleading.

How unrealistic is NAEP proficient?

Shortly after NCLB was signed into law, Robert Linn, one of the most prominent psychometricians of the past several decades, called “the target of 100% proficient or above according to the NAEP standards more like wishful thinking than a realistic possibility.” History is on the side of that argument. When the first main NAEP in mathematics was given in 1990, only 13% of eighth graders scored proficient and 2% scored advanced. Imagine using “proficient” as synonymous with grade level—85% scored below grade level! The 1990 national average in eighth grade scale scores was 263 (see Table 1). In 2015, the average was 282, a gain of 19 scale score points.

Table 1. Main NAEP Eighth Grade Math Scores, by Achievement Level, 1990-2015
Year   Scale Score Average   Below Basic (%)   Basic (%)   Proficient (%)   Advanced (%)   Proficient and Above (%)
2015   282                   29                38          25               8              33
2009   283                   27                39          26               8              34
2003   278                   32                39          23               5              28
1996   270                   39                38          20               4              24
1990   263                   48                37          13               2              15

That’s an impressive gain. Analysts who study NAEP often use 10 points on the NAEP scale as a back-of-the-envelope estimate of one year’s worth of learning. By that yardstick, eighth graders have gained almost two years. The percentage of students scoring below basic has dropped from 48% in 1990 to 29% in 2015. The percentage of students scoring proficient or above has more than doubled, from 15% to 33%. That’s not bad news; it’s good news. But the cut point for NAEP proficient is 299. By that standard, two-thirds of eighth graders are still falling short. Even students in private schools, despite hailing from more socioeconomically advantaged homes and in some cases being selectively admitted by schools, fail miserably at attaining NAEP proficiency. More than half (53 percent) are below proficient. Today’s eighth graders have made it about halfway to NAEP proficient in 25 years, but they still need to gain almost two more years of math learning (17 points) to reach that level. And, don’t forget, that’s just the national average, so even when that lofty goal is achieved, half of the nation’s students will still fall short of proficient. Advocates of the NAEP proficient standard want it to be for all students. That is ridiculous. Another way to think about it: proficient for today’s eighth graders reflects approximately what the average twelfth grader knew in mathematics in 1990. Someday the average eighth grader may be able to do that level of mathematics. But it won’t be soon, and it won’t be every student.
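To make the back-of-the-envelope arithmetic above concrete, here is a minimal sketch in Python that uses only the figures cited in this post. The 10-points-per-year conversion is the informal analysts' heuristic mentioned above, not an official NAEP metric, so treat the "years of learning" output as a rough illustration rather than a precise measure.

```python
# Rough illustration of the scale-score arithmetic discussed above.
# Assumes the informal heuristic that ~10 NAEP scale-score points
# correspond to about one year of learning.

POINTS_PER_YEAR = 10

avg_1990 = 263        # main NAEP 8th-grade math average, 1990
avg_2015 = 282        # main NAEP 8th-grade math average, 2015
proficient_cut = 299  # cut point for NAEP "proficient" in 8th-grade math

gain = avg_2015 - avg_1990       # 19 points
gap = proficient_cut - avg_2015  # 17 points

print(f"Gain since 1990: {gain} points, roughly {gain / POINTS_PER_YEAR:.1f} years of learning")
print(f"Remaining gap to proficient: {gap} points, roughly {gap / POINTS_PER_YEAR:.1f} years")
```

Run as written, this reproduces the post's figures: a 19-point gain (about two years of learning) and a 17-point remaining gap (almost two more years) between the 2015 average and the proficient cut point.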
In the 2007 Brown Center Report on American Education, I questioned whether NAEP proficient is a reasonable achievement standard.[2] That year, a study by Gary Phillips of American Institutes for Research was published that projected the 2007 TIMSS scores onto the NAEP scale. Phillips posed the question: based on TIMSS, how many students in other countries would score proficient or better on NAEP? The study’s methodology only produces approximations, but they are eye-popping. Here are just a few countries:

Table 2. Projected Percent Reaching NAEP Proficient, Eighth Grade Math
Singapore            73
Hong Kong SAR        66
Korea, Rep. of       65
Chinese Taipei       61
Japan                57
Belgium (Flemish)    40
United States        26
Israel               24
England              22
Italy                17
Norway                9

Singapore was the top-scoring nation on TIMSS that year, but even there, more than a quarter of students fail to reach NAEP proficient. Japan is not usually considered a slouch on international math assessments, but 43% of its eighth graders fall short. The U.S. looks weak, with only 26% of students proficient. But England, Israel, and Italy are even weaker. Norway, a wealthy nation with per capita GDP almost twice that of the U.S., can only get 9 out of 100 eighth graders to NAEP proficient. Finland isn’t shown in the table because it didn’t participate in the 2007 TIMSS. But it did in 2011, with Finland and the U.S. scoring about the same in eighth grade math. Had Finland’s eighth graders taken NAEP in 2011, it’s a good bet that the proportion scoring below NAEP proficient would have been similar to that in the U.S. And yet articles such as “Why Finland Has the Best Schools” appear regularly in the U.S. press.[3]

Why it matters

The National Center for Education Statistics warns that federal law requires that NAEP achievement levels be used on a trial basis until the Commissioner of Education Statistics determines that the achievement levels are “reasonable, valid, and informative to the public.” As the NCES website states, “So far, no Commissioner has made such a determination, and the achievement levels remain in a trial status. The achievement levels should continue to be interpreted and used with caution.” Confounding NAEP proficient with grade level is uninformed. Designating NAEP proficient as the achievement benchmark for accountability systems is certainly not cautious use. If high school students are required to meet NAEP proficient to graduate from high school, large numbers will fail. If middle and elementary school students are forced to repeat grades because they fall short of a standard anchored to NAEP proficient, vast numbers will repeat grades.

On NAEP, students are asked the highest level math course they’ve taken. On the 2015 twelfth grade NAEP, 19% of students said they either were taking or had taken calculus. These are the nation’s best and brightest, the crème de la crème of math students. Only about one in five students works that far up the hierarchy of American math courses. If you are over 45 years old and reading this, fewer than one in ten of your high school peers took calculus. In the graduating class of 1990, for instance, only 7% of students had taken calculus.[4] Unsurprisingly, calculus students are also typically taught by the nation’s most knowledgeable math teachers. The nation’s elite math students paired with the nation’s elite math teachers: if any group can prove NAEP proficient a reasonable goal and succeed in getting all students over the NAEP proficiency bar, this is the group. But they don’t.
A whopping 30% score below proficient on NAEP. For black and Hispanic calculus students, the figures are staggering. Two-thirds of black calculus students score below NAEP proficient. For Hispanics, the figure is 52%. The nation’s pre-calculus students also fare poorly (69% below proficient). Then the success rate falls off a cliff. In the class of 2015, more than nine out of ten students whose highest math course was Trigonometry or Algebra II fail to meet the NAEP proficient standard.

Table 3. 2015 NAEP Twelfth Grade Math: Percentage Below NAEP Proficient, by Highest Math Course Taken
Highest Math Course Taken   Percentage Below NAEP Proficient
Calculus                    30
Pre-calculus                69
Trig/Algebra II             92
Source: NAEP Data Explorer

These data defy reason; they also refute common sense. For years, educators have urged students to take the toughest courses they can possibly take. Taken at face value, the data in Table 3 rip the heart out of that advice. These are the toughest courses, and yet huge numbers of the nation’s star students, by any standard aligned with NAEP proficient, would be told that they have failed. Some parents, misled by the confounding of proficient with grade level, might even mistakenly believe that their kids don’t know grade level math.

Conclusion

NAEP proficient is not synonymous with grade level. NAEP officials urge that proficient not be interpreted as reflecting grade level work. It is a standard set much higher than that. Scholarly panels have reviewed the NAEP achievement standards and found them flawed. The highest scoring nations of the world would appear to be mediocre or poor performers if judged by the NAEP proficient standard. Even large numbers of U.S. calculus students fall short. As states consider building benchmarks for student performance into accountability systems, they should not use NAEP proficient—or any standard aligned with NAEP proficient—as a benchmark. It is an unreasonable expectation, one that ill serves America’s students, parents, and teachers—and the effort to improve America’s schools.

[1] Shepard, L. A., Glaser, R., Linn, R., & Bohrnstedt, G. (1993). Setting Performance Standards for Student Achievement: Background Studies. Report of the NAE Panel on the Evaluation of the NAEP Trial State Assessment: An Evaluation of the 1992 Achievement Levels. National Academy of Education.
[2] Loveless, Tom. The 2007 Brown Center Report, pages 10-13.
[3] William Doyle, “Why Finland Has the Best Schools,” Los Angeles Times, March 18, 2016.
[4] NCES, America’s High School Graduates: Results of the 2009 NAEP High School Transcript Study. See Table 8, p. 49.

Authors Tom Loveless Image Source: © Brian Snyder / Reuters Full Article
Government spending: yes, it really can cut the U.S. deficit By webfeeds.brookings.edu Published On :: Fri, 03 Apr 2015 09:19:00 -0400

Hypocrisy is not scarce in the world of politics. But the current House and Senate budget resolutions set new lows. Each proposes to cut about $5 trillion from government spending over the next decade in pursuit of a balanced budget. Whatever one may think of prioritizing spending cuts over investment in the nation’s future at a time when the debt-to-GDP ratio is projected to be stable, you would think that deficit-reduction hawks wouldn’t cut spending that has been proven to lower the deficit. Yes, there are expenditures that actually lower the deficit, typically by many dollars for each dollar spent. In this category are outlays on ‘program integrity’ to find and punish fraud, tax evasion, and plain old bureaucratic mistakes. You might suppose that those outlays would be spared. Guess again. Consider the following:

Medicare. Roughly 10% of Medicare’s $600 billion budget goes for what officials delicately call ‘improper payments,’ according to the 2014 financial report of the Department of Health and Human Services. Some are improper merely because providers ‘up-code’ legitimate services to boost their incomes. Some payments go for services that serve no valid purpose. And some go for phantom services that were never provided. Whatever the cause, approximately $60 billion of improper payments is not ‘chump change.’ Medicare tries to root out these improper payments, but it lacks sufficient staff to do the job. What it does spend on ‘program integrity’ yields an estimated $14.40 for each dollar spent, about $10 billion a year in total. That number counts only directly measurable savings, such as recoveries and claim denials. A full reckoning of savings would add in the hard-to-measure ‘policeman on the beat’ effect that discourages violations by would-be cheats.

Fat targets remain. A recent report from the Institute of Medicine presented findings that veritably scream ‘fraud.’ Per person spending on durable medical equipment and home health care is ten times higher in Miami-Dade County, Florida, than the national average. Such equipment and home health care account for nearly three-quarters of the geographical variation in per person Medicare spending. Yet only 4% of current recoveries of improper payments come from audits of these two items, and little comes from the highest-spending locations. Why doesn’t Medicare spend more and go after the remaining overpayments, you may wonder? The simple answer is that Congress gives Medicare too little money for administration. Direct overhead expenses of Medicare amount to only about 1.5% of program outlays—6% if one includes the internal administrative costs of private health plans that serve Medicare enrollees. Medicare doesn’t need to spend as much on administration as the average of 19% spent by private insurers because, for example, Medicare need not pay dividends to private shareholders or advertise. But spending more on Medicare administration would both pay for itself—$2 for each added dollar spent, according to the conservative estimate in the President’s most recent budget—and improve the quality of care. With more staff, Medicare could stop more improper payments and reduce the use of approved therapies in unapproved ways that do no good and may cause harm.

Taxes. Compare two numbers: $540 billion and $468 billion. The first number is the amount of taxes owed but not paid.
The second number is the projected federal budget deficit for 2015, according to the Congressional Budget Office. Collecting all taxes legally owed but not paid is an impossibility. It just isn’t worth going after every violation. But current enforcement falls far short of practical limits. Spending on enforcement directly yields $4 to $6 for each dollar spent. Indirect savings are many times larger—the cop-on-the-beat effect again. So, in an era of ostentatious concern about budget deficits, you would expect fiscal fretting in Congress to lead to increased efforts to collect what the law says people owe in taxes. Wrong again. Between 2010 and 2014, the IRS budget was cut in real terms by 20%. At the same time, the agency had to shoulder new tasks under health reform, as well as process an avalanche of applications for tax exemptions unleashed by the 2010 Supreme Court decision in the Citizens United case. With less money to spend and more to do, enforcement staff dropped by 15% and inflation-adjusted collections dropped 13%.

One should acknowledge that enforcement will not do away with most avoidance and evasion. Needlessly complex tax laws are the root cause of most tax underpayment. Tax reform would do even more than improved administration to increase the ratio of taxes paid to taxes due. But until that glorious day when Congress finds the wit and will to make the tax system simpler and fairer, it would behoove a nation trying to make ends meet to spend $2 billion to $3 billion more each year to directly collect $10 billion to $15 billion a year more of legally owed taxes and, almost certainly, raise far more than that by frightening borderline scofflaws.

Disability Insurance. Thirteen million people with disabling conditions who are judged incapable of engaging in substantial gainful activity received $161 billion in disability insurance in 2013. If the disabling conditions improve enough so that beneficiaries can return to work, benefits are supposed to be stopped. Such improvement is rare. But when administrators believe that there is some chance, the law requires them to check. They may ask beneficiaries to fill out a questionnaire or, in some cases, undergo a new medical exam at government expense. Each dollar spent in these ways generated an estimated $16 in savings in 2013. Still, the Social Security Administration is so understaffed that it has a backlog of 1.3 million disability reviews. Current estimates indicate that spending a little over $1 billion a year more on such reviews over the next decade would save $43 billion. Rather than giving Social Security the staff and spending authority to work down this backlog and realize those savings, Congress has been cutting the agency’s administrative budget, and sequestration threatens further cuts.

Claiming that better administration will balance the budget would be wrong. But it would help. And it would stop some people from shirking their legal responsibilities and lighten the burdens of those who shoulder theirs. The failure of Congress to provide enough staff to run programs costing hundreds of billions of dollars a year as efficiently and honestly as possible is about as good a definition of criminal negligence as one can find.

Authors Henry J. Aaron Full Article
the Three cheers for logrolling: The demise of the Sustainable Growth Rate (SGR) By webfeeds.brookings.edu Published On :: Wed, 22 Apr 2015 17:00:00 -0400 Editor's note: This post originally appeared in the New England Journal of Medicine's Perspective online series on April 22, 2015. Congress has finally euthanized the sustainable growth rate formula (SGR). Enacted in 1997 and intended to hold down growth of Medicare spending on physician services, the formula initially worked more or less as intended. Then it began to call for progressively larger and more unrealistic fee cuts — nearly 30% in some years, 21% in 2015. Aware that such cuts would be devastating, Congress repeatedly postponed them, and most observers understood that such cuts would never be implemented. Still, many physicians fretted that the unthinkable might happen. Now Congress has scrapped the SGR, replacing it with still-embryonic but promising incentives that could catalyze increased efficiency and greater cost control than the old, flawed formula could ever really have done, in a law that includes many other important provisions. How did such a radical change occur? And why now? The “how” was logrolling — the trading of votes by legislators in order to pass legislation of interest to each of them. Logrolling has become a dirty word, a much-reviled political practice. But the Medicare Access and CHIP (Children’s Health Insurance Program) Reauthorization Act (MACRA), negotiated by House leaders John Boehner (R-OH) and Nancy Pelosi (D-CA) and their staffs, is a reminder that old-time political horse trading has much to be said for it. The answer to “why now?” can be found in the technicalities of budget scoring. Under the SGR, Medicare’s physician fees were tied through a complex formula to a target based on caseloads, practice costs, and the gross domestic product. When current spending on physician services exceeded the targets, the formula called for fee cuts to be applied prospectively. Fee cuts that were not implemented were carried forward and added to any future cuts the formula might generate. Because Congress repeatedly deferred cuts, a backlog developed. By 2012, this backlog combined with assumed rapid future growth in Medicare spending caused the Congressional Budget Office (CBO) to estimate the 10-year cost of repealing the SGR at a stunning $316 billion. For many years, Congress looked the costs of repealing the SGR squarely in the eye — and blinked. The cost of a 1-year delay, as estimated by the CBO, was a tiny fraction of the cost of repeal. So Congress delayed — which is hardly surprising. But then, something genuinely surprising did happen. The growth of overall health care spending slowed, causing the CBO to slash its estimates of the long-term cost of repealing the SGR. By 2015, the 10-year price of repeal had fallen to $136 billion. Even this number was a figment of budget accounting, since the chance that the fee cuts would ever have been imposed was minuscule. But the smaller number made possible the all-too-rare bipartisan collaboration that produced the legislation that President Barack Obama has just signed. The core of the law is repeal of the SGR and abandonment of the 21% cut in Medicare physician fees it called for this year. In its place is a new method of paying physicians under Medicare. Some elements are specified in law; some are to be introduced later. 
The hard-wired elements include annual physician fee updates of 0.5% per year through 2019 and 0% from 2020 through 2025, along with a “merit-based incentive payment system” (MIPS) that will replace current incentive programs that terminate in 2018. The new program will assess performance in four categories: quality of care, resource use, meaningful use of electronic health records, and clinical practice improvement activities. Bonuses and penalties, ranging from +12% to –4% in 2020, and increasing to +27% to –9% for 2022 and later, will be triggered by performance scores in these four areas. The exact content of the MIPS will be specified in rules that the secretary of health and human services is to develop after consultation with physicians and other health care providers. Higher fees will be available to professionals who work in “alternative payment organizations” that typically will move away from fee-for-service payment, cover multiple services, show that they can limit the growth of spending, and use performance-based methods of compensation. These and other provisions will ramp up pressure on physicians and other providers to move from traditional individual or small-group fee-for-service practices into risk-based multi-specialty settings that are subject to management and oversight more intense than that to which most practitioners are yet accustomed. Both parties wanted to bury the SGR. But MACRA contains other provisions, unrelated to the SGR, that appeal to discrete segments of each party. Democrats had been seeking a 4-year extension of CHIP, which serves 8 million children and pregnant women. They were running into stiff head winds from conservatives who wanted to scale back the program. MACRA extends CHIP with no cuts but does so for only 2 years. It also includes a number of other provisions sought by Democrats: a 2-year extension of the Maternal, Infant, and Early Childhood Home Visiting program, plus permanent extensions of the Qualified Individual program, which pays Part B Medicare premiums for people with incomes just over the federal poverty thresholds, and transitional medical assistance, which preserves Medicaid eligibility for up to 1 year after a beneficiary gets a job. The law also facilitates access to health benefits. MACRA extends for two years states’ authority to enroll applicants for health benefits on the basis of data on income, household size, and other factors gathered when people enroll in other programs such as the Supplemental Nutrition Assistance Program, the National School Lunch Program, Temporary Assistance to Needy Families (“welfare”), or Head Start. It also provides $7.2 billion over the next two years to support community health centers, extending funding established in the Affordable Care Act. Elements of each party, concerned about budget deficits, wanted provisions to pay for the increased spending. They got some of what they wanted, but not enough to prevent some conservative Republicans in both the Senate and the House from opposing final passage. Many conservatives have long sought to increase the proportion of Medicare Part B costs that are covered by premiums. Most Medicare beneficiaries pay Part B premiums covering 25% of the program’s actuarial value. Relatively high-income beneficiaries pay premiums that cover 35, 50, 65, or 80% of that value, depending on their income. Starting in 2018, MACRA will raise the 50% and 65% premiums to 65% and 80%, respectively, affecting about 2% of Medicare beneficiaries. 
No single person with an income (in 2015 dollars) below $133,501 or couple with income below $267,001 would be affected initially. MACRA freezes these thresholds through 2019, after which they are indexed for inflation. Under previous law, the thresholds were to have been greatly increased in 2019, reducing the number of high-income Medicare beneficiaries to whom these higher premiums would have applied. (For reference, half of all Medicare beneficiaries currently have incomes below $26,000 a year.) A second provision bars Medigap plans from covering the Part B deductible, which is now $147. By exposing more people to deductibles, this provision will cause some reduction in Part B spending. Everyone who buys such plans will see reduced premiums; some will face increased out-of-pocket costs. The financial effects either way will be small.

Inflexible adherence to principle contributes to the political gridlock that has plunged rates of public approval of Congress to subfreezing lows. MACRA is a reminder of the virtues of compromise and quiet negotiation. A small group of congressional leaders and their staffs crafted a law that gives something to most members of both parties. Today’s appalling norm of poisonously polarized politics makes this instance of political horse trading seem nothing short of miraculous.

Authors Henry J. Aaron Publication: NEJM Full Article
Strengthening Medicare for 2030 - A working paper series By webfeeds.brookings.edu Published On :: Thu, 04 Jun 2015 00:00:00 -0400

The addition of Medicare in 1965 completed a suite of federal programs designed to protect the wealth and health of people reaching older ages in the United States, starting with the Committee on Economic Security of 1934—known today as Social Security. While few would deny Medicare’s important role in improving older and disabled Americans’ financial security and health, many worry about sustaining and strengthening Medicare to finance high-quality, affordable health care for coming generations. In 1965, average life expectancy for a 65-year-old man and woman was another 13 years and 16 years, respectively. Now, life expectancy for 65-year-olds is 18 years for men and 20 years for women—effectively a four- to five-year increase. In 2011, the first of the 75-million-plus baby boomers became eligible for Medicare. And by 2029, when all of the baby boomers will be 65 or older, the U.S. Census Bureau predicts 20 percent of the U.S. population will be older than 65. Just by virtue of the sheer size of the aging population, Medicare spending growth will accelerate sharply in the coming years.

[Figure: Estimated Medicare Spending, 2010-2030. Sources: Future Elderly Model (FEM), University of Southern California Leonard D. Schaeffer Center for Health Policy & Economics; U.S. Census Bureau projections; Medicare Current Beneficiary Survey; and Centers for Medicare & Medicaid Services.]

The Center for Health Policy at Brookings and the USC Leonard D. Schaeffer Center for Health Policy and Economics held a half-day forum on the future of Medicare that looked ahead to the year 2030—a year when the youngest baby boomers will be Medicare-eligible—to explore the changing demographics, health care needs, medical technology costs, and financial resources that will be available to beneficiaries. The working papers below address five critical components of Medicare reform, including modernizing Medicare's infrastructure, benefit design, marketplace competition, and payment mechanisms.

DISCUSSION PAPERS
Health and Health Care of Beneficiaries in 2030, Étienne Gaudette, Bryan Tysinger, Alwyn Cassil and Dana Goldman: This chartbook, prepared by the USC Schaeffer Center, aims to help policymakers understand how Medicare spending and beneficiary demographics will likely change over the next 15 years to help strengthen and sustain the program.
Trends in the Well-Being of Aged and their Prospects through 2030, Gary Burtless: This paper offers a survey of trends in old-age poverty, income, inequality, labor market activity, insurance coverage, and health status, and provides a brief discussion of whether the favorable trends of the past half century can continue in the next few decades.
The Transformation of Medicare, 2015 to 2030, Henry J. Aaron and Robert Reischauer: This paper discusses how Medicare can be made a better program and how it should look in the 2030s, using the perspectives of beneficiaries, policymakers and administrators, and that of society at large.
Could Improving Choice and Competition in Medicare Advantage be the Future of Medicare?, Alice Rivlin and Willem Daniel: This paper explores the advantages and disadvantages of strengthening competition in Medicare Advantage (MA), including a look at the bidding process and replacing fee-for-service methodologies.
Improving Provider Payment in Medicare, Paul Ginsburg and Gail Wilensky: This paper discusses the various alternative payment models currently being implemented in the private sector and elsewhere that can be employed in the Medicare program to preserve quality of care and also reduce costs.

Authors Henry J. Aaron, Gary Burtless, Alwyn Cassil, Willem Daniel, Étienne Gaudette, Paul Ginsburg, Dana Goldman, Robert Reischauer, Alice M. Rivlin, Bryan Tysinger, Gail Wilensky Publication: The Brookings Institution and the USC Schaeffer Center Full Article
Strengthening Medicare for 2030 By webfeeds.brookings.edu Published On :: Fri, 05 Jun 2015 09:00:00 -0400

Event Information: June 5, 2015, 9:00 AM - 1:00 PM EDT, Falk Auditorium, Brookings Institution, 1775 Massachusetts Avenue, N.W., Washington, DC 20036

In its 50th year, the Medicare program currently provides health insurance coverage for more than 49 million Americans and accounts for $600 billion in federal spending. With those numbers expected to rise as the baby boomer generation ages, many policy experts consider this impending expansion a major threat to the nation’s economic future and question how it might affect the quality and value of health care for Medicare beneficiaries. On June 5, the Center for Health Policy at Brookings and the USC Leonard D. Schaeffer Center for Health Policy and Economics hosted a half-day forum on the future of Medicare. Instead of reflecting on historical accomplishments, the event looked ahead to 2030—a time when the youngest baby boomers will be Medicare-eligible—and explored the changing demographics, health care needs, medical technology costs, and financial resources available to beneficiaries. The panels focused on modernizing Medicare’s infrastructure, benefit design, marketplace competition, and payment mechanisms. The event also included the release of five policy papers from featured panelists. Please note that presentation slides from USC’s Dana Goldman will not be available for download. For more information on findings from his presentation, download the working paper available on this page or watch the event video.

Video: Challenges and opportunities facing Medicare in 2030; Eligibility, benefit design, and financial support; Could improving choice and competition in Medicare Advantage be the future of Medicare?; Improving provider payment in Medicare
Audio: Strengthening Medicare for 2030
Transcript: Uncorrected Transcript (.pdf)
Event Materials: Burtless Slides; 20150605_medicare_2030_transcript

Full Article
the Eurozone desperately needs a fiscal transfer mechanism to soften the effects of competitiveness imbalances By webfeeds.brookings.edu Published On :: Thu, 18 Jun 2015 00:00:00 -0400 The eurozone has three problems: national debt obligations that cannot be met, medium-term imbalances in trade competitiveness, and long-term structural flaws. The short-run problem requires more of the monetary easing that Germany has, with appalling shortsightedness, been resisting, and less of the near-term fiscal restraint that Germany has, with equally appalling shortsightedness, been seeking. To insist that Greece meet all of its near-term current debt service obligations makes about as much sense as did French and British insistence that Germany honor its reparations obligations after World War I. The latter could not be and were not honored. The former cannot and will not be honored either. The medium-term problem is that, given a single currency, labor costs are too high in Greece and too low in Germany and some other northern European countries. Because adjustments in currency values cannot correct these imbalances, differences in growth of wages must do the job—either wage deflation and continued depression in Greece and other peripheral countries, wage inflation in Germany, or both. The former is a recipe for intense and sustained misery. The latter, however politically improbable it may now seem, is the better alternative. The long-term problem is that the eurozone lacks the fiscal transfer mechanisms necessary to soften the effects of competitiveness imbalances while other forms of adjustment take effect. This lack places extraordinary demands on the willingness of individual nations to undertake internal policies to reduce such imbalances. Until such fiscal transfer mechanisms are created, crises such as the current one are bound to recur. Present circumstances call for a combination of short-term expansionary policies that have to be led or accepted by the surplus nations, notably Germany, who will also have to recognize and accept that not all Greek debts will be paid or that debt service payments will not be made on time and at originally negotiated interest rates. The price for those concessions will be a current and credible commitment eventually to restore and maintain fiscal balance by the peripheral countries, notably Greece. Authors Henry J. Aaron Publication: The International Economy Image Source: © Vincent Kessler / Reuters Full Article
The myth behind America’s deficit By webfeeds.brookings.edu Published On :: Thu, 10 Sep 2015 11:30:00 -0400

Medicare Hospital Insurance and Social Security would not add to deficits because they can’t spend money they don’t have.

The dog days of August have given way to something much worse. Congress returned to session this week, and the rest of the year promises to be nightmarish. The House and Senate passed budget resolutions earlier this year calling for nearly $5 trillion in spending cuts by 2025. More than two-thirds of those cuts would come from programs that help people with low and moderate incomes. Health care spending would be halved. If such cuts are enacted, the president will likely veto them. At best, another partisan budget war will ensue after which the veto is sustained. At worst, the cuts become law.

The putative justification for these cuts is that the nation faces insupportable increases in public debt because of expanding budget deficits. Even if the projections were valid, it would be prudent to enact some tax increases in order to preserve needed public spending. But the projections of explosively growing debt are not valid. They are fantasy. Wait! you say. The Congressional Budget Office has been telling us for years about the prospect of rising deficits and exploding debt. They repeated those warnings just two months ago. Private organizations of both the left and right agree with the CBO’s projections, in general if not in detail. How can any sane person deny that the nation faces a serious long-term budget deficit problem? The answer is simple: The CBO and private organizations use a convention in preparing their projections that is at odds with established policy and law. If, instead, projections are based on actual current law, as they claim to be, the specter of an increasing debt burden vanishes. What is that convention? Why is it wrong? Why did CBO adopt it, and why have others kept it?

CBO’s budget projections cover the next 75 years. Its baseline projections claim to be based on current law and policy. (CBO also presents an ‘alternative scenario’ based on assumed changes in law and policy.) Within that period, Social Security (OASDI) and Medicare Hospital Insurance (HI) expenditures are certain to exceed revenues earmarked to pay for them. Both are financed through trust funds. Both funds have sizeable reserves — government securities — that can be used to cover shortfalls for a while. But when those reserves are exhausted, expenditures cannot exceed current revenues. Trust fund financing means that neither Social Security nor Medicare Hospital Insurance can run deficits. Nor can they add to the public debt. Nonetheless, CBO and other organizations assume that Social Security and Medicare Hospital Insurance can and will spend money they don’t have and that current law bars them from spending.

One of the reasons why trust fund financing was used, first for Social Security and then for Medicare Hospital Insurance, was to create a framework that disciplined Congress to earmark sufficient revenues to pay for benefits it might award. Successive presidents and Congresses, both Republican and Democratic, have repeatedly acted to prevent either program’s cumulative spending from exceeding cumulative revenues. In 1983, for example, faced with an impending trust fund shortfall, Congress cut benefits and raised taxes enough to turn prospective cash flow trust fund deficits into cash flow surpluses. And President Reagan signed the bill.
In so doing, they have reaffirmed the discipline imposed by trust fund financing. Trust fund accounting explains why people now are worrying about the adequacy of funding for Social Security and Medicare. They recognize that the trust funds will be depleted in a couple of decades. They understand that between now and then Congress must either raise earmarked taxes or fashion benefit cuts. If it doesn’t raise taxes, benefits will be cut across the board. Either way, the deficits that CBO and other organizations have built into their budget projections will not materialize.

The implications of CBO’s decision to include in its projections deficits that current law and established policy do not allow are enormous, as the graph below shows. If one excludes deficits in Social Security and Medicare Hospital Insurance that cannot occur under current law and established policy, the ratio of national debt to gross domestic product will fall rather than rise as CBO’s budget projections indicate. In other words, the claim that drastic cuts in government spending are necessary to avoid calamitous budget deficits is bogus.

It might seem puzzling that CBO, an agency known for its professionalism and scrupulous avoidance of political bias, would adopt a convention so at odds with law and policy. The answer is straightforward—Congress makes it do so. Section 257 of the Balanced Budget and Emergency Deficit Control Act of 1985 requires CBO to assume that the trust funds can spend money although legislation governing trust fund operations bars such expenditures. CBO is obeying the law. No similar explanation exonerates the statement of the Committee for a Responsible Federal Budget, which on August 25, 2015, cited, with approval, the conclusion that ‘debt continues to grow unsustainably,’ or that of the Bipartisan Policy Center, which wrote on the same day that ‘America’s debt continues to grow on an unsustainable path.’ Both statements are wrong.

To be sure, the dire budget future anticipated in the CBO projections could materialize. Large deficits could result from an economic calamity or war. Congress could abandon the principle that Social Security and Medicare Hospital Insurance should be financed within trust funds. It could enact other fiscally rash policies. But such deficits do not flow from current law or reflect the trust fund discipline endorsed by both parties over the last 80 years. And it is current law and policy that are supposed to underlie budget projections. Slashing spending because a thirty-year-old law requires CBO to assume that Congress will do something it has shown no sign of doing—overturn decades of bipartisan prudence requiring that the major social insurance programs spend only money specifically earmarked for them, and not a penny more—would impose enormous hardship on vulnerable populations in the name of a fiscal fantasy.

Editor's Note: This post originally appeared in Fortune Magazine. Authors Henry J. Aaron Publication: Fortune Magazine Image Source: © Jonathan Ernst / Reuters Full Article
Can taxing the rich reduce inequality? You bet it can! By webfeeds.brookings.edu Published On :: Tue, 27 Oct 2015 00:00:00 -0400

Two recently posted papers by Brookings colleagues purport to show that “even a large increase in the top marginal rate would barely reduce inequality.”[1] This conclusion, based on one commonly used measure of inequality, is an incomplete and misleading answer to the question posed: would a stand-alone increase in the top income tax bracket materially reduce inequality? More importantly, it is the wrong question to pose, as a stand-alone increase in the top bracket rate would be bad tax policy that would exacerbate tax avoidance incentives. Sensible tax policy would package that change with at least one other tax modification, and such a package would have an even more striking effect on income inequality.

In brief:
A stand-alone increase in the top tax bracket would be bad tax policy, but it would meaningfully increase the degree to which the tax system reduces economic inequality. It would have this effect even though it would fall on just ½ of 1 percent of all taxpayers and barely half of their income.
Tax policy significantly reduces inequality. But transfer payments and other spending reduce it far more. In combination, taxes and public spending materially offset the inequality generated by market income.
The revenue from a well-crafted increase in taxes on upper-income Americans, dedicated to a prudent expansion of public spending, would go far to counter the powerful forces that have made income inequality more extreme in the United States than in any other major developed economy.

[1] The quotation is from Peter R. Orszag, “Education and Taxes Can’t Reduce Inequality,” Bloomberg View, September 28, 2015 (at http://bv.ms/1KPJXtx). The two papers are William G. Gale, Melissa S. Kearney, and Peter R. Orszag, “Would a significant increase in the top income tax rate substantially alter income inequality?” September 28, 2015 (at http://brook.gs/1KK40IX) and “Raising the top tax rate would not do much to reduce overall income inequality–additional observations,” October 12, 2015 (at http://brook.gs/1WfXR2G).

Authors Henry J. Aaron Image Source: © Jonathan Ernst / Reuters Full Article
the Is the ACA in trouble? By webfeeds.brookings.edu Published On :: Tue, 24 Nov 2015 10:14:00 -0500 Editor's Note: This post originally appeared in InsideSources. The author wishes to thank Kevin Lucia for helpful comments and suggestions. United Health Care’s surprise announcement that it is considering whether to stop selling health insurance through the Affordable Care Act’s health exchanges in 2017 and is also pulling marketing and broker commissions in 2016 has health policy analysts scratching their heads. The announcement is particularly puzzling, as just a month ago, United issued a bullish announcement that it was planning to expand to 11 additional individual markets, taking its total to 34. United’s stated reason is that this business is unprofitable. That may be true, but it is odd that the largest health insurer in the nation would vacate a growing market without putting up a fight. Is United’s announcement seriously bad news for Obamacare, as many commentators have asserted? Is United seeking concessions in another area and using this announcement as a bargaining chip? Or, is something else going on? The answer, I believe, is that the announcement, while a bit of all of these things, is less significant than many suppose. To make sense of United’s actions, one has to understand certain peculiarities of United’s business model and some little-understood aspects of the Affordable Care Act. Most of United’s business consists of group sales of insurance through employers who offer plans to their employees as a fringe benefit. United has chosen not to sell insurance aggressively to individuals in most places and, where it does, not to offer the lowest-premium plans. In some states, it does not sell to individuals at all. In 49 states, insurers may sell plans either through the ACA health exchange or directly to customers outside the exchanges. The exceptions are Vermont and the District of Columbia in which individuals buying insurance must go through their exchanges. Thus, insurers may find that “good” risks—those with below-average use of health care—disproportionately buy directly, while the “poor” risks buy through the exchanges. State regulators must review insurance premiums to assure that they are reasonable and set other rules that insurers must follow. This process typically involves some negotiation. With varying skill and intensity, state insurance commissioners try to hold down prices. If they are too lax, buyers may be overcharged. If they are too aggressive, insurers may simply withdraw from the market, causing politically-unpopular inconvenience. These negotiations go on separately in 50 states and the District of Columbia each and every year. Finally, fewer people are now expected to buy insurance through the health exchanges than was expected a couple of years ago. ACA subsidies are modest for people with moderate incomes and the penalties for not carrying insurance have been small. Some people with modest incomes face high deductibles, high out-of-pocket costs, narrow networks of providers, or some mix of all three. As a result, some people who expected not to need much health care have chosen to ‘go bare’ and pay the modest penalties for not carrying insurance. 
What seems to have happened—one can’t be sure, as the United announcement is Delphic—is that the company, which mostly delayed its participation in the individual exchanges until 2015, incurred substantial start-up costs, enrolled few customers who turned out to be sicker than anticipated, and experienced more-than-anticipated attrition. Other insurers, including Blue-Cross/Blue-Shield plans nation-wide which hold a dominant position in individual markets in many states, did well enough so that Joseph Swedish, CEO of Anthem, Inc., one of the largest of the ‘Blues,’ announced that his company is firmly committed to the exchanges. But minor players in the individual market, such as United, may have concluded that the costs of developing that market are too high for the expected pay-off. In evaluating these diverse factors, one needs to recognize that the ACA, in general, and the health exchanges, in particular, have changed insurance markets in fundamental ways. Millions of people who were previously uninsured are now trying to understand the bewildering complexities of health insurance. Insurance companies have a lot to learn, too. The ACA now bars insurance companies from ‘underwriting’—the practice of varying premiums based on the characteristics of individual customers, something at which they were quite expert. Under the ACA, insurance companies must sell insurance to all comers, however sick they may be, and must charge premiums that can vary only based on age. Now, companies must ‘manage’ risk, which is easier for a company with a large market share of the individual market, as the Blues have in most states, than it is for a company like United with only a small share. What this means is that United’s announcement is regrettable news for those states from which they may decide to withdraw, as its departure would reduce competition. United might also use the threat of departure to negotiate favorable terms with states and the Administration. And it means that federal regulators need to write regulations to discourage individual customers from practices that unfairly saddle insurers with risks, such as buying insurance outside open-enrollment periods designed for exceptional circumstances and then dropping coverage a few months later. But it would be a mistake to treat United’s announcement, presumably made for good and sufficient business reasons, as a portentous omen of an ACA crisis. Authors Henry J. Aaron Publication: InsideSources Full Article
the 2016: The most important election since 1932 By webfeeds.brookings.edu Published On :: Fri, 18 Dec 2015 09:00:00 -0500 The 2016 presidential election confronts the U.S. electorate with political choices more fundamental than any since 1964 and possibly since 1932. That statement may strike some as hyperbolic, but the policy differences between the two major parties and the positions of candidates vying for their presidential nominations support this claim. A victorious Republican candidate would take office backed by a Republican-controlled Congress, possibly with heightened majorities and with the means to deliver on campaign promises. On the other hand, the coattails of a successful Democratic candidate might bring more Democrats to Congress, but that president would almost certainly have to work with a Republican House and, quite possibly, a still Republican Senate. The political wars would continue, but even a president engaged in continuous political trench warfare has the power to get a lot done. Candidates always promise more than they can deliver and often deliver different policies from those they have promised. Every recent president has been buffeted by external events unanticipated when he took office. But this year, more than in half a century or more, the two parties offer a choice, not an echo. Here is a partial and selective list of key issues to illustrate what is at stake. Health care The Affordable Care Act, known as Obamacare or the ACA, passed both houses of Congress with not a single Republican vote. The five years since enactment of the ACA have not dampened Republican opposition. The persistence and strength of opposition to the ACA is quite unlike post-enactment reactions to the Social Security Act of 1935 or the 1965 amendments that created Medicare. Both earlier programs were hotly debated and controversial. But a majority of both parties voted for the Social Security Act. A majority of House Republicans and a sizeable minority of Senate Republicans supported Medicare. In both cases, opponents not only became reconciled to the new laws but eventually participated in improving and extending them. Republican members of Congress overwhelmingly supported, and a Republican president endorsed, adding Disability Insurance to the Social Security Act. In 2003, a Republican president proposed and fought for the addition of a drug benefit to Medicare. The current situation bears no resemblance to those two situations. Five years after enactment of Obamacare, in contrast, every major candidate for the Republican presidential nomination has called for its repeal and replacement. So have the Republican Speaker of the House of Representatives and Majority Leader in the Senate. Just what 'repeal and replace' might look like under a GOP president remains unclear as ACA critics have not agreed on an alternative. Some plans would do away with some of the elements of Obamacare and scale back others. Some proposals would repeal the mandate that people carry insurance, the bar on 'medical underwriting' (a once-routine practice under which insurers vary premiums based on expected use of medical care), or the requirement that insurers sell plans to all potential customers. Other proposals would retain tax credits to help make insurance affordable but reduce their size, or would end rules specifying what 'adequate' insurance plans must cover. Repeal is hard to imagine if a Democrat wins the presidency in 2016. 
Even if repeal legislation could overcome a Senate filibuster, a Democratic president would likely veto it and an override would be improbable. But a compromise with horse-trading, once routine, might once again become possible. A Democratic president might agree to Republican-sponsored changes to the ACA, such as dropping the requirement that employers of 50 or more workers offer insurance to their employees, if Republicans agreed to changes in the ACA that supporters seek, such as the extension of tax credits to families now barred from them because one member has access to very costly employer-sponsored insurance. In sum, the 2016 election will determine the future of the most far-reaching social insurance legislation in half a century. Social Security Social Security faces a projected long-term gap between what it takes in and what it is scheduled to pay out. Every major Republican candidate has called for cutting benefits below those promised under current law. None has suggested any increase in payroll tax rates. Each Democratic candidate has proposed raising both revenues and benefits. Within those broad outlines, the specific proposals differ. Most Republican candidates would cut benefits across the board or selectively for high earners. For example, Senator Ted Cruz proposes to link benefits to prices rather than wages, a switch that would reduce Social Security benefits relative to current law by steadily larger amounts: an estimated 29 percent by 2065 and 46 percent by 2090. He would allow younger workers to shift payroll taxes to private accounts. Donald Trump has proposed no cuts in Social Security because, he says, proposing cuts is inconsistent with winning elections and because meeting current statutory commitments is 'honoring a deal.' Trump also favors letting people invest part of their payroll taxes in private securities. He has not explained how he would make up the funding gap that would result if current benefits are honored but revenues to support them are reduced. Senator Marco Rubio has endorsed general benefit cuts, but he has also proposed to increase the minimum benefit. Three Republican candidates have proposed ending payroll taxes for older workers, a step that would add to the projected funding gap. Democratic candidates, in contrast, would raise benefits, across-the-board or for selected groups—care givers or survivors. They would switch the price index used to adjust benefits for inflation to one that is tailored to consumption of the elderly and that analysts believe would raise benefits more rapidly than the index now in use. All would raise the ceiling on earnings subject to the payroll tax. Two would broaden the payroll tax base. As these examples indicate, the two parties have quite different visions for Social Security. Major changes, such as those envisioned by some Republican candidates, are not easily realized, however. Before he became president, Ronald Reagan in numerous speeches called for restructuring Social Security. Those statements did not stop him from signing a 1983 law that restored financial balance to the very program against which he had inveighed but with few structural changes. George W. Bush sought to partially privatize Social Security, to no avail. Now, however, Social Security faces a funding gap that must eventually be filled. The discipline of Trust Fund financing means that tax increases, benefit cuts, or some combination of the two are inescapable. 
Action may be delayed beyond the next presidency, as current projections indicate that the Social Security Trust Fund and current revenues can sustain scheduled benefits until the mid-2030s. But that is not what the candidates propose. Voters face a choice, clear and stark, between a Democratic president who would try to maintain or raise benefits and would increase payroll taxes to pay for it, and a Republican president who would seek to cut benefits, oppose tax increases, and might well try to partially privatize Social Security.

The Environment

On no other issue is the split between the two parties wider or the stakes in their disagreement higher than on measures to deal with global warming. Leading Republican candidates have denied that global warming is occurring (Trump), scorned evidence supporting the existence of global warming as bogus (Cruz), acknowledged that global warming is occurring but not because of human actions (Rubio, Carson), or admitted that it is occurring but dismissed it as not a pressing issue (Fiorina, Christie). Congressional Republicans oppose current Administration initiatives under the Clean Air Act to curb emission of greenhouse gases. Democratic candidates uniformly agree that global warming is occurring and that it results from human activities. They support measures to lower those emissions by amounts similar to those embraced in the Paris accords of December 2015 as essential to curb the speed and ultimate extent of global warming.

Climate scientists and economists are nearly unanimous that unabated emissions of greenhouse gases pose serious risks of devastating and destabilizing outcomes—that climbing average temperatures could render some parts of the world uninhabitable, that increases in sea levels will inundate coastal regions inhabited by tens of millions of people, and that storms, droughts, and other climatic events will be more frequent and more destructive. Immediate actions to curb emission of greenhouse gases can reduce these effects. But no actions can entirely avoid them, and delay is costly. Environmental economists also agree, with little partisan division, that the way to proceed is to harness market forces to reduce greenhouse gas emissions.

The division between the parties on global warming is not new. In 2009, the House of Representatives narrowly passed the American Clean Energy and Security Act. That bill would have capped and gradually lowered greenhouse gas emissions. Two hundred eleven Democrats but only eight Republicans voted for the bill. The Senate took no action, and the proposal died. Now Republicans are opposing the Obama administration’s Clean Power Plan, a set of regulations under the Clean Air Act to lower emissions by power plants, which account for 40 percent of the carbon dioxide released into the atmosphere. The Clean Power Plan is a stop-gap measure. It applies only to power plants, not to other sources of emissions, and it is not nationally uniform. These shortcomings reflect the legislative authority on which the plan is based, the Clean Air Act. That law was designed to curb the local problem of air pollution, not the global damage from greenhouse gases. Environmental economists of both parties recognize that a tax or a cap on greenhouse gas emissions would be more effective and less costly than the current regulations, but superior alternatives are now politically unreachable.
Based on their statements, any of the current leading Republican candidates would back away from the recently negotiated Paris climate agreement, scuttle the Clean Power Plan, and resist any tax on greenhouse gas emissions. Any of the Democratic candidates would adhere to the Clean Power Plan and support the Paris climate agreement. One Democratic candidate has embraced a carbon tax. None has called for the extension of the Clean Power Plan to other emission sources, but such policies are consistent with their current statements. The importance of global policy to curb greenhouse gas emissions is difficult to exaggerate. While the United States acting alone cannot entirely solve the problem, resolute action by the world’s largest economy and second largest greenhouse gas emitter is essential, in concert with other nations, to forestall climate catastrophe. The Courts If the next president serves two terms, as six of the last nine presidents have done, four currently sitting justices will be over age 86 and one over age 90 by the time that presidency ends—provided that they have not died or resigned. The political views of the president have always shaped presidential choices regarding judicial appointments. As all carry life-time tenure, these appointments influence events long after the president has left office. The political importance of these appointments has always been enormous, but it is even greater now than in the past. One reason is that the jurisprudence of sitting Supreme Court justices now lines up more closely than in the past with that of the party of the president who appointed them. Republican presidents appointed all sitting justices identified as conservative; Democratic presidents appointed all sitting justices identified as liberal. The influence of the president’s politics extends to other judicial appointments as well. A second reason is that recent judicial decisions have re-opened decisions once regarded as settled. The decision in the first case dealing with the Affordable Care Act (ACA), NFIB v. Sibelius is illustrative. When the ACA was enacted, few observers doubted the power of the federal government to require people to carry health insurance. That power was based on a long line of decisions, dating back to the 1930s, under the Constitutional clause authorizing the federal government to regulate interstate commerce. In the 1930s, the Supreme Court rejected an older doctrine that had barred such regulations. The earlier doctrine dated from 1905 when the Court overturned a New York law that prohibited bakers from working more than 10 hours a day or 60 hours a week. The Court found in the 14th Amendment, which prohibits any state from ‘depriving any person of life, liberty or property, without due process of law,’ a right to contract previously invisible to jurists which it said the New York law violated. In the early- and mid-1930s, the Court used this doctrine to invalidate some New Deal legislation. Then the Court changed course and authorized a vast range of regulations under the Constitution’s Commerce Clause. It was on this line of cases that supporters of the ACA relied. Nor did many observers doubt the power of Congress to require states to broaden Medicaid coverage as a condition for remaining in the Medicaid program and receiving federal matching grants to help them pay for required medical services. To the surprise of most legal scholars, a 5-4 Supreme Court majority ruled in NFIB v. 
Sibelius that the Commerce Clause did not authorize the individual health insurance mandate. But it decided, also 5 to 4, that tax penalties could be imposed on those who fail to carry insurance. The tax saved the mandate. But the decision also raised questions about federal powers under the Commerce Clause. The Court also ruled that the Constitution barred the federal government from requiring states to expand Medicaid coverage as a condition for remaining in the program. This decision was odd, in that Congress certainly could constitutionally have achieved the same objective by repealing the old Medicaid program and enacting a new Medicaid program with the same rules as those contained in the ACA that states would have been free to join or not. NFIB v. Sibelius and other cases the Court has recently heard or soon will hear raise questions about what additional attempts to regulate interstate commerce might be ruled unconstitutional and about what limits the Court might impose on Congress’s power to require states to implement legislated rules as a condition of receiving federal financial aid. The Court has also heard, or soon will hear, a series of cases of fundamental importance regarding campaign financing, same-sex marriage, affirmative action, abortion rights, the death penalty, the delegation of powers to federal regulatory agencies, voting rights, and rules under which people can seek redress in the courts for violation of their rights. Throughout U.S. history, the American people have granted nine appointed judges the power to decide whether the actions taken by elected legislators are or are not consistent with a constitution written more than two centuries ago. As a practical matter, the Court could not maintain this sway if it deviated too far from public opinion. But the boundaries within which the Court has substantially unfettered discretion are wide, and within those limits the Supreme Court can profoundly limit or redirect the scope of legislative authority. The Supreme Court’s switch in the 1930s from doctrines under which much of the New Deal was found to be unconstitutional to other doctrines under which it was constitutional illustrates the Court’s sensitivity to public opinion and the profound influence of its decisions. The bottom line is that the next president will likely appoint enough Supreme Court justices and other judges to shape the character of the Supreme Court and of lower courts with ramifications both broad and enduring on important aspects of every person’s life. *** The next president will preside over critical decisions relating to health care policy, Social Security, and environmental policy, and will shape the character of the Supreme Court for the next generation. Profound differences distinguish the two major parties on these and many other issues. A recent survey of members of the House of Representatives found that on a scale of ‘liberal to conservative’ the most conservative Democrat was more liberal than the least conservative Republican. Whatever their source, these divisions are real. The examples cited here are sufficient to show that the 2016 election richly merits the overworked term 'watershed'—it will be the most consequential presidential election in a very long time. Authors Henry J. Aaron Full Article
The impossible (pipe) dream—single-payer health reform By webfeeds.brookings.edu Published On :: Tue, 26 Jan 2016 08:38:00 -0500 Led by presidential candidate Bernie Sanders, one-time supporters of ‘single-payer’ health reform are rekindling their romance with a health reform idea that was, is, and will remain a dream. Single-payer health reform is a dream because, as the old joke goes, ‘you can’t get there from here.’ Let’s be clear: opposing a proposal only because one believes it cannot be passed is usually a dodge. One should judge the merits. Strong leaders prove their skill by persuading people to embrace their visions. But single-payer is different. It is radical in a way that no legislation has ever been in the United States. Not so, you may be thinking. Remember such transformative laws as the Social Security Act, Medicare, the Homestead Act, and the Interstate Highway Act. And, yes, remember the Affordable Care Act. Those and many other inspired legislative acts seemed revolutionary enough at the time. But none really was. None overturned entrenched and valued contractual and legislative arrangements. None reshuffled trillions—or in less inflated days, billions—of dollars devoted to the same general purpose as the new legislation. All either extended services previously available to only a few, or created wholly new arrangements. To understand the difference between those past achievements and the idea of replacing current health insurance arrangements with a single-payer system, compare the Affordable Care Act with Sanders’ single-payer proposal. Criticized by some for alleged radicalism, the ACA is actually stunningly incremental. Most of the ACA’s expanded coverage comes through extension of Medicaid, an existing public program that serves more than 60 million people. The rest comes through purchase of private insurance in “exchanges,” which embody the conservative ideal of a market that promotes competition among private vendors, or through regulations that extended the ability of adult offspring to remain covered under parental plans. The ACA minimally altered insurance coverage for the 170 million people covered through employment-based health insurance. The ACA added a few small benefits to Medicare but left it otherwise untouched. It left unaltered the tax breaks that support group insurance coverage for most working-age Americans and their families. It also left alone the military health programs serving 14 million people. Private nonprofit and for-profit hospitals, other vendors, and privately employed professionals continue to deliver most care. In contrast, Senator Sanders’ plan, like the earlier proposal sponsored by Representative John Conyers (D-Michigan), which Sanders co-sponsored, would scrap all of those arrangements. Instead, people would simply go to the medical care provider of their choice and bills would be paid from a national trust fund. That sounds simple and attractive, but it raises vexatious questions. How much would it cost the federal government? Where would the money to cover the costs come from? What would happen to the $700 billion that employers now spend on health insurance? Where would the $600 billion a year in reductions in total health spending that Sanders says his plan would generate come from? What would happen to special facilities for veterans and families of members of the armed services? Sanders has answers for some of these questions, but not for others. 
Both the answers and non-answers show why single payer is unlike past major social legislation. The answer to the question of how much single payer would cost the federal government is simple: $4.1 trillion a year, or $1.4 trillion more than the federal government now spends on programs that the Sanders plan would replace. The money would come from new taxes. Half the added revenue would come from doubling the payroll tax that employers now pay for Social Security. This tax approximates what employers now collectively spend on health insurance for their employees... if they provide health insurance. But many don’t. Some employers would face large tax increases. Others would reap windfall gains. The cost question is particularly knotty, as Sanders assumes a 20 percent cut in spending averaged over ten years, even as roughly 30 million currently uninsured people would gain coverage. Those savings, even if actually realized, would start slowly, which means cuts of 30 percent or more by Year 10. Where would they come from? Savings from reduced red tape associated with individual insurance would cover a small fraction of this target. The major source would have to be fewer services or reduced prices. Who would determine which of the services that physicians regard as desirable, and that patients have come to expect, are no longer ‘needed’? How would those cuts be achieved without the massive hospital bankruptcies that, as columnist Ezra Klein has suggested, would follow such spending cuts? What would be the reaction to the prospect of drastic cuts in the salaries of health care personnel? Would we have a shortage of doctors and nurses? Would patients tolerate a reduction in services? If people thought that services under the Sanders plan were inadequate, would they be allowed to ‘top up’ with private insurance? If so, what happens to simplicity? If not, why not? Let me be clear: we know that high-quality health care can be delivered at much lower cost than is the U.S. norm. We know because other countries do it. In fact, some of them have plans not unlike the one Senator Sanders is proposing. We know that single-payer mechanisms work in some countries. But those systems evolved over decades, based on gradual and incremental change from what existed before. That is the way that public policy is made in democracies. Radical change may occur after a catastrophic economic collapse or a major war. But in normal times, democracies do not tolerate radical discontinuity. If you doubt me, consider the tumult precipitated by the really quite conservative Affordable Care Act. Editor's note: This piece originally appeared in Newsweek. Authors Henry J. Aaron Publication: Newsweek Image Source: © Jim Young / Reuters Full Article
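One way to see why back-loaded savings bite so hard in the final year is with a short illustration. The sketch below is not from the article; it assumes a simple linear phase-in, chosen only to show that savings averaging 20 percent over ten years, if they start slowly, require cuts well above 30 percent by Year 10.

```python
# A minimal sketch, assuming a linear phase-in of savings. It only illustrates
# why savings that average 20 percent over ten years but "start slowly" imply
# cuts of roughly 30 percent or more by Year 10; it is not Sanders' schedule.

final_year_cut = 0.364                               # assumed Year-10 cut
cuts = [final_year_cut * year / 10 for year in range(1, 11)]

print(f"ten-year average cut: {sum(cuts) / len(cuts):.1%}")   # ~20%
print(f"cut needed in Year 10: {cuts[-1]:.1%}")               # ~36%
```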
the How to fix the backlog of disability claims By webfeeds.brookings.edu Published On :: Tue, 01 Mar 2016 08:31:00 -0500 The American people deserve to have a federal government that is both responsive and effective. That simply isn’t the case for more than 1 million people who are awaiting the adjudication of their applications for disability benefits from the Social Security Administration. Washington can and must do better. This gridlock harms applicants either by depriving them of much-needed support or effectively barring them from work while their cases are resolved because having any significant earnings would immediately render them ineligible. This is unacceptable. Within the next month, the Government Accountability Office, the nonpartisan congressional watchdog, will launch a study on the issue. More policymakers should follow GAO’s lead. A solution to this problem is long overdue. Here’s how the government can do it. Congress does not need to look far for an example of how to reduce the SSA backlog. In 2013, the Veterans Administration cut its 600,000-case backlog by 84 percent and reduced waiting times by nearly two-thirds, all within two years. It’s an impressive result. Why have federal officials dealt aggressively and effectively with that backlog, but not the one at SSA? One obvious answer is that the American people and their representatives recognize a debt to those who served in the armed forces. Allowing veterans to languish while a sluggish bureaucracy dithers is unconscionable. Public and congressional outrage helped light a fire under the bureaucracy. Administrators improved services the old-fashioned way — more staff time. VA employees had to work at least 20 hours overtime per month. Things are a bit more complicated at SSA, unfortunately. Roughly three quarters of applicants for disability benefits have their cases decided within about nine months and, if denied, decide not to appeal. But those whose applications are denied are legally entitled to ask for a hearing before an administrative law judge — and that is where the real bottleneck begins. There are too few ALJs to hear the cases. Even in the best of times, maintaining an adequate cadre of ALJs is difficult because normal attrition means that SSA has to hire at least 100 ALJs a year to stay even. When unemployment increases, however, so does the number of applications for disability benefits. After exhausting unemployment benefits, people who believe they are impaired often turn to the disability programs. So, when the Great Recession hit, SSA knew it had to hire many more ALJs. It tried to do so, but SSA cannot act without the help of the Office of Personnel Management, which must provide lists of qualified candidates before agencies can hire them. SSA employs 85 percent of all ALJs and for several years has paid OPM approximately $2 million annually to administer the requisite tests and interviews to establish a register of qualified candidates. Nonetheless, OPM has persistently refused to employ legally trained people to vet ALJ candidates or to update registers. And when SSA sought to ramp up ALJ hiring to cope with the recession challenge, OPM was slow to respond. In 2009, for example, OPM promised to supply a new register containing names of ALJ candidates. Five years passed before it actually delivered the new list of names. For a time, the number of ALJs deciding cases actually fell. 
The situation got so bad that the president’s January 2015 budget created a work group headed by the Office of Management and Budget and the Administrative Conference of the United States to try to break the logjam. OPM promised a list for 2015, but insisted it could not change procedures. Not trusting OPM to mend its ways, Congress in October 2015 enacted legislation that explicitly required OPM to administer a new round of tests within the succeeding six months. These stopgap measures are inadequate to the challenge. Both applicants and taxpayers deserve prompt adjudication of the merits of claims. The million-person backlog and the two-year average waits are bad enough. Many applicants wait far longer. Meanwhile, they are strongly discouraged from working, as anything more than minimal earnings will cause their applications automatically to be denied. Throughout this waiting period, applicants have no means of self-support. Any skills applicants retain atrophy. The shortage of ALJs is not the only problem. The quality and consistency of adjudication by some ALJs have been called into question. For example, differences in approval rates are so large that differences among applicants cannot plausibly explain them. Some ALJs have processed so many cases that they could not possibly have applied proper standards. In recognition of both problems, SSA has increased oversight and beefed up training. The numbers have improved. But large and troubling variations in workloads and approval rates persist. For now, political polarization blocks agreement on whether and how to modify eligibility rules and improve incentives to encourage work by those able to work. But there is bipartisan agreement that dragging out the application process benefits no one. While completely eliminating hearing delays is impossible, adequate administrative funding and more, better-trained hearing officers would help reduce them. Even if OPM’s past record were better than it is, OPM is now a beleaguered agency, struggling to cope with the fallout from a security breach that jeopardizes national security and the privacy of millions of current and past federal employees and federal contractors. Mending this breach and establishing new procedures will — and should — be OPM’s top priority. That’s why, for the sake of everyone concerned, responsibility for screening candidates for administrative law judge positions should be moved, at least temporarily, to another agency, such as the Administrative Conference of the United States. Shortening the period that applicants for disability benefits now spend waiting for a final answer is an achievable goal that can and should be addressed. Our nation’s disabled and its taxpayers deserve better. Editor's note: This piece originally appeared in Politico. Authors Henry J. Aaron and Lanhee Chen Publication: Politico Full Article
the The stunning ignorance of Trump's health care plan By webfeeds.brookings.edu Published On :: Mon, 07 Mar 2016 16:32:00 -0500 One cannot help feeling a bit silly taking seriously the policy proposals of a person who seems not to take policy seriously himself. Donald Trump's policy positions have evolved faster over the years than a teenager's moods. He was for a woman's right to choose; now he is against it. He was for a wealth tax to pay off the national debt before proposing a tax plan that would enrich the wealthy and balloon the national debt. He was for universal health care but opposed to any practical way to achieve it. Based on his previous flexibility, Trump's here-today proposals may well be gone tomorrow. As a sometime-Democrat, sometime-Republican, sometime-independent, who is now the leading candidate for the Republican presidential nomination, Trump has just issued his latest pronouncements on health care policy. So, what the hell, let's give them more respect than he has given his own past policy statements. Perhaps unsurprisingly, those earlier pronouncements are notable for their detachment from fact and lack of internal logic. The one-time supporter of universal health care now joins other candidates in his newly-embraced party in calling for repeal of the only serious legislative attempt in American history to move toward universal coverage, the Affordable Care Act. Among his stated reasons for repeal, he alleges that the act has "resulted in runaway costs," promoted health care rationing, reduced competition and narrowed choice. Each of these statements is clearly and demonstrably false. Health care spending per person has grown less rapidly in the six years since the Affordable Care Act was enacted than in any corresponding period in the last four decades. There is now less health care rationing than at any time in living memory, if the term rationing includes denial of care because it is unaffordable. Rationing because of unaffordability is certainly down for the more than 20 million people who are newly insured because of the Affordable Care Act. Hospital re-admissions, a standard indicator of low quality, are down, and the health care exchanges that Trump now says he would abolish, but that resemble the "health marts" he once espoused, have brought more choice to individual shoppers than private employers now offer or ever offered their workers. Trump's proposed alternative to the Affordable Care Act is even worse than his criticism of it. He would retain the highly popular provision in the act that bars insurance companies from denying people coverage because of preexisting conditions, a practice all too common in the years before the health care law. But he would do away with two other provisions of the Affordable Care Act that are essential to make that reform sustainable: the mandate that people carry insurance and the financial assistance to make that requirement feasible for people of modest means. Without those last two provisions, barring insurers from using preexisting conditions to jack up premiums or deny coverage would destroy the insurance market. Why? Because without the mandate and the financial aid, people would have powerful financial incentives to wait until they were seriously ill to buy insurance. They could safely do so, confident that some insurer would have to sell them coverage as soon as they became ill. Insurers that set affordable prices would go broke. If insurers set prices high enough to cover costs, few customers could afford them. 
In simple terms, Trump's promise to bar insurers from using preexisting conditions to screen customers but simultaneously to scrap the companion provisions that make the bar feasible is either the fraudulent offer of a huckster who takes voters for fools, or clear evidence of stunning ignorance about how insurance works. Take your pick. Unfortunately, none of the other Republican candidates offers a plan demonstrably superior to Trump's. All begin by calling for repeal and replacement of the Affordable Care Act. But none has yet advanced a well-crafted replacement. It is not that the Affordable Care Act is perfect legislation. It isn't. But, as the old saying goes, you can't beat something with nothing. And so far as health care reform is concerned, nothing is what the Republican candidates now have on offer. Editor's note: This piece originally appeared in U.S. News and World Report. Authors Henry J. Aaron Publication: U.S. News and World Report Image Source: © Lucy Nicholson / Reuters Full Article
the Can the center hold? By webfeeds.brookings.edu Published On :: Mon, 18 Apr 2016 11:05:00 -0400 The first stanza of William Butler Yeats much quoted poem, The Second Coming, contains the words: ‘Things fall apart, the center cannot hold.... The best lack all conviction, While the worst are full of passionate intensity.’ It is unclear whether these words, penned in 1919 referred only to the Irish war of independence or somehow expressed a prescient vision of what Yeats called ‘the blood-dimmed tide’ that would soon engulf Europe. But there can be little doubt that these words eerily convey the tone and content of much that passes today for political speech in the United States. Why are things falling apart? Why are so many Americans rejecting those in both parties whom they have trusted in the past to lead them? Why are they turning to rebels and outsiders so disturbingly full of passionate intensity? I believe that the answer resides in three identifiable strands in recent history, largely separate but temporally linked. One is a belief that traditional elites whom the public has long trusted to lead them lack the will and the capacity to act in the nation’s best interest. The second is a series of economic developments that have fallen with particular severity on those Americans with less-than-college education. The third is a shift in values and norms of behavior that have liberated many but that threaten others and are at war with deeply held convictions of many. Chasm-like differences in values separate people with shared economic interests. Ordinarily, blunders by those in power cause voters to switch allegiance from one set of leadership elites to another with a more appealing agenda. Successful candidates have long run against Washington, often from state governorships, but never in rebellion against the core ideas of their parties. The debate in both parties is different this year. The insurgent in the Democratic primaries, a long-serving Senator, is tapping into anger among many Democrats who believe that party leaders have been too willing to compromise on ideas to which the party faithful are devoted but that party leaders regard as dubious policy (protectionism), impracticable (single-payer health reform), or both (highly progressive taxes). The debates among the Republican candidates are redolent with something more visceral—fear, anger, and sadness that, as they see it, the fundamentals that define American life are in mortal jeopardy. Republican primary voters have turned to candidates who promise an end to compromise with and even civility toward those whose policies and values they reject. The decline of trust in elected officials is stunning and crosses party lines. In 1964, 77 percent of Americans trusted the federal government to do what is right always or most of the time. And with good reason. The administration of Franklin Delano Roosevelt had struggled mightily, with mixed results to be sure but always with irrepressible confidence, to restore prosperity after the Great Depression. The federal government—the president and Congress acting jointly—had organized the nation to fight and win the largest and bloodiest war in world history. A quarter century of rapid economic growth followed the war. Incomes of all economic groups increased. Success fostered trust. The two major parties differed, of course, often bitterly, exemplified by the Red Scare and McCarthyism of the 1940s and 1950s. 
But the range of views within each party far exceeded the average difference between them. Conservative, segregationist, and anti-union Democrats of the South had little other than a party label in common with liberal, integrationist, and pro-union Democrats of the North and West. A gap only slightly narrower separated the internationalist, ‘modern’ Republicans led by Dwight Eisenhower, Henry Cabot Lodge, and Arthur Vandenberg from the conservative, isolationist Republicans represented by Robert Taft and John Bricker. The Republican party encompassed similarly wide differences as recently as the administration of Ronald Reagan, seen incorrectly by many as ideologically unified. In order to succeed, aspirants for party leadership had to master the art of compromise. Party standard-bearers for whom intra-party political bargaining and compromise were second nature found it natural to apply those same skills in inter-party dealings. In the glow of post-World War II America, few recognized how unusual it was for Americans to have confidence in the efficacy of the federal government. The founding fathers deeply distrusted centralized power. They divided authority among three branches of government expressly to frustrate the exercise of such power. They reserved to the states all powers other than those the Constitution explicitly granted to the central government. The first decades in the life of the new nation saw repeated and sometimes violent resistance to actions of the national government, culminating in the Civil War, the bloodiest war in our history. Erosion of the post-World War II interlude began in earnest with the Vietnam War and Watergate. Then the economy turned sour, buffeted by the first OPEC ‘oil shock’ and the recession that followed. Growth of productivity slowed. So did growth of per-worker earnings. Inequality, which had fallen for more than four decades, began to increase. Faith in the federal government rebounded during the Reagan administration, in part and paradoxically because Reagan appealed to the abiding distrust of Washington. It fell again toward the end of the eighties, but recovered briefly in the 1990s following the well-managed, ‘good war’ against Iraq and the only decade since the 1960s during which incomes grew across the entire income distribution. Trust in government reached a high of 60 percent in October 2001, one month after 9/11. Then, based on inaccurate information or downright lies about weapons of mass destruction by its leaders, the United States invaded Iraq. Thousands of soldiers died, tens of thousands were wounded, and trillions of dollars were spent. When America withdrew, chaos ensued. It is not hard to understand why voters would bitterly blame elites for the self-inflicted wounds from a misbegotten war. On the home front, blinkered or feckless elites were blind to the emerging real-estate bubble, to rampant financial mismanagement, and to plain fraud, practiced not only by get-rich-quick financial scammers but also by their complicit customers. In 2007 and 2008, the financial system teetered and nearly collapsed. Economic chaos ensued. Elites suffered sharp losses, but regained most of those losses during a recovery in which the top few percent of the income and wealth distribution enjoyed most of the gains. Public policy shored up the financial system, a move that doubtless saved Main Street as well. It also supported incomes of the middle class through such government programs as Unemployment Insurance and food assistance. 
But relief for the financial sector struck those suffering unemployment, foreclosures, and vanishing home equity as evidence of cozy collusion between policy-makers of both parties and the plutocrats who caused mass suffering and epidemic insecurity. The U.S. economy has since recovered better than those of most other developed nations. It has done so despite prematurely restrictive fiscal policy, adopted before recovery was well advanced, out of a bizarre belief that imagined future problems from future budget deficits posed a greater threat to the nation than did current mass unemployment. Average earnings, stagnant for four decades, remained flat. Earnings of workers with less-than-college education actually fell. Expansion of such government programs as the earned income tax credit and Medicaid offset such losses to a degree. But they are a poor substitute for the across-the-board income growth of the post-World War II decades. And they have done little or nothing to offset forces, including the decline of unions and competition from low-wage workers abroad, that have hammered earnings of low-skilled workers. Can one be surprised that by 2015 the fraction of Americans who said that the federal government will do the right thing always or most of the time had fallen to 26 percent among Democrats and to a dismal 11 percent among Republicans? A dispassionate outsider might point out that the United States remains an island of stability to which millions around the world flock for refuge and opportunity and that the U.S. economy is still stronger than that of any other developed nation. But that same dispassionate observer could also note that social and economic mobility, never as great as popular myth supposed, had fallen well below that in other nations and that U.S. economic inequality surpassed that of any other developed nation. With a cold eye, that observer might well conclude that the dyspeptic majorities in both parties have reason to reject leaders who failed them so often and so catastrophically. Although anger at the objective failures of leadership elites has a solid rational basis, rational anger cannot fully explain the emotional intensity of alienation among large swaths of the American population. To understand that depth of feeling, it is necessary to recognize that shifts in values, sex roles, and civil rights—changes that have enhanced the lives of most Americans—have also eroded the objective condition and subjective sense of security, status, and well-being of many of our fellow citizens. Women, summoned from domesticity to factory and office jobs during World War II, returned to birth the Baby Boom. When that was done, they began an inexorable march back to paid work. At first they were confined to such ‘appropriate’ occupations as teachers, secretaries, and nurses—career ghettos with short job ladders and low ceilings. A succession of rebellions against such limits became a massive civil rights revolution, spawning exhilarating opportunities for half of the population. The flood of women into the labor force and into occupations from which they had largely been excluded was a boon not just for them but also for U.S. economic capacity. 
It was, however, a decidedly mixed blessing for many men—for those working men who lost monopoly possession of many occupations, for married men threatened more by the erosion of economic dominance within the family than appreciative of added income from empowered economic partners, and for single men who found themselves devalued as potential ‘husband-providers.’ For African Americans, the Emancipation Proclamation ended legal slavery, but not repression. Official policy—federal, state, and local—and private collusion perpetuated subjugation well into the 20th century. Litigation and direct political action eventually curbed those practices, albeit slowly, painfully, and incompletely. Here too, there were gains and losses... gains for African Americans and other people of color, whose rights to live and work where they wanted expanded, and gains for the nation as a whole, which benefitted from an expanded pool of talent and from the first steps in expiating opprobrious behavior toward fellow citizens. Again, not everyone gained. Some have had to confront new economic competition. Some, rightly or wrongly, have seen affirmative action as depriving them of access to services once exclusively theirs. Others react against favoritism even toward groups long egregiously disfavored. And still other whites, lacking wealth or status, lost the unpriced yet priceless satisfaction of feeling superior to others. As women and people of color entered occupations from which they had long been excluded, technical change and competition from abroad eroded the base of well-paid jobs for those with comparatively little education. Unionized jobs disappeared, as did the extra earnings and fringe benefits that unions extracted from resistant employers. White men without college degrees and the women who were their partners no longer could count on rising wages and the improved status that comes with seniority in career jobs. The toll was not only economic but physical. While life expectancies of middle- and upper-income men and women rose sharply, life expectancies of lower-income women fell and those of lower-income men barely increased because of drug use, depression, and other self-destructive personal behaviors. An upheaval in social norms and values accompanied these marketplace developments. The contraceptive revolution weakened the link of sex to marriage. Cohabitation, once known as ‘living in sin,’ became a normal precursor or alternative to marriage—the ‘first union’ for 70 percent of women with less-than-college education. Women increasingly came to bear children as single mothers and to do so without shame, or with much less of it than in the past. Homosexuality, formerly regarded as abnormal at best and criminal at worst, emerged from the shadows to become generally, if not universally, accepted. White males, once economically, culturally, and politically dominant, saw one area of ascendancy after another slipping from their control, as women achieved economic and sexual independence and as people with skins darker than theirs emerged from the social and economic shadows. Demographers heralded the imminent emergence of a majority-minority nation. The idea of white ascendancy, if not superiority, morphed from accepted truth into anachronistic myth. These three forces—bald failures of leadership, changes in the relative standing of races and sexes, and upheavals in accepted values—explain the moods within each political party. 
The weights attached to each of these forces vary across the political spectrum. Bernie Sanders cites growing economic inequality, favoritism toward the rich, and past foreign policy blunders. Donald Trump exploits resentment, particularly that of white males with little education, with scattershot attacks on virtually every other group he can find and indicts leaders for what he sees as current as well as past foreign policy mistakes. Ted Cruz unabashedly asks voters in a nation founded on religious tolerance to allow immigration only of Christians, at least for now. The electorate will choose a new president and new legislators a few months hence. That election will determine who is president and who serves in the House and Senate. But it will not remove the forces that have caused so many to scorn leaders they once trusted. The center may hold once again. But if it does, it will do so tenuously, and it will be on probation. Editor's note: This piece originally appeared in The Huffington Post. Authors Henry J. Aaron Publication: The Huffington Post Image Source: © Reuters Photographer / Reuters Full Article
Disability insurance: The Way Forward By webfeeds.brookings.edu Published On :: Wed, 27 Apr 2016 08:30:00 -0400 Editor’s note: The remarks below were delivered to the Committee for a Responsible Federal Budget on release of their report on the SSDI Solutions Initiative. I want to thank Marc Goldwein for inviting me to join you for today’s event. We all owe thanks to Jim McCrery and Earl Pomeroy for devoting themselves to the SSDI Solutions Initiative, to the staff of CRFB who backed them up, and most of all to the scholars and practitioners who wrote the many papers that comprise this effort. This is the sort of practical, problem-solving enterprise that this town needs more of. So, to all involved in this effort, ‘hats off’ and ‘please, don’t stop now.’ The challenge of improving how public policy helps people with disabilities seemed urgent last year. Depletion of the Social Security Disability Insurance trust fund loomed. Fears of exploding DI benefit rolls were widespread and intense. Congress has now taken steps that delay projected depletion until 2022. Meticulous work by Jeffrey Liebman suggests that Disability Insurance rolls have peaked and will start falling. The Technical Panel appointed by the Social Security Advisory Board concurred in its 2015 report. With such ‘good’ news, it is all too easy to let attention drift to other seemingly more pressing items. But trust fund depletion and growing beneficiary rolls are not the most important reasons why policymakers should be focusing on these programs. The primary reason is that the design and administration of disability programs can be improved with benefit to taxpayers and to people with disabilities alike. And while 2022 seems a long time off, doing the research called for in the SSDI Solutions Initiative will take all of that time and more. So, it is time to get to work, not to relax. Before going any further, I must make a disclaimer. I was invited to talk here as chair of the Social Security Advisory Board. Everything I am going to say from now on will reflect only my personal views, not those of the other members or staff of the SSAB except where the Board has spoken as a group. The same disclaimer applies to the trustees, officers, and other staff of the Brookings Institution. Blame me, not them. Let me start with an analogy. We economists like indices. Years ago, the late Arthur Okun came up with an index to measure how much pain the economy was inflicting on people. It was a simple index, just the sum of inflation and the unemployment rate. Okun called it the ‘misery index.’ I suggest a ‘policy misery index’—a measure of the grief that a policy problem causes us. It is the sum of a problem’s importance and difficulty. Never mind that neither ‘importance’ nor ‘difficulty’ is quantifiable. Designing and administering interventions intended to improve the lives of people with disabilities has to be at or near the top of the policy misery index. Those who have worked on disability know what I mean. Programs for people with disabilities are hugely important and miserably hard to design and administer well. That would be true even if legislators were writing afresh on a blank legislative sheet. That they must cope with a deeply entrenched program about which analysts disagree and on which many people depend makes the problems many times more challenging. I’m going to run through some of the reasons why designing and administering benefits for people determined to be disabled is so difficult. 
Some may be obvious, even banal, to the highly informed group here today. And you will doubtless think of reasons I omit. First, the concept of disability, in the sense of a diminished capacity to work, has no clear meaning, the SSA definition of disability notwithstanding. We can define impairments. Some are so severe that work or, indeed, any other form of self-support seems impossible. But even among those with severe impairments, some people work for pay, and some don’t. That doesn’t mean that if someone with a given impairment works, everyone with that same impairment could work if they tried hard enough. It means that physical or mental impairments incompletely identify those for whom work is not a reasonable expectation. The possibility of work depends on the availability of jobs, of services to support work effort, and of a host of personal characteristics, including functional capacities, intelligence, and grit. That is not how the current disability determination process works. It considers the availability of jobs in the national, not the local, economy. It ignores the availability of work supports or accommodations by potential employers. Whatever eligibility criteria one may establish for benefits, some people who really can’t work, or can’t earn enough to support themselves, will be denied benefits. And some will be awarded benefits who could work. Good program design helps keep those numbers down. Good administration helps at least as much as, and maybe more than, program design. But there is no way to reduce the number of improper awards and improper denials to zero. Second, the causes of disability are many and varied. Again, this observation is obvious, almost banal. Genetic inheritance, accidents and injuries, wear and tear from hard physical labor, and normal aging all create different needs for assistance. These facts mean that people deemed unable to work have different needs. They constitute distinct interest groups, each seeking support, but not necessarily of the same kind. These groups sometimes compete with each other for always-limited resources. And that competition means that the politics of disability benefits are, shall we say, interesting. Third, the design of programs to help people deemed unable to work is important and difficult. Moral hazard is endemic. Providing needed support and services is an act of compassion and decency. The goal is to provide such support and services while preserving incentives to work and controlling the costs borne by taxpayers. But preserving work incentives is only part of the challenge. The capacity to work is continuous, not binary. Training and a wide and diverse range of services can help people perform activities of daily living and work. Because resources are scarce, policy makers and administrators have to sort out who should get those services. Should it be those who are neediest? Those who are most likely to recover full capacities? Triage is inescapable. It is technically difficult. And it is always ethically fraught. Designing disability benefit programs is hard. But administering them well is just as important and at least as difficult. These statements may also be obvious to those who are here today. But recent legislation and administrative appropriations raise doubts about whether they are obvious to or accepted by some members of Congress. Let’s start with program design. We can all agree, I think, that incentives matter. 
If benefits ceased at the first dollar earned, few who come on the rolls would ever try to work. So, Congress, for many years, has allowed beneficiaries to earn any amount for a brief period and small amounts indefinitely without losing eligibility. Under current law, there is a benefit cliff. If—after a trial work period—beneficiaries earn even $1 more than what is called substantial gainful activity, $1,130 in 2016, their benefit checks stop. They retain eligibility for health coverage for a while even after they leave the rolls. And for an extended period they may regain cash and health benefits without delay if their earnings decline. Members of Congress have long been interested in whether a more gradual phase-out of benefits as earnings rise might encourage work. Various aspects of the current Disability Insurance program reflect Congress’s desire to encourage work. The so-called Benefit Offset National Demonstration—or BOND—was designed to test the impact on the labor supply of DI beneficiaries of one formula—replacing the “cliff” with a gradual reduction in benefits: $1 of benefit lost for each $2 of earnings above the Substantial Gainful Activity level. Alas, there were problems with that demonstration. It tested only one offset scenario: one starting point and one rate. So, there could be no way of knowing whether a 2-for-1 offset was the best way to encourage work. And then there was the uncomfortable fact that, at the time of the last evaluation, out of 79,440 study participants only 21 experienced the offset. So there was no way of telling much of anything, other than that few people had worked enough to experience the offset. Nor was the cause of non-response obvious. It is not clear how many demonstration participants even understood what was on offer. Unsurprisingly, members of Congress interested in promoting work among DI recipients asked SSA to revisit the issue. The 2015 DI legislation mandates a new demonstration, christened the Promoting Opportunity Demonstration, or POD. POD uses the same 2-for-1 offset rate that BOND did, but the offset starts at an earnings level at or below earnings of $810 a month in 2016—which is well below the earnings at which the BOND phase-out began. Unfortunately, as Kathleen Romig has pointed out in an excellent paper for the Center on Budget and Policy Priorities, this demonstration is unlikely to yield useful results. Only a very few atypical DI beneficiaries are likely to find it in their interest to participate in the demonstration, fewer even than in the BOND. That is because the POD offset begins at lower earnings than the BOND offset did. In addition, participants in POD sacrifice the right under current law that permits people receiving disability benefits to earn any amount for 9 months of work without losing any benefits. Furthermore, the 2015 law stipulated that no Disability Insurance beneficiary could be required to participate in the demonstration or, having agreed to participate, forced to remain in the demonstration. Thus, few people are likely to respond to the POD or to remain in it. There is a small group to whom POD will be very attractive—those few DI recipients who retain a lot of earning capacity. The POD will allow them to retain DI coverage until their earnings are quite high. For example, a person receiving a $2,000 monthly benefit—well above the average, to be sure, but well below the maximum—would remain eligible for some benefits until his or her annual earnings exceeded $57,700. 
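The break-even arithmetic in that example is easy to verify. The sketch below uses the figures cited above (an $810 monthly offset threshold, a $2,000 monthly benefit, and a $1-for-$2 offset); the simple linear phase-out formula is an assumption for illustration rather than a statement of the statutory rules.

```python
# A minimal sketch of the 2-for-1 offset described above, using the figures
# cited in the text. The linear phase-out formula is an illustrative
# assumption, not the statutory benefit formula.

def monthly_benefit(full_benefit, earnings, offset_start=810.0):
    """Reduce the benefit by $1 for every $2 earned above the offset threshold."""
    reduction = max(0.0, (earnings - offset_start) / 2)
    return max(0.0, full_benefit - reduction)

# The benefit reaches zero when monthly earnings hit 810 + 2 * 2000 = 4,810.
print(monthly_benefit(2000, 4810))   # 0.0
print(4810 * 12)                     # 57,720 -- the ~$57,700 annual figure cited above
```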
I don’t know about you, but I doubt that Congress would favorably consider permanent law of this sort. Not only would those participating be a thin and quite unrepresentative sample of DI beneficiaries in general, or even of those with some earning capacity, but selection bias resulting from the opportunity to opt out at any time would destroy the external validity of any statistical results. Let me be clear. My comments on POD, the demonstration mandated in the 2015 legislation, are not meant to denigrate the need for, or the importance of, research on how to encourage work by DI recipients, especially those for whom financial independence is plausible. On the contrary, as I said at the outset, research is desperately needed on this issue, as well as many others. It is not yet too late to authorize a research design with a better chance of producing useful results. But it will be too late soon. Fielding demonstrations takes time: to solicit bids from contractors, for contractors to formulate bids, for government boards to select the best one, for contractors to enroll participants, for contractors to administer the demonstration, and for analysts to process the data generated by the demonstrations. That process will take all the time available between now and 2021 or 2022, when the DI trust fund will again demand attention. It will take a good deal more time than that to address the formidable and intriguing research agenda of the SSDI Solutions Initiative. I should like to conclude with plugs for two initiatives to which the Social Security Advisory Board has been giving some attention. It takes too long for disability insurance applicants to have their cases decided. Perhaps the whole determination process should be redesigned. One of the CRFB papers proposes just that. But until that happens, it is vital to shorten the unconscionable delays separating initial denials and reconsideration from hearings before administrative law judges to which applicants are legally entitled. Procedural reforms in the hearing process might help. More ALJs surely will. The 2015 budget act requires the Office of Personnel Management to take steps that will help increase the number of ALJs hired. I believe that the new director, Beth Colbert, is committed to reforms. But it is very hard to change legal interpretations that have hampered hiring for years and the sluggish bureaucratic culture that fostered them. So, the jury is out on whether OPM can deliver. In a recent op-ed in Politico, Lanhee Chen, a Republican member of the SSAB, and I jointly urged Congress to be ready, if OPM fails to deliver on more and better lists of ALJ candidates and streamlined procedures for their appointment, to move the ALJ examination authority to another federal organization, such as the Administrative Conference of the United States. Lastly, there is a facet of income support policy that we on the SSAB all agree merits much more attention than it has received. Just last month, the SSAB released a paper entitled Representative Payees: A Call to Action. More than eight million beneficiaries have been deemed incapable of managing $77 billion in benefits that the Social Security Administration provided them in 2014. We believe that serious concern is warranted about all aspects of the representative payee program—how this infringement of personal autonomy is found to be necessary, how payees are selected, and how payee performance is monitored. 
Management of representative payees is a particular challenge for the Social Security Administration. Its primary job is to pay cash benefits in the right amount to the right person at the right time. SSA does that job at rock-bottom costs and with remarkable accuracy. It is handling rapidly rising workloads with budgets that have barely risen. SSA is neither designed nor staffed to provide social services. Yet determining the need for, selecting, and monitoring representative payees is a social service function. As the Baby Boom ages, the number of people needing help in administering cash benefits from the Social Security Administration—and from other agencies such as the Veterans Administration—will grow. So will the number needing help in making informed choices under Medicare and Medicaid. The SSAB is determined to look into this challenge and to make constructive suggestions. We are just beginning and invite others to join in studying what I have called “the most important problem the public has never heard of.” Living with disabilities today is markedly different from what it was in 1956 when the Disability Insurance program began. Yet the DI program has changed little. Beneficiaries and taxpayers are paying heavily for the failure of public policy to apply what has been learned over the past six decades about health, disability, function, and work. I hope that SSA and Congress will use well the time until Congress next must legislate on Disability Insurance. The DI rolls are stabilizing. The economy has grown steadily since the Great Recession. Congress has reinstated demonstration authority. With adequate funding for research and testing, the SSA can rebuild its research capability. Along with the external research community, it can identify what works and help Congress improve the DI program for beneficiaries and taxpayers alike. The SSDI Solutions Initiative is a fine roadmap. Authors Henry J. Aaron Publication: Committee for a Responsible Federal Budget Image Source: © Max Whittaker / Reuters Full Article
the The next stage in health reform By webfeeds.brookings.edu Published On :: Thu, 26 May 2016 10:40:00 -0400 Health reform (aka Obamacare) is entering a new stage. The recent announcement by United Health Care that it will stop selling insurance to individuals and families through most health insurance exchanges marks the transition. In the next stage, federal and state policy makers must decide how to use broad regulatory powers they have under the Affordable Care Act (ACA) to stabilize, expand, and diversify risk pools, improve local market competition, encourage insurers to compete on product quality rather than premium alone, and promote effective risk management. In addition, insurance companies must master rate setting, plan design, and network management and effectively manage the health risk of their enrollees in order to stay profitable, and consumers must learn how to choose and use the best plan for their circumstances. Six months ago, United Health Care (UHC) announced that it was thinking about pulling out of the ACA exchanges. Now, they are pulling out of all but a “handful” of marketplaces. UHC is the largest private vendor of health insurance in the nation. Nonetheless, the impact on people who buy insurance through the ACA exchanges will be modest, according to careful analyses from the Kaiser Family Foundation and the Urban Institute. The effect is modest for three reasons. One is that in some states UHC focuses on group insurance, not on insurance sold to individuals, where they are not always a major presence. Secondly, premiums of UHC products in individual markets are relatively high. Third, in most states and counties ACA purchasers will still have a choice of two or more other options. In addition, UHC’s departure may coincide with or actually cause the entry of other insurers, as seems to be happening in Iowa. The announcement by UHC is noteworthy, however. It signals the beginning for ACA exchanges of a new stage in their development, with challenges and opportunities different from and in many ways more important than those they faced during the first three years of operation, when the challenge was just to get up and running. From the time when HealthCare.Gov and the various state exchanges opened their doors until now, administrators grappled non-stop with administrative challenges—how to enroll people, helping them make an informed choice among insurance offerings, computing the right amount of assistance each individual or family should receive, modifying plans when income or family circumstances change, and performing various ‘back office’ tasks such as transferring data to and from insurance companies. The chaotic first weeks after the exchanges opened on October 1, 2013 have been well documented, not least by critics of the ACA. Less well known are the countless behind-the-scenes crises, patches, and work-arounds that harried exchange administrators used for years afterwards to keep the exchanges open and functioning. The ACA forced not just exchange administrators but also insurers to cope with a new system and with new enrollees. Many new exchange customers were uninsured prior to signing up for marketplace coverage. Insurers had little or no information on what their use of health care would be. That meant that insurers could not be sure where to set premiums or how aggressively to try to control costs, for example by limiting networks of physicians and hospitals enrollees could use. Some did the job well or got lucky. Some didn’t. 
United seems to have fallen into the second category. United could have stayed in the 30 or so state markets it is leaving and tried to figure out ways to compete more effectively, but since its marketplace premiums were often not competitive and most of its business was with large groups, management decided to focus on that highly profitable segment of the insurance market. Some insurers are seeking sizeable premium increases for insurance year 2017, in part because of unexpectedly high usage of health care by new exchange enrollees. United is not alone in having a rough time in the exchanges. So have most of the cooperative plans that were set up under the ACA. Of the 23 cooperative plans that were established, more than half have gone out of business and more may follow. These developments do not signal the end of the ACA or even indicate a crisis. They do mark the end of an initial period when exchanges were learning how best to cope with clerical challenges posed by a quite complicated law and when insurance companies were breaking into new markets. In the next phase of ACA implementation, federal and state policy makers will face different challenges: how to stabilize, expand, and diversify marketplace risk pools, promote local market competition, and encourage insurers to compete on product quality rather than premium alone. Insurance company executives will have to figure out how to master rate setting, plan design, and network management and manage risk for customers with different characteristics than those to which they have become accustomed. Achieving these goals will require state and federal authorities to go beyond the core implementation decisions that have absorbed most of their attention to date and exercise powers the ACA gives them. For example, section 1332 of the ACA authorizes states to apply for waivers starting in 2017 under which they can seek to achieve the goals of the 2010 law in ways different from those specified in the original legislation. Along quite different lines, efforts are already underway in many state-based marketplaces, such as the District of Columbia, to expand and diversify the individual market risk pool by stepping up marketing efforts to enroll new consumers, especially young adults. Minnesota’s Health Care Task Force recently recommended options to stabilize marketplace premiums, including reinsurance, maximum limits on the excess capital reserves or surpluses of health plans, and the merger of individual and small group markets, as Massachusetts and Vermont have done. In normal markets, prices must cover costs, and while some companies prosper, some do not. In that respect, ACA markets are quite normal. Some regional and national insurers, along with a number of new entrants, have experienced losses in their marketplace business in 2016. One reason seems to be that insurers priced their plans aggressively in 2014 and 2015 to gain customers and then held steady in 2016. Now, many are proposing significant premium hikes for 2017. Others, like United, are withdrawing from some states. ACA exchange administrators and state insurance officials must now take steps to encourage continued or new insurer participation, including by new entrants such as Medicaid managed care organizations (MCOs).
For example, in New Mexico, where in 2016 Blue Cross Blue Shield withdrew from the state exchange, state officials now need to work with that insurer to ensure a smooth transition as it re-enters the New Mexico marketplace and to encourage other insurers to join it. In addition, state insurance regulators can use their rate review authority to benefit enrollees by promoting fair and competitive pricing among marketplace insurers. During the rate review process, which sometimes evolves into a bargaining process, insurance regulators often have the ability to put downward pressure on rates, although they must be careful to avoid the risk of underpricing of marketplace plans which could compromise the financial viability of insurers and cause them to withdraw from the market. Exchanges have an important role in the affordability of marketplace plans too. For example ACA marketplace officials in the District of Columbia and Connecticut work closely with state regulators during the rate review process in an effort to keep rates affordable and adequate to assure insurers a fair rate of return. Several studies now indicate that in selecting among health insurance plans people tend to give disproportionate weight to premium price, and insufficient attention to other cost provisions—deductibles and cost sharing—and to quality of service and care. A core objective of the ACA is to encourage insurance customers to evaluate plans comprehensively. This objective will be hard to achieve, as health insurance is perhaps the most complicated product most people buy. But it will be next to impossible unless customers have tools that help them take account of the cost implications of all plan features and report accurately and understandably on plan quality and service. HealthCare.gov and state-based marketplaces, to varying degrees, are already offering consumers access to a number of decision support tools, such as total cost calculators, integrated provider directories, and formulary look-ups, along with tools that indicate provider network size. These should be refined over time. In addition, efforts are now underway at the federal and state level to provide more data to consumers so that they can make quality-driven plan choices. In 2018, the marketplaces will be required to display federally developed quality ratings and enrollee satisfaction information. The District of Columbia is examining the possibility of adding additional measures. California has proposed that starting in 2018 plans may only contract with providers and hospitals that have met state-specified metrics of quality care and promote safety of enrollees at a reasonable price. Such efforts will proliferate, even if not all succeed. Beyond regulatory efforts noted above, insurance companies themselves have a critical role to play in contributing to the continued success of the ACA. As insurers come to understand the risk profiles of marketplace enrollees, they will be better able to set rates, design plans, and manage networks and thereby stay profitable. In addition, insurers are best positioned to maintain the stability of their individual market risk pools by developing and financing marketing plans to increase the volume and diversity of their exchange enrollments. It is important, in addition, that insurers, such as UHC, stop creaming off good risks from the ACA marketplaces by marketing limited coverage insurance products, such as dread disease policies and short term plans. 
If they do not do so voluntarily, state insurance regulators and the exchanges should join in stopping them from doing so. Most of the attention paid to the ACA to date has focused on efforts to extend health coverage to the previously uninsured and to the administrative stumbles associated with that effort. While insurance coverage will broaden further, the period of rapid growth in coverage is at an end. And while administrative challenges remain, the basics are now in place. Now, the exchanges face the hard work of promoting vigorous and sustainable competition among insurers and of providing their customers with information so that insurers compete on what matters: cost, service, and quality of health care. Editor's note: This piece originally appeared in Real Clear Markets. Kevin Lucia and Justin Giovannelli contributed to this article with generous support from The Commonwealth Fund. Authors Henry J. Aaron, Justin Giovannelli, Kevin Lucia Image Source: © Brian Snyder / Reuters Full Article
Brookings experts on the implications of COVID-19 for the Middle East and North Africa By webfeeds.brookings.edu Published On :: Thu, 26 Mar 2020 09:36:07 +0000 The novel coronavirus was first identified in January 2020, having caused people to become ill in Wuhan, China. Since then, it has rapidly spread across the world, causing widespread fear and uncertainty. At the time of writing, close to 500,000 cases and 20,000 deaths had been confirmed globally; these numbers continue to rise at an… Full Article
To fast or not to fast—that is the coronavirus question for Ramadan By webfeeds.brookings.edu Published On :: Fri, 24 Apr 2020 09:00:59 +0000 Full Article
The end of Kansas-Missouri’s border war should mark a new chapter for both states’ economies By webfeeds.brookings.edu Published On :: Wed, 14 Aug 2019 15:22:10 +0000 This week, Governor Kelly of Kansas and Governor Parson of Missouri signed a joint agreement to end the longstanding economic border war between their two states. For years, Kansas and Missouri taxpayers subsidized the shuffling of jobs across the state line that runs down the middle of the Kansas City metro area, with few new… Full Article
Webinar: COVID-19 and the economy By webfeeds.brookings.edu Published On :: Fri, 27 Mar 2020 17:35:41 +0000 With more than 1,000 deaths, 3 million and counting unemployed, and no definite end in sight, the coronavirus has upended nearly every aspect of American life. In the last two weeks, the Federal Reserve and Congress scrambled to pass policies to mitigate what will be a very deep recession. Americans across the country are asking—… Full Article
Building resilience in education to the impact of climate change By webfeeds.brookings.edu Published On :: Tue, 17 Sep 2019 14:47:49 +0000 The catastrophic wind and rain of Hurricane Dorian not only left thousands of people homeless but also children and adolescents without schools. The Bahamas is not alone; as global temperatures rise, climate scientists predict that more rain will fall in storms that will become wetter and more extreme, including hurricanes and cyclones around the world.… Full Article
Poll shows American views on Muslims and the Middle East are deeply polarized By webfeeds.brookings.edu Published On :: Wed, 27 Jul 2016 15:21:00 +0000 A recent public opinion survey conducted by Brookings non-resident senior fellow Shibley Telhami sparked headlines focused on its conclusion that American views of Muslims and Islam have become favorable. However, the survey offered another important finding that is particularly relevant in this political season: evidence that the cleavages between supporters of Hillary Clinton and Donald Trump, respectively, on Muslims, Islam, and the Israeli-Palestinian peace process are much deeper than on most other issues. Full Article Uncategorized
The polarizing effect of Islamic State aggression on the global jihadi movement By webfeeds.brookings.edu Published On :: Wed, 27 Jul 2016 17:26:41 +0000 Full Article
Obama’s exit calculus on the peace process By webfeeds.brookings.edu Published On :: Wed, 27 Jul 2016 17:29:00 +0000 One issue that has traditionally shared bipartisan support is how the United States should approach the Israeli-Palestinian conflict, write Sarah Yerkes and Ariella Platcha. However, this year both parties have shifted their positions farther from the center and from past Democratic and Republican platforms. How will that affect Obama’s strategy? Full Article Uncategorized
The Islamic State threat to the Middle East By webfeeds.brookings.edu Published On :: Mon, 01 Aug 2016 17:17:40 +0000 Politicians and analysts in Europe and the United States understandably focus on the threat the Islamic State poses to the West, and the debate is fierce over whether the group’s recent attacks are a desperate gasp of a declining organization or proof of its growing menace. Such a focus, however, obscures the far greater threat […] Full Article
Taking the off-ramp: A path to preventing terrorism By webfeeds.brookings.edu Published On :: Tue, 02 Aug 2016 21:28:37 +0000 Full Article
The U.S. needs a national prevention network to defeat ISIS By webfeeds.brookings.edu Published On :: Wed, 03 Aug 2016 15:40:11 +0000 The recent release of a Congressional report highlighting that the United States is the “top target” of the Islamic State coincided with yet another gathering of members of the global coalition to counter ISIL to take stock of the effort. There, Defense Secretary Carter echoed the sentiments of an increasing number of political and military leaders when he said that military […] Full Article
Minding the gap: A multi-layered approach to tackling violent extremism By webfeeds.brookings.edu Published On :: Wed, 03 Aug 2016 16:20:33 +0000 Full Article
Strengthening families, not just marriages By webfeeds.brookings.edu Published On :: Wed, 09 Dec 2015 13:43:00 -0500 In their recent blog for Social Mobility Memos, Brad Wilcox, Robert Lerman, and Joseph Price make a convincing case that a stable family structure is an important factor in increased social mobility, higher economic growth, and less poverty over time. Why is marriage so closely tied to family income? The interesting question is: what lies behind this relationship? Why is a rise (or a smaller decline) in the proportion of married families associated, for example, with higher growth in average family incomes or a decline in poverty? The authors suggest a number of reasons, including the positive effects of marriage for children, less crime, men’s engagement in work, and income pooling. Of these, however, income pooling is by far the most important. Individual earnings have increased very little, if at all, over the past three or four decades, so the only way for families to get ahead has been to add a second earner to the household. This is only possible within marriage or some other type of income pooling arrangement like cohabitation. Marriage here is the means: income pooling is the end. Is marriage the best route to income pooling? How do we encourage more people to share incomes and expenses? There are no easy answers. Wilcox and his co-authors favor reducing marriage penalties in tax and benefit programs, expanding training and apprenticeship programs, limiting divorces in cases where reconciliation is still possible, and civic efforts to convince young people to follow what I and others have called the “success sequence.” All of these ideas are fine in principle. The question is how much difference they can make in practice. Previous efforts have had at best modest results, as a number of articles in the recent issue of the Brookings-Princeton journal The Future of Children point out. Start the success sequence with a planned pregnancy Our success sequence, which Wilcox wants to use as the basis for a pro-marriage civic campaign, requires teens and young adults to complete their education, get established in a job, and delay childbearing until after they are married. The message is the right one. The problem is that many young adults are having children before marriage. Why? Early marriage is not compatible, in their view, with the need for extended education and training. They also want to spend longer finding the best life partner. These are good reasons to delay marriage. But pregnancies and births still occur, with or without marriage. For better or worse, our culture now tolerates, and often glamorizes, multiple relationships, including premarital sex and unwed parenting. This makes bringing back the success sequence difficult. Our best bet is to help teens and young adults avoid having a child until they have completed their education, found a steady job, and most importantly, a stable partner with whom they want to raise children, and with whom they can pool their income. In many cases this means marriage; but not in all. The bottom line: teens and young adults need better access to, and better education and counseling on, birth control, especially little-used but highly effective forms such as the IUD and the implant. Contraception, not marriage, is where we should be focusing our attention. Authors Isabel V. Sawhill Image Source: © Gary Cameron / Reuters Full Article
the Paid leave will be a hot issue in the 2016 campaign By webfeeds.brookings.edu Published On :: Mon, 21 Dec 2015 13:08:00 -0500 The U.S. is the only advanced country without a paid leave policy, enabling workers to take time off to care for a new baby or other family member. At least two Presidential candidates, Hillary Clinton and Marco Rubio, have been talking about it, making it likely that it will get attention in 2016. The idea has broad appeal now that most two-parent families and almost all one-parent families struggle with balancing work and family. Polls show that it is favored by 81 percent of the public—94 percent of Democrats, 80 percent of Independents and 65 percent of Republicans. Three states, California, New Jersey, and Rhode Island, have each enacted policies that could become models for other states or for the nation. Paid leave promotes inclusive growth Overall, paid leave is good for workers, good for children, and possibly even good for employers because of its role in helping to retain workers. It is also a policy that encourages inclusive growth. Studies of European systems suggest that paid leave increases female labor force participation and that the lack of it in the U.S. may be one reason for the decline in female labor force participation since 2000 and the growing female participation gap between the U.S. and other countries, adversely affecting our absolute and relative growth. The policy would make growth more inclusive because it would disproportionately benefit lower-wage workers. The devil is in the design The major issues in designing a paid leave policy are: Eligibility, and especially the extent of work experience required to qualify (often a year); the amount of leave allowed (Clinton suggests three months; Rubio four weeks); the wage replacement rate (often two-thirds of regular wages up to a cap), and financing. Legislation proposed by Rep. Rosa DeLauro (D-CT) and Sen. Kirsten Gillibrand (D-NY) calls for a 0.2 percent payroll tax on employers and employees. Most states have made paid leave a part of their temporary disability systems. Senator Rubio proposes to finance it through a new tax credit for employers. Getting it right on eligibility, length of leave, and size of benefit My own view is that a significant period of work experience should be required for eligibility to encourage stable employment before the birth of a child. This would not only encourage work but also insure that the subsidy was an earned benefit and not welfare by another name (but see below on financing). Leave periods need to be long enough to enable parents to bond with a child during the child’s first year of life but not so long that they lead to skill depreciation and to parents dropping out of the labor force. Three months seems like a good first step although it is far less generous than what many European countries provide (an average of 14 months across the OECD). That said, the Europeans may have gone too far. While there is little evidence that a leave as long as 6 months would have adverse effects on employment, when Canada extended their leave from six months to a year, the proportion of women returning to work declined. A replacement rate of two-thirds up to a cap also seems reasonable although a higher replacement rate is one way to encourage more parents to take the leave. Among other things, more generous policies would have positive effects on the health and well-being of children. They might also encourage more fathers to take leave. 
How to pay for it On financing, social insurance is the appropriate way to share the putative burden between employers and employees and avoid the stigma and unpopularity of social welfare. It would, in essence, change the default for employees (who are otherwise unlikely to save for purposes of taking leave). Some may worry that imposing any new costs on employers will lead to fewer employment opportunities. However, many economists believe that the employer portion of the tax is largely borne by workers in the form of lower wages. Moreover, in a study of 253 employers in California, over 90 percent reported either positive or no negative effects on profitability, turnover, and employee morale. Reductions in turnover, in particular, are noteworthy since turnover is a major expense for most employers. Will paid leave cause discrimination against women? Another worry is discrimination against women. Here there is some cause for concern unless efforts are made to insure that leave is equally available to, and also used by, both men and women. This concern has led some countries to establish a use-it-or-lose-it set aside for fathers. In the province of Quebec, the proportion of fathers taking leave after implementation of such a policy increased from 21 to 75 percent and even after the leave period was over, men continued to share more equally in the care of their children. Will Congress enact a national paid leave policy in the next few years? That’s doubtful in our current political environment but states may continue to take the lead. In the meantime, it can’t hurt if the major candidates are talking about the issue on the campaign trail. Authors Isabel V. Sawhill Full Article
the The decline in marriage and the need for more purposeful parenthood By webfeeds.brookings.edu Published On :: Thu, 14 Jan 2016 13:19:00 -0500 If you’re reading this article, chances are you know people who are still getting married. But it’s getting rarer, especially among the youngest generation and those who are less educated. We used to assume people would marry before having children. But marriage is no longer the norm. Half of all children born to women under 30 are born out of wedlock. The proportion is even higher among those without a college degree. What’s going on here? Most of today’s young adults don’t feel ready to marry in their early 20s. Many have not completed their educations; others are trying to get established in a career; and many grew up with parents who divorced and are reluctant to make a commitment or take the risks associated with a legally binding tie. But these young people are still involved in romantic relationships. And yes, they are having sex. Any stigma associated with premarital sex disappeared a long time ago, and with sex freely available, there’s even less reason to bother with tying the knot. The result: a lot of drifting into unplanned pregnancies and births to unmarried women and their partners with the biggest problems now concentrated among those in their 20s rather than in their teens. (The teen birth rate has actually declined since the early 1990s.) Does all of this matter? In a word, yes. These trends are not good for the young people involved and they are especially problematic for the many children being born outside marriage. The parents may be living together at the time of the child’s birth but these cohabiting relationships are highly unstable. Most will have split before the child is age 5. Social scientists who have studied the resulting growth of single-parent families have shown that the children in these families don’t fare as well as children raised in two-parent families. They are four or five times as likely to be poor; they do less well in school; and they are more likely to engage in risky behaviors as adolescents. Taxpayers end up footing the bill for the social assistance that many of these families need. Is there any way to restore marriage to its formerly privileged position as the best way to raise children? No one knows. The fact that well-educated young adults are still marrying is a positive sign and a reason for hope. On the other hand, the decline in marriage and rise in single parenthood has been dramatic and the economic and cultural transformations behind these trends may be difficult to reverse. Women are no longer economically dependent on men, jobs have dried up for working-class men, and unwed parenthood is no longer especially stigmatized. The proportion of children raised in single-parent homes has, as a consequence, risen from 5 percent in 1960 to about 30 percent now. Conservatives have called for the restoration of marriage as the best way to reduce poverty and other social ills. However, they have not figured out how to do this. The George W. Bush administration funded a series of marriage education programs that failed to move the needle in any significant way. The Clinton administration reformed welfare to require work and thus reduced any incentive welfare might have had in encouraging unwed childbearing. The retreat from marriage has continued despite these efforts. 
We are stuck with a problem that has no clear governmental solution, although religious and civic organizations can still play a positive role. But perhaps the issue isn’t just marriage. What may matter even more than marriage is creating stable and committed relationships between two mature adults who want and are ready to be parents before having children. That means reducing the very large fraction of births to young unmarried adults that occur before these young people say they are ready for parenthood. Among single women under the age of 30, 73 percent of all pregnancies are, according to the woman herself, either unwanted or badly mistimed. Some of these women will go on to have an abortion but 60 percent of all of the babies born to this group are unplanned. As I argue in my book, “Generation Unbound,” we need to combine new cultural messages about the importance of committed relationships and purposeful childbearing with new ways of helping young adults avoid accidental pregnancies. The good news here is that new forms of long-acting but fully reversible contraception, such as the IUD and the implant, when made available to young women at no cost and with good counseling on their effectiveness and safety, have led to dramatic declines in unplanned pregnancies. Initiatives in the states of Colorado and Iowa, and in St. Louis have shown what can be accomplished on this front. Would greater access to the most effective forms of birth control move the needle on marriage? Quite possibly. Unencumbered with children from prior relationships and with greater education and earning ability, young women and men would be in a better position to marry. And even if they fail to marry, they will be better parents. My conclusion: marriage is in trouble and, however desirable, will be difficult to restore. But we can at least ensure that casual relationships outside of marriage don’t produce children before their biological parents are ready to take on one of the most difficult social tasks any of us ever undertakes: raising a child. Accidents happen; a child shouldn’t be one of them. Editor's Note: this piece originally appeared in Inside Sources. Authors Isabel V. Sawhill Publication: Inside Sources Image Source: © Lucy Nicholson / Reuters Full Article
the The District’s proposed law shows the wrong way to provide paid leave By webfeeds.brookings.edu Published On :: Tue, 19 Jan 2016 15:03:00 -0500 The issue of paid leave is heating up in 2016. At least two presidential candidates — Democrat Hillary Clinton and Republican Sen. Marco Rubio (Fla.) — have proposed new federal policies. Several states and large cities have begun providing paid leave to workers when they are ill or have to care for a newborn child or other family member. This forward movement on paid-leave policy makes sense. The United States is the only advanced country without a paid-leave policy. While some private and public employers already provide paid leave to their workers, the workers least likely to get paid leave are low-wage and low-income workers who need it most. They also cannot afford to take unpaid leave, which the federal government mandates for larger companies. Paid leave is good for the health and development of children; it supports work, enabling employees to remain attached to the labor force when they must take leave; and it can lower costly worker turnover for employers. Given the economic and social benefits it provides and given that the private market will not generate as much as needed, public policies should ensure that such leave is available to all. But it is important to do so efficiently, so as not to burden employers with high costs that could lead them to substantially lower wages or create fewer jobs. States and cities that require employers to provide paid sick days mandate just a small number, usually three to seven days. Family or temporary disability leaves that must be longer are usually financed through small increases in payroll taxes paid by workers and employers, rather than by employer mandates or general revenue. Policy choices could limit costs while expanding benefits. For instance, states should limit eligibility to workers with experience, such as a year, and it might make sense to increase the benefit with years of accrued service to encourage labor force attachment. Some states provide four to six weeks of family leave, though somewhat larger amounts of time may be warranted, especially for the care of newborns, where three months seems reasonable. Paid leave need not mean full replacement of existing wages. Replacing two-thirds of weekly earnings up to a set limit is reasonable. The caps and partial wage replacement give workers some incentive to limit their use of paid leave without imposing large financial burdens on those who need it most. While many states and localities have made sensible choices in these areas, some have not. For instance, the D.C. Council has proposed paid-leave legislation for all but federal workers that violates virtually all of these rules. It would require up to 16 weeks of temporary disability leave and up to 16 weeks of paid family leave; almost all workers would be eligible for coverage, without major experience requirements; and the proposed law would require 100 percent replacement of wages up to $1,000 per week, and 50 percent coverage up to $3,000. It would be financed through a progressive payroll tax on employers only, which would increase to 1 percent for higher-paid employees. Our analysis suggests that this level of leave would be badly underfunded by the proposed tax, perhaps by as much as two-thirds. Economists believe that payroll taxes on employers are mostly paid through lower worker wages, so the higher taxes needed to fully fund such generous leave would burden workers. 
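The replacement formula described above is piecewise, and a small worked sketch may make the scale of the benefit concrete. The $1,000 and $3,000 thresholds and the 50 percent rate come from the proposal as characterized above; the function name, the sample wages, and the treatment of wages above $3,000 are illustrative assumptions, not the text of the bill:

```python
def dc_weekly_benefit(weekly_wage: float,
                      full_replacement_cap: float = 1_000.0,
                      partial_replacement_cap: float = 3_000.0,
                      partial_rate: float = 0.5) -> float:
    """Sketch of the proposed D.C. paid-leave benefit as described above.

    Assumes 100% replacement of wages up to the first cap, 50% replacement
    of wages between the first and second caps, and no replacement above
    the second cap. This reading of the caps is an assumption.
    """
    fully_replaced = min(weekly_wage, full_replacement_cap)
    partially_replaced = max(
        0.0, min(weekly_wage, partial_replacement_cap) - full_replacement_cap
    )
    return fully_replaced + partial_rate * partially_replaced

# Examples: a worker earning $800/week would receive $800; one earning
# $2,000/week would receive $1,000 + 0.5 * $1,000 = $1,500.
print(dc_weekly_benefit(800))    # 800.0
print(dc_weekly_benefit(2000))   # 1500.0
print(dc_weekly_benefit(5000))   # 2000.0
```

Under this reading, the weekly benefit tops out at $2,000 per worker.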
The costly policy might cause employers to discriminate against women. The disruptions and burdens of such lengthy leaves could cause employers to hire fewer workers or shift operations elsewhere over time. This is particularly true here, considering that the D.C. Council already has imposed costly burdens on employers, such as high minimum wages (rising to $11.50 per hour this year), paid sick leave (although smaller amounts than now proposed) and restrictions on screening candidates. The minimum wage in Arlington is $7.25 with no other mandates. Employers will be tempted to move operations across the river or to replace workers with technology wherever possible. Cities, states and the federal government should provide paid sick and family leave for all workers. But it can and should be done in a fiscally responsible manner that does not place undue burdens on the workers themselves or on their employers. Editor's note: this piece originally appeared in The Washington Post. Authors Harry J. Holzer, Isabel V. Sawhill Publication: The Washington Post Image Source: © Charles Platiau / Reuters Full Article
the The case for 'race-conscious' policies By webfeeds.brookings.edu Published On :: Thu, 04 Feb 2016 14:00:00 -0500 The injustices faced by African Americans are high on the nation’s agenda. “Black Lives Matter” has become a rallying cry that has elicited intense feelings among both supporters and detractors. As William Julius Wilson has pointed out on this blog, the focus on policing and criminal justice is necessary but not sufficient. Concerted action is required to tackle systematic racial gaps in everything from income and wealth to employment rates, poverty rates, and educational achievement. The moral argument for reparations Ta-Nehisi Coates argues that financial reparations should be paid to all those who have suffered directly or indirectly from slavery and its aftermath, including present day injustices such as the targeting of subprime mortgages to minorities. The moral case is compelling, and Coates notes that there have been other instances in U.S. history when reparations have been paid—such as to some Native American tribes and to the Japanese-Americans thrown into internment camps during World War II. Even if the moral argument for reparations is won, there are formidable obstacles in terms of policy, politics, and law. How would reparations work in practice? To be fair, Coates does support the bill from Congressman John Conyers establishing a commission to examine precisely these questions. Even if a workable policy can be found, the political opposition would, to put it mildly, be formidable. There are also doubts about constitutional legality. However, these are certainly questions worthy of better answers than the ones currently being made. Race-conscious policy Reparations are a stark example of a race-based policy: targeting resources or an intervention at an explicitly-defined racial group. At the other extreme are “race-blind” policies, applied with no regard to race (at least in theory). But there is a middle ground, consisting of what might be labeled ‘race-conscious’ policies. These policies would be designed to close racial gaps without targeting racial groups. Bonds, jobs, tax credits: examples of race-conscious policies What might race-conscious policies look like? Here are some ideas: Professors William Darity at Duke and Darrick Hamilton of The New School propose to tackle race gaps in wealth by providing “baby bonds” to children born to families with limited wealth. In 2013, median net worth was $11,000 for black households compared to $141,900 for whites. Darity and Hamilton are supporters of reparations in principle, but are alert to policy and political feasibility. Their specific proposal is that every baby born into a family with below-median wealth receives a “baby bond” or trust fund. These would be worth $50,000 to $60,000 on average, but scaled according to the level of the family’s wealth. The money would be available at the age of 18 for certain expenditures such as paying for college or buying a home. This is a good example of a race-conscious policy. It is not explicitly targeted on race but it would have its greatest impact on African American families. While racial wealth gaps are large and troubling, the disappearance of almost half of unskilled, young black men from the labor force may be an even greater problem in the long run. A comprehensive approach on jobs could include raising the minimum wage, expanding the EITC, and providing subsidized jobs in either the public or private sector for those unable to find jobs on their own. 
The job subsidies might be targeted on young adults from high-poverty neighborhoods where joblessness is endemic. The subsidized jobs would help people of all races, but especially African Americans. A jobs-based program is also likely to find greater political support than straightforward wealth redistribution. Granted, such jobs programs are hard to administer, but we now have a large number of workers whose job prospects are slim to nonexistent in a technologically-oriented and service-based economy. An enhanced EITC could also help to increase wealth (or lower indebtedness). As Kathryn Edin and her colleagues note in It’s Not Like I’m Poor, the EITC is normally received as a lump sum refund at the end of the year. As a form of forced saving, it enables poor families to repay debt and make mobility enhancing investments in themselves or their children. According to Edin, recipients like the fact that, unlike welfare, the tax credit links them socially and psychologically to other Americans who receive tax refunds. A more generous EITC could therefore help on the wealth as well as income side, and narrow racial gaps in both. A final example of a race-conscious policy is the Texas “top 10” law, which guarantees admission to any public university in the state for students in the top 10 percent of their high school class. This plan could be expanded to other states. Taking race seriously The “Black Lives Matter” movement has refocused the nation’s attention on mass incarceration and related injustices in the criminal justice system. But this problem exists side by side with racial inequalities in income, wealth, education, and employment. There are no easy answers to America’s stubborn race gaps. But jobs and wages seem to us to be of paramount importance. Implemented in a race-conscious way (by targeting them to areas suffering from high rates of poverty and joblessness), employment policy might be the most powerful instrument of all for race equality. Authors Isabel V. Sawhill, Richard V. Reeves Image Source: © Christopher Aluka Berry / Reuters Full Article
Taking the long view: Budgeting for investments in human capital By webfeeds.brookings.edu Published On :: Mon, 08 Feb 2016 13:42:00 -0500 Tomorrow, President Obama unveils his last budget, and we’re sure to see plenty of proposals for spending on education and skills. In the past, the Administration has focused on investments in early childhood education, community colleges, and infrastructure and research. From a budgetary standpoint, the problem with these investments is how to capture their benefits as well as their costs. Show me the evidence First step: find out what works. The Obama Administration has been emphatic about the need for solid evidence in deciding what to fund. The good news is that we now have quite a lot of it, showing that investing in human capital from early education through college can make a difference. Not all programs are successful, of course, and we are still learning what works and what doesn’t. But we know enough to conclude that investing in a variety of health, education, and mobility programs can positively affect education, employment, and earnings in adulthood. Solid investments in human capital For example: 1. Young, low-income children whose families move to better neighborhoods using housing vouchers see a 31 percent increase in earnings; 2. Quality early childhood and school reform programs can raise lifetime income per child by an average of about $200,000, at an upfront cost of about $20,000; 3. Boosting college completion rates, for instance via the Accelerated Study in Associate Programs (ASAP) in the City University of New York, leads to higher earnings. Underinvesting in human capital? If such estimates are correct (and we recognize there are uncertainties), policymakers are probably underinvesting in such programs because they are looking at the short-term costs but not at longer-term benefits and budget savings. First, the CBO’s standard practice is to use a 10-year budget window, which means long-range effects are often ignored. Second, although the CBO does try to take into account behavioral responses, such as increased take-up rates of a program, or improved productivity and earnings, it often lacks the research needed to make such estimates. Third, the usual assumption is that the rate of return on public investments in human capital is less than that for private investment. This is now questionable, especially given low interest rates. Dynamic scoring for human capital investments? A hot topic in budget politics right now is so-called “dynamic scoring.” This means incorporating macroeconomic effects, such as an increase in the labor force or productivity gains, into cost estimates. In 2015, the House adopted a rule requiring such scoring, when practicable, for major legislation. But appropriations bills are excluded, and quantitative analyses are restricted to the existing 10-year budget window. The interest in dynamic scoring is currently strongest among politicians pushing major tax bills, on the grounds that tax cuts could boost growth. But the principles behind dynamic scoring apply equally to improvements in productivity that could result from proposals to subsidize college education, for example—as proposed by both Senator Sanders and Secretary Clinton. Of course, it is tough to estimate the value of these potential benefits. But it is worth asking whether current budget rules lead to myopia in our assessments of what such investments might accomplish, and thus to an over-statement of their “true” cost.
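The budget-window point lends itself to a small worked example. A minimal sketch, using the rough cost and benefit figures cited above and purely illustrative assumptions about timing and discounting (a 3 percent rate and benefits spread evenly over 40 years; neither is drawn from CBO practice):

```python
# Illustrative only: the $20,000 cost and $200,000 lifetime benefit are the rough
# figures cited above; the 3% rate, 40-year horizon, and even spreading of
# benefits are assumptions made up for this sketch, not CBO methodology.
def present_value(annual_benefit: float, years: int, rate: float) -> float:
    """Discounted value of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

upfront_cost = 20_000.0
lifetime_benefit = 200_000.0
horizon_years = 40                      # assumed working-life horizon
annual_benefit = lifetime_benefit / horizon_years
rate = 0.03                             # assumed discount rate

full_pv = present_value(annual_benefit, horizon_years, rate)
ten_year_pv = present_value(annual_benefit, 10, rate)

print(f"Benefit counted under a 10-year window: ${ten_year_pv:,.0f}")
print(f"Benefit over the full horizon:          ${full_pv:,.0f}")
print(f"Share ignored by the 10-year window:    {1 - ten_year_pv / full_pv:.0%}")
```

Under these assumptions, roughly two-thirds of the discounted benefit falls outside a 10-year window, which is the kind of myopia the post describes.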
Authors Beth Akers, Isabel V. Sawhill Image Source: © Jonathan Ernst / Reuters Full Article
the Boys need fathers, but don’t forget about the girls By webfeeds.brookings.edu Published On :: Tue, 09 Feb 2016 09:14:00 -0500 We have known for some time that children who grow up in single parent-families do not fare as well as those with two parents – especially two biological parents. In recent years, some scholars have argued that the consequences are especially serious for boys. Not only do boys need fathers, presumably to learn how to become men and how to control their often unruly temperaments, but less obviously, and almost counterintuitively, it turns out that boys are more sensitive or less resilient than girls. Parenting seems to affect the development of boys more than it affects the development of girls. Specifically, their home environment is more likely to affect behavior and performance in school. Up until now, these speculations have been based on limited evidence. But new research from Harvard professor Raj Chetty and a team of colleagues shows that the effects of single parenthood are indeed real for all boys, regardless of family income, but especially for boys living in high-poverty, largely minority neighborhoods. When they become adults, boys from low-income, single-parent families are less likely to work, to earn a decent income, and to go to college: not just in absolute terms, but compared to their sisters or other girls who grew up in similar circumstances. These effects are largest when the families live in metropolitan areas (commuting zones) with a high fraction of black residents, high levels of racial and income segregation, and lots of single-parent families. In short, it is not just the boy’s own family situation that matters but also the kind of neighborhood he grows up in. Exposure to high rates of crime, and other potentially toxic peer influences without the constraining influence of adult males within these families, seems to set these boys on a very different course than other boys and, perhaps more surprisingly, on a different course from their sisters. The focus of a great deal of attention recently has been on police practices in low-income minority neighborhoods. Without in any way excusing police brutality where it has occurred, what this research suggests is that the challenge for police is heightened by the absence of male authority figures in low-income black neighborhoods. In his gripping account of his own coming of age in West Baltimore, journalist Ta-Nehisi Coates recounts being severely punished by his father for some adolescent infraction. When his mother protested, Ta-Nehisi’s father replied that it was better that this discipline come from within the family than be left to the police. But Coates’ family was one of the few in his neighborhood where a father still existed. Repairing families is difficult at best. Most single-parent families are initially formed as the result of an unplanned birth to an unmarried young woman in these same communities. Perhaps girls and young women simply suffer in a different way. Instead of becoming involved in crime and ending up in prison or the informal economy, they are more likely to drift into early motherhood. With family responsibilities at an early age, and less welfare assistance than in the past, they are also more likely to have to work. But in the longer run, providing more education and a different future for these young women may actually be just as important as helping their brothers if we don’t want to perpetuate the father absence that caused these problems in the first place. 
They are going to need both the motivation (access to education and decent jobs) and the means (access to better forms of contraception) if we are to achieve this goal. Editor's note: This piece originally appeared in Real Clear Markets. Authors Isabel V. Sawhill Publication: Real Clear Markets Full Article
the The gender pay gap: To equality and beyond By webfeeds.brookings.edu Published On :: Tue, 12 Apr 2016 00:00:00 -0400 Today marks Equal Pay Day. How are we doing? We have come a long way since I wrote my doctoral dissertation on the pay gap back in the late 1960s. From earning 59 percent of what men made in 1974 to earning 79 percent in 2015 (among year-round, full-time workers), women have broken a lot of barriers. There is no reason why the remaining gap can’t be closed. The gap could easily move in favor of women. After all, they are now better educated than men. They earn 60 percent of all bachelor’s degrees and the majority of graduate degrees. Adjusting for educational attainment, the current earnings gap widens, with the biggest relative gaps at the highest levels of education: If we want to encourage people to get more education, we can't discriminate against the best educated just because they are women. What’s behind the pay gap? One source of the current gap is the fact that women still take more time off from work to care for their families. These family responsibilities may also affect the kinds of work they choose. Harvard professor Claudia Goldin notes that they are more likely to work in occupations where it is easier to combine work and family life. These divided work-family loyalties are holding women back more than pay discrimination per se. This should change when men are more willing to share equally on the home front, as Richard Reeves and I have argued elsewhere. Pay gap policies: Paid leave, child care, early education But there is much to be done while waiting for this more egalitarian world to arrive. Paid family leave and more support for early child care and education would go a long way toward relieving families, and women in particular, of the dual burden they now face. In the process, the pay gap should shrink or even move in favor of women. The Economic Policy Institute (EPI) has just released a very informative report on these issues. They call for an aggressive expansion of both early childhood education and child care subsidies for low and moderate income families. Specifically, they propose to cap child care expenses at 10 percent of income, which would provide an average subsidy of $3,272 to working families with children and much more than this to lower-income families. The EPI authors argue that child care subsidies would provide needed in-kind benefits to lower income families (check!), boost women’s labor force participation in a way that would benefit the overall economy (check!), and reduce the gender pay gap (check!). In short, childcare subsidies are a win-win-win. Paid leave and the pay gap For present purposes I want to focus on the likely effects on the pay gap. In the mid-1990s, the U.S. had the highest rate of female labor force participation compared to Germany, Canada, and Japan. Now we have the lowest. One reason is because other advanced countries have expanded paid leave and child care support for employed mothers while the U.S. has not: Getting to and past parity If we want to eliminate the pay gap and perhaps even reverse it, the primary focus must be on women’s continuing difficulties in balancing work and family life. We should certainly attend to any remaining instances of pay discrimination in the workplace, as called for in the Paycheck Fairness Act. But the biggest source of the problem is not employer discrimination; it is women’s continued double burden. Authors Isabel V. 
Sawhill Image Source: © Brendan McDermid / Reuters Full Article
Creating jobs: Bill Clinton to the rescue? By webfeeds.brookings.edu Published On :: Wed, 25 May 2016 10:55:00 -0400 At an event this past week, Hillary Clinton announced that, if elected, she planned to put Bill Clinton in charge of creating jobs. If he becomes the “First Gentleman” -- or as she prefers to call him, the “First Dude” -- he just might have some success in this role. The country’s very strong record of job creation during the first Clinton administration is a hopeful sign. (Full disclosure: I served in his Administration.) But assuming he's given the role of jobs czar, what would Bill Clinton do? The uncomfortable fact is that no one knows how to create enough jobs. Although about 50 percent of the public, according to Pew, worries that there are not enough jobs available, and virtually every presidential candidate is promising to produce more, economists are not sure how to achieve this goal. The debate centers around why we think people are jobless. Unless we can agree on the diagnosis, we will not be able to fashion an appropriate policy response. Some economists think that an unemployment rate hovering around 5 percent constitutes “full employment.” Those still looking for jobs, in this view, are either simply transitioning voluntarily from one job to another or they are “structurally unemployed.” The latter term refers to a mismatch, either between a worker’s skills and the skills that employers are seeking, or between where the workers live and where the jobs are geographically. (The decline in housing values or tighter zoning restrictions, for example, may have made it more difficult for people to move to states or cities where jobs are more available.) Another view is that despite the recovery from the Great Recession, there is still a residue of “cyclical” unemployment. If the Federal Reserve or Congress were to boost demand by keeping interest rates low, reducing taxes, or increasing spending on, say, infrastructure, this would create more jobs – or so goes the argument. But the Fed can’t reduce interest rates significantly because they are already near rock-bottom levels and tax and spending policies are hamstrung by political disagreements. In my view, the U.S. currently suffers from both structural and cyclical unemployment. The reason I believe there is still some room to stimulate the economy is that we have not yet seen a significant increase in labor costs and inflation. Political problems aside, we should be adding more fuel to the economy in the form of lower taxes or higher public spending. High levels of structural unemployment are also a problem. The share of working-age men who are employed has been dropping for decades, at least in part because of outsourcing and automation. The share of the unemployed who have been out of work for more than six months is also relatively high for an economy at this stage of the business cycle. One possibility is that the recession caused many workers to drop out of the labor force and that after a long period of joblessness, they have seen their skills atrophy and employers stigmatize them as unemployable. The depressing fact is that none of these problems is easy to solve. Manufacturing jobs that employ a lot of people are not coming back. Retraining the work force for a high-tech economy will take a long time. Political disagreements won’t disappear unless there is a landslide election that sweeps one party into control of all three branches of government. So what can Bill Clinton or anyone else do?
We may need to debate some more radical solutions such as subsidized jobs or a basic income for the structurally unemployed or a shorter work week to spread the available work around. These may not be politically feasible for some time to come, but former President Clinton is the right person to engage communities and employers in some targeted job creation projects now and to involve the country in a serious debate about what to do about jobs over the longer haul. Editor's note: This piece originally appeared in Inside Sources. Authors Isabel V. Sawhill Publication: Inside Sources Image Source: Paul Morigi Full Article
To help low-income American households, we have to close the "work gap" By webfeeds.brookings.edu Published On :: Tue, 31 May 2016 11:00:00 -0400 When Franklin Roosevelt delivered his second inaugural address on January 20, 1937, he lamented the “one-third of a nation ill-housed, ill-clad, ill-nourished.” He challenged Americans to measure their collective progress not by “whether we add more to the abundance of those who have much; [but rather] whether we provide enough for those who have too little.” In our new paper, One third of a nation: Strategies for helping working families, we ask a simple question: How are we doing? In brief, we find that: The gulf in labor market income between the haves and have-nots remains wide. The median income of households in the bottom third in 2014 was $24,000, just a little more than a quarter of the median of $90,000 for the top two-thirds. The bottom-third households are disproportionately made up of minority adults, adults with limited educational attainment, and single parents. The most important reason for the low incomes of the bottom third is a “work gap”: the fact that many are not employed at all, or work limited hours. The work gap The decline in labor force participation rates has been widely documented, but the growing gulf in the work gap between the bottom third and the rest of the population is truly striking: While the share of men who are employed in the top two-thirds has been quite stable since 1980, lower-income men’s work rates have declined by 11 percentage points. What about women? Middle- and upper-income women have increased their work rates by 13 percentage points. This has helped maintain or even increase their family’s income. But employment rates among lower-income women have been flat, despite reforms of the welfare system and safety net designed to encourage work. Why the lack of paid work for the bottom third? Many on the left point to problems like low pay and lack of access to affordable childcare, and so favor a higher minimum wage and more subsidies for daycare. For many conservatives, the problem is rooted in family breakdown and a dependency-inducing safety net. They therefore champion proposals like marriage promotion programs and strict work requirements for public benefits. Most agree about the importance of education. We model the impact of a range of such proposals, using data from the Census Bureau, specifically: higher graduation rates from high school, a tighter labor market, a higher minimum wage, and “virtual” marriages between single mothers and unattached men. In isolation, each has only modest effects. In our model, the only significant boost to income comes from employment, and in particular from assuming that all bottom-third household heads work full time: Time to debate some more radical solutions It may be that the standard solutions to the problems of the bottom third, while helpful, are no longer sufficient. A debate about whether to make safety net programs such as Food Stamps and housing assistance conditional on work or training is underway. So are other solutions such as subsidized jobs (created by some states during the Great Recession as a natural complement to a work-conditioned safety net), more work sharing (used in Germany during the recession), or even a universal basic income (being considered by Swiss voters in June). Authors Isabel V. Sawhill, Nathan Joo, Edward Rodrigue Image Source: © Stephen Lam / Reuters Full Article
Around the halls: Experts discuss the recent US airstrikes in Iraq and the fallout By webfeeds.brookings.edu Published On :: Thu, 02 Jan 2020 19:53:38 +0000 U.S. airstrikes in Iraq on December 29 — in response to the killing of an American contractor two days prior — killed two dozen members of the Iranian-backed militia Kata'ib Hezbollah. In the days since, thousands of pro-Iranian demonstrators gathered outside the U.S. embassy in Baghdad, with some forcing their way into the embassy compound… Full Article
Around the halls: What Brookings experts hope to hear in the Iowa debate By webfeeds.brookings.edu Published On :: Tue, 14 Jan 2020 01:55:34 +0000 Iran and the recent U.S. strike that killed Quds Force commander Qasem Soleimani will loom large for the Democratic candidates participating in the debate in Iowa. It may be tempting for the candidates to use this issue primarily as an opportunity to criticize the current administration and issue vague appeals for a return to… Full Article
Around the halls: Brookings experts on the Middle East react to the White House’s peace plan By webfeeds.brookings.edu Published On :: Wed, 29 Jan 2020 16:33:09 +0000 On January 28 at the White House, President Trump unveiled his plan for Middle East peace alongside Israeli Prime Minister Benjamin Netanyahu. Below, Brookings experts on the peace process and the region more broadly offer their initial takes on the announcement. Natan Sachs (@natansachs), Director of the Center for Middle East Policy: This is a… Full Article
Israel is back on the brink By webfeeds.brookings.edu Published On :: Tue, 03 Mar 2020 22:44:57 +0000 In the endless loop of Israeli politics, one could easily have failed to notice that on Monday, the country held its third national election in less than a year. This numbing political repetition, however, masks the high stakes of these recurring elections. After the second election, in September, I wrote that one thing emerged from… Full Article
What does the Gantz-Netanyahu coalition government mean for Israel? By webfeeds.brookings.edu Published On :: Tue, 21 Apr 2020 21:02:27 +0000 After three inconclusive elections over the last year, Israel at last has a new government, in the form of a coalition deal between political rivals Benjamin Netanyahu and Benny Gantz. Director of the Center for Middle East Policy Natan Sachs examines the terms of the power-sharing deal, what it means for Israel's domestic priorities as… Full Article
Managing risk: Nuclear weapons in the new geopolitics By webfeeds.brookings.edu Published On :: Mon, 11 Feb 2019 20:43:26 +0000 Director's summary: Since the end of the Cold War, more attention has been given to nuclear non-proliferation issues at large than to traditional issues of deterrence, strategic stability, and arms control. Given the state of current events and the re-emergence of great power competition, we are now starting to see a rebalance, with a renewed focus on questions… Full Article