
Welcoming member of Knesset Erel Margalit to Brookings


One of the great parts of being at Brookings has been meeting the many champions of government reform in the U.S. and around the world who have reached out to visit us here, meet me and my colleagues, and talk about how best to transform government and make it work better for people. The latest was MK Erel Margalit, who before joining the Israeli Knesset founded a leading venture capital firm in Israel (and was the first Israeli to make the Forbes Midas list of top tech investors globally). My Brookings colleagues, including Elaine Kamarck, Bill Galston, Natan Sachs, and John Hudak, talked with MK Margalit about the lessons he learned in the private sector and about his efforts to bring those lessons to his work in government.

Coming not long after our meeting with Czech Deputy Prime Minister and Finance Minister Andrej Babis, who enjoyed similar success in business and has ambitious reform goals of his own informed by his business career, it was fascinating to talk about what does and does not translate to the government sector. MK Margalit’s focus includes supporting peace and economic development by developing enterprise zones in and around Israel that encourage economic partnerships between Jewish and Arab Israelis and their businesses, and that include Palestinians as well. It was an impressive melding of business and government methodologies. The meeting built on similar ones we have had with other innovators, including CFPB Director Rich Cordray, former Mayor and Governor Martin O’Malley, and of course DPM Babis, all of whom share a commitment to innovating to make government function more effectively.


Image Source: © Ronen Zvulun / Reuters
      





The great debate: Is political realism realistic?


This week I had the pleasure of doing a podcast debate with my Brookings colleague Jonathan Rauch on the question of whether we need stronger political machines and weaker transparency in American government, or the opposite. Guess which side I took! This has been a long-running water cooler and cafeteria discussion between Jon and me since I arrived at Brookings almost a year ago. While we find some areas of agreement in the podcast (more than you might think), I remain unconvinced by the so-called “political realist” school that Jonathan is a leader of. As I have previously written and blogged (here, here, and here), I think the realists are fantasists, disconnected from the actual reality of politics, including its risks. We need more transparency, not less, to deal with risks like corruption, particularly in the post-Citizens United era. Indeed, that decision itself embraces the value of a vigorous transparency regime when other safeguards are relaxed. My belief is that Washington works both more efficiently and more ethically under the scrutinizing gaze of the American media, NGOs, and public. As the former White House ethics czar, I often facilitated administration openness efforts, including as a means of accountability, for example helping put the White House visitor logs online.

Our lively debate covers not only issues of transparency itself but also applies those principles to other current topics—the Affordable Care Act, Trade Promotion Authority, and much more. The debate was silently moderated by our colleague Ben Wittes as part of his “Chess Clock Debates” series. With only ten minutes on the chess clock each to make our points, it was a concise discussion that hit the fundamentals briskly. Thanks to Ben for inviting us and giving us a public forum to discuss this critical policy issue.


Image Source: © Jonathan Ernst / Reuters
      





Q & A with Ambassador Norman Eisen


Editor's Note: In September of this year, Visiting Fellow Norman Eisen was featured in the Council on Governmental Ethics Laws (COGEL) members-only magazine, The Guardian. An abbreviated version of his interview is featured below.

Interview conducted by Wesley Bizzell, Assistant General Counsel, Altria Client Services LLC.

Recently, you addressed the Italian Parliament to discuss ethics in government, as that legislative body considers adopting its own code of ethical conduct. In that speech, you noted you believe there are four key concepts at the center of federal U.S. ethics laws. What are those four concepts, and why are they important?

First, I’d like to note the importance of focusing on just four concepts. The House of Representatives Ethics Manual is 456 pages long: too long to be of any real use in creating an ethics system. Instead, these four principles serve as a foundation upon which different governments can build their own sets of rules based on their own unique needs.

I focused on just four to make a point about priorities. The first is “conflicts”—that is, problems that arise when an individual’s personal interests and parliamentary duties may be at odds with one another. The second is “gifts.” Even if there isn’t an explicit quid-pro-quo-style agreement involved, when a political figure accepts a gift from someone with a demonstrated interest in government decision-making, the suspicion of misconduct will always be there. “Revolving door” is the third core concept. When individuals rotate between the private sector and the public sector over and over again, they are naturally going to form relationships that tempt them toward unethical behavior. Finally, “use of official resources.” Officials must be careful to use official resources only for official purposes, being particularly careful not to conduct any campaign activity on the taxpayer’s dime. The goal with these four priorities is not only to keep people from behaving unethically, but also to make sure it doesn’t seem like anyone is doing anything unethical either.

In that speech, you said that focusing on these four areas keeps you from losing the forest for the trees when working with ethics codes. Can you elaborate on that?

There’s always a danger for members of the executive branch, because the system of rules and regulations that governs ethical behavior is itself so complex. When it’s embedded in equally complicated and overlapping sets of statutes, you risk creating rules so specific that they’re practically useless. The same is true in the legislative branch, and I dare say in the federal judicial branch, as well as at the state and local levels. You’re always on the edge of being lost in the minutiae.

In fact, you can often make wrong decisions if you focus in too much on the specifics, because you lose sight of the larger picture that guides the rules. There are always options in ethical dilemmas, and the big picture needs to be kept in focus.

While at the White House serving as Special Counsel to the President for Ethics and Government Reform, you oversaw numerous significant changes in the area of open government—including helping craft and implement President Obama’s Open Government Directive; publishing White House visitor logs on the internet; and generally improving the Freedom of Information Act (FOIA) process. What change in the area of open government are you most proud of?

I was struck when we began the interview by the list of topics—campaign finance, lobbying, ethics, elections, and FOIA issues—because all of those were part of my portfolio as Special Counsel to the President for Ethics and Government Reform during the first two years of the Obama administration. I would have to say that I’m most proud of my role in the President’s decision to put all of the White House visitor records on the internet.

Remember, in previous administrations, Democratic and Republican alike, plaintiffs had to litigate for years just to get a handful of visitor records. To have all of the visitor records on the internet, categorized into various types, opens access to the White House to an unprecedented degree. There are now over four-and-a-half million visitor records available on the White House website, with more added every month. I think that is remarkable.

Truthfully, I was torn between that accomplishment and a second one, which is that the President and his staff in the White House have had the longest run in presidential history (knock on wood) without a major ethics scandal or a grand jury investigation, indictment, or conviction. I was tempted to list that second fact as the accomplishment of which I was most proud. But it occurred to me that the death of White House scandal is actually a function of the exceptional level of transparency that the visitor records represent. Transparency helps ensure people don’t have meetings they shouldn’t be having, which keeps them out of trouble. So I’ll offer that second accomplishment as a part of the first one.

In your view, what was the most significant lobbying and ethics reform during your tenure at the White House?

No doubt about it: reversing the revolving door. Craig Holman of Public Citizen, who studies these issues, says we were the first in the world to create a reverse revolving door. I think it is absolutely critical to slow the revolving door in both directions—both coming out of government and going in.

I should also note that the comprehensive nature of the ethics system we put into place in the Obama administration bears responsibility for the good results. The first rule, of course, of any ethics system is “tone at the top.” The president exemplifies that. He has the highest standards of ethics himself, and as a result everyone around him feels he will be personally let down if they don’t embrace the ethics system. Good results flow from that. Looking back, we can identify certain aspects that have been more and less successful, but it’s important to recognize that the positive results are owed to the gestalt. Our transparency and ethics system was one of the most thorough and transparent that I’ve seen in any government, and the results speak for themselves.


Image Source: © Petr Josek Snr / Reuters
      





Five questions about the VW scandal


Now that the initial revelations regarding the VW scandal have sunk in, it’s time to begin assessing their larger significance. While the case, and we predict VW itself, will continue for years (we are only at the end of the beginning, and far from the beginning of the end), we are far enough along to see five large questions emerging. These questions will tell us much about the economic, corporate, and cultural future of VW and German enterprise.

1) VW was an integral component of Germany's industrial reputation in Europe, across the Atlantic in the United States, and around the world. Now, that hard-won reputation is at risk. How broad will the damage be to German businesses' reputation not just for quality, but for "premium quality"?

2) Turning from the German business sector to the German economy as a whole, the VW scandal has many ironies, not least of which is that the company was a key driver (so to speak) of the famous German Wirtschaftswunder. Economic health propelled a vanquished Germany to the forefront of Europe’s post-WWII recovery and then made post-Cold War reunification a success. Does the VW scandal have the potential to slow down the overall growth of the German economy, and what are the European and global implications of that at a time when the Chinese economy is also sputtering?

3) From a corporate governance perspective, the scandal represents some of the most boneheaded thinking ever. Following disclosure of the fraud, €14bn (£10bn; $15.6bn) was wiped off VW's stock market value. Whoever knew/orchestrated the scheme thought they would get away with it, but did they really not foresee the consequences or even the likelihood of getting caught? We will long be studying the abnormal “fraud psychology" of this case.

4) Germany ranks among the top ten countries for low corruption according to Transparency International. Yet VW is not alone among German companies in making major headlines with massive ethics failures in recent years, joining Siemens, Bayer, Deutsche Bank, and many others. What does this mean for the future of Germany’s role as a force for anti-corruption at home and internationally?

5) Former VW CEO Winterkorn resigned but claimed he knew nothing about the scandal. What does this say about the structure and management culture of Germany’s largest companies? How widespread is “plausible deniability” in German business culture, and in business culture everywhere? If it is widespread, what are the dangers going forward, and what should be done to address them?


Image Source: © Hannibal Hanschke / Reuters
      





More Czech governance leaders visit Brookings


I had the pleasure earlier this month of welcoming my friend, Czech Republic Foreign Minister Lubomir Zaoralek, here to Brookings for a discussion of critical issues confronting the Europe-U.S. alliance. Foreign Minister Zaoralek was appointed to his current position in January 2014 after serving as a leading figure in the Czech Parliament for many years. He was accompanied by a distinguished delegation that included Dr. Petr Drulak of the Foreign Ministry, and Czech Ambassador Petr Gandalovic. I was fortunate enough to be joined in the discussion by colleagues from Brookings including Fiona Hill, Shadi Hamid, Steve Pifer, and others, as well as representatives of other D.C. think tanks. Our discussion spanned the globe, from how to respond to the Syrian conflict, to addressing Russia’s conduct in Ukraine, to the thaw in U.S.-Cuba relations, to dealing with the refugee crisis in Europe. The conversation was so fascinating that the sixty minutes we had allotted flew by and we ended up talking for two hours—and we still just scratched the surface.

Amb. Eisen and FM Zaoralek, October 2, 2015

Yesterday, we had a visit from Czech State Secretary Tomas Prouza, accompanied by Ambassador Martin Povejsil, the Czech Permanent Envoy to the EU. We also talked about world affairs. In this case, that included perhaps the most important governance matter now confronting the U.S.: the exceptionally entertaining (if not enlightening) presidential primary season. I expressed my opinion that Vice President Biden would not enter the race, only to have him prove me right in his Rose Garden remarks a few hours later. If only all my predictions came true (and as quickly). We at Brookings benefited greatly from the insights of both of these October delegations, and we look forward to welcoming many more from every part of the Czech political spectrum in the months ahead.

Prouza, Eisen, Povejsil, October 21, 2015


Image Source: © Gary Hershorn / Reuters
       





Five reasons for (cautious) optimism about the EU’s future


The European Union (EU) is confronting a series of potentially existential threats, including the refugee crisis, ISIS terror, Russian adventurism, and Brexit (the potential exit of the U.K. from the EU).  I hosted Czech Prime Minister Bohuslav Sobotka at Brookings to get his fundamentally (but carefully) optimistic take on how he and his fellow EU leaders can meet those challenges. Here are five reasons for optimism that emerged from our conversation: 

  1. Take the Fight to Daesh.  The PM made clear Europe’s determination to take on the terror and refugee issues at their source in Iraq, Syria, and Libya.  Just this week, the Czech Republic upped its commitment to the international coalition, announcing that it will send a team to train Iraqis on Czech-made L-159 fighter jets (also sold to Iraq by Prague).  With transatlantic leadership, these efforts are starting to bear fruit in the decay of ISIS.
  2. Never Let a Good Crisis Go to Waste. As part of addressing today’s refugee crisis, Europe is exploring multi-lateral efforts to construct a common European border service, integrate refugee populations, and promote internal security.  The process is painful, but filling these gaps will make the European Union stronger.
  3. Stand Strong With Ukraine.  Some predicted that European unity against Putin’s expansionism would not hold.  Instead, the EU and the United States have maintained their resolve in enacting sanctions.  That has strengthened the EU, but as the PM pointed out, now Ukraine and its supporters must make sure that state moves towards good governance and functionality. 
  4. Taking the Exit Out of Brexit.  The PM predicted that the U.K. would not exit the EU.  When I pressed him on why, he acknowledged that there were elements of wishing and hoping in that forecast, and that the vote comes at a tough moment.  But I share the PM’s hopes—the U.K. is not one to leave friends when times get tough.
  5. Never Forget to Remember.  The PM and I spent a lot of time discussing the ups and downs of Central Europe’s experiment with democracy over the past century.  He and his Czech colleagues—of all mainstream political parties—are acutely aware of that history, and that too gives me hope that it will not be repeated.

Immense challenges can destabilize and divide—but they also present opportunities for new collaboration and cohesion. If addressed in partnership, Europe’s current trials can ultimately strengthen the ties that bind the EU together.  

Watch the full discussion here.

Andrew Kenealy contributed to this post. 


Image Source: Paul Morigi
       





Can the Department of Veterans Affairs be modernized?


Event Information

June 20, 2016
2:00 PM - 3:00 PM EDT

Falk Auditorium
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036

Register for the Event
A conversation with VA Secretary Robert McDonald

This program was aired live on CSPAN.org.



With the demand for its services constantly evolving, the Department of Veterans Affairs (VA) faces complex challenges in providing accessible care to America’s veterans. Amidst a history of long patient wait times, cost overruns, and management concerns, the VA recently conducted a sweeping internal review of its operations.  The result was the new MyVA program.

How will MyVA improve the VA’s care of veterans? What will it do to restore public confidence in its efforts? What changes is the VA undergoing to address both internal concerns and modern challenges in veteran care?

On June 20, Governance Studies at Brookings hosted VA Secretary Robert McDonald. Secretary McDonald described the VA’s transformation strategy and explained how the reforms within MyVA will impact veterans, taxpayers and other stakeholders. He addressed lessons learned not just for the VA but for all government agencies that strive to achieve transformation and improve service delivery.

This event was broadcast live on C-SPAN.

Join the conversation on Twitter at #VASec and @BrookingsGov


       





The Iran deal, one year out: What Brookings experts are saying


How has the Joint Comprehensive Plan of Action (JCPOA)—signed between the P5+1 and Iran one year ago—played out in practice? Several Brookings scholars, many of whom participated prominently in debates last year surrounding official congressional review, offered their views.

Strobe Talbott, President, Brookings Institution:

At the one-year mark, it’s clear that the nuclear agreement between Iran and the major powers has substantially restricted Tehran’s ability to produce the fissile material necessary to build a bomb. That’s a net positive—for the United States and the broader region.

Robert Einhorn, Senior Fellow, Center for 21st Century Security and Intelligence and Senior Fellow, Arms Control and Non-Proliferation Initiative, Foreign Policy program:

One year after its conclusion, the JCPOA remains controversial in Tehran and Washington (as I describe in more detail here), with opponents unreconciled to the deal and determined to derail it. But opponents have had to scale back their criticism, in large part because the JCPOA, at least so far, has delivered on its principal goal—blocking Iran’s path to nuclear weapons for an extended period of time. Moreover, Iran’s positive compliance record has not given opponents much ammunition. The IAEA found Iran in compliance in its two quarterly reports issued in 2016.

But challenges to the smooth operation and even the longevity of the deal are already apparent.

A real threat to the JCPOA is that Iran will blame the slow recovery of its economy on U.S. failure to conscientiously fulfill its sanctions relief commitments and, using that as a pretext, will curtail or even end its own implementation of the deal. But international banks and businesses have been reluctant to engage Iran not because they have been discouraged by the United States but because they have their own business-related reasons to be cautious. Legislation proposed in Congress could also threaten the nuclear deal. 

For now, the administration is in a position to block new legislation that it believes would scuttle the deal. But developments outside the JCPOA, especially Iran’s regional behavior and its crackdown on dissent at home, could weaken support for the JCPOA within the United States and give proponents of deal-killing legislation a boost. 

A potential wildcard for the future of the JCPOA is the coming governing transitions in both Washington and Tehran. Hillary Clinton would maintain the deal but perhaps take a harder line than her predecessor. Donald Trump now says he will re-negotiate rather than scrap the deal, but a better deal will not prove negotiable. With President Hassan Rouhani up for re-election next year and the health of the Supreme Leader questionable, Iran’s future policy toward the JCPOA cannot be confidently predicted.

A final verdict on the JCPOA is many years away. But it is off to a promising start, as even some of its early critics now concede. Still, it is already clear that the path ahead will not always be smooth, the longevity of the deal cannot be taken for granted, and keeping it on track will require constant focus in Washington and other interested capitals. 

Suzanne Maloney, Deputy Director, Foreign Policy program and Senior Fellow, Center for Middle East Policy, Foreign Policy program:

The Joint Comprehensive Plan of Action has fulfilled neither the worst fears of its detractors nor the most soaring ambitions of its proponents. All of the concerns that have shaped U.S. policy toward Tehran for more than a generation—terrorism, human rights abuses, weapons of mass destruction, regional destabilization—remain as relevant, and as alarming, as they have ever been. Notably, much the same is true on the Iranian side; the manifold grievances that Tehran has harbored toward Washington since the 1979 revolution continue to smolder.

An important truth about the JCPOA, which has been wielded by both its defenders and its detractors in varying contexts, is that it was transactional, not transformational. As President Barack Obama repeatedly insisted, the accord addressed one specific problem, and in those narrow terms, it can be judged a relative success. The value of that relative success should not be underestimated; a nuclear-armed Iran would magnify risks in a turbulent region in a terrible way. 

But in the United States, in Iran, and across the Middle East, the agreement has always been viewed through a much broader lens—as a waystation toward Iranian-American rapprochement, as an instrument for addressing the vicious cycle of sectarian violence that threatens to consume the region, as a boost to the greater cause of moderation and democratization in Iran. And so the failure of the deal to catalyze greater cooperation from Iran on a range of other priorities—Syria, Yemen, Iraq, to name a few—or to jumpstart improvements in Iran’s domestic dynamics cannot be disregarded simply because it was not its original intent. 

For the “new normal” of regularized diplomatic contact between Washington and Tehran to yield dividends, the United States will need a serious strategy toward Tehran that transcends the JCPOA, building on the efficacy of the hard-won multilateral collaboration on the nuclear issue. Iranians, too, must begin to pivot the focus of their efforts away from endless litigation of the nuclear deal and toward a more constructive approach to addressing the deep challenges facing their country today. 

Bruce Riedel, Senior Fellow, Center for Middle East Policy and Center for 21st Century Security and Intelligence and Director, Intelligence Project, Foreign Policy program:

As I explain more fully here, one unintended but very important consequence of the Iran nuclear deal has been to aggravate and intensify Saudi Arabia's concerns about Iran's regional goals and intentions. This fueling of Saudi fears has in turn fanned sectarian tensions in the region to unprecedented levels, and the results are likely to haunt the region for years to come.

Riyadh's concerns about Iran have never been primarily focused on the nuclear danger. Rather, the key Saudi concern is that Iran seeks regional hegemony and uses terrorism and subversion to achieve it. The deal deliberately does not deal with this issue. In Saudi eyes, it actually makes the situation worse because lifting sanctions removed Iran's isolation as a rogue state and gives it more income. 

Washington has tried hard to reassure the Saudis, and President Obama has wisely sought to build confidence with King Salman and his young son. The Iran deal is a good one, and I've supported it from its inception. But it has had consequences that are dangerous and alarming. In the end, Riyadh and Tehran are the only players who can deescalate the situation—the Saudis show no sign of interest in that road. 

Norman Eisen, Visiting Fellow, Governance Studies:

The biggest disappointment of the post-deal year has been the failure of Congress to pass legislation complementing the JCPOA. There is a great deal that the legislative branch could do to support the pact. Above all, it could establish criteria putting teeth into U.S. enforcement of Preamble Section III, Iran's pledge never to seek nuclear weapons. Congress could and should make clear what the ramp to seeking nuclear weapons would look like, what the triggers would be for U.S. action, and what kinds of U.S. action would be on the table. If Iran knows that, it will modulate its behavior accordingly. If it does not, it will start to act out, and we have just kicked the can down the road. That delay is of course immensely valuable—but why not extend the road indefinitely? Congress can do that, and much more (e.g. by increasing funding for JCPOA oversight by the administration and the IAEA), with appropriate legislation.

Richard Nephew, Nonresident Senior Fellow, Center for 21st Century Security and Intelligence, Arms Control and Non-Proliferation Initiative, Foreign Policy program:

Over the past year, much effort has gone into ensuring that the Iran deal is fully implemented. To date, the P5+1 has—not surprisingly—gotten the better end of the bargain, with significant security benefits accruing to them and their partners in the Middle East once the International Atomic Energy Agency (IAEA) verified the required changes to Iran's nuclear program. Iran, for its part, has experienced a natural lag in its economic resurgence, held back by the collapse in oil prices in 2014, residual American and European sanctions, and reluctance among banks and businesses to re-engage.

But Iran's economy has stabilized and—if the deal holds for its full measure—the security benefits that the P5+1 and their partners have won may fall away while Iran's economy continues to grow. The most important challenge related to the deal for the next U.S. administration (and, presumably, the Rouhani administration in its second term) is therefore: how can it be taken forward, beyond the 10- to 15-year transition period? Iran will face internal pressure to expand its nuclear program, but it also will face pressure, both external and internal, to refrain, should other countries in the region seek to create their own matching nuclear capabilities.

The best next step for all sides is to negotiate a region-wide arrangement to manage nuclear programs, one that constrains all sides, though perhaps not equally. It must ensure—at a minimum—that nuclear developments in the region are predictable, understandable, and credibly civilian (something Bob Einhorn and I addressed in a recent report). The next White House will need to do the hard work of convincing countries in the region—and beyond—not to rest on the victory of the JCPOA. Rather, they must take it for what it is: another step towards a more stable and manageable region.

Tamara Wittes, Senior Fellow and Director, Center for Middle East Policy, Foreign Policy program:

This week, Washington is awash in events and policy papers taking stock of how the Iran nuclear deal has changed the Middle East in the past year. The narratives presented this week largely track the positions that the authors, speakers, or organizations articulated on the nuclear deal when it was first concluded last summer. Those who opposed the deal have marshaled evidence of how the deal has "emboldened" Iran's destabilizing behavior, while those who supported the deal cite evidence of "moderated" politics in the Islamic Republic. That polarized views on the deal last year produce polarized assessments of the deal's impact this year should surprise no one.

In fact, no matter which side of the debate over the agreement’s worth it takes, much of the analysis out this week ascribes to the nuclear deal Iranian behavior and attitudes in the region that existed before the deal's conclusion and implementation. Iran has been a revisionist state, and a state sponsor of terrorism, since the 1979 Islamic Revolution. The Saudi-Iranian rivalry predates the revolution; Iran's backing of Houthi militias against Saudi Arabia and its allies in Yemen well predates the nuclear agreement. Most notably, the upheavals in the Arab world since 2011 have given Iran wider opportunities than perhaps ever before to exploit the cracks within Arab societies—and to use cash, militias, and other tools to advance its interests and expand its influence. Iran has exploited those opportunities skillfully in the last five years and, as I wrote last summer, was likely to continue to do so regardless of diplomatic success or failure in Vienna. To argue that the nuclear deal somehow created these problems, or could solve them, is ahistorical. 

It is true that Iran's access to global markets might free even more cash for these endeavors, and that is a real issue worth tracking. But since severe sanctions did not prevent Iran from spending hundreds of millions of dollars to support and supply Hezbollah, or marshaling Islamic Revolutionary Guard Corps (IRGC) and militia fighters to sustain the faltering regime of Bashar Assad in Syria, it's not clear that additional cash will generate a meaningful difference in regional outcomes. Certainly, the nuclear deal's conclusion and implementation did not alter the trajectory of Iranian policy in Yemen, Iraq, Syria, or Lebanon to any noticeable degree—and that means that, no matter what the merits or dangers of the JCPOA, the United States must still confront and work to resolve enduring challenges to regional stability—including Iran's revisionist behavior.

Kenneth M. Pollack, Senior Fellow, Center for Middle East Policy, Foreign Policy program: 

When the JCPOA was being debated last year, I felt that the terms of the deal were far less consequential than how the United States responded to Iranian regional behavior after a deal was signed. I see the events of the past 12 months as largely having borne that out. While both sides have accused the other of "cheating," the deal has so far largely held. However, as many of my colleagues have noted, the real frictions have arisen from the U.S. geostrategic response to the deal.

I continue to believe that signing the JCPOA was better than any of the realistic alternatives—though I also continue to believe that a better deal was possible, had the administration handled the negotiations differently. However, the administration’s regional approach since then has been problematic: officials have condemned Riyadh while excusing Tehran in circumstances where both were culpable, have ignored some major Iranian transgressions, and President Obama has gratuitously insulted the Saudis and other U.S. allies in interviews. 

America's traditional Sunni Arab allies (and to some extent Turkey and Israel) feared that the United States would use the JCPOA as an excuse either to further disengage from the region or to switch sides and join the Iranian coalition. Their reading of events has been that this is precisely what has happened, and it is causing the GCC states to act more aggressively.

I think our traditional allies would enthusiastically welcome a Hillary Clinton presidency. She would likely do all that she could to reassure them that she plans to be more engaged and more willing to commit American resources and energy to Middle Eastern problems. But those allies will eventually look for her to turn words into action. I cannot imagine a Hillary Clinton administration abrogating the JCPOA, imposing significant new economic sanctions on Iran, or otherwise acting in ways it fears could provoke Tehran to break the deal. Our allies may see that as Washington trying to remain on the fence, which will infuriate them. 

So there are some important strategic differences between the United States and its regional allies. The second anniversary of the JCPOA could therefore prove even more fraught for America and the Middle East than the first. 




b

Australia’s Asylum Bill is High-Handed and Cambodia Deal Just a Quick Fix




b

One Step Forward, Many Steps Back for Refugees




b

ABC News Australia – Dec 2, 2014




b

Australia’s Obligations Still Apply Despite High Court Win




b

Climate change brings disasters on steroids

Editor’s Note: Nonresident Senior Fellow Jane McAdam says that climate change-related displacement is happening now and that band-aid solutions to natural disasters are simply not enough. The time to be proactive is now, because the cost of inaction will be much higher. This article was originally published in The Sydney Morning Herald and on smh.com.au.…




b

Migration with dignity – climate change and Kiribati




b

Human rights, climate change and cross-border displacement




b

Principles for Transparency and Public Participation in Redistricting

Scholars from the Brookings Institution and the American Enterprise Institute are collaborating to promote transparency in redistricting. In January 2010, an advisory board of experts and representatives of good government groups was convened in order to articulate principles for transparent redistricting and to identify barriers to the public and communities who wish to create redistricting…




b

Pulling Back the Curtain on Redistricting

Every 10 years — unfortunately, sometimes more frequently — legislative district lines are redrawn to balance population for demographic changes revealed by the census. What goes on is much more than a simple technical adjustment of boundaries, with ramifications that largely escape public notice. Politicians often use redistricting as an opportunity to cut unfavorable constituents and…




b

Toward Public Participation in Redistricting

The drawing of legislative district boundaries is among the most self-interested and least transparent systems in American democratic governance. All too often, formal redistricting authorities maintain their control by imposing high barriers to transparency and to public participation in the process. Reform advocates believe that opening that process to the public could lead to different…




b

As the venture capital game gets bigger, the Midwest keeps missing out

Those working to accelerate economic growth in the Heartland must face some stark realities. The Great Lakes region continues to export wealth to coastal economies, even as investment leaders try to equalize growth between the coasts and the Heartland. The region sees only a tiny fraction of venture capital (VC) deals, despite producing one quarter…




b

Most business incentives don’t work. Here’s how to fix them.

In 2017, the state of Wisconsin agreed to provide $4 billion in state and local tax incentives to the electronics manufacturing giant Foxconn. In return, the Taiwan-based company promised to build a new manufacturing plant in the state for flat-screen television displays and the subsequent creation of 13,000 new jobs. It didn’t happen. Those 13,000…




b

American workers’ safety net is broken. The COVID-19 crisis is a chance to fix it.

The COVID-19 pandemic is forcing some major adjustments to many aspects of our daily lives that will likely remain long after the crisis recedes: virtual learning, telework, and fewer hugs and handshakes, just to name a few. But in addition, let’s hope the crisis also drives a permanent overhaul of the nation’s woefully inadequate worker…




b

Does decarbonization mean de-coalification? Discussing carbon reduction policies

In September, the Energy Security and Climate Initiative (ESCI) at Brookings held the third meeting of its Coal Task Force (CTF), during which participants discussed the dynamics of three carbon policy instruments: performance standards, cap and trade, and a carbon tax. The dialogue revolved around lessons learned from implementing these policy mechanisms, especially as they…




b

6 years from the BP Deepwater Horizon oil spill: What we’ve learned, and what we shouldn’t misunderstand

Six years ago today, the BP Deepwater Horizon oil spill occurred in the U.S. Gulf of Mexico with devastating effects on the local environment and on public perception of offshore oil and gas drilling. The blowout sent toxic fluids and gas shooting up the well, leading to an explosion on board the rig that killed…




b

The post-Paris clean energy landscape: Renewable energy in 2016 and beyond

Last year’s COP21 summit saw global economic powers and leading greenhouse gas emitters—including the United States, China, and India—commit to the most ambitious clean energy targets to date. Bolstered by sharp reductions in costs and supportive government policies, renewable power spread globally at its fastest-ever rate in 2015, accounting for more than half of the…




b

2015 Brown Center Report on American Education: How Well Are American Students Learning?


Editor's Note: The introduction to the 2015 Brown Center Report on American Education appears below. Use the Table of Contents to navigate through the report online, or download a PDF of the full report.

TABLE OF CONTENTS

Part I: Girls, Boys, and Reading

Part II: Measuring Effects of the Common Core

Part III: Student Engagement


INTRODUCTION

The 2015 Brown Center Report (BCR) represents the 14th edition of the series since the first issue was published in 2000.  It includes three studies.  Like all previous BCRs, the studies explore independent topics but share two characteristics: they are empirical and based on the best evidence available.  The studies in this edition are on the gender gap in reading, the impact of the Common Core State Standards for English Language Arts on reading achievement, and student engagement.

Part one examines the gender gap in reading.  Girls outscore boys on practically every reading test given to a large population.  And they have for a long time.  A 1942 Iowa study found girls performing better than boys on tests of reading comprehension, vocabulary, and basic language skills.  Girls have outscored boys on every reading test ever given by the National Assessment of Educational Progress (NAEP)—the first long-term trend test was administered in 1971—at ages nine, 13, and 17.  The gap is not confined to the U.S.  Reading tests administered as part of the Progress in International Reading Literacy Study (PIRLS) and the Program for International Student Assessment (PISA) reveal that the gender gap is a worldwide phenomenon.  In more than sixty countries participating in the two assessments, girls are better readers than boys. 

Perhaps the most surprising finding is that Finland, celebrated for its extraordinary performance on PISA for over a decade, can take pride in its high standing on the PISA reading test solely because of the performance of that nation’s young women.  With its 62 point gap, Finland has the largest gender gap of any PISA participant, with girls scoring 556 and boys scoring 494 points (the OECD average is 496, with a standard deviation of 94).   If Finland were only a nation of young men, its PISA ranking would be mediocre.

Part two is about reading achievement, too. More specifically, it’s about reading and the English Language Arts standards of the Common Core (CCSS-ELA).  It’s also about an important decision that policy analysts must make when evaluating public policies—the determination of when a policy begins. How can CCSS be properly evaluated? 

Two different indexes of CCSS-ELA implementation are presented, one based on 2011 data and the other on data collected in 2013.  In both years, state education officials were surveyed about their Common Core implementation efforts.  Because forty-six states originally signed on to the CCSS-ELA—and with at least forty still on track for full implementation by 2016—little variability exists among the states in terms of standards policy.  Of course, the four states that never adopted CCSS-ELA can serve as a small control group.  But variation is also found in how the states are implementing CCSS.  Some states are pursuing an array of activities and aiming for full implementation earlier rather than later.  Others have a narrow, targeted implementation strategy and are proceeding more slowly. 

The analysis investigates whether CCSS-ELA implementation is related to 2009-2013 gains on the fourth grade NAEP reading test.  The analysis cannot verify causal relationships between the two variables, only correlations.  States that have aggressively implemented CCSS-ELA (referred to as “strong” implementers in the study) evidence a one- to one-and-a-half-point larger gain on the NAEP scale compared to non-adopters of the standards.  This association is similar in magnitude to an advantage found in a study of eighth grade math achievement in last year’s BCR.  Although positive, these effects are quite small.  When the 2015 NAEP results are released this winter, it will be important for the fate of the Common Core project to see if strong implementers of the CCSS-ELA can maintain their momentum.

Part three is on student engagement.  PISA tests fifteen-year-olds on three subjects—reading, math, and science—every three years.  It also collects a wealth of background information from students, including their attitudes toward school and learning.  When the 2012 PISA results were released, PISA analysts published an accompanying volume, Ready to Learn: Students’ Engagement, Drive, and Self-Beliefs, exploring topics related to student engagement.

Part three provides secondary analysis of several dimensions of engagement found in the PISA report.  Intrinsic motivation, the internal rewards that encourage students to learn, is an important component of student engagement.  National scores on PISA’s index of intrinsic motivation to learn mathematics are compared to national PISA math scores.  Surprisingly, the relationship is negative.  Countries with highly motivated kids tend to score lower on the math test; conversely, higher-scoring nations tend to have less-motivated kids. 

The same is true for responses to the statements, “I do mathematics because I enjoy it,” and “I look forward to my mathematics lessons.”  Countries with students who say that they enjoy math or look forward to their math lessons tend to score lower on the PISA math test compared to countries where students respond negatively to the statements.  These counterintuitive findings may be influenced by how terms such as “enjoy” and “looking forward” are interpreted in different cultures.  Within-country analyses address that problem.  The correlation coefficients for within-country, student-level associations of achievement and other components of engagement run in the anticipated direction—they are positive.  But they are also modest in size, with correlation coefficients of 0.20 or less. 

Policymakers are interested in questions requiring analysis of aggregated data—at the national level, that means between-country data.  When countries increase their students’ intrinsic motivation to learn math, is there a concomitant increase in PISA math scores?  Data from 2003 to 2012 are examined.  Seventeen countries managed to increase student motivation, but their PISA math scores fell an average of 3.7 scale score points.  Fourteen countries showed no change on the index of intrinsic motivation—and their PISA scores also evidenced little change.  Eight countries witnessed a decline in intrinsic motivation.  Inexplicably, their PISA math scores increased by an average of 10.3 scale score points.  Motivation down, achievement up.

Correlation is not causation.  Moreover, the absence of a positive correlation—or in this case, the presence of a negative correlation—is not refutation of a possible positive relationship.  The lesson here is not that policymakers should adopt the most effective way of stamping out student motivation.  The lesson is that the level of analysis matters when analyzing achievement data.  Policy reports must be read warily—especially those freely offering policy recommendations.  Beware of analyses that exclusively rely on within- or between-country test data without making any attempt to reconcile discrepancies at other levels of analysis.  Those analysts could be cherry-picking the data.  Also, consumers of education research should grant more credence to approaches modeling change over time (as in difference in difference models) than to cross-sectional analyses that only explore statistical relationships at a single point in time. 

  Part I: Girls, Boys, and Reading »

Downloads

Authors

Image Source: Elizabeth Sablich




b

Girls, boys, and reading


Part I of the 2015 Brown Center Report on American Education

Girls score higher than boys on tests of reading ability.  They have for a long time.  This section of the Brown Center Report assesses where the gender gap stands today and examines trends over the past several decades.  The analysis also extends beyond the U.S. and shows that boys’ reading achievement lags that of girls in every country in the world on international assessments.  The international dimension—recognizing that the U.S. is not alone in this phenomenon—serves as a catalyst to discuss why the gender gap exists and whether it extends into adulthood.

Background

One of the earliest large-scale studies on gender differences in reading, conducted in Iowa in 1942, found that girls in both elementary and high schools were better than boys at reading comprehension.[i] The most recent results from reading tests of the National Assessment of Educational Progress (NAEP) show girls outscoring boys at every grade level and age examined.  Gender differences in reading are not confined to the United States.  Among younger children—age nine to ten, or about fourth grade—girls consistently outscore boys on international assessments, from a pioneering study of reading comprehension conducted in fifteen countries in the 1970s, to the results of the Progress in International Reading Literacy Study (PIRLS) conducted in forty-nine nations and nine benchmarking entities in 2011.  The same is true for students in high school.  On the 2012 reading literacy test of the Program for International Student Assessment (PISA), worldwide gender gaps are evident between fifteen-year-old males and females.

As the 21st century dawned, the gender gap came under the scrutiny of reporters and pundits.  Author Christina Hoff Sommers added a political dimension to the gender gap, and some say swept the topic into the culture wars raging at the time, with her 2000 book The War Against Boys: How Misguided Feminism is Harming Our Young Men.[ii] Sommers argued that boys’ academic inferiority, and in particular their struggles with reading, stemmed from the feminist movement’s impact on schools and society.  In the second edition, published in 2013, she changed the subtitle to How Misguided Policies Are Harming Our Young Men.  Some of the sting is removed from the  indictment of “misguided feminism.”  But not all of it.  Sommers singles out for criticism a 2008 report from the American Association of University Women.[iii] That report sought to debunk the notion that boys fared poorly in school compared to girls.  It left out a serious discussion of boys’ inferior performance on reading tests, as well as their lower grade point averages, greater rate of school suspension and expulsion, and lower rate of acceptance into college.

Journalist Richard Whitmire picked up the argument about the gender gap in 2010 with Why Boys Fail: Saving Our Sons from an Educational System That’s Leaving Them Behind.[iv] Whitmire sought to separate boys’ academic problems from the culture wars, noting that the gender gap in literacy is a worldwide phenomenon and appears even in countries where feminist movements are weak to nonexistent.  Whitmire offers several reasons for boys’ low reading scores, including poor reading instruction (particularly a lack of focus on phonics), and too few books appealing to boys’ interests.  He also dismisses several explanations that are in circulation, among them, video games, hip-hop culture, too much testing, and feminized classrooms.  As with Sommers’s book, Whitmire’s culprit can be found in the subtitle: the educational system.  Even if the educational system is not the original source of the problem, Whitmire argues, schools could be doing more to address it. 

In a 2006 monograph, education policy researcher Sara Mead took on the idea that American boys were being shortchanged by schools.  After reviewing achievement data from NAEP and other tests, Mead concluded that the real story of the gender gap wasn’t one of failure at all.  Boys and girls were both making solid academic progress, but in some cases, girls were making larger gains, misleading some commentators into concluding that boys were being left behind.  Mead concluded, “The current boy crisis hype and the debate around it are based more on hopes and fears than on evidence.”[v]

Explanations for the Gender Gap

The analysis below focuses on where the gender gap in reading stands today, not its causes.  Nevertheless, readers should keep in mind the three most prominent explanations for the gap.  They will be used to frame the concluding discussion.

Biological/Developmental:  Even before attending school, young boys evidence more problems in learning how to read than girls.  This explanation believes the sexes are hard-wired differently for literacy.

School Practices: Boys are inferior to girls on several school measures—behavioral, social, and academic—and those discrepancies extend all the way through college.  This explanation believes that even if schools do not create the gap, they certainly don’t do what they could to ameliorate it. 

Cultural Influences: Cultural influences steer boys toward non-literary activities (sports, music) and define literacy as a feminine characteristic.  This explanation believes cultural cues and strong role models could help close the gap by portraying reading as a masculine activity. 

The U.S. Gender Gap in Reading

Table 1-1 displays the most recent data from eight national tests of U.S. achievement.  The first group shows results from the National Assessment of Educational Progress Long Term Trend (NAEP-LTT), given to students nine, 13, and 17 years of age.  The NAEP-LTT in reading was first administered in 1971.  The second group of results is from the NAEP Main Assessment, which began testing reading achievement in 1992.  It assesses at three different grade levels: fourth, eighth, and twelfth.   The last two tests are international assessments in which the U.S. participates, the Progress in International Reading Literacy Study (PIRLS), which began in 2001, and the Program for International Student Assessment (PISA), first given in 2000.  PIRLS tests fourth graders, and PISA tests 15-year-olds.  In the U.S., 71 percent of students who took PISA in the fall of 2012 were in tenth grade. 

Two findings leap out.  First, the test score gaps between males and females are statistically significant on all eight assessments.  Because the sample sizes of the assessments are quite large, statistical significance does not necessarily mean that the gaps are of practical significance—or even noticeable if one observed several students reading together.  The tests also employ different scales.  The final column in the table expresses the gaps in standard deviation units, a measure that allows for comparing the different scores and estimating their practical meaningfulness.
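The conversion into standard-deviation units is simple arithmetic. As a quick sketch, using the PISA 2012 reading figures cited elsewhere in this report (Finnish girls at 556, Finnish boys at 494, on a scale with a standard deviation of 94):

```python
# Minimal sketch of expressing a raw score gap in standard-deviation
# units, as the final column of Table 1-1 does. The inputs are the
# PISA 2012 reading figures quoted in the text: Finnish girls 556,
# Finnish boys 494, scale standard deviation 94.

def gap_in_sd_units(score_a: float, score_b: float, scale_sd: float) -> float:
    """Score gap between two groups as a fraction of the test's SD."""
    return (score_a - score_b) / scale_sd

finland_gap = gap_in_sd_units(556, 494, 94)
print(round(finland_gap, 2))  # 0.66, the figure cited for Finland's gender gap
```

Dividing by the scale's standard deviation is what makes gaps comparable across tests that use different scales.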

The second finding is based on the standardized gaps (expressed in SDs).  On both NAEP tests, the gaps are narrower among elementary students and wider among middle and high school students.  That pattern also appears on international assessments.  The gap is twice as large on PISA as on PIRLS.[vi]  A popular explanation for the gender gap involves the different maturation rates of boys and girls.  That theory will be discussed in greater detail below, but at this point in the analysis, let’s simply note that the gender gap appears to grow until early adolescence—age 13 on the LTT-NAEP and grade eight on the NAEP Main.

Should these gaps be considered small or large?  Many analysts consider 10 scale score points on NAEP equal to about a year of learning.  In that light, gaps of five to 10 points appear substantial.  But compared to other test score gaps on NAEP, the gender gap is modest in size.  On the 2012 LTT-NAEP for nine-year-olds, the five point gap between boys and girls is about one-half of the 10 point gap between students living in cities and those living in suburbs.[vii]  The gap between students who are eligible for free and reduced lunch and those who are not is 28 points; between black and white students, it is 23 points; and between English language learners (ELL) and non-ELL students, it is 34 points. 

Table 1-1 only shows the size of the gender gap as gauged by assessments at single points in time.  For determining trends, let’s take a closer look at the LTT-NAEP, since it provides the longest running record of the gender gap.  In Table 1-2, scores are displayed from tests administered since 1971 and given nearest to the starts and ends of decades.  Results from 2008 and 2012 are both shown to provide readers an idea of recent fluctuations.  At all three ages, gender gaps were larger in 1971 than they are today.  The change at age nine is statistically significant, but not at age 13 (p=0.10) or age 17 (p=.07), although they are close.  Slight shrinkage occurred in the 1980s, but the gaps expanded again in the 1990s.  The gap at age 13 actually peaked at 15 scale score points in 1994 (not shown in the table), and the decline since then is statistically significant.  Similarly, the gap at age 17 peaked in 1996 at 15 scale score points, and the decline since then is also statistically significant.  More recently, the gap at age nine began to shrink again in 1999, age 13 began shrinking in the 2000s, and age 17 in 2012.

Table 1-3 decomposes the change figures by male and female performance.  Sara Mead’s point, that the NAEP story is one of both sexes gaining rather than boys falling behind, is even truer today than when she made it in 2006.  When Mead’s analysis was published, the most recent LTT-NAEP data were from 2004.  Up until then, girls had made greater reading gains than boys.  But that situation has reversed.  Boys have now made larger gains over the history of LTT-NAEP, fueled by the gains that they registered from 2004 to 2012.  The score for 17-year-old females in 2012 (291) was identical to their score in 1971.

International Perspective

The United States is not alone in reading’s gender gap.  Its gap of 31 points is not even the largest (see Figure 1-1). On the 2012 PISA, all OECD countries exhibited a gender gap, with females outscoring males by 23 to 62 points on the PISA scale (standard deviation of 94).   On average in the OECD, girls outscored boys by 38 points (rounded to 515 for girls and 478 for boys).  The U.S. gap of 31 points is less than the OECD average.

Finland had the largest gender gap on the 2012 PISA, twice that of the U.S., with females outscoring males by an astonishing 62 points (0.66 SDs).  Finnish girls scored 556, and boys scored 494.  To put this gap in perspective, consider that Finland’s renowned superiority on PISA tests is completely dependent on Finnish girls.  Finland’s boys’ score of 494 is about the same as the international average of 496, and not much above the OECD average for males (478).  The reading performance of Finnish boys is not statistically significantly different from boys in the U.S. (482) or from the average U.S. student, both boys and girls (498). Finnish superiority in reading only exists among females.

There is a hint of a geographical pattern.  Northern European countries tend to have larger gender gaps in reading.  Finland, Sweden, Iceland, and Norway have four of the six largest gaps.  Denmark is the exception with a 31 point gap, below the OECD average.   And two Asian OECD members have small gender gaps.  Japan’s gap of 24 points and South Korea’s gap of 23 are ranked among the bottom four countries. The Nordic tendency toward large gender gaps in reading was noted in a 2002 analysis of the 2000 PISA results.[viii]  At that time, too, Denmark was the exception.  Because of the larger sample and persistence over time, the Nordic pattern warrants more confidence than the one in the two Asian countries.

Back to Finland.  That’s the headline story here, and it contains a lesson for cautiously interpreting international test scores.  Consider that the 62 point gender gap in Finland is only 14 points smaller than the U.S. black-white gap (76 points) and 21 points larger than the white-Hispanic gap (41 points) on the same test.  Finland’s gender gap illustrates the superficiality of much of the commentary on that country’s PISA performance.  A common procedure in policy analysis is to consider how policies differentially affect diverse social groups.  Think of all the commentators who cite Finland to promote particular policies, whether the policies address teacher recruitment, amount of homework, curriculum standards, the role of play in children’s learning, school accountability, or high stakes assessments.[ix]  Advocates pound the table while arguing that these policies are obviously beneficial.  “Just look at Finland,” they say.  Have you ever read a warning that even if those policies contribute to Finland’s high PISA scores—which the advocates assume but serious policy scholars know to be unproven—the policies also may be having a negative effect on the 50 percent of Finland’s school population that happens to be male?

Would Getting Boys to Enjoy Reading More Help Close the Gap?

One of the solutions put forth for improving boys’ reading scores is to make an effort to boost their enjoyment of reading.  That certainly makes sense, but past scores of national reading and math performance have consistently, and counterintuitively, shown no relationship (or even an inverse one) with enjoyment of the two subjects.  PISA asks students how much they enjoy reading, so let’s now investigate whether fluctuations in PISA scores are at all correlated with how much 15-year-olds say they like to read.

The analysis below employs what is known as a “differences-in-differences” analytical strategy.  In both 2000 and 2009, PISA measured students’ reading ability and asked them several questions about how much they like to read.  An enjoyment index was created from the latter set of questions.[x]  Females score much higher on this index than boys.  Many commentators believe that girls’ greater enjoyment of reading may be at the root of the gender gap in literacy.

When new international test scores are released, analysts are tempted to just look at variables exhibiting strong correlations with achievement (such as amount of time spent on homework), and embrace them as potential causes of high achievement. But cross-sectional correlations can be deceptive.  The direction of causality cannot be determined, whether it’s doing a lot of homework that leads to high achievement, or simply that good students tend to take classes that assign more homework.  Correlations in cross-sectional data are also vulnerable to unobserved factors that may influence achievement.  For example, if cultural predilections drive a country’s exemplary performance, their influence will be masked or spuriously assigned to other variables unless they are specifically modeled.[xi]  Class size, between-school tracking, and time spent on learning are all topics on which differences-in-differences has been fruitfully employed to analyze multiple cross-sections of international data.

Another benefit of differences-in-differences is that it measures statistical relationships longitudinally.  Table 1-4 investigates the question: Is the rise and fall of reading enjoyment correlated with changes in reading achievement?  Many believe that if boys liked reading more, their literacy test scores would surely increase.  Table 1-4 does not support that belief.  Data are available for 27 OECD countries, and they are ranked by how much they boosted males’ enjoyment of reading.  The index is set at the student-level with a mean of 0.00 and standard deviation of 1.00.  For the twenty-seven nations in Table 1-4, the mean national change in enjoyment is -.02 with a standard deviation of .09. 

Germany did the best job of raising boys’ enjoyment of reading, with a gain of 0.12 on the index.  German males’ PISA scores also went up—a little more than 10 points (10.33).  France, on the other hand, raised males’ enjoyment of reading nearly as much as Germany (0.11), but French males’ PISA scores declined by 15.26 points.  A bit further down the column, Ireland managed to get boys to enjoy reading a little more (a gain of 0.05) but their reading performance fell a whopping 36.54 points.  Toward the bottom end of the list, Poland’s boys enjoyed reading less in 2009 than in 2000, a decline of 0.14 on the index, but over the same time span, their reading literacy scores increased by more than 14 points (14.29).  Among the countries in which the relationship goes in the expected direction is Finland.  Finnish males’ enjoyment of reading declined (-0.14) as did their PISA scores in reading literacy (-11.73).  Overall, the correlation coefficient for change in enjoyment and change in reading score is -0.01, indicating no relationship between the two.

Christina Hoff Sommers and Richard Whitmire have praised specific countries for first recognizing and then addressing the gender gap in reading.  Recently, Sommers urged the U.S. to “follow the example of the British, Canadians, and Australians.”[xii]  Whitmire described Australia as “years ahead of the U.S. in pioneering solutions” to the gender gap.  Let’s see how those countries appear in Table 1-4.  England does not have PISA data for the 2000 baseline year, but both Canada and Australia are included.  Canada raised boys’ enjoyment of reading a little bit (0.02) but Canadian males’ scores fell by about 12 points (-11.74).  Australia suffered a decline in boys’ enjoyment of reading (-0.04) and achievement (-16.50).  As promising as these countries’ efforts may have appeared a few years ago, so far at least, they have not borne fruit in raising boys’ reading performance on PISA.

Achievement gaps are tricky because it is possible for the test scores of the two groups being compared to both decline while the gap increases or, conversely, for scores of both to increase while the gap declines.  Table 1-4 only looks at males’ enjoyment of reading and its relationship to achievement.  A separate differences-in-differences analysis was conducted (but not displayed here) to see whether changes in the enjoyment gap—the difference between boys’ and girls’ enjoyment of reading—are related to changes in reading achievement.  They are not (correlation coefficient of 0.08).  National PISA data simply do not support the hypothesis that the superior reading performance of girls is related to the fact that girls enjoy reading more than boys. 

Discussion

Let’s summarize the main findings of the analysis above. Reading scores for girls exceed those for boys on eight recent assessments of U.S. reading achievement.  The gender gap is larger for middle and high school students than for students in elementary school.  The gap was apparent on the earliest NAEP tests in the 1970s and has shown some signs of narrowing in the past decade.  International tests reveal that the gender gap is worldwide.  Among OECD countries, it even appears among countries known for superior performance on PISA’s reading test.  Finland not only exhibited the largest gender gap in reading on the 2012 PISA, the gap had widened since 2000.  A popular recommendation for boosting boys’ reading performance is finding ways for them to enjoy reading more.  That theory is not supported by PISA data.  Countries that succeeded in raising boys’ enjoyment of reading from 2000 to 2009 were no more likely to improve boys’ reading performance than countries where boys’ enjoyment of reading declined. 

The origins of the gender gap are hotly debated.  The universality of the gap certainly supports the argument that it originates in biological or developmental differences between the two sexes.  It is evident among students of different ages in data collected at different points in time.  It exists across the globe, in countries with different educational systems, different popular cultures, different child rearing practices, and different conceptions of gender roles.  Moreover, the greater prevalence of reading impairment among young boys—a ratio of two or three to one—suggests an endemic difficulty that exists before the influence of schools or culture can take hold.[xiii] 

But some of the data examined above also argue against the developmental explanation.  The gap has been shrinking on NAEP.  At age nine, it is less than half of what it was forty years ago.  Biology doesn’t change that fast.  Gender gaps in math and science, which were apparent in achievement data for a long time, have all but disappeared, especially once course taking is controlled.  The reading gap also seems to evaporate by adulthood.  On an international assessment of adults conducted in 2012, reading scores for men and women were statistically indistinguishable up to age 35—even in Finland and the United States.  After age 35, men had statistically significantly higher scores in reading, all the way to the oldest group, age 55 and older.  If the gender gap in literacy is indeed shaped by developmental factors, it may be important for our understanding of the phenomenon to scrutinize periods of the life cycle beyond the age of schooling.   

Another astonishing pattern emerged from the study of adult reading.  Participants were asked how often they read a book.  Of avid book readers (those who said they read a book once a week) in the youngest group (age 24 and younger), 59 percent were women and 41 percent were men.  By age 55, avid book readers were even more likely to be women, by a margin of 63 percent to 37 percent.  Two-thirds of respondents who said they never read books were men.  Women remained the more enthusiastic readers even as the test scores of men caught up with those of women and surpassed them.

A few years ago, Ian McEwan, the celebrated English novelist, decided to reduce the size of the library in his London townhouse.  He and his younger son selected thirty novels and took them to a local park.  They offered the books to passers-by.  Women were eager and grateful to take the books, McEwan reports.  Not a single man accepted.  The author’s conclusion? “When women stop reading, the novel will be dead.”[xiv] 

McEwan might be right, regardless of the origins of the gender gap in reading and the efforts to end it.



[i] J.B. Stroud and E.F. Lindquist, “Sex differences in achievement in the elementary and secondary schools,” Journal of Educational Psychology, vol. 33(9) (Washington, D.C.: American Psychological Association, 1942), 657-667.

[ii] Christina Hoff Sommers, The War Against Boys: How Misguided Feminism Is Harming Our Young Men (New York, NY: Simon & Schuster, 2000).

[iii] Christianne Corbett, Catherine Hill, and Andresse St. Rose, Where the Girls Are: The Facts About Gender Equity in Education (Washington, D.C.: American Association of University Women, 2008).

[iv] Richard Whitmire, Why Boys Fail: Saving Our Sons from an Educational System That’s Leaving Them Behind (New York, NY: AMACOM, 2010).

[v] Sara Mead, The Evidence Suggests Otherwise: The Truth About Boys and Girls (Washington, D.C.: Education Sector, 2006).

[vi] PIRLS and PISA assess different reading skills.  Performance on the two tests may not be comparable.

[vii] NAEP categories were aggregated to calculate the city/suburb difference.

[viii] OECD, Reading for Change: Performance and Engagement Across Countries (Paris: OECD, 2002), 125.

[ix] The best example of promoting Finnish education policies is Pasi Sahlberg’s  Finnish Lessons: What Can the World Learn from Educational Change in Finland? (New York: Teachers College Press, 2011).

[x] The 2009 endpoint was selected because 2012 data for the enjoyment index were not available on the NCES PISA data tool.

[xi] A formal name for the problem of reverse causality is endogeneity and for the problem of unobserved variables, omitted variable bias.

[xii] Christina Hoff Sommers, “The Boys at the Back,” New York Times, February 2, 2013;  Richard Whitmire, Why Boys Fail (New York: AMACOM, 2010), 153.

[xiii] J.L. Hawke, R.K. Olson, E.G. Willcutt, S.J. Wadsworth, & J.C. DeFries, “Gender ratios for reading difficulties,” Dyslexia 15(3), (Chichester, England: Wiley, 2009), 239–242.

[xiv] Daniel Zalewski, “The Background Hum: Ian McEwan’s art of unease,” The New Yorker, February 23, 2009. 


Brookings Live: Girls, boys, and reading


Event Information

March 26, 2015
2:00 PM - 2:30 PM EDT

Online Only
Live Webcast

And more from the Brown Center Report on American Education



Girls outscore boys on practically every reading test given to a large population. And they have for a long time. A 1942 Iowa study found girls performing better than boys on tests of reading comprehension, vocabulary, and basic language skills, and girls have outscored boys on every reading test ever given by the National Assessment of Educational Progress (NAEP). This gap is not confined to the U.S. Reading tests administered as part of the Progress in International Reading Literacy Study (PIRLS) and the Program for International Student Assessment (PISA) reveal that the gender gap is a worldwide phenomenon.

On March 26, join Brown Center experts Tom Loveless and Matthew Chingos as they discuss the latest Brown Center Report on American Education, which examines this phenomenon. Hear what Loveless's analysis revealed about where the gender gap stands today and how it has trended over the past several decades, both in the U.S. and around the world.

Tune in below or via Spreecast where you can submit questions. 


Common Core and classroom instruction: The good, the bad, and the ugly


This post continues a series begun in 2014 on implementing the Common Core State Standards (CCSS).  The first installment introduced an analytical scheme investigating CCSS implementation along four dimensions:  curriculum, instruction, assessment, and accountability.  Three posts focused on curriculum.  This post turns to instruction.  Although the impact of CCSS on how teachers teach is discussed, the post is also concerned with the inverse relationship: how decisions that teachers make about instruction shape the implementation of CCSS.

A couple of points before we get started.  The previous posts on curriculum led readers from the upper levels of the educational system—federal and state policies—down to curricular decisions made “in the trenches”—in districts, schools, and classrooms.  Standards emanate from the top of the system and are produced by politicians, policymakers, and experts.  Curricular decisions are shared across education’s systemic levels.  Instruction, on the other hand, is dominated by practitioners.  The daily decisions that teachers make about how to teach under CCSS—and not the idealizations of instruction embraced by upper-level authorities—will ultimately determine what “CCSS instruction” really means.

I ended the last post on CCSS by describing how curriculum and instruction can be so closely intertwined that the boundary between them is blurred.  Sometimes stating a precise curricular objective dictates, or at least constrains, the range of instructional strategies that teachers may consider.  That post focused on English-Language Arts.  The current post focuses on mathematics in the elementary grades and describes examples of how CCSS will shape math instruction.  As a former elementary school teacher, I offer my own personal opinion on these effects.

The Good

Certain aspects of the Common Core, when implemented, are likely to have a positive impact on the instruction of mathematics. For example, Common Core stresses that students recognize fractions as numbers on a number line.  The emphasis begins in third grade:

CCSS.MATH.CONTENT.3.NF.A.2
Understand a fraction as a number on the number line; represent fractions on a number line diagram.

CCSS.MATH.CONTENT.3.NF.A.2.A
Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line.

CCSS.MATH.CONTENT.3.NF.A.2.B
Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line.


When I first read this section of the Common Core standards, I stood up and cheered.  Berkeley mathematician Hung-Hsi Wu has been working with teachers for years to get them to understand the importance of using number lines in teaching fractions.[1] American textbooks rely heavily on part-whole representations to introduce fractions.  Typically, students see pizzas and apples and other objects—usually other foods or money—that are divided up into equal parts.  Such models are limited.  They work okay with simple addition and subtraction.  Common denominators present a bit of a challenge, but ½ pizza can also be shown to be 2/4, a half dollar equal to two quarters, and so on. 

With multiplication and division, all the little tricks students learned with whole number arithmetic suddenly go haywire.  Students are accustomed to the fact that multiplying two whole numbers yields a product that is larger than either number being multiplied: 4 X 5 = 20 and 20 is larger than both 4 and 5.[2]  How in the world can ¼ X 1/5 = 1/20, a number much smaller than either ¼ or 1/5?  The part-whole representation has convinced many students that fractions are not numbers.  Instead, they are seen as strange expressions comprising two numbers with a small horizontal bar separating them. 
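The point that fractions are numbers, with arithmetic that defies whole-number intuition, can be checked directly with Python's `fractions` module (a minimal illustration, not part of the original text):

```python
from fractions import Fraction

a = Fraction(1, 4)
b = Fraction(1, 5)

# Multiplying two fractions yields a product SMALLER than either factor,
# unlike whole-number multiplication:
product = a * b
print(product)                    # 1/20
print(product < a, product < b)   # True True

# And fractions sit between whole numbers on the number line,
# just like the "two and a half" in the classroom exchange below:
print(Fraction(2) < Fraction(5, 2) < Fraction(3))  # True
```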

I taught sixth grade but occasionally visited my colleagues’ classes in the lower grades.  I recall one exchange with second or third graders that went something like this:

“Give me a number between seven and nine.”  Giggles. 

“Eight!” they shouted. 

“Give me a number between two and three.”  Giggles.

“There isn’t one!” they shouted. 

“Really?” I’d ask and draw a number line.  After spending some time placing whole numbers on the number line, I’d observe,  “There’s a lot of space between two and three.  Is it just empty?” 

Silence.  Puzzled little faces.  Then a quiet voice.  “Two and a half?”

You have no idea how many children do not make the transition to understanding fractions as numbers and, because of stumbling at this crucial stage, spend the rest of their careers as students of mathematics convinced that fractions are an impenetrable mystery.  And that’s true not only of students.  California adopted a test for teachers in the 1980s, the California Basic Educational Skills Test (CBEST).  Beginning in 1982, even teachers already in the classroom had to pass it.  I made a nice after-school and summer income tutoring colleagues who didn’t know fractions from Fermat’s Last Theorem.  To be fair, primary teachers, teaching kindergarten or grades 1-2, would not teach fractions as part of their math curriculum and probably hadn’t worked with a fraction in decades.  So they are no different from non-literary types who think Hamlet is just a play about a young guy who can’t make up his mind, has a weird relationship with his mother, and winds up dying at the end.

Division is the most difficult operation to grasp for those arrested at the part-whole stage of understanding fractions.  A problem that Liping Ma posed to teachers is now legendary.[3]

She asked small groups of American and Chinese elementary teachers to divide 1 ¾ by ½ and to create a word problem that illustrates the calculation.  All 72 Chinese teachers gave the correct answer and 65 developed an appropriate word problem.  Only nine of the 23 American teachers solved the problem correctly.  A single American teacher was able to devise an appropriate word problem.  Granted, the American sample was not selected to be representative of American teachers as a whole, but the stark findings of the exercise did not shock anyone who has worked closely with elementary teachers in the U.S.  They are often weak at math.  Many of the teachers in Ma’s study had vague ideas of an “invert and multiply” rule but lacked a conceptual understanding of why it worked.

A linguistic convention exacerbates the difficulty.  Students may cling to the mistaken notion that “dividing in half” means “dividing by one-half.”  It does not.  Dividing in half means dividing by two.  The number line can help clear up such confusion.  Consider a basic, whole-number division problem for which third graders will already know the answer:  8 divided by 2 equals 4.   It is evident that a segment 8 units in length (measured from 0 to 8) is divided by a segment 2 units in length (measured from 0 to 2) exactly 4 times.  Modeling 12 divided by 2 and other basic facts with 2 as a divisor will convince students that whole number division works quite well on a number line. 

Now consider the number ½ as a divisor.  It will become clear to students that 8 divided by ½ equals 16, and they can illustrate that fact on a number line by showing how a segment ½ units in length divides a segment 8 units in length exactly 16 times; it divides a segment 12 units in length 24 times; and so on.  Students will be relieved to discover that on a number line division with fractions works the same as division with whole numbers.

Now, let’s return to Liping Ma’s problem: 1 ¾ divided by ½.  This problem would not be presented in third grade, but it might be in fifth or sixth grade.  Students who have been working with fractions on a number line for two or three years will have little trouble solving it.  They will see that the problem simply asks them to divide a line segment of 1 ¾ units by a segment of ½ units.  The answer is 3 ½.  Some students might estimate that the solution is between 3 and 4 because 1 ¾ lies between 1 ½ and 2, which on the number line are the points at which the ½ unit segment, laid end on end, falls exactly three and four times.  Other students will have learned about reciprocals and that multiplication and division are inverse operations.  They will immediately grasp that dividing by ½ is the same as multiplying by 2—and since 1 ¾ x 2 = 3 ½, that is the answer.  Creating a word problem involving string or rope or some other linearly measured object is also surely within their grasp.
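The measurement model of division described above (counting how many divisor-length segments fit end to end along the dividend on the number line) can be sketched in Python with the `fractions` module; the helper name `measure` is mine:

```python
from fractions import Fraction

def measure(dividend: Fraction, divisor: Fraction) -> Fraction:
    """How many copies of `divisor` fit end to end along `dividend`
    on the number line: the measurement model of division."""
    count = 0
    remaining = dividend
    while remaining >= divisor:
        remaining -= divisor   # lay down one divisor-length segment
        count += 1
    # any leftover piece is a fraction of one divisor-length segment
    return count + remaining / divisor

print(measure(Fraction(8), Fraction(1, 2)))     # 16
print(measure(Fraction(7, 4), Fraction(1, 2)))  # 7/2, i.e. 3 1/2
```

The loop mirrors what students do on the number line: lay the ½-unit segment end on end three whole times along 1 ¾, then see that the leftover ¼ is half of another segment.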

Conclusion

I applaud the CCSS for introducing number lines and fractions in third grade.  I believe it will instill in children an important idea: fractions are numbers.  That foundational understanding will aid them as they work with more abstract representations of fractions in later grades.   Fractions are a monumental barrier for kids who struggle with math, so the significance of this contribution should not be underestimated.

I mentioned above that instruction and curriculum are often intertwined.  I began this series of posts by defining curriculum as the “stuff” of learning—the content of what is taught in school, especially as embodied in the materials used in instruction.  Instruction refers to the “how” of teaching—how teachers organize, present, and explain those materials.  It’s each teacher’s repertoire of instructional strategies and techniques that differentiates one teacher from another even as they teach the same content.  Choosing to use a number line to teach fractions is obviously an instructional decision, but it also involves curriculum.  The number line is mathematical content, not just a teaching tool.

Guiding third grade teachers towards using a number line does not guarantee effective instruction.  In fact, it is reasonable to expect variation in how teachers will implement the CCSS standards listed above.  A small body of research exists to guide practice. One of the best resources for teachers to consult is a practice guide published by the What Works Clearinghouse: Developing Effective Fractions Instruction for Kindergarten Through Eighth Grade (see full disclosure below).[4]  The guide recommends the use of number lines as its second recommendation, but it also states that the evidence supporting the effectiveness of number lines in teaching fractions is inferred from studies involving whole numbers and decimals.  We need much more research on how and when number lines should be used in teaching fractions.

Professor Wu states the following, “The shift of emphasis from models of a fraction in the initial stage to an almost exclusive model of a fraction as a point on the number line can be done gradually and gracefully beginning somewhere in grade four. This shift is implicit in the Common Core Standards.”[5]  I agree, but the shift is also subtle.  CCSS standards include the use of other representations—fraction strips, fraction bars, rectangles (which are excellent for showing multiplication of two fractions) and other graphical means of modeling fractions.  Some teachers will manage the shift to number lines adroitly—and others will not.  As a consequence, the quality of implementation will vary from classroom to classroom based on the instructional decisions that teachers make.  

The current post has focused on what I believe to be a positive aspect of CCSS based on the implementation of the standards through instruction.  Future posts in the series—covering the “bad” and the “ugly”—will describe aspects of instruction on which I am less optimistic.



[1] See H. Wu (2014). “Teaching Fractions According to the Common Core Standards,” https://math.berkeley.edu/~wu/CCSS-Fractions_1.pdf. Also see "What's Sophisticated about Elementary Mathematics?" http://www.aft.org/sites/default/files/periodicals/wu_0.pdf

[2] Students learn that 0 and 1 are exceptions and have their own special rules in multiplication.

[3] Liping Ma, Knowing and Teaching Elementary Mathematics.

[4] The practice guide can be found at: http://ies.ed.gov/ncee/wwc/pdf/practice_guides/fractions_pg_093010.pdf.  I serve as a content expert in elementary mathematics for the What Works Clearinghouse.  I had nothing to do, however, with the publication cited.

[5] Wu, page 3.


Implementing Common Core: The problem of instructional time


This is part two of my analysis of instruction and Common Core’s implementation.  I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.”  Having discussed “the “good” in part one, I now turn to “the bad.”  One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time.

A Model of Time and Learning

In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record.  Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn.     

The numerator, time spent learning, has also been given the term opportunity to learn.  The denominator, time needed to learn, is synonymous with student aptitude.  By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] 
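Carroll's ratio can be written as a one-line function. This is an illustrative sketch only: the clamp at 1.0 and the function names are my assumptions, not Carroll's own notation:

```python
def degree_of_learning(time_spent: float, time_needed: float) -> float:
    """Carroll's model: achievement as a function of time spent on
    learning over time needed to learn.  The clamp at 1.0 reflects
    that learning cannot exceed mastery of the content.
    (Names and the clamp are illustrative, not Carroll's formalism.)"""
    if time_needed <= 0:
        raise ValueError("time_needed must be positive")
    return min(1.0, time_spent / time_needed)

# Two students receive the same 10 hours of instruction
# but need different amounts of time (aptitude):
print(degree_of_learning(10, 8))   # 1.0, full learning, with time to spare
print(degree_of_learning(10, 25))  # 0.4, only 40% of the needed time
```

The second case is the one most applications of the model worry about: the numerator falls short of the denominator and the student lacks an adequate opportunity to learn. The discussion below concerns the first case, where time spent exceeds time needed and the surplus is wasted.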

The source of that variation is largely irrelevant to the constraints placed on instructional decisions.  Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator.  Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like.  There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time.

The model has had a profound influence on educational thought.  As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article.  Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll.  It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum.  This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning.  Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today.  Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time.     

Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator.  When that happens, students don’t have an adequate opportunity to learn.  They need more time. 

As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time.  Time is a limited resource that teachers deploy in the production of learning.  Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary.  Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics.  Excessive instructional time may also negatively affect student engagement.  Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need.

Standard Algorithms and Alternative Strategies

Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M.  A standard algorithm is a series of steps designed to compute accurately and quickly.  In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers.  Most readers of this post will recognize the standard algorithm for addition.  It involves lining up two or more multi-digit numbers according to place value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed.
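The column-by-column procedure just described can be modeled in a few lines of Python. This is an illustrative sketch of the paper-and-pencil algorithm (the function name is mine, and real code would of course just use `+`):

```python
def add_standard(a: int, b: int) -> int:
    """The standard addition algorithm: line up the addends by place
    value and add columns right to left, carrying as needed."""
    # reverse the digit lists so index 0 is the ones column
    x = [int(d) for d in str(a)][::-1]
    y = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(x), len(y))):
        column = (x[i] if i < len(x) else 0) + (y[i] if i < len(y) else 0) + carry
        result.append(column % 10)  # digit written below the column
        carry = column // 10        # digit carried to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(add_standard(19, 6))     # 25
print(add_standard(857, 469))  # 1326
```

Note how the same loop handles one-digit and multi-digit addends alike, which is the efficiency claim made for standard algorithms later in this post: a procedure mastered on small numbers extends unchanged to large ones.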

The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards.  Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade.  This opens the door for a lot of wasted time.  Garelick questioned the wisdom of teaching several alternative strategies for addition.  He asked whether, under the Common Core, only the standard algorithm could be taught—or at least, could it be taught first. As he explains:

Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 

 

Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations.  He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers:

In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions.

 

Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades.  Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade.

 

It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards:

 

3.NBT.2  Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction.

 

The term “strategies and algorithms” is curious.  Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” 

 

I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that  despite these being standards, the CCSS-M will allow them great latitude.  Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle.

 

Why All the Fuss about Standard Algorithms?

It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. 

 

Standard algorithms were a key point of contention in the “Math Wars” of the 1990s.   The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii]  The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade.  These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists.  In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 

 

Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade.  That’s what reformers get in the compromise.  They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic.  But the standard algorithm is the only one students are required to learn.  And that exclusivity is intended to please the traditionalists.

 

I agree with Garelick that the compromise leads to problems.  In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home.  The students were being taught how to represent addition with drawings that clustered objects into groups of ten.  The exercises were both time consuming and tedious.  When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning.  The parents withdrew their child from the school and enrolled him in private school.

 

The value of standard algorithms is that they are efficient and packed with mathematics.  Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers.  Traditionalists and reformers have different goals.  Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems.  Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm.

 

I have been a critic of the math reform movement since I taught in the 1980s.  But some of their complaints have merit.  All too often, instruction on standard algorithms has left out meaning.  As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps that were to be memorized.  Michael Battista’s research has provided many instances of students clinging to algorithms without understanding.  He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet.  On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15.  In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25.  Despite the obvious discrepancy (the student agrees that 25 is not 15), he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii] 
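Battista's example can be made concrete in code. The following is an illustrative sketch (not from the source) contrasting the standard column-by-column addition algorithm with the carry-dropping procedure the student actually followed; both function names are invented for illustration:

```python
def add_standard(a: int, b: int) -> int:
    """Standard algorithm: add column by column, carrying into the next place."""
    result, place, carry = 0, 1, 0
    while a or b or carry:
        carry, digit = divmod(a % 10 + b % 10 + carry, 10)
        result += digit * place
        place *= 10
        a //= 10
        b //= 10
    return result

def add_forgetting_carry(a: int, b: int) -> int:
    """The student's buggy procedure: column sums are kept, carries are dropped."""
    result, place = 0, 1
    while a or b:
        digit = (a % 10 + b % 10) % 10   # carry silently discarded
        result += digit * place
        place *= 10
        a //= 10
        b //= 10
    return result

print(add_standard(19, 6))          # 25
print(add_forgetting_carry(19, 6))  # 15, the worksheet error
```

The broken version reproduces the worksheet answer exactly because the student is executing a procedure faithfully; the procedure itself is wrong, which is Battista's point about procedure without understanding.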

 

Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that?  Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions.  Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding.

 

Conclusion

Let’s return to Carroll’s model of time and learning.  I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences.  Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade.  Thus, the study of whole number arithmetic is completed by the end of fourth grade.  Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics.  Proficiency is sought along three dimensions:  1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years.  Placing the standard for the division algorithm in sixth grade continues the two-year delay.  For many fourth graders, time spent working on addition and subtraction will be wasted time.  They already have a firm understanding of addition and subtraction.  The same is true for many sixth graders: time devoted to the division algorithm will be wasted time that should instead go to the study of rational numbers.  The numerator in Carroll’s instructional time model will be greater than the denominator, indicating the inefficient allocation of time to instruction.
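The claim about Carroll's model reduces to a simple ratio of time allocated to time needed. A minimal illustrative sketch (the function name and the hour figures are hypothetical, chosen only to show the shape of the argument):

```python
def carroll_ratio(time_allocated: float, time_needed: float) -> float:
    """Carroll's time model: a ratio above 1.0 means instruction allocates
    more time to a topic than the student needs to master it."""
    return time_allocated / time_needed

# A hypothetical fourth grader who already commands the addition algorithm:
ratio = carroll_ratio(time_allocated=40, time_needed=5)  # hours, illustrative
print(ratio)  # 8.0 -- far above 1.0; most of the allocated time is wasted
```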

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures.  Such decisions are made by local educators.  Variation in these decisions will introduce variation in the implementation of the math standards.  It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction.  But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.


[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.  

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001).  “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).

Authors

     
 
 




b

2016 Brown Center Report on American Education: How Well Are American Students Learning?


      
 
 




b

Brookings Live: Reading and math in the Common Core era


Event Information

March 28, 2016
4:00 PM - 4:30 PM EDT

Online Only
Live Webcast

And more from the Brown Center Report on American Education


The Common Core State Standards have been adopted as the reading and math standards in more than forty states, but are the frontline implementers—teachers and principals—enacting them? As part of the 2016 Brown Center Report on American Education, Tom Loveless examines the degree to which CCSS recommendations have penetrated schools and classrooms. He specifically looks at the impact the standards have had on the emphasis of non-fiction vs. fiction texts in reading, and on enrollment in advanced courses in mathematics.

On March 28, the Brown Center hosted an online discussion of Loveless's findings, moderated by the Urban Institute's Matthew Chingos.  In addition to the Common Core, Loveless and Chingos also discussed the other sections of the three-part Brown Center Report, including a study of the relationship between ability group tracking in eighth grade and AP performance in high school.

Watch the archived video below.


      
 
 




b

Three cheers for logrolling: The demise of the Sustainable Growth Rate (SGR)


Editor's note: This post originally appeared in the New England Journal of Medicine's Perspective online series on April 22, 2015.

Congress has finally euthanized the sustainable growth rate formula (SGR). Enacted in 1997 and intended to hold down growth of Medicare spending on physician services, the formula initially worked more or less as intended. Then it began to call for progressively larger and more unrealistic fee cuts — nearly 30% in some years, 21% in 2015. Aware that such cuts would be devastating, Congress repeatedly postponed them, and most observers understood that such cuts would never be implemented. Still, many physicians fretted that the unthinkable might happen.

Now Congress has scrapped the SGR, replacing it with still-embryonic but promising incentives that could catalyze increased efficiency and greater cost control than the old, flawed formula could ever really have done, in a law that includes many other important provisions. How did such a radical change occur?  And why now?

The “how” was logrolling — the trading of votes by legislators in order to pass legislation of interest to each of them. Logrolling has become a dirty word, a much-reviled political practice. But the Medicare Access and CHIP (Children’s Health Insurance Program) Reauthorization Act (MACRA), negotiated by House leaders John Boehner (R-OH) and Nancy Pelosi (D-CA) and their staffs, is a reminder that old-time political horse trading has much to be said for it.

The answer to “why now?” can be found in the technicalities of budget scoring. Under the SGR, Medicare’s physician fees were tied through a complex formula to a target based on caseloads, practice costs, and the gross domestic product. When current spending on physician services exceeded the targets, the formula called for fee cuts to be applied prospectively. Fee cuts that were not implemented were carried forward and added to any future cuts the formula might generate. Because Congress repeatedly deferred cuts, a backlog developed. By 2012, this backlog combined with assumed rapid future growth in Medicare spending caused the Congressional Budget Office (CBO) to estimate the 10-year cost of repealing the SGR at a stunning $316 billion.
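The carry-forward mechanism described above can be sketched numerically. This is an illustrative Python sketch, not the statutory SGR formula; the 2% annual formula cut and the 12-year horizon are assumptions chosen only to show how repeated deferrals compound into a cut on the order of the 21% figure cited in the text:

```python
def accumulate_backlog(annual_formula_cut_pct: float, years: int) -> float:
    """Cumulative fee cut implied after deferring the formula's cut each year.

    Each year the formula calls for a cut; Congress defers it, and the
    deferred amount is carried forward and added to the next year's cut.
    """
    fee_level = 1.0
    for _ in range(years):
        fee_level *= 1 - annual_formula_cut_pct  # the cut the formula demands
    return (1 - fee_level) * 100  # total backlog, in percent

# Assumed 2% yearly formula cut, deferred for 12 years:
print(round(accumulate_backlog(0.02, 12), 1))  # 21.5 -- percent cut owed
```

The point of the sketch is that no single year's cut is alarming; it is the compounding of deferrals that produced the politically impossible 21% cut and, with assumed spending growth, the $316 billion repeal score.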

For many years, Congress looked the costs of repealing the SGR squarely in the eye — and blinked. The cost of a 1-year delay, as estimated by the CBO, was a tiny fraction of the cost of repeal. So Congress delayed — which is hardly surprising.

But then, something genuinely surprising did happen. The growth of overall health care spending slowed, causing the CBO to slash its estimates of the long-term cost of repealing the SGR. By 2015, the 10-year price of repeal had fallen to $136 billion. Even this number was a figment of budget accounting, since the chance that the fee cuts would ever have been imposed was minuscule. But the smaller number made possible the all-too-rare bipartisan collaboration that produced the legislation that President Barack Obama has just signed.

The core of the law is repeal of the SGR and abandonment of the 21% cut in Medicare physician fees it called for this year. In its place is a new method of paying physicians under Medicare. Some elements are specified in law; some are to be introduced later. The hard-wired elements include annual physician fee updates of 0.5% per year through 2019 and 0% from 2020 through 2025, along with a “merit-based incentive payment system” (MIPS) that will replace current incentive programs that terminate in 2018. The new program will assess performance in four categories: quality of care, resource use, meaningful use of electronic health records, and clinical practice improvement activities. Bonuses and penalties, ranging from +12% to –4% in 2020, and increasing to +27% to –9% for 2022 and later, will be triggered by performance scores in these four areas. The exact content of the MIPS will be specified in rules that the secretary of health and human services is to develop after consultation with physicians and other health care providers.

Higher fees will be available to professionals who work in “alternative payment organizations” that typically will move away from fee-for-service payment, cover multiple services, show that they can limit the growth of spending, and use performance-based methods of compensation. These and other provisions will ramp up pressure on physicians and other providers to move from traditional individual or small-group fee-for-service practices into risk-based multi-specialty settings that are subject to management and oversight more intense than that to which most practitioners are yet accustomed.

Both parties wanted to bury the SGR. But MACRA contains other provisions, unrelated to the SGR, that appeal to discrete segments of each party. Democrats had been seeking a 4-year extension of CHIP, which serves 8 million children and pregnant women. They were running into stiff head winds from conservatives who wanted to scale back the program. MACRA extends CHIP with no cuts but does so for only 2 years.  It also includes a number of other provisions sought by Democrats: a 2-year extension of the Maternal, Infant, and Early Childhood Home Visiting program, plus permanent extensions of the Qualified Individual program, which pays Part B Medicare premiums for people with incomes just over the federal poverty thresholds, and transitional medical assistance, which preserves Medicaid eligibility for up to 1 year after a beneficiary gets a job.

The law also facilitates access to health benefits. MACRA extends for two years states’ authority to enroll applicants for health benefits on the basis of data on income, household size, and other factors gathered when people enroll in other programs such as the Supplemental Nutrition Assistance Program, the National School Lunch Program, Temporary Assistance to Needy Families (“welfare”), or Head Start. It also provides $7.2 billion over the next two years to support community health centers, extending funding established in the Affordable Care Act.

Elements of each party, concerned about budget deficits, wanted provisions to pay for the increased spending. They got some of what they wanted, but not enough to prevent some conservative Republicans in both the Senate and the House from opposing final passage. Many conservatives have long sought to increase the proportion of Medicare Part B costs that are covered by premiums. Most Medicare beneficiaries pay Part B premiums covering 25% of the program’s actuarial value. Relatively high-income beneficiaries pay premiums that cover 35, 50, 65, or 80% of that value, depending on their income. Starting in 2018, MACRA will raise the 50% and 65% premiums to 65% and 80%, respectively, affecting about 2% of Medicare beneficiaries. No single person with an income (in 2015 dollars) below $133,501 or couple with income below $267,001 would be affected initially. MACRA freezes these thresholds through 2019, after which they are indexed for inflation. Under previous law, the thresholds were to have been greatly increased in 2019, reducing the number of high-income Medicare beneficiaries to whom these higher premiums would have applied. (For reference, half of all Medicare beneficiaries currently have incomes below $26,000 a year.)

A second provision bars Medigap plans from covering the Part B deductible, which is now $147. By exposing more people to deductibles, this provision will cause some reduction in Part B spending. Everyone who buys such plans will see reduced premiums; some will face increased out-of-pocket costs. The financial effects either way will be small.

Inflexible adherence to principle contributes to the political gridlock that has plunged rates of public approval of Congress to subfreezing lows. MACRA is a reminder of the virtues of compromise and quiet negotiation. A small group of congressional leaders and their staffs crafted a law that gives something to most members of both parties. Today’s appalling norm of poisonously polarized politics makes this instance of political horse trading seem nothing short of miraculous.

Authors

Publication: NEJM
     
 
 




b

Eurozone desperately needs a fiscal transfer mechanism to soften the effects of competitiveness imbalances


The eurozone has three problems: national debt obligations that cannot be met, medium-term imbalances in trade competitiveness, and long-term structural flaws.

The short-run problem requires more of the monetary easing that Germany has, with appalling shortsightedness, been resisting, and less of the near-term fiscal restraint that Germany has, with equally appalling shortsightedness, been seeking. To insist that Greece meet all of its near-term debt service obligations makes about as much sense as did French and British insistence that Germany honor its reparations obligations after World War I. The latter could not be and were not honored. The former cannot and will not be honored either.

The medium-term problem is that, given a single currency, labor costs are too high in Greece and too low in Germany and some other northern European countries. Because adjustments in currency values cannot correct these imbalances, differences in growth of wages must do the job—either wage deflation and continued depression in Greece and other peripheral countries, wage inflation in Germany, or both. The former is a recipe for intense and sustained misery. The latter, however politically improbable it may now seem, is the better alternative.

The long-term problem is that the eurozone lacks the fiscal transfer mechanisms necessary to soften the effects of competitiveness imbalances while other forms of adjustment take effect. This lack places extraordinary demands on the willingness of individual nations to undertake internal policies to reduce such imbalances. Until such fiscal transfer mechanisms are created, crises such as the current one are bound to recur.

Present circumstances call for a combination of short-term expansionary policies that have to be led or accepted by the surplus nations, notably Germany, who will also have to recognize and accept that not all Greek debts will be paid or that debt service payments will not be made on time and at originally negotiated interest rates. The price for those concessions will be a current and credible commitment eventually to restore and maintain fiscal balance by the peripheral countries, notably Greece.


Authors

Publication: The International Economy
Image Source: © Vincent Kessler / Reuters
     
 
 




b

King v. Burwell: Chalk one up for common sense


The Supreme Court today decided that Congress meant what it said when it enacted the Affordable Care Act (ACA). The ACA requires people in all 50 states to carry health insurance and provides tax credits to help them afford it. To have offered such credits only in the dozen states that set up their own exchanges would have been cruel and unsustainable because premiums for many people would have been unaffordable.

But the law said that such credits could be paid in exchanges ‘established by a state,’ which led some to claim that the credits could not be paid to people enrolled by the federally operated exchange. In his opinion, Chief Justice Roberts euphemistically calls that wording ‘inartful.’ Six Supreme Court justices decided that, read in its entirety, the law provides tax credits in every state, whether the state manages the exchange itself or lets the federal government do it for them.

That decision is unsurprising. More surprising is that the Court agreed to hear the case. When it did so, cases on the same issue were making their way through four federal circuits. In only one of the four circuits was there a standing decision, and it found that tax credits were available everywhere. It is customary for the Supreme Court to wait to take a case until action in lower courts is complete or two circuits have disagreed. In this situation, the justices, eyeing the electoral calendar, may have preferred to hear the case sooner rather than later to avoid confronting it in the middle of a presidential election.

Whatever the Court’s motives for taking the case, its willingness to hear it caused supporters of the Affordable Care Act enormous unease. Were the more conservative members of the Court poised to accept an interpretation of the law that ACA supporters found ridiculous but that inartful legislative drafting gave the gloss of plausibility? Judicial demeanor at oral argument was not comforting. A 5-4 decision disallowing payment of tax credits seemed ominously plausible.

Future Challenges for the ACA

The Court’s 6-3 decision ended those fears. The existential threat to health reform from litigation is over. But efforts to undo the Affordable Care Act are not at an end. They will continue in the political sphere. And that is where they should be. ACA opponents know that there is little chance for them to roll back the Affordable Care Act in any fundamental way as long as a Democrat is in the White House. To dismantle the law, they must win the presidency in 2016.

But winning the presidency will not be enough. It would be mid-2017 before ACA opponents could draft and enact legislation to curb the Affordable Care Act and months more before it could take effect. To borrow a metaphor from the military, even if those opposed to the ACA win the presidency, they will have to deal with ‘facts on the ground.’

Well over 30 million Americans will be receiving health insurance under the Affordable Care Act. That will include people who can afford health insurance because of the tax credits the Supreme Court affirmed today. It will include millions more insured through Medicaid in the steadily growing number of states that have agreed to extend Medicaid coverage. It will include the young adult children covered under parental plans because the ACA requires this option.

Insurance companies will have millions more customers because of the ACA. Hospitals will fill more beds because previously uninsured people will be able to afford care and will have fewer unpaid bills generated by people who were uninsured but the hospitals had to admit under previous law. Drug companies and device manufacturers will be enjoying increased sales because of the ACA.

The elderly will have better drug coverage because the ACA has eliminated the notorious ‘donut hole’—the drug expenditures that Medicare previously did not cover.

Those facts will discourage any frontal assault on the ACA, particularly if the rate of increase of health spending remains as well controlled as it has been for the past seven years.

Of course, differences between supporters and opponents of the ACA will not vanish. But those differences will not preclude constructive legislation. Beginning in 2017, the ACA gives states an opening to propose, alone or in groups, alternative ways of achieving the goals of the Affordable Care Act. The law authorizes the president to approve such waivers if they serve the goals of the law. The United States is large and diverse. Use of this authority may help defuse the bitter acrimony surrounding Obamacare, as my colleague, Stuart Butler, has suggested. At the same time, Obamacare supporters have their own list of changes that they believe would improve the law. At the top of the list is fixing the ‘family glitch,’ a drafting error that unintentionally deprives many families of access to the insurance exchanges and to tax credits that would make insurance affordable.

As Chief Justice Roberts wrote near the end of his opinion of the Court, “In a democracy, the power to make the law rests with those chosen by the people....Congress passed the Affordable Care Act to improve health insurance markets, not to destroy them.” The Supreme Court decision assuring that tax credits are available in all states spares the nation chaos and turmoil. It returns the debate about health care policy to the political arena where it belongs. In so doing, it brings a bit closer the time when the two parties may find it in their interest to sit down and deal with the twin realities of the Affordable Care Act: it is imperfect legislation that needs fixing, and it is decidedly here to stay.

Authors

Image Source: © Jim Tanner / Reuters
     
 
 




b

The myth behind America’s deficit


Medicare Hospital Insurance and Social Security would not add to deficits because they can’t spend money they don’t have.

The dog days of August have given way to something much worse. Congress returned to session this week, and the rest of the year promises to be nightmarish. The House and Senate passed budget resolutions earlier this year calling for nearly $5 trillion in spending cuts by 2025. More than two-thirds of those cuts would come from programs that help people with low- and moderate-incomes. Health care spending would be halved. If Congress passes such cuts, the president will likely veto them. At best, another partisan budget war will ensue, after which the veto is sustained. At worst, the cuts become law.

The putative justification for these cuts is that the nation faces insupportable increases in public debt because of expanding budget deficits. Even if the projections were valid, it would be prudent to enact some tax increases in order to preserve needed public spending. But the projections of explosively growing debt are not valid. They are fantasy.

Wait! you say. The Congressional Budget Office has been telling us for years about the prospect of rising deficit and exploding debt. They repeated those warnings just two months ago. Private organizations of both the left and right agree with the CBO’s projections, in general if not in detail. How can any sane person deny that the nation faces a serious long-term budget deficit problem?

The answer is simple: The CBO and private organizations use a convention in preparing their projections that is at odds with established policy and law. If, instead, projections are based on actual current law, as they claim to be, the specter of an increasing debt burden vanishes. What is that convention? Why is it wrong? Why did CBO adopt it, and why have others kept it?

CBO’s budget projections cover the next 75 years. Its baseline projections claim to be based on current law and policy. (CBO also presents an ‘alternative scenario’ based on assumed changes in law and policy). Within that period, Social Security (OASDI) and Medicare Hospital Insurance (HI) expenditures are certain to exceed revenues earmarked to pay for them. Both are financed through trust funds. Both funds have sizeable reserves — government securities — that can be used to cover shortfalls for a while. But when those reserves are exhausted, expenditures cannot exceed current revenues. Trust fund financing means that neither Social Security nor Medicare Hospital Insurance can run deficits. Nor can they add to the public debt.

Nonetheless, CBO and other organizations assume that Social Security and Medicare Hospital Insurance can and will spend money they don’t have and that current law bars them from spending.

One of the reasons why trust fund financing was used, first for Social Security and then for Medicare Hospital Insurance, was to create a framework that disciplined Congress to earmark sufficient revenues to pay for any benefits it might award. Successive presidents and Congresses, both Republican and Democratic, have repeatedly acted to prevent either program’s cumulative spending from exceeding cumulative revenues. In 1983, for example, faced with an impending trust fund shortfall, Congress cut benefits and raised taxes enough to turn prospective cash flow trust fund deficits into cash flow surpluses. And President Reagan signed the bill. In so doing, they have reaffirmed the discipline imposed by trust fund financing.

Trust fund accounting explains why people now are worrying about the adequacy of funding for Social Security and Medicare. They recognize that the trust funds will be depleted in a couple of decades. They understand that between now and then Congress must either raise earmarked taxes or fashion benefit cuts. If it doesn’t raise taxes, benefits will be cut across the board. Either way, the deficits that CBO and other organizations have built into their budget projections will not materialize.

The implications for projected debt of CBO’s inclusion in its projections of deficits that current law and established policy do not allow are enormous, as the graph below shows.

If one excludes deficits in Social Security and Medicare Hospital Insurance that cannot occur under current law and established policy, the ratio of national debt to gross domestic product will fall, not rise, as CBO budget projections indicate. In other words, the claim that drastic cuts in government spending are necessary to avoid calamitous budget deficits is bogus.
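The accounting distinction drawn above can be made concrete with a toy projection. This sketch is illustrative only: the revenue, outlay, and reserve figures are invented, not actual Medicare or Social Security data, and the function name is hypothetical. It contrasts the conventional assumption (full scheduled benefits are paid on borrowed money) with current law (once reserves are exhausted, outlays are capped at earmarked revenue, so nothing is borrowed):

```python
def project(revenue, scheduled_outlays, reserves, cap_at_current_law=True):
    """Return total borrowing implied over the projection period."""
    borrowing = 0.0
    for rev, out in zip(revenue, scheduled_outlays):
        gap = out - rev
        if gap <= 0:
            reserves += -gap          # surplus years build reserves
            continue
        draw = min(gap, reserves)     # deficits are covered from reserves first
        reserves -= draw
        shortfall = gap - draw
        if not cap_at_current_law:
            borrowing += shortfall    # conventional: pay full benefits on debt
        # current law: benefits fall to match revenue; nothing is borrowed
    return borrowing

rev = [100] * 10   # earmarked revenue each year (illustrative units)
out = [110] * 10   # scheduled benefits each year
print(project(rev, out, reserves=30, cap_at_current_law=False))  # 70.0
print(project(rev, out, reserves=30, cap_at_current_law=True))   # 0.0
```

Under the conventional assumption the fund borrows in every year after its reserves run out; under the trust fund rules the same revenue and benefit schedules add nothing to debt, which is the article's point about the projected debt ratio.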

It might seem puzzling that CBO, an agency known for its professionalism and scrupulous avoidance of political bias, would adopt a convention so at odds with law and policy. The answer is straightforward—Congress makes them do it. Section 257 of the Balanced Budget and Emergency Deficit Control Act of 1985 requires CBO to assume that the trust funds can spend money although legislation governing trust fund operations bars such expenditures. CBO is obeying the law.

No similar explanation exonerates the statement of the Committee for a Responsible Federal Budget, which on August 25, 2015 cited, with approval, the conclusion that ‘debt continues to grow unsustainably,’ or that of the Bipartisan Policy Center, which wrote on the same day that ‘America’s debt continues to grow on an unsustainable path.’ Both statements are wrong.

To be sure, the dire budget future anticipated in the CBO projections could materialize. Large deficits could result from an economic calamity or war. Congress could abandon the principle that Social Security and Medicare Hospital Insurance should be financed within trust funds. It could enact other fiscally rash policies. But such deficits do not flow from current law or reflect the trust fund discipline endorsed by both parties over the last 80 years. And it is current law and policy that are supposed to underlie budget projections. Slashing spending because a thirty-year-old law requires CBO to assume that Congress will do something it has shown no sign of doing—overturn decades of bipartisan prudence requiring that the major social insurance programs spend only money specifically earmarked for them, and not a penny more—would impose enormous hardship on vulnerable populations in the name of a fiscal fantasy.



Editor's Note: This post originally appeared in Fortune Magazine.

Authors

Publication: Fortune Magazine
Image Source: © Jonathan Ernst / Reuters
     
 
 




b

Why fewer jobless Americans are counting on disability


As government funding for disability insurance is expected to run out next year, Congress should re-evaluate the costs of the program.

Nine million people in America today are receiving Social Security Disability Insurance, double the number in 1995 and six times the number in 1970. With statistics like that, it’s hardly surprising to see some in Congress worry that more people will enroll in the program and that costs will continue to rise, especially since government funding for disability insurance is expected to run out by the end of next year. If Congress does nothing, benefits will fall by 19% immediately following next year’s presidential election. So, Congress will likely do something. But what exactly should it do?

Disability insurance funding has nearly run out before. Each time, Congress has simply increased the share of the Social Security payroll tax that goes for disability insurance. This time, however, many members of Congress oppose such a shift unless it is linked to changes that curb eligibility and promote return to work. They fear that the rolls will keep growing and costs will keep rising, but a report by a government panel concludes that disability insurance rolls have stopped rising and will likely shrink. The report, authored by a panel of the Social Security Advisory Board, finds that many of the factors that caused disability insurance rolls to rise, particularly during the Great Recession, have ended.

  • Baby-boomers, who added to the rolls as they reached the disability-prone middle age years, are aging out of disability benefits and into retirement benefits. 

  • The decades-long flood of women into the labor force increased the pool of people with the work histories needed to be eligible for disability insurance. But women’s labor force participation has fallen a bit from pre-Great Recession peaks and is not expected to rise materially again. 

  • The Great Recession, which led many who lost jobs and couldn’t find work to apply for disability insurance, is over and applications are down. A recession as large as that of 2008 is improbable any time soon. 

  • Approval rates by administrative law judges, who for many years were suspected of being too ready to approve applications, have been falling. Whatever the cause, this stringency augurs a fall in the disability insurance rolls.

Nonetheless, the Disability Insurance program is not without serious flaws. At the front end, employers, who might help workers with emerging impairments remain on the job by providing therapy or training, have little incentive to do either. Employers often save money if workers leave and apply for benefits. Creating a financial incentive to encourage employers to help workers stay active is something both liberals and conservatives can and should embrace. Unfortunately, figuring out exactly how to do that remains elusive.

At the next stage, applicants who are initially denied benefits confront intolerable delays. They must wait an average of nearly two years to have their cases finally decided and many wait far longer. For the nearly 1 million people now in this situation, the effects can be devastating. As long as their application is pending, applicants risk immediate rejection if they engage in ‘substantial gainful activity,’ which is defined as earning more than $1,090 in any month. This virtual bar on work brings a heightened risk of utter destitution. Work skills erode and the chance of ever reentering the workforce all but vanishes. Speeding eligibility determination is vital but just how to do so is also enormously controversial.

For workers judged eligible for benefits, numerous provisions intended to encourage work are not working. People have advanced ideas on how to help workers regain marketplace skills and to make it worthwhile for them to return to work. But evidence that they will work is scant.

The problems are clear enough. As noted, solutions are not. Analysts have proposed a large number of changes to the program. Two task forces, one organized by the Bipartisan Policy Center and one by the Committee for a Responsible Federal Budget, have produced lengthy menus of possible modifications to the current program. Many have theoretical appeal. None has been sufficiently tested to allow evidence-based predictions of how they would work in practice.

So, with the need to do something to sustain benefits and to do it fast, Congress confronts a program with many problems for which a wide range of untested solutions have been proposed. Studies and pilots of some of these ideas are essential and should accompany the transfer of payroll tax revenues necessary to prevent a sudden and unjustified cut in benefits for millions of impaired people who currently have little chance of returning to work. Implementing such a research program now will enable Congress to improve a program that is vital, but that is acknowledged to have serious problems.

And the good news, delivered by a group of analysts, is that rapid growth of enrollments will not break the bank before such studies can be carried out.



Editor's Note: This post originally appeared on Fortune Magazine.


Publication: Fortune Magazine
Image Source: © Randall Hill / Reuters
     
 
 





Can taxing the rich reduce inequality? You bet it can!


Two recently posted papers by Brookings colleagues purport to show that “even a large increase in the top marginal rate would barely reduce inequality.”[1]  This conclusion, based on one commonly used measure of inequality, is an incomplete and misleading answer to the question posed: would a stand-alone increase in the top income tax bracket materially reduce inequality?  More importantly, it is the wrong question to pose, as a stand-alone increase in the top bracket rate would be bad tax policy that would exacerbate tax avoidance incentives.  Sensible tax policy would package that change with at least one other tax modification, and such a package would have an even more striking effect on income inequality.  In brief:

    • A stand-alone increase in the top tax bracket would be bad tax policy, but it would meaningfully increase the degree to which the tax system reduces economic inequality.  It would have this effect even though it would fall on just one-half of 1 percent of all taxpayers and barely half of their income.
    • Tax policy significantly reduces inequality.  But transfer payments and other spending reduce it far more.  In combination, taxes and public spending materially offset the inequality generated by market income.
    • The revenue from a well-crafted increase in taxes on upper-income Americans, dedicated to a prudent expansion of public spending, would go far to counter the powerful forces that have made income inequality more extreme in the United States than in any other major developed economy.

[1] The quotation is from Peter R. Orszag, “Education and Taxes Can’t Reduce Inequality,” Bloomberg View, September 28, 2015 (at http://bv.ms/1KPJXtx). The two papers are William G. Gale, Melissa S. Kearney, and Peter R. Orszag, “Would a significant increase in the top income tax rate substantially alter income inequality?” September 28, 2015 (at http://brook.gs/1KK40IX) and “Raising the top tax rate would not do much to reduce overall income inequality–additional observations,” October 12, 2015 (at http://brook.gs/1WfXR2G). 


Image Source: © Jonathan Ernst / Reuters
     
 
 





Is the ACA in trouble?


Editor's Note: This post originally appeared in InsideSources. The author wishes to thank Kevin Lucia for helpful comments and suggestions.

United Health Care’s surprise announcement that it is considering whether to stop selling health insurance through the Affordable Care Act’s health exchanges in 2017, and that it is pulling exchange marketing and broker commissions in 2016, has health policy analysts scratching their heads. The move is particularly puzzling because just a month ago United announced, bullishly, that it planned to expand into 11 additional individual markets, taking its total to 34.

United’s stated reason is that this business is unprofitable. That may be true, but it is odd that the largest health insurer in the nation would vacate a growing market without putting up a fight. Is United’s announcement seriously bad news for Obamacare, as many commentators have asserted? Is United seeking concessions in another area and using this announcement as a bargaining chip? Or, is something else going on? The answer, I believe, is that the announcement, while a bit of all of these things, is less significant than many suppose.

To make sense of United’s actions, one has to understand certain peculiarities of United’s business model and some little-understood aspects of the Affordable Care Act.

  • Most of United’s business consists of group sales of insurance through employers who offer plans to their employees as a fringe benefit. United has chosen not to sell insurance aggressively to individuals in most places and, where it does, not to offer the lowest-premium plans. In some states, it does not sell to individuals at all.
  • In 49 states, insurers may sell plans either through the ACA health exchange or directly to customers outside the exchanges. The exceptions are Vermont and the District of Columbia in which individuals buying insurance must go through their exchanges. Thus, insurers may find that “good” risks—those with below-average use of health care—disproportionately buy directly, while the “poor” risks buy through the exchanges.
  • State regulators must review insurance premiums to assure that they are reasonable and set other rules that insurers must follow. This process typically involves some negotiation. With varying skill and intensity, state insurance commissioners try to hold down prices. If they are too lax, buyers may be overcharged. If they are too aggressive, insurers may simply withdraw from the market, causing politically-unpopular inconvenience. These negotiations go on separately in 50 states and the District of Columbia each and every year.
  • Finally, fewer people are now expected to buy insurance through the health exchanges than analysts projected a couple of years ago. ACA subsidies are modest for people with moderate incomes, and the penalties for not carrying insurance have been small. Some people with modest incomes face high deductibles, high out-of-pocket costs, narrow networks of providers, or some mix of the three. As a result, some people who expected not to need much health care have chosen to ‘go bare’ and pay the modest penalties for not carrying insurance.

What seems to have happened—one can’t be sure, as the United announcement is Delphic—is that the company, which mostly delayed its participation in the individual exchanges until 2015, incurred substantial start-up costs, enrolled few customers, found that those customers were sicker than anticipated, and experienced more-than-anticipated attrition. Other insurers, including the nationwide Blue Cross/Blue Shield plans, which hold a dominant position in the individual markets of many states, did well enough that Joseph Swedish, CEO of Anthem, Inc., one of the largest of the ‘Blues,’ announced that his company is firmly committed to the exchanges. But minor players in the individual market, such as United, may have concluded that the costs of developing that market are too high for the expected pay-off.

In evaluating these diverse factors, one needs to recognize that the ACA, in general, and the health exchanges, in particular, have changed insurance markets in fundamental ways. Millions of people who were previously uninsured are now trying to understand the bewildering complexities of health insurance. Insurance companies have a lot to learn, too. The ACA now bars insurance companies from ‘underwriting’—the practice of varying premiums based on the characteristics of individual customers, something at which they were quite expert. Under the ACA, insurance companies must sell insurance to all comers, however sick they may be, and must charge premiums that can vary only based on age. Now, companies must ‘manage’ risk, which is easier for a company with a large market share of the individual market, as the Blues have in most states, than it is for a company like United with only a small share.

What this means is that United’s announcement is regrettable news for those states from which they may decide to withdraw, as its departure would reduce competition. United might also use the threat of departure to negotiate favorable terms with states and the Administration. And it means that federal regulators need to write regulations to discourage individual customers from practices that unfairly saddle insurers with risks, such as buying insurance outside open-enrollment periods designed for exceptional circumstances and then dropping coverage a few months later. But it would be a mistake to treat United’s announcement, presumably made for good and sufficient business reasons, as a portentous omen of an ACA crisis.


Publication: InsideSources
     
 
 





The impossible (pipe) dream—single-payer health reform


Led by presidential candidate Bernie Sanders, one-time supporters of ‘single-payer’ health reform are rekindling their romance with a health reform idea that was, is, and will remain a dream. Single-payer health reform is a dream because, as the old joke goes, ‘you can’t get there from here.’

Let’s be clear: opposing a proposal only because one believes it cannot be passed is usually a dodge. One should judge the merits. Strong leaders prove their skill by persuading people to embrace their visions. But single-payer is different. It is radical in a way that no legislation has ever been in the United States.

Not so, you may be thinking. Remember such transformative laws as the Social Security Act, Medicare, the Homestead Act, and the Interstate Highway Act. And, yes, remember the Affordable Care Act. Those and many other inspired legislative acts seemed revolutionary enough at the time. But none really was. None overturned entrenched and valued contractual and legislative arrangements. None reshuffled trillions—or in less inflated days, billions—of dollars devoted to the same general purpose as the new legislation. All either extended services previously available to only a few, or created wholly new arrangements.

To understand the difference between those past achievements and the idea of replacing current health insurance arrangements with a single-payer system, compare the Affordable Care Act with Sanders’ single-payer proposal.

Criticized by some for alleged radicalism, the ACA is actually stunningly incremental. Most of the ACA’s expanded coverage comes through extension of Medicaid, an existing public program that serves more than 60 million people. The rest comes through purchase of private insurance in “exchanges,” which embody the conservative ideal of a market that promotes competition among private vendors, or through regulations that extended the ability of adult offspring to remain covered under parental plans. The ACA minimally altered insurance coverage for the 170 million people covered through employment-based health insurance. The ACA added a few small benefits to Medicare but left it otherwise untouched. It left unaltered the tax breaks that support group insurance coverage for most working-age Americans and their families. It also left alone the military health programs serving 14 million people. Private nonprofit and for-profit hospitals, other vendors, and privately employed professionals continue to deliver most care.

In contrast, Senator Sanders’ plan, like the earlier proposal sponsored by Representative John Conyers (D-Michigan) which Sanders co-sponsored, would scrap all of those arrangements. Instead, people would simply go to the medical care provider of their choice and bills would be paid from a national trust fund. That sounds simple and attractive, but it raises vexatious questions.

  • How much would it cost the federal government? Where would the money to cover the costs come from?
  • What would happen to the $700 billion that employers now spend on health insurance?
  • Where would the $600 billion a year in reductions in total health spending that Sanders says his plan would generate come from?
  • What would happen to special facilities for veterans and families of members of the armed services?

Sanders has answers for some of these questions, but not for others. Both the answers and non-answers show why single payer is unlike past major social legislation.

The answer to the question of how much single payer would cost the federal government is simple: $4.1 trillion a year, or $1.4 trillion more than the federal government now spends on programs that the Sanders plan would replace. The money would come from new taxes. Half the added revenue would come from doubling the payroll tax that employers now pay for Social Security. This tax approximates what employers now collectively spend on health insurance for their employees...if they provide health insurance. But many don’t. Some employers would face large tax increases. Others would reap windfall gains.

The cost question is particularly knotty, as Sanders assumes a 20 percent cut in spending averaged over ten years, even as roughly 30 million currently uninsured people would gain coverage. Those savings, even if actually realized, would start slowly, which means cuts of 30 percent or more by Year 10. Where would they come from? Savings from reduced red tape associated with individual insurance would cover a small fraction of this target. The major source would have to be fewer services or reduced prices. Who would determine which of the services physicians regard as desirable -- and patients have come to expect -- are no longer ‘needed’? How would such cuts be achieved without the massive bankruptcies among hospitals that, as columnist Ezra Klein has suggested, would follow them? What would be the reaction to the prospect of drastic cuts in the salaries of health care personnel -- would we have a shortage of doctors and nurses? Would patients tolerate a reduction in services? If people thought that services under the Sanders plan were inadequate, would they be allowed to ‘top up’ with private insurance? If so, what happens to simplicity? If not, why not?

Let me be clear: we know that high quality health care can be delivered at much lower cost than is the U.S. norm. We know because other countries do it. In fact, some of them have plans not unlike the one Senator Sanders is proposing. We know that single-payer mechanisms work in some countries. But those systems evolved over decades, based on gradual and incremental change from what existed before. That is the way that public policy is made in democracies. Radical change may occur after a catastrophic economic collapse or a major war. But in normal times, democracies do not tolerate radical discontinuity. If you doubt me, consider the tumult precipitated by the really quite conservative Affordable Care Act.


Editor's note: This piece originally appeared in Newsweek.


Publication: Newsweek
Image Source: © Jim Young / Reuters
      
 
 





How to fix the backlog of disability claims


The American people deserve to have a federal government that is both responsive and effective. That simply isn’t the case for more than 1 million people who are awaiting the adjudication of their applications for disability benefits from the Social Security Administration.

Washington can and must do better. This gridlock harms applicants either by depriving them of much-needed support or by effectively barring them from work while their cases are resolved, because having any significant earnings would immediately render them ineligible. This is unacceptable.

Within the next month, the Government Accountability Office, the nonpartisan congressional watchdog, will launch a study on the issue. More policymakers should follow GAO’s lead. A solution to this problem is long overdue. Here’s how the government can do it.

Congress does not need to look far for an example of how to reduce the SSA backlog. In 2013, the Veterans Administration cut its 600,000-case backlog by 84 percent and reduced waiting times by nearly two-thirds, all within two years. It’s an impressive result.

Why have federal officials dealt aggressively and effectively with that backlog, but not the one at SSA? One obvious answer is that the American people and their representatives recognize a debt to those who served in the armed forces. Allowing veterans to languish while a sluggish bureaucracy dithers is unconscionable. Public and congressional outrage helped light a fire under the bureaucracy. Administrators improved services the old-fashioned way — more staff time. VA employees had to work at least 20 hours overtime per month.

Things are a bit more complicated at SSA, unfortunately. Roughly three quarters of applicants for disability benefits have their cases decided within about nine months and, if denied, decide not to appeal. But those whose applications are denied are legally entitled to ask for a hearing before an administrative law judge — and that is where the real bottleneck begins.

There are too few ALJs to hear the cases. Even in the best of times, maintaining an adequate cadre of ALJs is difficult because normal attrition means that SSA has to hire at least 100 ALJs a year to stay even. When unemployment increases, however, so does the number of applications for disability benefits. After exhausting unemployment benefits, people who believe they are impaired often turn to the disability programs. So, when the Great Recession hit, SSA knew it had to hire many more ALJs. It tried to do so, but SSA cannot act without the help of the Office of Personnel Management, which must provide lists of qualified candidates before agencies can hire them. SSA employs 85 percent of all ALJs and for several years has paid OPM approximately $2 million annually to administer the requisite tests and interviews to establish a register of qualified candidates. Nonetheless, OPM has persistently refused to employ legally trained people to vet ALJ candidates or to update registers. And when SSA sought to ramp up ALJ hiring to cope with the recession challenge, OPM was slow to respond.

In 2009, for example, OPM promised to supply a new register containing names of ALJ candidates. Five years passed before it actually delivered the new list of names. For a time, the number of ALJs deciding cases actually fell. The situation got so bad that the president’s January 2015 budget created a work group headed by the Office of Management and Budget and the Administrative Conference of the United States to try to break the logjam. OPM promised a list for 2015, but insisted it could not change procedures. Not trusting OPM to mend its ways, Congress in October 2015 enacted legislation that explicitly required OPM to administer a new round of tests within the succeeding six months.

These stopgap measures are inadequate to the challenge. Both applicants and taxpayers deserve prompt adjudication of the merits of claims. The million-person backlog and the two-year average waits are bad enough. Many applicants wait far longer. Meanwhile, they are strongly discouraged from working, as anything more than minimal earnings will cause their applications automatically to be denied. Throughout this waiting period, applicants have no means of self-support. Any skills applicants retain atrophy.

The shortage of ALJs is not the only problem. The quality and consistency of adjudication by some ALJs has been called into question. For example, differences in approval rates are so large that differences among applicants cannot plausibly explain them. Some ALJs have processed so many cases that they could not possibly have applied proper standards. In recognition of both problems, SSA has increased oversight and beefed up training. The numbers have improved. But large and troubling variations in workloads and approval rates persist.

For now, political polarization blocks agreement on whether and how to modify eligibility rules and improve incentives to encourage work by those able to work. But there is bipartisan agreement that dragging out the application process benefits no one. While completely eliminating hearing delays is impossible, adequate administrative funding and more, better trained hearing officers would help reduce them. Even if OPM’s past record were better than it is, OPM is now a beleaguered agency, struggling to cope with the fallout from a security breach that jeopardizes the security of the nation and the privacy of millions of current and past federal employees and federal contractors. Mending this breach and establishing new procedures will — and should — be OPM’s top priority.

That’s why, for the sake of everyone concerned, responsibility for screening candidates for administrative law judge positions should be moved, at least temporarily, to another agency, such as the Administrative Conference of the United States. Shortening the period that applicants for disability benefits now spend waiting for a final answer is an achievable goal that can and should be addressed. Our nation’s disabled and its taxpayers deserve better.


Editor's note: This piece originally appeared in Politico.


Publication: Politico
      
 
 





Recent Social Security blogs—some corrections


Recently, Brookings has posted two articles commenting on proposals to raise the full retirement age for Social Security retirement benefits from 67 to 70. One revealed a fundamental misunderstanding of how the program actually works and what the effects of the policy change would be. The other proposes changes that would subvert the fundamental purpose of Social Security in the name of ‘reforming’ it.

A number of Republican presidential candidates and others have proposed raising the full retirement age. In a recent blog, Robert Shapiro, a Democrat, opposed this move, a position I applaud. But he did so based on alleged effects the proposal would in fact not have, and misunderstanding about how the program actually works. In another blog, Stuart Butler, a conservative, noted correctly that increasing the full benefit age would ‘bolster the system’s finances,’ but misunderstood this proposal’s effects. He proposed instead to end Social Security as a universal pension based on past earnings and to replace it with income-related welfare for the elderly and disabled (which he calls insurance).

Let’s start with the misunderstandings common to both authors and to many others. Each writes as if raising the ‘full retirement age’ from 67 to 70 would fall more heavily on those with comparatively low incomes and short life expectancies. In fact, raising the ‘full retirement age’ would cut Social Security Old-Age Insurance benefits by the same proportion for rich and poor alike, and for people whose life expectancies are long or short. To see why, one needs to understand how Social Security works and what ‘raising the full retirement age’ means.

People may claim Social Security retirement benefits starting at age 62. If they wait, they get larger benefits—about 6-8 percent more for each year they delay claiming, up to age 70. Those who wait until age 70 to claim qualify for benefits about 77 percent higher than those with the same earnings history who claim at age 62. The increments approximately compensate the average person for waiting, so that the lifetime value of benefits is independent of the age at which they are claimed. Mechanically, the computation pivots on the benefit payable at the ‘full retirement age,’ now 66 but set to increase to 67 under current law. Raising the full retirement age still more, from 67 to 70, would mean that people age 70 would get the same benefit payable under current law at age 67. That is a benefit cut of 24 percent. Because the annual percentage adjustment for waiting to claim would be unchanged, people who claim benefits at any age, down to age 62, would also receive benefits reduced by 24 percent.

In plain English, ‘raising the full benefit age from 67 to 70’ is simply a 24 percent across-the-board cut in benefits for all new claimants, whatever their incomes and whatever their life expectancies.
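The arithmetic behind this uniformity can be sketched in a few lines of Python. This is a stylized model, not SSA’s actual benefit formula: it assumes a single compound 8 percent-per-year actuarial adjustment, a simplification of the statutory schedule, so the cut it produces (about 21 percent) is a bit smaller than the 24 percent figure above. But it shows the structural point: moving the pivot age scales the benefit at every claiming age by the same proportion.

```python
# Stylized model of raising the full retirement age (FRA).
# Assumption (not SSA's statutory schedule): a uniform compound 8%
# per-year actuarial adjustment for claiming before or after the FRA.

ADJUSTMENT = 0.08  # assumed per-year adjustment for claiming later

def monthly_benefit(claim_age, full_retirement_age, full_benefit=1000.0):
    """Benefit at a given claiming age, scaled up or down by the
    compound adjustment relative to the full retirement age."""
    years_from_fra = claim_age - full_retirement_age
    return full_benefit * (1 + ADJUSTMENT) ** years_from_fra

for claim_age in (62, 65, 67, 70):
    under_current_law = monthly_benefit(claim_age, full_retirement_age=67)
    under_proposal = monthly_benefit(claim_age, full_retirement_age=70)
    cut = 1 - under_proposal / under_current_law
    print(f"claim at {claim_age}: {cut:.1%} cut")  # same cut at every age
```

Because the ratio of the two benefits is (1.08)^-3 regardless of claiming age, the proportional cut is identical for someone claiming at 62 and someone claiming at 70, which is why the change functions as an across-the-board cut rather than one targeted at early or late claimers.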

Thus, Robert Shapiro mistakenly writes that boosting the full-benefit age would ‘effectively nullify Social Security for millions of Americans’ with comparatively low life expectancies. It wouldn’t. Anyone who wanted to claim benefits at age 62 still could. Their benefits would be reduced. But so would benefits of people who retire at older ages.

Equally mistaken is Stuart Butler’s comment that increasing the full-benefit age from 67 to 70 would ‘cut total lifetime retirement benefits proportionately more for those on the bottom rungs of the income ladder.’ It wouldn’t. The cut would be proportionately the same for everyone, regardless of past earnings or life expectancy.

Both Shapiro and Butler, along with many others including my other colleagues Barry Bosworth and Gary Burtless, have noted correctly that life expectancies of high earners have risen considerably, while those of low earners have risen little or not at all. As a result, the lifetime value of Social Security Old-Age Insurance benefits has grown more for high- than for low-earners. That development has been at least partly offset by trends in Social Security Disability Insurance, which goes disproportionately to those with comparatively low earnings and life expectancies and which has been growing far faster than Old-Age Insurance, the largest component of Social Security.

But even if the lifetime value of all Social Security benefits has risen faster for high earners than for low earners, an across-the-board cut in benefits does nothing to offset that trend. In the name of lowering overall Social Security spending, it would cut benefits by the same proportion for those whose life expectancies have not risen at all, merely because the life expectancy of others has risen. Such ‘evenhandedness’ calls to mind Anatole France’s comment that French law ‘in its majestic equality, ...forbids rich and poor alike to sleep under bridges, beg in streets, or steal loaves of bread.’

Faulty analyses, such as those of Shapiro and Butler, cannot conceal a genuine challenge to policy makers. Social Security does face a projected, long-term funding shortfall. Trends in life expectancies may well have made the system less progressive overall than it was in the past. What should be done?

For starters, one needs to recognize that, for those in successive age cohorts who retire at any given age, rising life expectancy does not lower the need for Social Security retirement benefits; it increases that need, because whatever personal savings they have accumulated must be stretched more thinly to cover more retirement years.

For those who remain healthy, the best response to rising longevity may be to retire later. Later retirement means more time to save and fewer years to depend on savings. Here is where the wrong-headedness of Butler’s proposal, to phase down benefits for those with current incomes of $25,000 or more and eliminate them for those with incomes over $100,000, becomes apparent. The only sources of income for full retirees are personal savings and, to an ever-diminishing degree, employer-financed pensions. Converting Social Security from a program whose benefits are based on past earnings to one whose benefits are based on current income from savings would impose a tax-like penalty on such savings, just as a direct tax on those savings would. Conservatives and liberals alike should understand that taxing something is not the way to encourage it.

Still, working longer by definition lowers retirement income needs. That is why some analysts have proposed raising the age at which retirement benefits may first be claimed from 62 to some later age. But this proposal, like across-the-board benefit cuts, falls alike on those who can work longer without undue hardship and on those in physically demanding jobs they can no longer perform, those whose abilities are reduced, and those who have low life expectancies. This last group includes not only blue-collar workers but also many white-collar employees, as indicated by a recent study by the Center for Retirement Research at Boston College. If entitlement to Social Security retirement benefits is delayed, it is incumbent on policymakers to link that change to other ‘backstop’ policies that protect those for whom continued work poses a serious burden. It is also incumbent on private employers to design ways to make workplaces friendlier to an aging workforce.

The challenge of adjusting Social Security in the face of unevenly distributed increases in longevity, growing income inequality, and the prospective shortfall in Social Security financing is real. The issues are difficult. But solutions are unlikely to emerge from confusion about the way Social Security operates and the actual effects of proposed changes to the program. Nor will they be advanced by proposals that would bring to Social Security the failed Vietnam War strategy of destroying a village in order to save it.


Image Source: © Sam Mircovich / Reuters
      
 
 





Disability insurance: the way forward


Editor’s note: The remarks below were delivered to the Committee for a Responsible Federal Budget on release of their report on the SSDI Solutions Initiative

I want to thank Marc Goldwein for inviting me to join you for today’s event. We all owe thanks to Jim McCrery and Earl Pomeroy for devoting themselves to the SSDI Solutions Initiative, to the staff of CRFB who backed them up, and most of all to the scholars and practitioners who wrote the many papers that comprise this effort. This is the sort of practical, problem-solving enterprise that this town needs more of. So, to all involved in this effort, ‘hats off’ and ‘please, don’t stop now.’

The challenge of improving how public policy helps people with disabilities seemed urgent last year. Depletion of the Social Security Disability Insurance trust fund loomed. Fears of exploding DI benefit rolls were widespread and intense.

Congress has now taken steps that delay projected depletion until 2022. Meticulous work by Jeffrey Liebman suggests that Disability Insurance rolls have peaked and will start falling. The Technical Panel appointed by the Social Security Advisory Board concurred in its 2015 report. With such ‘good’ news, it is all too easy to let attention drift to other seemingly more pressing items.

But trust fund depletion and growing beneficiary rolls are not the most important reasons why policymakers should be focusing on these programs.

The primary reason is that the design and administration of disability programs can be improved with benefit to taxpayers and to people with disabilities alike. And while 2022 seems a long time off, doing the research called for in the SSDI Solutions Initiative will take all of that time and more. So, it is time to get to work, not to relax.

Before going any further, I must make a disclaimer. I was invited to talk here as chair of the Social Security Advisory Board. Everything I am going to say from now on will reflect only my personal views, not those of the other members or staff of the SSAB except where the Board has spoken as a group. The same disclaimer applies to the trustees, officers, and other staff of the Brookings Institution. Blame me, not them.

Let me start with an analogy. We economists like indices. Years ago, the late Arthur Okun came up with an index to measure how much pain the economy was inflicting on people. It was a simple index, just the sum of inflation and the unemployment rate. Okun called it the ‘misery index.’

I suggest a ‘policy misery index’—a measure of the grief that a policy problem causes us. It is the sum of a problem’s importance and difficulty. Never mind that neither ‘importance’ nor ‘difficulty’ is quantifiable. Designing and administering interventions intended to improve the lives of people with disabilities has to be at or near the top of the policy misery index.

Those who have worked on disability know what I mean. Programs for people with disabilities are hugely important and miserably hard to design and administer well. That would be true even if legislators were writing afresh on a blank legislative sheet. That they must cope with a deeply entrenched program about which analysts disagree and on which many people depend makes the problems many times more challenging.

I’m going to run through some of the reasons why designing and administering benefits for people determined to be disabled is so difficult. Some may be obvious, even banal, to the highly informed group here today. And you will doubtless think of reasons I omit.

First, the concept of disability, in the sense of a diminished capacity to work, has no clear meaning, the SSA definition of disability notwithstanding. We can define impairments. Some are so severe that work or, indeed, any other form of self-support seems impossible. But even among those with severe impairments, some people work for pay, and some don’t.

That doesn’t mean that if someone with a given impairment works, everyone with that same impairment could work if they tried hard enough. It means that physical or mental impairments incompletely identify those for whom work is not a reasonable expectation. The possibility of work depends on the availability of jobs, of services to support work effort, and of a host of personal characteristics, including functional capacities, intelligence, and grit.

That is not how the current disability determination process works. It considers the availability of jobs in the national, not the local, economy. It ignores the availability of work supports or accommodations by potential employers.

Whatever eligibility criteria one may establish for benefits, some people who really can’t work, or can’t earn enough to support themselves, will be denied benefits. And some will be awarded benefits who could work.

Good program design helps keep those numbers down. Good administration helps at least as much as, and maybe more than, program design. But there is no way to reduce the number of improper awards and improper denials to zero.

Second, the causes of disability are many and varied. Again, this observation is obvious, almost banal. Genetic inheritance, accidents and injuries, wear and tear from hard physical labor, and normal aging all create different needs for assistance.

These facts mean that people deemed unable to work have different needs. They constitute distinct interest groups, each seeking support, but not necessarily of the same kind. These groups sometimes compete with each other for always-limited resources. And that competition means that the politics of disability benefits are, shall we say, interesting.

Third, the design of programs to help people deemed unable to work is important and difficult. Moral hazard is endemic. Providing needed support and services is an act of compassion and decency. The goal is to provide such support and services while preserving incentives to work and controlling costs borne by taxpayers.

But preserving work incentives is only part of the challenge. The capacity to work is continuous, not binary. Training and a wide and diverse range of services can help people perform activities of daily living and work.

Because resources are scarce, policy makers and administrators have to sort out who should get those services. Should it be those who are neediest? Those who are most likely to recover full capacities? Triage is inescapable. It is technically difficult. And it is always ethically fraught.

Designing disability benefit programs is hard. But administering them well is just as important and at least as difficult.

These statements may also be obvious to those who are here today. But recent legislation and administrative appropriations raise doubts about whether they are obvious to or accepted by some members of Congress.

Let’s start with program design. We can all agree, I think, that incentives matter. If benefits ceased at the first dollar earned, few who come on the rolls would ever try to work.

So, Congress, for many years, has allowed beneficiaries to earn any amount for a brief period and small amounts indefinitely without losing eligibility. Under current law, there is a benefit cliff. If—after a trial work period—beneficiaries earn even $1 more than what is called substantial gainful activity, $1,130 in 2016, their benefit checks stop. They retain eligibility for health coverage for a while even after they leave the rolls. And for an extended period they may regain cash and health benefits without delay if their earnings decline.

Members of Congress have long been interested in whether a more gradual phase-out of benefits as earnings rise might encourage work. Various aspects of the current Disability Insurance program reflect Congress’s desire to encourage work.

The so-called Benefit Offset National Demonstration—or BOND—was designed to test the impact on labor supply by DI beneficiaries of one formula—replacing the “cliff” with a gradual reduction in benefits: $1 of benefit lost for each $2 of earnings above the Substantial Gainful Activity level.

Alas, there were problems with that demonstration. It tested only one offset scenario – one starting point and one rate. So, there could be no way of knowing whether a 2-for-1 offset was the best way to encourage work.

And then there was the uncomfortable fact that, at the time of the last evaluation, out of 79,440 study participants only 21 experienced the offset. So there was no way of telling much of anything, other than that few people had worked enough to experience the offset.

Nor was the cause of non-response obvious. It is not clear how many demonstration participants even understood what was on offer.

Unsurprisingly, members of Congress interested in promoting work among DI recipients asked SSA to revisit the issue. The 2015 DI legislation mandates a new demonstration, christened the Promoting Opportunity Demonstration, or POD. POD uses the same 2-for-1 offset rate that BOND did, but the offset starts at or below earnings of $810 a month in 2016—well below the earnings at which the BOND phase-out began.

Unfortunately, as Kathleen Romig has pointed out in an excellent paper for the Center on Budget and Policy Priorities, this demonstration is unlikely to yield useful results. Only a very few atypical DI beneficiaries are likely to find it in their interest to participate in the demonstration, fewer even than in the BOND. That is because the POD offset begins at lower earnings than the BOND offset did. In addition, participants in POD sacrifice the right under current law that permits people receiving disability benefits to earn any amount for 9 months of working without losing any benefits.

Furthermore, the 2015 law stipulated that no Disability Insurance beneficiary could be required to participate in the demonstration or, having agreed to participate, forced to remain in the demonstration. Thus, few people are likely to respond to the POD or to remain in it.

There is a small group to whom POD will be very attractive—those few DI recipients who retain a lot of earning capacity. The POD will allow them to retain DI coverage until their earnings are quite high. For example, a person receiving a $2,000 monthly benefit—well above the average, to be sure, but well below the maximum—would remain eligible for some benefits until his or her annual earnings exceeded $57,700. I don’t know about you, but I doubt that Congress would favorably consider permanent law of this sort.
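The arithmetic behind that $57,700 figure can be sketched in a few lines. The sketch below is an illustration of the 2-for-1 offset described above, not SSA's actual benefit rules; the function names are mine, and the $810-a-month starting point is the POD threshold cited earlier in these remarks.

```python
def monthly_benefit_after_offset(base_benefit, monthly_earnings, offset_start=810):
    """Reduce the monthly benefit by $1 for every $2 earned above offset_start."""
    excess = max(0, monthly_earnings - offset_start)
    return max(0.0, base_benefit - excess / 2)

def breakeven_annual_earnings(base_benefit, offset_start=810):
    """Annual earnings at which a 2-for-1 offset phases the benefit out entirely.

    The benefit reaches zero when earnings above the threshold equal
    twice the base benefit, so the break-even monthly earnings are
    offset_start + 2 * base_benefit.
    """
    return 12 * (offset_start + 2 * base_benefit)

# A $2,000 monthly benefit phases out at roughly the $57,700 of annual
# earnings mentioned in the text.
print(breakeven_annual_earnings(2000))  # 57720
```

The worked example shows why the POD terms would be most attractive to the highest-earning beneficiaries: the larger the base benefit, the higher the earnings level at which any benefit remains payable.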

Not only would those participating be a thin and quite unrepresentative sample of DI beneficiaries in general, or even of those with some earning capacity, but selection bias resulting from the opportunity to opt out at any time would destroy the external validity of any statistical results.

Let me be clear. My comments on POD, the demonstration mandated in the 2015 legislation, are not meant to denigrate the need for, or the importance of, research on how to encourage work by DI recipients, especially those for whom financial independence is plausible. On the contrary, as I said at the outset, research is desperately needed on this issue, as well as many others. It is not yet too late to authorize a research design with a better chance of producing useful results.

But it will be too late soon. Fielding demonstrations takes time:

  • to solicit bids from contractors,
  • for contractors to formulate bids,
  • for government boards to select the best one,
  • for contractors to enroll participants,
  • for contractors to administer the demonstration,
  • and for analysts to process the data generated by the demonstrations.

That process will take all the time available between now and 2021 or 2022 when the DI trust fund will again demand attention. It will take a good deal more time than that to address the formidable and intriguing research agenda of SSDI Solutions Initiative.

I should like to conclude with plugs for two initiatives to which the Social Security Advisory Board has been giving some attention.

It takes too long for disability insurance applicants to have their cases decided. Perhaps the whole determination process should be redesigned. One of the CFRB papers proposes just that. But until that happens, it is vital to shorten the unconscionable delays separating initial denials and reconsideration from hearings before administrative law judges to which applicants are legally entitled. Procedural reforms in the hearing process might help. More ALJs surely will.

The 2015 budget act requires the Office of Personnel Management to take steps that will help increase the number of ALJs hired. I believe that the new director, Beth Colbert, is committed to reforms. But it is very hard to change legal interpretations that have hampered hiring for years and the sluggish bureaucratic culture that fostered them.

So, the jury is out on whether OPM can deliver. In a recent op-ed in Politico, Lanhee Chen, a Republican member of the SSAB, and I jointly urged Congress to be ready, if OPM fails to deliver on more and better lists of ALJ candidates and streamlined procedures for their appointment, to move the ALJ examination authority to another federal organization, such as the Administrative Conference of the United States.

Lastly, there is a facet of income support policy that we on the SSAB all agree merits much more attention than it has received. Just last month, the SSAB released a paper entitled Representative Payees: A Call to Action. More than eight million beneficiaries have been deemed incapable of managing $77 billion in benefits that the Social Security Administration provided them in 2014.

We believe that serious concern is warranted about all aspects of the representative payee program—how this infringement of personal autonomy is found to be necessary, how payees are selected, and how payee performance is monitored.

Management of representative payees is a particular challenge for the Social Security Administration. Its primary job is to pay cash benefits in the right amount to the right person at the right time. SSA does that job at rock-bottom costs and with remarkable accuracy. It is handling rapidly rising workloads with budgets that have barely risen. SSA is neither designed nor staffed to provide social services. Yet determining the need for, selecting, and monitoring representative payees is a social service function.

As the Baby Boom ages, the number of people needing help in administering cash benefits from the Social Security Administration—and from other agencies such as the Veterans Administration—will grow. So will the number needing help in making informed choices under Medicare and Medicaid.

The SSAB is determined to look into this challenge and to make constructive suggestions. We are just beginning and invite others to join in studying what I have called “the most important problem the public has never heard of.”

Living with disabilities today is markedly different from what it was in 1956 when the Disability Insurance program began. Yet, the DI program has changed little. Beneficiaries and taxpayers pay heavily for the failure of public policy to apply what has been learned over the past six decades about health, disability, function, and work.

I hope that SSA and Congress will use well the time until it next must legislate on Disability Insurance. The DI rolls are stabilizing. The economy has grown steadily since the Great Recession. Congress has reinstated demonstration authority. With adequate funding for research and testing, the SSA can rebuild its research capability. Along with the external research community, it can identify what works and help Congress improve the DI program for beneficiaries and taxpayers alike. The SSDI Solutions Initiative is a fine roadmap.


Publication: Committee for a Responsible Federal Budget
Image Source: © Max Whittaker / Reuters
      
 
 





A tribute to longtime Brookings staff member Kathleen Elliott Yinug

Only days before her retirement at age 71, Kathleen Elliott Yinug succumbed to a recurrence of cancer, which had been in remission for fifteen years. Over a Brookings career spanning four decades, she not only assisted several members of the Brookings community, but also became their valued friend. A woman of intelligence and liberal values, she elicited, demanded, and merited the respect of all with whom she worked.

After college, she joined the Peace Corps and was sent to the island of Yap. There she met her husband to be and there her son, Falan, was born. The family returned to the United States so that her husband could attend law school. Kathleen came to work at Brookings, helping to support her husband's law school training. When he returned to Yap, Kathleen assumed all parental responsibility, and over extended periods she held second jobs to supplement her Brookings income. Her son has grown into a man of character, a devoted husband and father of two daughters. He and his wife, Louise, with compassion and generosity, made their home Kathleen's refuge during her final illness.

Her warmth, openness, and personal integrity made her a natural confidante of senior fellows, staff assistants, and research assistants alike. She demanded and received respect from all. Her judgment on those who did not meet her standards was blunt and final; on one occasion, she 'fired'—that is, flatly refused to work with—one senior staff member whose behavior and values she rightly deplored.

With retirement approaching, Kathleen bought a condominium in Maine, a place she had come to love after numerous visits with her long-time friend, Lois Rice. After additional visits, her affection for Maine residents and the community she had chosen deepened. She spoke with intense yearning for the post-retirement time when she could take up life in her new home. That she was denied that time is a cruel caprice of life and only deepens the sense of loss of those who knew and loved her.


      
 
 





Brookings experts on the implications of COVID-19 for the Middle East and North Africa

The novel coronavirus was first identified in January 2020, having caused people to become ill in Wuhan, China. Since then, it has rapidly spread across the world, causing widespread fear and uncertainty. At the time of writing, close to 500,000 cases and 20,000 deaths had been confirmed globally; these numbers continue to rise at an…

       





The end of Kansas-Missouri’s border war should mark a new chapter for both states’ economies

This week, Governor Kelly of Kansas and Governor Parson of Missouri signed a joint agreement to end the longstanding economic border war between their two states. For years, Kansas and Missouri taxpayers subsidized the shuffling of jobs across the state line that runs down the middle of the Kansas City metro area, with few new…

       





Not just a typographical change: Why Brookings is capitalizing Black

Brookings is adopting a long-overdue policy to properly recognize the identity of Black Americans and other people of ethnic and indigenous descent in our research and writings. This update comes just as the 1619 Project is re-educating Americans about the foundational role that Black laborers played in making American capitalism and prosperity possible. Without Black…

       





Boosting growth across more of America

On Wednesday, January 29, the Brookings Metropolitan Policy Program (Brookings Metro) hosted “Boosting Growth Across More of America: Pushing Back Against the ‘Winner-take-most’ Economy,” an event delving into the research and proposals offered in Robert D. Atkinson, Mark Muro, and Jacob Whiton’s recent report “The case for growth centers: How to spread tech innovation across…

       





Federal fiscal aid to cities and states must be massive and immediate

And why “relief” and “bailout” are two very different things There is a glaring shortfall in the ongoing negotiations between Congress and the White House to design the next emergency relief package to stave off a coronavirus-triggered economic crisis: Relief to close the massive resource gap confronting state and local governments as they tackle safety…

       





Webinar: COVID-19 and the economy

With more than 1,000 deaths, 3 million and counting unemployed, and no definite end in sight, the coronavirus has upended nearly every aspect of American life. In the last two weeks, the Federal Reserve and Congress scrambled to pass policies to mitigate what will be a very deep recession. Americans across the country are asking—…

       





Building resilience in education to the impact of climate change

The catastrophic wind and rain of Hurricane Dorian not only left thousands of people homeless but also children and adolescents without schools. The Bahamas is not alone; as global temperatures rise, climate scientists predict that more rain will fall in storms that will become wetter and more extreme, including hurricanes and cyclones around the world.…