Protecting our most economically vulnerable neighbors during the COVID-19 outbreak
Published March 16, 2020

While we are all adjusting to new precautions as we start to understand how serious the COVID-19 coronavirus is, we also need to be concerned about how to minimize the toll that such precautions will have on our most economically vulnerable citizens. A country with the levels of racial and income inequality that we have…
There are policy solutions that can end the war on childhood, and the discussion should start this campaign season
Published March 18, 2020

President Lyndon B. Johnson introduced his “war on poverty” during his State of the Union speech on Jan. 8, 1964, citing the “national disgrace” that deserved a “national response.” Today, many of the poor children of the Johnson era are poor adults with children and grandchildren of their own. Inequity has widened so that people…
Webinar: The effects of the coronavirus outbreak on marginalized communities
Published March 25, 2020

As the coronavirus outbreak rapidly spreads, existing social and economic inequalities in society have been exposed and exacerbated. State and local governments across the country, on the advice of public health officials, have shuttered businesses of all types and implemented other social distancing recommendations. Such measures assume a certain basic level of affluence, which many…
In the Republican Party establishment, Trump finds tepid support
Published May 8, 2020

For the past three years, the Republican Party leadership has stood by the president through thick and thin. Previously harsh critics and opponents in the race for the Republican nomination, such as Senator Lindsey Graham and Senator Ted Cruz, fell in line, declining to say anything negative about the president even while, at times, taking action…
“Accelerated Regular Order” — Could It Lead the Parties to a Grand Bargain?
Published October 19, 2012

Suzy Khimm reports on a proposal from the Bipartisan Policy Center (BPC) that would establish a framework for reaching a grand bargain on deficit reduction in 2013. In short, the BPC proposes that Congress and the president agree in the lame duck session to a procedural framework for guiding enactment of major spending and tax reforms in 2013. In enacting the framework, Congress and the president would also avert going over the fiscal cliff. In exchange, Congress and the president would make a small down payment on deficit reduction in the lame duck, and would authorize a legislative “backstop” of entitlement cuts and elimination of tax expenditures that would become law if Congress and the president failed to enact tax and spending reforms in 2013.

The procedural elements of the BPC’s proposal bear some attention. The BPC’s not-quite-yet-a-catchphrase is “accelerated regular order.” Although it sounds like a nasty procedural disease, it’s akin to the fast-track procedures established in the Congressional Budget Act and in several other statutes. In short, the framework proposed by the BPC would instruct the relevant standing committees in 2013 to suggest to the chamber budget committees entitlement and tax reforms that would sum to $4 trillion in spending cuts and new revenues (assuming extension of the Bush tax cuts). The House and Senate budget panels would each report a grand bargain bill that their chamber would consider (without amendment) by simple majority vote after twenty hours of debate. Failure to meet the framework’s legislated deadlines would empower the executive branch to impose entitlement savings and eliminate tax expenditures to meet the framework’s target.
Loyal Monkey Cage readers will recognize that the BPC proposal resembles in many ways the procedural solution adopted in the Budget Control Act in August of 2011. But there are at least two procedural differences from the 2011 deficit deal. First, rather than a super committee, the BPC envisions “regular order,” meaning that the standing committees—not a special panel hand-selected by party leaders—would devise the legislative package. Like the August deficit deal, the BPC proposal then offers procedural protection for the package by banning the Senate filibuster and preventing changes on the chamber floors (hence, an accelerated regular order). Second, rather than a meat-axe of sequestration that imposes only spending cuts, the BPC offers a “backstop,” giving what I take to be statutory authority to the executive branch to determine which tax expenditures to eliminate and which entitlement programs to cut back.

These differences from 2011 are subtle, but the BPC believes they would improve the odds of success compared to the failed super committee-plus-sequestration plan. As Steve Bell, Senior Director of BPC’s Economic Policy Project, put it: “One of the reasons the Joint Select Committee on Deficit Reduction failed, in our view, was because only 12 lawmakers were setting policy for the entire Congress. The framework we propose today would both ensure an acceleration of regular budget order in the House and Senate, and it would involve all committees of relevant jurisdiction.”

This is an interesting argument worth considering. Still, I’m not so sure that accelerated regular order would improve the prospects for an agreement. First, it strikes me that the real barrier to a grand bargain hasn’t been the Senate’s filibuster rule. The super committee was guaranteed a fast track to passage, but that still didn’t motivate the parties to reach an agreement.
The more relevant obstacle in 2011 and 2012 has been the bicameral chasm between a Republican House and a Democratic Senate. To be sure, eliminating the need for a sixty-vote cloture margin would smooth the way toward Senate passage. But we could easily imagine that the 60th senator (in 2013, perhaps a GOP senator like Lisa Murkowski) might be willing to sign onto a deal that would still be too moderate to secure the votes of House Republicans (assuming no change in party control of the two chambers). As we saw over the course of the 112th Congress, House passage required more than the consent of the House median (an ideologically moderate Republican) and more than the support of a majority of the GOP conference. The big deals in the 112th Congress passed only if they could attract the votes of roughly 90% of the House GOP conference. Expedited procedures can protect hard-fought compromises from being unraveled on the chamber floors, but by themselves they don’t seem sufficient to generate compromise in the first place.

Second, and related, I’m somewhat skeptical that the small size of the super committee precluded a viable agreement. By balancing parties and chambers, the group was (in theory) a microcosm of the full Congress. If true, then delegating to the super committee was akin to delegating to a mini-Congress. Perhaps the BPC’s idea of allowing the standing committees to generate proposals would broaden legislators’ willingness to buy in to a final agreement. More likely, I suspect that the framework would produce a House bill perched on the right and a Senate bill left of center (since the filibuster ban would reduce Democrats’ incentives to produce a bipartisan bill). That leaves the bicameral chasm still to be bridged, suggesting that accelerated regular order might not bring Congress all that much closer to a bipartisan agreement in 2013. Consent of party leaders remains critical for an agreement.
Third, the BPC proposal is unclear on the precise nature of the legislative backstop. Would either party agree in advance to the framework if it didn’t know whose ox would be gored by the administration when it exercised its power to reform entitlements and eliminate tax expenditures? Perhaps delegating such authority to the executive branch would allow legislators to avoid voters’ blame, making them more likely to vote for the framework. (That said, it’s somewhat ironic that the BPC’s embrace of accelerated regular order flows from its desire to broaden the set of legislators whose fingerprints are visible on the grand bargain.) Regardless, the prospect of cuts in entitlement programs could lead both parties to favor kicking the can down the road again before it actually explodes.

Fast-track procedures have a decent track record in facilitating congressional action. (Steve Smith and I have extolled their virtues elsewhere.) But the most successful of these episodes involve narrow policy areas (such as closing obsolete military bases) on which substantial bipartisan agreement on a preferred policy outcome is already in place. Expecting a procedural device to do the hard work of securing bipartisan agreement may be asking too much of Congress’s procedural tool kit in a period of divided and split party control.

Author: Sarah A. Binder. Publication: The Monkey Cage. Image Source: © Jonathan Ernst / Reuters.
Three Reforms to Unstick the Senate
Published November 29, 2012

"We are now locked in a rolling filibuster on every issue, which is totally gridlocking the U.S. Senate. That is wrong. It is wrong for America." Who said that? Democrat Harry Reid, majority leader of the Senate? Guess again. Try former Republican leader Trent Lott, bemoaning the troubled state of the Senate in the late 1990s.

No recent majority leader of either party has been spared the headache of trying to lead a Senate in which minorities can exploit the rules and stymie the chamber. This is not a new problem. Harry Reid may face a particularly unrestrained minority, but generations of Senate leaders from Henry Clay to Bill Frist have felt compelled to seek changes in Senate rules to make the chamber a more governable place. Some things never change.

Twice this week, the Senate has opened debate with its party leaders engaged in a caustic battle over Reid's plans to seek changes to Senate rules in January.

Read the full piece at CNN.com »

Author: Sarah A. Binder. Publication: CNN. Image Source: © Joshua Roberts / Reuters.
Reforming the Senate at a Snail’s Pace
Published December 30, 2012

As the clock runs out on the dysfunctional 112th Congress, few have been impressed by its paltry record and balky performance. But pardon my glee: December has been a great month for students of Congress. First, the House leadership was handed a blistering defeat on its “Plan B” to resolve the fiscal cliff. Next, while their leaders were meeting to negotiate an 11th-hour-of-the-12th-month fiscal cliff deal, eight senators unveiled a bipartisan proposal to head off a Democratic threat to change the rules by majority vote. When it rains, it pours!

The reform package—addressing “talking filibusters” and filibusters on procedural motions—deserves a bit more attention. It also deserves an appropriate historical illustration: a 1928 Chicago Tribune cartoon that features not the talking filibuster…but a sleeping one. It seems that talking filibusters might have been few and far between even back then. Ezra Klein and Jon Bernstein have detailed the proposed changes and weighed in here and here, as has Steve Smith by tweet here and here. Since then, a coalition of nearly fifty liberal groups has rejected the proposal out of hand as watered-down reform. To these several perspectives on the McCain-Levin plan, I would add the following thoughts:

First, these are at best incremental reforms. The majority leader would essentially gain the right to set the Senate’s agenda by majority vote, as a four-hour debate limit would be imposed on the motion to proceed. But the majority leader would pay a price for that new power: He would lose his power to block amendments (by “filling the tree”), and the minority bill manager and leader would each be newly guaranteed an amendment upon consideration of a legislative measure. (The majority leader, it seems, might still be able to fill the tree after the guaranteed amendments are dispensed with.)
This change leaves untouched the sixty-vote threshold for invoking cloture on the measure or other amendments, similar to the plans of Democratic reformers. In short, the change tries to address the grievances of both the majority (by circumventing filibusters of the motion to proceed) and the minority (by creating and guaranteeing amendment opportunities).

Second, the incremental nature of the reforms is not accidental. Ezra has a point when he argues that this is “filibuster reform for people who don’t want to reform the filibuster.” Still, the incremental nature of the proposal strikes me as the price of negotiating procedural change in a legislative body whose rules already advantage the minority party: The majority gets a little only by giving a little. The barrier to reform is entrenched in the Senate’s cloture rule, given the supermajority required for ending filibusters of proposals that curtail minority rights. A Senate majority could circumvent that barrier by going nuclear with 51 votes, but that strategy is not cost-free. To be sure, reformers claim to have 51 votes for a reform-by-ruling move. But it’s not yet clear to me that the majority would be willing to pay the accompanying costs of weathering the minority’s response to going nuclear.

Third, the rules address leaders’ interests more than those of the rank and file. Some of the proposed changes are aimed at time management. For example, with the consent of the majority and minority leaders and a bipartisan handful of senators, the cloture process is sped up markedly. Similarly, the three debatable steps required to get to conference are condensed to a single motion (albeit one still subject to sixty votes if the minority objects). Other proposed changes relieve the minority leader of the burden of objecting on his colleagues’ behalf, undermining individual senators’ ability to threaten to filibuster without actually showing up.
Then again, there’s no enforcement mechanism in the proposal: Senators would be counting on the minority leader to play by the new rules and to abandon his practice of lodging objections on behalf of his absent colleagues. It’s fair to be skeptical that such informal reforms would ever stick.

Fourth, I think there’s promise in the proposal’s directive to the presiding officer to put questions to a (majority) vote when opponents no longer seek to debate a bill. I share skeptics’ views that majorities might rarely want to hold the minority’s feet to the fire to wear down the opposition and that minorities might at times relish the spotlight while holding the floor. But the proposal strikes me as a potentially valuable chance to see if the change would make a difference. If approved, the McCain-Levin proposal would be adopted as a standing order of the Senate for just the upcoming Congress, providing a testing ground for this version of the talking filibuster. (Standing orders are typically approved on opening day by unanimous consent; would there be such consent for McCain-Levin or another negotiated proposal?)

Finally, it may be that incremental procedural change is all that a polarized Senate can agree on—especially if some Democrats are skittish about changing the rules by majority vote. Granted, majority senators won’t agree to the plan if it’s perceived as empowering the minority rather than the majority, as Senator Harkin has suggested. Nor should they. In that case, even an incremental package may be more than a polarized Senate can agree on—leaving the nuclear option as the only avenue for Democrats seeking to rein in the excesses of the Senate minority’s parliamentary rights.

Author: Sarah A. Binder. Publication: The Monkey Cage. Image Source: © Jason Reed / Reuters.
Take a Little, Give a Little: The Senate's Effort at Filibuster Reform
Published January 24, 2013

Today could have been the day when Senate Democrats went nuclear—reining in minority party abuse of the filibuster with a simple majority vote. That would have been my Super Bowl. Instead, the Senate is poised to adopt a bipartisan set of modest (many say meager) changes to the Senate’s cloture rule. More like the Famous Idaho Potato Bowl, I say.

As many have noted (for starters, Ezra Klein here and Jonathan Bernstein here), the proposed changes to the Senate’s Rule 22 fall far short of what reformers had hoped for. Much blame has been heaped on Harry Reid, the Democratic leader, and on a few senior Democrats, highlighting their resistance to abandoning the Senate’s sixty-vote threshold for bringing the chamber to a vote. The reforms are modest, largely finding ways of speeding up the Senate once both parties have agreed on the matter at hand (for instance, on the way to advancing a measure to the floor or after cutting off debate on a nomination). Even if the changes may seem to many like small potatoes, I think there’s more to be gleaned from the Senate’s brush with reform.

First, take a little, give a little. Today’s rule changes remind us that there is no free lunch when it comes to Senate reform. That hurdle is built into Rule 22, given its requirement that 67 senators consent to a vote on efforts to reform Rule 22. In the absence of a majority willing to bear the costs of asserting the majority’s right to change its rules, Senate reform is necessarily bipartisan and incremental. Reforms must secure the consent of the minority, or be packaged with changes judged equally important to the opposition. (Recall that even when reformers reduced cloture to 60 votes in 1975, they paid a price: 67 votes would still be required to end debate on changing Rule 22.)
Today’s reforms allow a majority to circumvent filibusters of motions to proceed to legislative measures. In return, the majority pays a price each time: The minority is guaranteed votes on two amendments, whereas recent leaders might previously have precluded all amendments by immediately “filling the tree.” To be sure, this potentially dilutes the value of the rule change for the majority. But concessions are dictated by the Senate’s inherited rules. (And, of course, nothing is that simple when it comes to Senate rules; the majority may yet fill the tree, at least after the disposition of the minority’s amendments.)

Second, I suspect we might be underestimating the importance of a non-debatable motion to proceed for the majority party in a period of partisan polarization. Judging from the increase in filibusters of the motion to proceed in recent years, minority parties have fought hard to keep bills off the floor that they oppose on policy or political grounds. So long as the motion to proceed could be filibustered, majority and minority parties shared agenda-setting powers. Today’s change grants the majority a slightly stronger hand in choosing the chamber’s agenda. To be sure, the minority can still filibuster the bill and any amendments beyond those newly guaranteed, but the reform undermines the minority’s ability to throw the majority off course. Take immigration policy, for example: Filibusters of the motion to proceed have kept the DREAM Act off the Senate floor in recent years. Minority influence over the Senate’s agenda is diminished with today’s reform.

Third, these are leader-driven reforms, shaped by the unique burdens carried by the majority and (sometimes) minority leaders. For example, the reforms speed up post-cloture debate on some judicial and executive branch nominations, and allow the chamber to hurry onto cloture votes on motions to proceed to legislative business when the minority offers a modicum of support.
No surprise that these housekeeping changes elicit little enthusiasm. They don’t make it any easier for a majority to break sizable minority opposition, and they potentially make it harder for rank-and-file senators to exploit the rules in pursuit of their own policy goals. But from the leaders’ perspective, the reforms rein in the excesses of rank-and-file dissent when a bipartisan group is ready to move ahead. As one Senate Democratic aide confided, “that’s all Reid ever really wanted.”

Finally, this episode highlights the limitations of the Constitutional option and other “reform-by-ruling” strategies. There appears to have been majority or near-majority support for securing only very limited reform of Rule 22. Senators seem unwilling to use the tactic for a major overhaul of the Senate’s cloture rule—in part because of the fear of minority retaliation, in part because the filibuster rule likely serves as the foundation of senators’ power. To be sure, Harry Reid aggressively used reform-by-ruling in the fall of 2011 to secure smaller changes to Rule 22 (as did Robert Byrd in the 1980s). But we have to reach back nearly forty years to the 1975 reforms to find a Senate majority willing to go nuclear to impose major changes to Rule 22. (Even then, reformers proceeded without the support of the majority leader, Mike Mansfield.) Perhaps senators see the consequences of weakening Rule 22 in a different light when the parties polarize over policy problems and solutions, with senators nervous about curtailing extended debate when the tables turn on their majority. Regardless, so long as majorities will form only to impose minor reform by majority vote, those majorities will be forced to live under supermajority rules that daily frustrate their policy and political agendas. And in the Senate’s world, those frustrating days can last for weeks!

Author: Sarah A. Binder. Publication: The Monkey Cage. Image Source: © Kevin Lamarque / Reuters.
Thoughts on the Hagel Filibuster and Its Political Implications
Published February 14, 2013

I’m late to the conversation about whether or not Republican efforts to insist on sixty votes for cloture on Chuck Hagel’s nomination as Secretary of Defense constitute a filibuster. Bernstein’s earlier piece ("This is what a filibuster looks like") and Fallows’ recent contribution provide good, nuanced accounts of why Republican tactics amount to a filibuster, even if some GOP senators insist otherwise. In short, the duck test applies: If it looks like a duck, swims like a duck, and quacks like a duck, then… it’s a filibuster! Still, I think there’s more to be said about the politics and implications of the Hagel nomination. A few brief thoughts:

First, let’s put to rest the debate about whether insisting on sixty votes to cut off debate on a nomination is a filibuster or, at a minimum, a threatened filibuster. It is. Even if both parties have moved over the past decade(s) to more regularly insist on sixty votes to secure passage of major (and often minor) legislative measures and confirmation of Courts of Appeals nominees, we shouldn’t be fooled by the institutionalization—and the apparent normalization—of the 60-vote Senate. Refusing to consent to a majority’s effort to take a vote means (by definition) that a minority of the Senate has flexed its parliamentary muscles to block Senate action. I think it’s fair to characterize such behavior as evidence of at least a threatened filibuster—even if senators insist that they are holding up a nomination only until their informational demands are met.

Second, there’s been a bit of confusion in the reporting about whether filibusters of Cabinet appointees are unprecedented. There appear to have been no successful filibusters of Cabinet appointees, even if there have been at least two unsuccessful filibusters against such nominees.
(On two occasions, Cabinet appointees faced cloture votes when minority party senators placed holds on their nominations: William Verity in 1987 and Dirk Kempthorne in 2006. An EPA appointee has also faced cloture, but EPA is not technically a Cabinet department, even if it now has Cabinet status.) Of course, there have been other Cabinet nominees who have withdrawn; presumably, though, they withdrew because they lacked even majority support for confirmation. Hagel’s situation will be unprecedented only if the filibuster succeeds in keeping him from securing a confirmation vote.

Third, using cloture votes as an indicator of a filibuster underestimates the Senate’s seeping super-majoritarianism. (Seeping super-majoritarianism?! Egads.) At least two other recent Cabinet nominations have been subjected to 60-vote requirements: Kathleen Sebelius (HHS) in 2009 and John Bryson (Commerce) in 2011. Both nominees faced threatened filibusters by Republican senators, preventing Majority Leader Reid from securing the chamber’s consent to schedule a confirmation vote—until Reid agreed to require sixty votes for confirmation. The Bryson unanimous consent agreement (UCA) circumvented the need for cloture. Embedding a 60-vote requirement in a UCA counts as evidence of an attempted filibuster, albeit an unsuccessful one. After all, other Obama nominees (such as Tim Geithner) were confirmed after Reid negotiated UCAs that required only 51 votes for confirmation, agreements secured because no Republicans were threatening to filibuster.

Finally, what are the implications for the Hagel nomination? If Republicans were insisting on sixty votes on Senator Cornyn’s grounds that “There is a 60-vote threshold for every nomination,” then I bet Reid would have been able to negotiate a UCA similar to Sebelius’s and Bryson’s.
But Hagel’s opponents see the time delay imposed by cloture as instrumental to their efforts to sow colleagues’ doubts about whether Hagel can be confirmed (or, at a minimum, to turn this afternoon’s cloture vote into a party stand to make their point about Benghazi). Of course, it’s possible that the time delay will work to Democrats’ benefit if they can make headlines arguing that GOP obstruction puts national security at risk. (Maybe Leon Panetta should have jetted to his walnut farm to make the point before the cloture vote.) Whatever the outcome, the Hagel case reminds us that little of the Senate’s business is protected from the intense ideological and partisan polarization that permeates the chamber and is amplified by the chamber’s lax rules of debate and senators’ lack of restraint. Filibustering of controversial Cabinet nominees seems to be on the road to normalization—even if Hagel is ultimately confirmed.

Author: Sarah A. Binder. Publication: The Monkey Cage. Image Source: © Kevin Lamarque / Reuters.
Droning On: Thoughts on the Rand Paul “Talking Filibuster”
Published March 7, 2013

Sen. Rand Paul has just completed his nearly thirteen-hour filibuster against John Brennan's nomination to head the CIA. Breaking off his filibuster (because, he implied, he had to pee), Paul was heralded for bringing back the "talking filibuster." Much was written (and tweeted) about his filibuster, which began with Paul’s dramatic declaration: "I will speak until I can no longer speak…I will speak as long as it takes, until the alarm is sounded from coast to coast that our Constitution is important, that your rights to trial by jury are precious, that no American should be killed by a drone on American soil without first being charged with a crime, without first being found to be guilty by a court." I thought I would add a few late-night thoughts in honor of this day spent with C-SPAN 2 humming in my ear.

First, I think Jon Bernstein’s reaction to the filibuster was right on the mark. There’s been a lot of enthusiasm for the talking filibuster today, from Ezra Klein's "If more filibusters went like this, there’d be no reason to demand reform" to Josh Marshall’s "This is a good example of why we should have the talking filibuster and just the talking filibuster." But Bernstein raises a critical point: "Today’s live filibuster shows again just how easy it is to hold the Senate floor for an extended period." The motivation of recent reformers has been to reduce filibustering by raising the costs of obstruction for the minority. In theory, making the filibuster more burdensome to the minority—while putting their views under the spotlight—should make filibusters more costly and more rare. (Paul did note in coming off the Senate floor tonight that his feet hurt…) But as Bernstein points out, Paul believes in his cause, and it plays well with his constituencies.
On the physical front, the tag team of GOP senators rallying to Paul's cause also lessened the burden on him (as would have a pair of filibuster-proof shoes). That said, today's filibuster was a little unusual. The majority seemed unfazed by giving up the day to Paul’s filibuster, perhaps because the rest of Washington was shut down for a pseudo-snowstorm. Moreover, the Brennan nomination had bipartisan support, with Reid believing there were 60 senators ready to invoke cloture. In short, today's episode might not be a great test case for observing the potential consequences of reform.

Second, keep in mind that this was a double-filibuster day. The nomination of Caitlin Halligan for the D.C. Circuit was blocked, failing for the second time to secure cloture. With 41 Republican senators voting to block an up-or-down confirmation vote on Halligan, an often-noted alternative reform (which would require 41 senators to block cloture instead of 60 senators to invoke it) would have made no difference to the outcome. And what if the minority had been required to launch a talking filibuster to block Halligan’s nomination? Reid might have been willing to forfeit the floor time to Paul today, but he would have been unlikely to give up another day to Halligan’s opponents. As Steve Smith has argued, the burden of talking filibusters also falls on the majority, which typically wants to move on to other business. "Negotiating around the filibuster," Smith has argued, "would still be common." On a day with two successful minority filibusters (at least in consuming floor time and deterring the majority from its agenda), we can see why the majority might be reticent to make senators talk.

Third, let's not lose sight of the target of Paul's filibuster: the head of the CIA. Although the chief spook is not technically in the president’s Cabinet, the position certainly falls within the ranks of nominations that have typically been protected from filibusters.
Granted, that norm was trampled with the Hagel filibuster for Secretary of Defense. But rather than seeing the potential upside of today's talking filibuster, I can't help but see the downside: In an age of intense policy and political differences between the parties, no corner of Senate business is immune to filibusters. All that said, what's not to like about a mini demonstration of a real live filibuster?! Perhaps Paul's late-day Snickers break was cheating. But it was a good C-SPAN type of day overall, for everyone from filibuster newbies to Franklin Burdette devotees. Even Dick Durbin, well after midnight, seemed to be enjoying the fray. Perhaps there’s a silver lining for talking filibusters after all.

Author: Sarah A. Binder. Publication: The Monkey Cage. Image Source: © Jonathan Ernst / Reuters.
Banning Filibusters: Is Nuclear Winter Coming to the Senate This Summer?
Published May 23, 2013

It seems the Senate could have a really hot summer. Majority Leader Harry Reid (D-NV) has reportedly threatened to “go nuclear” this July—meaning that Senate Democrats would move by majority vote to ban filibusters of executive and judicial branch nominees. According to these reports, if Senate Republicans block three key nominations (Richard Cordray to head the Consumer Financial Protection Bureau, Thomas Perez at Labor, and Gina McCarthy at EPA), Reid will call on the Democrats to invoke the nuclear option as a means of eliminating filibusters over nominees.

Jon Bernstein offered a thoughtful reaction to Reid’s gambit, noting that Reid’s challenge is to “find a way to ratchet up the threat of reform in order to push Republicans as far away from that line as possible.” Jon’s emphasis on Reid’s threat is important (and his post is worth reading in full). Still, I think it’s helpful to dig a little deeper on the role of both majority and minority party threats that arise over the nuclear option.

Before getting to Reid’s threat, two brief detours. The first is a parliamentary detour to make plain the two reasons why Reid’s procedural gambit is deemed “nuclear.” For one, Democrats envision using a set of parliamentary moves that would allow the Senate to cut off debate on nominations by majority vote (rather than by sixty votes). Republicans (at least when they are in the minority) call this “changing the rules by breaking the rules,” because Senate rules formally require a two-thirds vote to break a filibuster of a measure to change Senate rules. The nuclear option would avoid the formal process of securing a two-thirds vote to cut off debate; instead, the Senate would set a new precedent by simple majority vote to exempt nominations from the reach of Rule 22. If Democrats circumvent the formal rules, Republicans would deem the move nuclear.
Second, Reid’s potential gambit would be considered nuclear because of the anticipated GOP reaction: As Sen. Schumer argued in 2005 when the GOP tried to go nuclear over judges, minority party senators would “blow up every bridge in sight.” The nuclear option is so-called on account of the minority’s anticipated parliamentary reaction (which would ramp up obstruction on everything else). A second detour notes simply that the exact procedural steps that would have to be taken to set a new precedent to exempt nominations from Rule 22 have not yet been spelled out. Over the years, several scenarios have been floated that give us a general outline of how the Senate could reform its cloture rule by majority vote. But a CRS report written in the heat of the failed GOP effort to go nuclear in 2005 points to the complications and uncertainties entailed in using a reform-by-ruling strategy to empower simple majorities to cut off debate on nominations. My sense is that using a nuclear option to restrict the reach of Rule 22 might not be as straightforward as many assume. That gets us to the place of threats in reform-by-ruling strategies. The coverage of Reid’s intentions last week emphasized the importance of Reid’s threat to Republicans: Dare to cross the line by filibustering three particular executive branch nominees, and Democrats will go nuclear. But for Reid’s threat to be effective in convincing GOP senators to back down on these nominees, Republicans have to deem Reid’s threat credible. Republicans know that Reid refused to go nuclear last winter (and previously in January 2009), not least because a set of longer-serving Democrats opposed the strategy earlier this year. It would be reasonable for the GOP today to question whether Reid has 51 Democrats willing to ban judicial and executive branch nomination filibusters. If Republicans doubt Reid’s ability to detonate a nuclear device, then the threat won’t be much help in getting the GOP to back down. 
Of course, if Republicans don’t block all three nominees, observers will likely interpret the GOP’s behavior as a rational response to Reid’s threat. Eric Schickler and Greg Wawro in Filibuster suggest that the absence of reform on such occasions demonstrates that the nuclear option can “tame the minority.” Reid’s threat would have done the trick. As a potentially nuclear Senate summer approaches, I would keep handy an alternative interpretation. Reid isn’t the only actor with a threat: given Republicans’ aggressive use of Rule 22, Republicans can credibly threaten to retaliate procedurally if the Democrats go nuclear. And that might be a far more credible threat than Reid’s. We know from the report on Reid’s nuclear thinking that “senior Democratic Senators have privately expressed worry to the Majority Leader that revisiting the rules could imperil the immigration push, and have asked him to delay it until after immigration reform is done (or is killed).” That tidbit suggests that Democrats consider the GOP threat to retaliate as a near certainty. In other words, if Republicans decide not to block all three nominees and Democrats don’t go nuclear, we might reasonably conclude that the minority’s threat to retaliate was pivotal to the outcome. As Steve Smith, Tony Madonna and I argued some time ago, the nuclear option might be technically feasible but not necessarily politically feasible. To be sure, it’s hard to arbitrate between these two competing mechanisms that might underlie Senate politics this summer. In either scenario—the majority tames the minority or the minority scares the bejeezus out of the majority—the same outcome ensues: Nothing. Still, I think it’s important to keep these alternative interpretations at hand as Democrats call up these and other nominations this spring. The Senate is a tough nut to crack, not least when challenges to supermajority rule are in play. Authors Sarah A. 
Binder Publication: The Monkey Cage Image Source: © Joshua Roberts / Reuters Full Article
the HHS Secretary Sebelius is the Big Loser in Today's Filibuster Game-Changer By webfeeds.brookings.edu Published On :: Thu, 21 Nov 2013 16:00:00 -0500 HHS Secretary Kathleen Sebelius may lose the most from the Senate’s rule change on the filibuster—and the Affordable Care Act may be healthier for it. I wrote last month on the FixGov blog that “Republicans are the Reason Secretary Sebelius Won’t Resign” (or be fired). That argument is no longer valid. My claim—the president’s inability to get her successor confirmed because of filibustering Republicans—is nullified by the Senate’s rule change, and the benefits may reach far beyond Obamacare. The Implications of Filibuster Reform for Healthcare Problems exist in HHS. No one denies it. However, for many appointees in the Department, the Senate rules served as a life preserver in a torrent of poor implementation, managerial failures, and bad PR. So long as the president faced the prospect of long-term vacancies among appointees overseeing the ACA, the HHS leadership would be spared. Today, that all changed. Moving forward, President Obama needs the support of only 51 Senate Democrats to replace top-level political appointees throughout the executive branch. This offers the president substantial breathing room. Nominees no longer need the support of every Democrat and a scarcely identifiable five Republicans. Instead, nominees can draw the ire of as many as four Democrats and still be confirmed. Maybe Kathleen Sebelius is not to blame for the botched healthcare marketplace rollout. Maybe her office did not give the thumbs-up for the president to repeat “if you like your plan you can keep it.” Maybe she did not contribute to the poor salesmanship of the legislation from the start. However, if she was to blame (and perhaps if she wasn’t), her days in the president’s cabinet may well be numbered. 
The same may be true for deputies and other administrators in the Department who oversaw the weaker areas of the rollout of this law. By repositioning HHS personnel or breathing new life into a Department facing continued struggles, the president may well ensure that the administration of his signature legislative accomplishment improves. The right appointees can coordinate and communicate policy needs and goals up and down the bureaucratic hierarchy. Rather than settling for a program that meets or falls short of expectations, there is an opportunity to build an effective ACA. Good Governance beyond Obamacare The first half of October showed us that political actors in Congress contributed to a broken legislative branch. The second half of October showed us that political actors in the Administration contributed to a broken executive branch. Now is the time for the president to start anew and fix one branch, in the shadow of a Senate trying to fix itself. In my piece from last month, I also argued that the filibuster rules in the Senate allowed for the continuation of poor management and governance. If weak appointed personnel were causing policy problems, communication miscues, and other headaches for the president, the ability to replace them with something other than the word “ACTING” was limited by the 60-vote threshold. President Obama, who has faced a string of personnel and management issues over the past year, now has greater freedom not simply to oust problematic appointees, but to install talented, effective leaders. With this ability comes a tremendous opportunity to jumpstart an administration that is sputtering. Filibuster reform will not be the magical elixir that cures all of the ills in the Obama administration. Yet, it’s a good start. The president should channel the flashiness of his campaigns and the loftiness of his rhetoric into a focus on real issues of governance. Authors John Hudak Image Source: © Jason Reed / Reuters Full Article
the Congressional Master Class: The Senate Filibuster, Congress and the Federal Reserve By webfeeds.brookings.edu Published On :: Fri, 06 Dec 2013 09:11:00 -0500 In this podcast, congressional expert Sarah Binder explains why the Senate filibuster is a historical mistake. She talks about her research on Congress’s relationship with the Federal Reserve and addresses whether Congress is more polarized today than it has been in the past. Binder, a senior fellow in Governance Studies, is also a professor of political science at George Washington University and contributor to the Monkey Cage blog. SUBSCRIBE TO THE PODCAST ON ITUNES » Show notes: • The Federal Reserve: Balancing Multiple Mandates (testimony by Alice Rivlin) • Boom! What the Senate Will Be Like When the Nuclear Dust Settles • Beyond the Horse Race to Lead the Fed • Droning on: Thoughts on the Rand Paul “Talking Filibuster” • Advice and Dissent: The Struggle to Shape the Federal Judiciary • The History of the Filibuster * In the image, Senator Henry Clay speaks about the Compromise of 1850 in the Old Senate Chamber. Daniel Webster is seated to the left of Clay and John C. Calhoun to the left of the Speaker's chair. (engraving by Robert Whitechurch, ca. 1880, Library of Congress) Authors Sarah A. BinderFred Dews Full Article
the How Many Judicial Confirmations Are Due to the Filibuster Rules Change? By webfeeds.brookings.edu Published On :: Wed, 02 Jul 2014 11:29:00 -0400 The July 4th congressional recess’s pause in 2014’s record pace of judicial confirmations is a good time to explore the reason for the upsurge. The 54 confirmations at 2014’s halfway point compare to 43 in all of 2013. What’s behind the increase? Some have said that the Senate’s November 2013 rules change—to allow a simple majority to end filibusters on most nominees—“has resulted in [the] sharp increase.” There is a lot of appeal (and even a little truth) to the claim, but beware the “post hoc ergo propter hoc” fallacy: that if “B” follows “A”, “A” necessarily caused “B”. There have been 61 confirmations since November 21. The rules change clearly enabled three of them. Late October and mid-November filibusters of three D.C. Circuit appellate nominees were the immediate cause of the change, which in turn allowed their post-November confirmations. Saying how many of the other post-November confirmations would have failed without the rules change is an exercise in informed speculation. Here’s one way to look at it: How many of those confirmations had enough negative votes to have sustained a filibuster under the old rule? Invoking cloture—i.e., cutting off debate—under the old rule required 60 votes. Filibuster proponents were often able to prevent that by peeling off, if not 41 Nay votes, at least votes in the 30s, assuming not all 100 senators were present to vote. For this analysis, let’s set the bar at 34—the smallest number of votes that prevented a 60-vote cloture invocation against any Obama nominee (most filibuster-sustaining votes were in the high 30s and low 40s). Forty-five of the 51 post-November district confirmations quite probably would have happened without the rules change. They had fewer than 34 Nays. 
And it’s hardly automatic that the six with at least 34 Nays would have been filibustered under the old rule. Senators can and do oppose a nominee but oppose filibustering her as well. Prior to the rules change, 12 district judges were confirmed even though they had at least 34 Nays. Only one of those needed a cloture vote to move to confirmation—33 voted against cloture and 44 voted against confirmation. (Cloture votes, a rarity before the rules change, have been routine since then, and they generally get around 30-40 negative votes. But these appear to be protest votes against the rules change, inasmuch as 27 of the 51 district confirmations had no Nays and another 14 had 20 or fewer Nays.) So it’s reasonable speculation, but still speculation, that the rules change had no direct effect on district confirmations. Circuit confirmations are a different story. The three D.C. nominees clearly owe their confirmations to the rules change. Three of the seven other circuit confirmations since November had well over 34 Nays (40, 43, and 45, in fact). One nominee had represented challengers to California’s since-overturned same-sex marriage ban; another, also a Californian, was nominated to a long-vacant seat that Republican senators claimed belonged in Idaho. The third, with 45 Nays, had authored Justice Department memos providing legal justifications for drone strikes against U.S. citizens. Successful filibusters against all three, under the old rule, seem quite plausible. (The other four post-rules-change nominees were confirmed with no negative votes or, in one case, three.) Bottom line: The rules change likely enabled at most twelve of the 61 post-rules-change confirmations, and it more likely enabled only six. The frenetic pace of 2014 confirmations is due mainly to Senate Democrats’ desire to secure as many as they can before the November elections and the possibility of losing control of the confirmation process. 
Authors Russell Wheeler Image Source: © Larry Downing / Reuters Full Article
the Why the AI revolution hasn’t swept the military By webfeeds.brookings.edu Published On :: Wed, 06 May 2020 15:03:02 +0000 In games such as chess and Go, artificial intelligence has repeatedly demonstrated its ability to outwit the experts. Ad networks and recommendation engines are getting eerily good at predicting what consumers want to buy next. Artificial intelligence, it seems, is changing many aspects of our lives, especially on the internet. But what has been described… Full Article
the Why AI systems should disclose that they’re not human By webfeeds.brookings.edu Published On :: Thu, 07 May 2020 22:54:03 +0000 Full Article
the How the Syrian refugee crisis affected land use and shared transboundary freshwater resources By webfeeds.brookings.edu Published On :: Mon, 13 Feb 2017 18:03:23 +0000 Since 2013, hundreds of thousands of refugees have migrated southward to Jordan to escape the Syrian civil war. The migration has put major stress on Jordan’s water resources, a heavy burden for a country ranked among the most water-poor in the world, even prior to the influx of refugees. However, the refugee crisis also coincided […] Full Article
the The Kurdish Question and the Future of Iraq and Syria By webfeeds.brookings.edu Published On :: Tue, 07 Mar 2017 11:46:33 +0000 Event Announcement The weakening of the Iraqi government, the Syrian Civil War, and the rise of the so-called Islamic State have reopened questions about the future of Kurds in West Asia. To discuss recent developments and questions about the future of Iraq and Syria, Brookings India is organizing a private roundtable with Peter Galbraith. In […] Full Article
the The war and Syria’s families By webfeeds.brookings.edu Published On :: Wed, 22 Mar 2017 14:39:21 +0000 The tragedy of the Syrian conflict extends beyond its nearly 500,000 deaths, 2 million injured, and the forced displacement of half its population. The violence and social and cultural forces unleashed by the war have torn families apart, which will likely have a long lasting impact on Syria. There is universal understanding that the […] Full Article
the How to work with the Kurds—and Turkey—in Syria By webfeeds.brookings.edu Published On :: Fri, 24 Mar 2017 15:33:04 +0000 American policy towards Syria is stuck in a conundrum. President Donald Trump’s request that the Pentagon deliver him options for accelerating the campaign against ISIS has probably already generated some good tactical initiatives. But Trump’s understandable reluctance to have U.S. forces lead the fight on the ground leaves us dependent on local proxies. Unfortunately, moderate […] Full Article
the Amped in Ankara: Drug trade and drug policy in Turkey from the 1950s through today By webfeeds.brookings.edu Published On :: Wed, 05 Apr 2017 19:58:50 +0000 Key Findings Drug trafficking in Turkey is extensive and has persisted for decades. A variety of drugs, including heroin, cocaine, synthetic cannabis (bonsai), methamphetamine, and captagon (a type of amphetamine), are seized in considerable amounts there each year. Turkey is mostly a transshipment and destination country. Domestic drug production is limited to cannabis, which is […] Full Article
the The Idlib debacle is a reality check for Turkish-Russian relations By webfeeds.brookings.edu Published On :: Wed, 12 Feb 2020 07:20:18 +0000 Full Article
the A plausible solution to the Syrian refugee crisis By webfeeds.brookings.edu Published On :: Wed, 12 Feb 2020 18:09:12 +0000 The Syrian crisis is approaching its ninth year. In that span, the conflict has taken the lives of over five hundred thousand people and forced over seven million more to flee the country. Of those displaced, more than 3.6 million have sought refuge in Turkey, which now hosts more refugees than any other country in the world.… Full Article
the To help Syrian refugees, Turkey and the EU should open more trading opportunities By webfeeds.brookings.edu Published On :: Mon, 02 Mar 2020 11:05:52 +0000 After nine years of political conflict in Syria, more than 5.5 million Syrians are now displaced as refugees in Jordan, Lebanon, and Turkey, with more than 3.6 million refugees in Turkey alone. It is unlikely that many of these refugees will be able to return home or resettle in Europe, Canada, or the United States.… Full Article
the Turkey’s intervention in Syria and the art of coercive diplomacy By webfeeds.brookings.edu Published On :: Thu, 26 Mar 2020 07:39:16 +0000 Full Article
the The coronavirus has led to more authoritarianism for Turkey By webfeeds.brookings.edu Published On :: Fri, 08 May 2020 20:00:26 +0000 Turkey is well into its second month since the first coronavirus case was diagnosed on March 10. As of May 5, the number of reported cases has reached almost 130,000, which puts Turkey among the top eight countries grappling with the deadly disease — ahead of even China and Iran. Fortunately, so far, the Turkish death… Full Article
the Baltimore a year after the riots By webfeeds.brookings.edu Published On :: Fri, 01 Apr 2016 15:22:00 -0400 Jennifer S. Vey, a fellow with the Centennial Scholar Initiative, discusses the current economic, social, and political situation in Baltimore a year after the riots. “One in five people in Baltimore lives in a neighborhood of extreme poverty, and yet these communities are located in a relatively affluent metro area, in a city with many vibrant and growing neighborhoods,” Vey says. In this podcast, Vey describes the current state of Baltimore and urges the start of discussions about the abject poverty facing many cities in the United States. Also in this episode: stay tuned for our presidential election update with John Hudak, and Vanda Felbab-Brown discusses global drug policy and the upcoming United Nations General Assembly special session on drug policy. Show Notes "The Third Rail" One year after: Observations on the rise of innovation districts Confronting Suburban Poverty in America Subscribe to the Brookings Cafeteria on iTunes, listen in all the usual places, and send feedback email to BCP@Brookings.edu. Authors Jennifer S. VeyFred Dews Full Article
the Setting the right economic development goals is hard work By webfeeds.brookings.edu Published On :: Mon, 04 Apr 2016 12:20:00 -0400 Amy Liu’s recent paper, “Remaking Economic Development,” is disruptive in that it rightfully undercuts the shaky foundation of what draws many practitioners to the field: the idea that success is simply structuring transactions to attract new jobs and investment. These two metrics alone can’t diagnose the economic health or trajectory of a community. Instead, as the paper outlines, setting the right goals—measured by growth, prosperity, and inclusion—provides a much richer framework for a community’s trajectory, vibrancy, and opportunity. Goal-setting clarifies how a community defines success and when it has been achieved, and it promotes collaboration and increases buy-in from diverse stakeholders. If, as Liu argues, goals were designed to lead to growth, prosperity, and inclusion, metro areas would make dramatically different choices around policies, investments, and priorities, and people and communities would likely be in an overall stronger economic position. However, these goals represent a longer-term proposition than conventional measures, and, perhaps naively, “Remaking Economic Development” fails to acknowledge the barriers that prevent most communities from setting the right goals: Goals should focus on long-term interests, but election cycles prioritize clear near-term political wins. Meaty investments in infrastructure, workforce development, and fiscal policy reform needed to shift the competitive position of a community rarely yield clear short-term political capital. Economies function as regions, but many municipal strategies are not aligned with regional goals. A metropolitan area is the logical unit to measure success, but civic leaders are elected from individual municipalities, or statewide, with a broader focus. 
Although it’s easier to market a region of 5 million people than a city of 650,000, in Detroit, intramural competition between cities and suburbs, suburbs and suburbs, and even neighborhoods within the city all undermines the cooperative effort to set goals regionally. The failure of mayors within a region to recognize their economic connectedness is a huge problem and fuels wasteful incentive battles over retail and other projects that are not economic drivers. Goal-setting takes time and money, and requires data, but capacity is in short supply. Most economic developers and political leaders lack the framework, experience, and manpower to effectively lead a goal-setting process. And if the choice is between collectively setting goals and managing a flurry of “bird-in-hand” transactions, the transaction under consideration will always receive resources first. Leaders are evaluated primarily on near-term jobs and investment figures; without the time to lead a goal-setting process, and in many cases without the approach and datasets to undertake it properly, goal-setting is easy to eliminate. Despite these obstacles, setting the right goals is critical to building healthy communities. To combat political challenges, strong collaboration between business and public sector leadership is crucial, as is the recognition of diverse sub-economies with different value propositions and opportunities. Leaders must ultimately acknowledge that near-term wins mean little if they lead down a path that will not fundamentally address the long-term investment climate and the region’s productive capacities, grow wages, address employment levels, or offer broad opportunities for diverse economic participation. The following steps can help: Harness diverse, cross-sectoral perspectives. 
Fundamentally, businesses understand their industries better than anyone else; hence the public sector should identify ways to encourage growth, increased productivity, greater inclusion, and more competitiveness in targeted sectors by listening to businesses and jointly setting goals that marry private sector profit and public economic and social interests effectively. This approach may offset some of the other realities of short election cycles and limited capacity to participate in goal-setting or planning processes. Identify unique roles for communities within a larger regional framework. Although competition between local cities may inhibit the most meaningful dialogue and alignment of interests, deeper analysis often produces greater clarity on the niche opportunities for different communities. For example, although Detroit has a strong manufacturing base, a major project requiring more than 40 acres will likely have better site options in the suburbs. In the same vein, companies seeking a vibrant urban campus with easy access to food, living, and other amenities are likely better suited to Detroit than to one of the surrounding communities. That said, both companies are likely to draw employees from communities all over the region. Effective goal-setting includes analysis that should allow municipalities to uncover their niche opportunities within a larger regional framework. Balance short-term and long-term priorities and successes. The truth is that near-term investments are vital in creating momentum, providing stability, and creating jobs, while long-term investments, policy decisions, and industry-focused asset development fundamentally position an economy to win over time. For example, incentives play an important role in offsetting competitive disadvantages in the short term and should be used as a way to fill the gap while a community fixes its bigger economic challenges, ranging from exorbitant development costs to workforce development issues. 
The disconnect is that many communities are not focusing incentives on strategic, long-term priorities. “Remaking Economic Development” elegantly exposes the shortsightedness of the aggressive deal-making that often prevents communities from thoughtfully building their long-term economic strength with an eye on growth, prosperity, and inclusion. Sadly, the economic development profession has historically focused on growth without much attention to prosperity and inclusion, which are arguably most important in building a sustainable economy. Goal-setting—painful as it may be—is the first step toward remaking the practice and establishing an honest foundation on which to build a better economy in the future. Authors Rodrick Miller Image Source: © Rebecca Cook / Reuters Full Article
the The market makers: Local innovation and federal evolution for impact investing By webfeeds.brookings.edu Published On :: Thu, 28 Apr 2016 15:30:00 -0400 Announcements of new federal regulations on the use of program-related investments (PRIs) and the launch of a groundbreaking fund in Chicago are the latest signals that impact investing, once a marginal philanthropic and policy tool, is moving into the mainstream. They are also illustrative of two important and complementary paths to institutional change: fast-moving, collaborative local leadership creating innovative new instruments to meet funding demands; federal regulators updating policy to pave the way for change at scale. Impact investing, referring to “investment strategies that generate financial returns while intentionally improving social and environmental conditions,” provides an important tier of higher-risk capital to fund socially beneficial projects with revenue-generating potential: affordable housing, early childhood and workforce development programs, and social enterprises. It is estimated that there are over $60 billion of impact investments globally and interest is growing—an annual JP Morgan study of impact investors from 2015 reports that the number of impact investing deals increased 13 percent between 2013 and 2014 following a 20 percent increase in the previous year. Traditionally, foundations have split their impact investments into two pots, one for mission-related investments, designed to generate market-rate returns and maintain and grow the value of the endowment, and the other for program-related investments. PRIs can include loans, guarantees, or equity investments that advance a charitable purpose without expectation of market returns. PRIs are an attractive use of a foundation’s endowment as they allow foundations to recycle their limited grant funds and they count towards a foundation’s charitable distribution requirement of 5 percent of assets. 
However, they have been underutilized to date due to perceived hurdles around their use. In fact, among the thousands of foundations in the United States, only a few hundred currently make PRIs. But this is changing, spurred on by both entrepreneurial local action and federal leadership. On April 21, the White House announced that the U.S. Department of the Treasury and Internal Revenue Service had finalized regulations that are expected to make it easier for private foundations to put their assets to work in innovative ways. While there is still room for improvement, by clarifying rules and signaling mainstream acceptance of impact investing practices, these changes should lower the barriers to entry for some institutional investors. This federal leadership is welcome, but it is not by itself enough to meet the growing demand for capital investment in the civic sector. Local innovation, spurred by new philanthropic collaborations, can be transformative. On April 25 in Chicago, the Chicago Community Trust, the Calvert Foundation, and the John D. and Catherine T. MacArthur Foundation launched Benefit Chicago, a $100 million impact investment fund that aims to catalyze a new market by making it easier for individuals and institutions to put their dollars to work locally and help meet the estimated $100-400 million capital needs of the civic sector over the next five years. A Next Street report found that the potential supply of patient capital from foundations and investors in the Chicago region was more than enough to meet the demand, if there were ways to connect the two more easily. Benefit Chicago addresses this market gap by making it possible for individuals to invest directly through a brokerage or a donor-advised fund and for the many foundations without dedicated impact investing programs to put their endowments to work at scale. 
All of the transactional details of deal flow, underwriting, and evaluation of results are handled by the intermediary, which should lead to greater efficiency and a significant increase in the size of the impact investing market in Chicago. In the last few years, a new form of impact investing has made measurement of the social return on investment even more concrete. Social impact bonds (SIBs), also known as pay for success (PFS) financing, are a way for private investors (including foundations) to provide capital to support social services with the promise of a return on their investment from a government agency if agreed-upon social outcomes are achieved. These PFS transactions range from funding high-quality early childhood education programs in Chicago to reducing chronic individual homelessness in the state of Massachusetts. Both the IRS and the Chicago announcements are bound to contribute to the growth of the impact bond market, which to date represents a small segment of the impact investing market. These examples illustrate a rare and wonderful convergence of leadership at the federal and local levels around an idea that makes sense. Beyond simply broadening the number of ways that foundations can deploy funds, growing the pool of impact investments can have a powerful market-making effect. Impact investments unlock other tiers of capital, reducing risk for private investors and making possible new types of deals with longer time horizons and lower expected market returns. In the near future, these federal and local moves together might radically change the philanthropic landscape. If every major city had a fund like Benefit Chicago, and all local investors had a simple on-ramp to impact investing, the pool of capital to help local organizations meet local needs could grow exponentially. 
This in turn could considerably improve funding for programs—like access to quality social services and affordable housing—that show impact over the long term. Impact investing can be a bright spot in an otherwise somber fiscal environment if localities keep innovating and higher levels of government evolve to support, incentivize, and smooth its growth. These announcements from Washington and Chicago are examples of the multilevel leadership and creative institutional change we need to ensure that we tap every source of philanthropic capital, to feel some abundance in an era where scarcity is the dominant narrative. Editor's Note: Alaina Harkness is a fellow at Brookings while on leave from the John D. and Catherine T. MacArthur Foundation, which is a donor to the Brookings Institution. The findings, interpretations and conclusions posted in this piece are solely those of the authors and not determined by any donation. Authors Alaina J. HarknessEmily Gustafsson-Wright Image Source: © Jeff Haynes / Reuters Full Article
It happens on the pavement: Putting cities at the center of countering violent extremism By webfeeds.brookings.edu Published On :: Wed, 01 Jun 2016 11:40:00 -0400 In March alone, at least nine cities across three continents were hit by terrorist attacks. Municipalities—from megacities to tertiary cities—continue to bear the brunt of such attacks: in the short term, they provide first response and take essential security measures; in the longer term, they suffer from the fallout of intercommunal tensions and economic slowdowns, which can last for years and spread beyond the target city. Yet, post-attack discussions tend to be dominated by what national governments can do to prevent future attacks—whether through enhanced border security, law enforcement, intelligence, or military measures; or through intensified efforts to resolve underlying conflicts; or through more cooperation with foreign governments. This is understandable given the resources of national governments and their long-standing monopoly on force and foreign policy. Nevertheless, a small but growing number of cities and other local authorities are realizing that they have an essential role to play in countering violent extremism (CVE) as well. Urban trend-setters There is nothing new about cities coming to the realization that they need to act in the face of global challenges. Mayors and city networks such as the C40 Cities Climate Leadership Group have vocally engaged on the global stage to counter carbon emissions. Cities have frequently shown themselves to be generally more nimble and less averse to risk-taking than their national counterparts. Mayors operate under intense expectations to “get things done,” but when it comes to the threats of transnational violent extremism, what does that mean? 
Much like with climate change and other global challenges where cities are becoming increasingly active stakeholders, cities are serving as laboratories for developing and testing innovative initiatives to prevent violent extremism from taking root, designed and implemented in collaboration with local communities. The comparative advantages of local authorities are manifold: They are best positioned to understand the grievances that might make their citizens vulnerable to terrorist recruitment; to identify the drivers and early signs of violent extremism; to build trust between the community and local police; to develop multi-agency prevention efforts that involve families, community leaders, social workers, and mental health professionals; and to develop programs that offer alternatives to alienated youth who might otherwise be attracted to violence. Recognizing these advantages, local leaders are developing strategies and programs to address the violent extremist threat at each stage of the radicalization cycle. Cities across Europe have been at the forefront of these efforts, with Aarhus, Denmark often cited as a model. The approach of Aarhus involves both prevention and care, relying on an extensive community-level network to offer young people returning from Syria an opportunity to reintegrate into Danish society (provided they haven’t committed a crime) and mentoring to try to dissuade people from traveling to the conflict. In Montgomery County, Maryland, the county authorities are involved in a community intervention program that includes training for faith leaders, teachers, social service providers, police, and parents on how to recognize the early signs of extremism in underserved immigrant communities. 
In Montreal, a $2 million, multi-disciplinary “anti-radicalization center” provides mothers who suspect their children may be vulnerable to radicalization or recruitment with resources that don’t involve contacting the police. The center focuses on training people how to identify the signs of radicalization and researching the drivers of radicalization in Montreal and what works to prevent its growth. Cities are dynamic actors, in part, because they have no problem borrowing from each other. Inspired by the Montreal initiative, Brussels opened a prevention-focused, anti-radicalization center, which—like the Montreal center—keeps the police out of the picture unless necessary to confront an imminent threat. In Australia, both Victoria and New South Wales have set aside funds to support local NGO-led interventions that target individuals who may be radicalizing and build community resilience. In Mombasa, Kenya, Governor Hassan Ali Joho is working with the regional parliament and local civil society groups to develop a county-level CVE strategy that includes a heavy focus on providing youth with positive alternatives to joining al-Shabab. Except for Mombasa, nearly all municipality-led CVE efforts are taking place in the global north. Throughout the world, mayors and other local leaders are not part of national-level conversations about how to prevent future attacks. If national governments insist on viewing national security issues like violent extremism as being the exclusive policy domain of the capital, they will miss crucial opportunities to address a threat that is increasingly localized. Part of the challenge is that, much like on other global issues, municipal authorities operate within the policy and bureaucratic frameworks of national governments. Those governments can enable or, just as frequently, impede effective local action. Thus, there is often a ceiling for local actors. 
Raising or breaking through the ceiling is particularly difficult in the security space, given the monopoly that many national governments want to maintain over issues of national security—even while recognizing the need for local solutions. Flattening the CVE policy space The good news is that in countries where local authorities can innovate and lead, energy around city-led CVE efforts is increasing. Cities are sharing lessons learned and challenges, with city-to-city networks like the Strong Cities Network (SCN)—which held its first summit earlier this month in Antalya, Turkey—sprouting up to facilitate cooperation. Yet, a significant majority of SCN members are in countries where national governments already acknowledge local authorities’ key role in CVE. With a few exceptions, cities from large swathes of the globe—including in regions where the problem of violent extremism is most acute, like the Middle East and North Africa, as well as Asia—are not enabled to contribute to efforts to prevent violent extremism from taking root in their communities. CVE discussions in general should highlight ways in which national policymakers have enabled effective local CVE activities, as well as roadblocks and solutions. These discussions should also be brought into multilateral platforms such as the U.N. Global Counterterrorism Forum. A number of other steps could be taken to enhance vertical cooperation on CVE. For example, countries could involve municipal-level representatives (not simply the national ministry responsible for engaging with such authorities) in developing national CVE plans and provide such authorities with a role in implementation. National governments that already do this could start including representatives of cities in security and broader foreign policy dialogues, particularly with those that continue to resist their involvement. 
National governments should incentivize local authorities to work with their communities to innovate in this issue area. A public-private innovation fund could be established to support city-led CVE projects in countries where political will exceeds resources; those international donors committed to supporting local solutions to global challenges and increasing the involvement of local authorities in national security conversations should invest in such a fund and, more broadly, in building the capacity of city-level officials and practitioners in the CVE sphere. None of these steps is likely to be an elixir—after all, the notion that national security issues should be handled exclusively at the national level is deeply entrenched. However, taking these steps can generate gradual improvements in vertical cooperation on CVE issues, much like we have seen with international and inter-agency counterterrorism cooperation involving national governments over the past decade. Authors: Eric Rosand, Ian Klaus
Mapping—and tackling—the world's violence By webfeeds.brookings.edu Published On :: Thu, 02 Jun 2016 12:02:00 -0400 What are the greatest dangers to citizens of the world's cities, as well as its towns, villages, and rural areas? This is an important issue to understand as we approach the general election season in the United States, when candidates for the highest office in the land will have to help voters make sense of the state of violence around the world—and tell us what they would do about it. Headlines can be deceiving. We hear about China's rise, Russia's adventures, North Korea's nuclear misbehavior, the Iran nuclear deal, Pakistan and Afghanistan, and of course ISIS and civil war in the Middle East all the time. But it is also worth taking a step back to understand the broader state of violence on the planet today. Do so, and you might be surprised. As part of a Brookings-JPMorgan Chase project that we call Securing Global Cities, we have attempted to map these trends in violence, benefiting greatly from ongoing work at European think tanks like the Peace Research Institute Oslo (PRIO) and the Stockholm International Peace Research Institute (SIPRI), as well as at the University of Maryland and the United Nations. Here are some of the most salient facts and figures: Even with Russian President Vladimir Putin's activities from Ukraine to Syria in recent years, interstate conflict remains low and mild in intensity by historical standards, thankfully. China's activities in the South China Sea, however concerning, do not presently broach the threshold of interstate war. Unfortunately, the picture is more muddled for civil war. It remains less prevalent and less deadly than in the worst periods of the Cold War and the 1990s. But it has ticked up considerably since the beginning of the Arab Spring in 2011, especially in the broad arc from the Sahel in Africa through the Middle East and to South Asia. Worldwide, perhaps 100,000 people a year are dying in civil wars. 
Yet war and terrorism are not the primary security threats to most people on the planet today. Notably, each year, more than 400,000 people are murdered around the globe, according to the United Nations Office on Drugs and Crime. Murder rates are highest in the Americas and in Africa, at least twice the global average. They are greatest in central and southern Africa, and from Brazil and Venezuela/Colombia to Central America and the Caribbean and Mexico. The least violent parts of the world include most of East Asia and Western Europe, despite the terrorism threat afflicting the latter region of late. The “most improved” regions in recent decades include Colombia, formerly war-torn African states like Angola, Mozambique, Liberia, and Sierra Leone, as well as parts of Southeast Asia and a number of major U.S. cities. If one broadens the lens on the definition of violence, motor vehicle accidents constitute an even bigger threat. The World Health Organization estimates that 1.2 million people a year die in such accidents worldwide. As cities and countries think about future security, they must bear in mind not just these current realities but the potential for catastrophe—from earthquakes, droughts, pandemics, nuclear reactor disasters, and massive infrastructural failures. In a worst case, tens of millions could suddenly be put at acute risk. There is much to celebrate about the human condition today. Despite the headlines, life has actually never been safer or more prosperous for a higher fraction of the world's population. But our progress is fragile, and it is of course incomplete. The next U.S. president needs a plan for Syria, Libya, and Yemen, to be sure. But he or she also needs to address the broader challenges of urban and global security for a planet that is getting healthier and more secure but which still has a very long way to go. 
A good first step is to collect and study what works in key cities and countries around the world so that we can all learn from each other, on topics ranging from breaking up gangs to corralling drug traffickers to stopping terrorism. A great deal has been learned; it is time to spread the knowledge and emulate the best practices worldwide. Authors: Michael E. O'Hanlon
U.S. metros ranked by the 5 characteristics of inclusive economies By webfeeds.brookings.edu Published On :: Fri, 10 Jun 2016 11:11:00 -0400 Ranking U.S. metro areas, or counties, or even countries, by some fixed metric is a straightforward and often useful way to start a deeper dive into a larger body of research. For example, the top 10 counties by share of taxpayers claiming the EITC, or the top 10 metro areas by change in prosperity. But what if the phenomenon being measured is more complex, with interacting characteristics that make a top 10 list less useful? In new research, Brookings Senior Fellow Alan Berube, along with his colleagues at the Metropolitan Policy Program, and John Irons of the Rockefeller Foundation, ask “What makes an economy inclusive?” Inclusive economies, they say, “expand opportunities for more broadly shared prosperity, especially for those facing the greatest barriers to advancing their well-being.” A new Rockefeller Foundation framework identifies five characteristics of inclusive economies: equity, participation, stability, sustainability, and growth. A typical ranking approach would list the top 10 inclusive economies (or the bottom 10) based on some score derived from data. It turns out, however, that understanding the “trends and relationships that might reveal the ‘big picture’ of what makes an economy inclusive” doesn’t lend itself to typical ranking techniques, and instead requires looking at relationships among the characteristics to ascertain that “big picture.” Take, for example, equity, defined as: “More opportunities are available to enable upward mobility for more people.” For this analysis, Brookings researchers used 16 discrete indicators—such as the Gini coefficient, median income of less-educated workers as a share of overall median income, and transportation costs as a share of income—to come up with an equity score for each of the 100 largest U.S. metro areas. 
(Likewise, each of the other four inclusive economy indicators is a composite of many discrete indicators, for a total of about 100 across the five.) Looking at equity alone, the top 10 metro areas are:
1. Allentown, PA-NJ
2. Harrisburg, PA
3. Ogden, UT
4. Scranton, PA
5. Des Moines, IA
6. Salt Lake City, UT
7. Wichita, KS
8. Grand Rapids, MI
9. Pittsburgh, PA
10. Worcester, MA-CT
Top 10 lists can also be fashioned for the other four dimensions in the inclusive economies research, each showing a different mix of U.S. metro areas. For example, the top three metro areas in the growth characteristic are San Jose, CA; Houston, TX; and Austin, TX. For participation: Madison, WI; Harrisburg, PA; and Des Moines. Stability: Madison; Minneapolis, MN-WI; and Provo, UT. And, sustainability: Seattle; Boston; and Portland, OR-WA. In fact, 30 different metropolitan areas are present in the combination of the five inclusive top 10 lists, spanning the country from Oxnard, to Omaha, to Raleigh. The individual top 10 lists for each inclusive economy characteristic look like this: Because these rankings each impart useful and distinctive information about metro economies, Brookings researchers next combined the data into an overall ranking of the 100 metro areas “based on their average rankings on individual indicators for each of the five inclusive economy characteristics.” Instead of generating a ranking from 1 to 100, the analysis produces a grid-like chart that shows how metro areas fare not only in terms of inclusiveness (top to bottom), but also along a left-to-right spectrum that demonstrates the trade-offs between growth and equity. Here’s a sample from the chart (visit and study the chart here; note that wealth is depicted but by itself is not part of the inclusive economy score): One thing that stands out when considering this colorful chart against the disaggregated top 10 lists is how unrelated they seem to be. 
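The averaging approach quoted above, ranking metros on each discrete indicator and then averaging those ranks into a composite, can be sketched in code. This is an illustrative reconstruction under stated assumptions, not the researchers' actual method or data: the indicator names and values below are made up, and the real analysis spans roughly 100 indicators across 100 metro areas.

```python
# Illustrative sketch of rank averaging: rank metros on each indicator,
# then average those ranks into a composite score. All data are made up.

def ranks(values, higher_is_better=True):
    """Map each metro to its 1-based rank on one indicator."""
    ordered = sorted(values, key=values.get, reverse=higher_is_better)
    return {metro: i + 1 for i, metro in enumerate(ordered)}

def composite_rank(indicators):
    """Average each metro's ranks across a list of indicator dicts."""
    metros = indicators[0].keys()
    rank_tables = [ranks(ind) for ind in indicators]
    return {m: sum(rt[m] for rt in rank_tables) / len(rank_tables)
            for m in metros}

# Two hypothetical equity indicators for three metros (higher = more equitable):
median_income_ratio = {"Allentown": 0.81, "San Jose": 0.62, "Detroit": 0.70}
low_transport_cost = {"Allentown": 0.75, "San Jose": 0.55, "Detroit": 0.65}

print(composite_rank([median_income_ratio, low_transport_cost]))
# Allentown ranks first on both indicators, so its average rank is 1.0
```

A lower composite score means better average performance, which is why a metro like San Jose can lead on growth indicators yet sit far down the equity ranking once all 16 equity indicators are averaged.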
San Jose sits at the upper right position of the chart, suggesting that it ranks as one of the most inclusive metro economies, and yet it ranks only 51st on equity. By contrast, Allentown, PA—on the left of the second row—ranked first in equity, but lower on other measures. However, taken as a whole, both Allentown and San Jose are in the top 20 metro areas overall for inclusiveness. Detroit sits along the bottom row of the inclusiveness chart. Among the five characteristics, it posts its highest rank in growth (37th overall), with much lower ranks in the other categories, even though it ranks 29th for wealth. Las Vegas, NV, is one of the least wealthy metro areas (91st), but ranks 19th in terms of equity. Berube and Irons point to what they call “a few important insights” about the chart and these data:
Judged across all five characteristics, the “most” and “least” inclusive metro economies are geographically and economically diverse.
More equitable metropolitan economies also exhibit higher levels of participation and stability.
Growth and equity vary independently across metropolitan areas.
Metro areas with similar performance across the five characteristics may not possess the same capacity to improve their performance.
For more detailed discussion, and the complete inclusive economies chart, see “Measuring ‘inclusive economies’ in metropolitan America,” by John Irons and Alan Berube. See also “A metro map of inclusive economies,” showing metro areas that are similar to others in these outcomes. Finally, download detailed information on the composition of the 100 indicators used to measure the five inclusive economies indicators. Authors: Fred Dews
Countering violent extremism programs are not the solution to Orlando mass shooting By webfeeds.brookings.edu Published On :: Wed, 29 Jun 2016 00:00:00 -0400 In the early hours of Sunday June 12, 2016, a madman perpetrated the mass murder of 49 people in a nightclub considered a safe space for Orlando’s LGBT community. Politicians quickly went into gear to exploit this tragedy to push their own agendas. Glaringly silent on the civil rights of LGBT communities, Donald Trump and Ted Cruz repeated their calls to ban, deport, and more aggressively prosecute Muslims in the wake of this attack. As if Muslims in America are not already selectively targeted in counterterrorism enforcement, stopped for extra security by the TSA at airports, and targeted for entrapment in terrorism cases manufactured by the FBI. Other politicians reiterated calls for Muslim communities to fight extremism purportedly infecting their communities, all while ignoring the fact that domestic terrorism carried out by non-Muslim perpetrators since 9/11 has had a higher impact than the jihadist threat. Asking Muslim American communities to counter violent extremism is a red herring and a nonstarter. In 2011, the White House initiated a countering violent extremism (CVE) program as a new form of soft counterterrorism. Under the rubric of community partnerships, Muslim communities are invited to work with law enforcement to prevent Muslims from joining foreign terrorist groups such as ISIS. Federal grants and rubbing elbows with high level federal officials are among the fringe benefits for cooperation, or cooptation as some critics argue, with the CVE program. Putting aside the un-American imposition of collective responsibility on Muslims, it is a red herring to call on Muslims to counter violent extremism. An individual cannot prevent a criminal act about which s/he has no knowledge. 
Past cases show that Muslim leaders, or the perpetrators’ family members for that matter, do not have knowledge of planned terrorist acts. Hence, Muslims and non-Muslims alike are in the same state of uncertainty and insecurity about the circumstances surrounding the next terrorist act on American soil. CVE is also a nonstarter for a community under siege by the government and private acts of discrimination. CVE programs expect community leaders and parents to engage young people on timely religious, political, and social matters. While this is generally a good practice for all communities, it should not be conducted through a security paradigm. Nor can it occur without a safe space for honest dialogue. After fifteen years of aggressive surveillance and investigations, there are few safe spaces left in Muslim communities. Thanks in large part to mass FBI surveillance, mosques have become intellectual deserts where no one dares engage in discussions on sensitive political or religious topics. Fears that informants and undercover agents may secretly report on anyone who even criticizes American foreign policy have stripped mosques of their role as a community center where ideas can be freely debated. Government deportations of imams with critical views have turned Friday sermons into sterile monologues about mundane topics. And government efforts to promote “moderate” Muslims impose an assimilationist, anti-intellectual, and tokenized Muslim identity. For these reasons, debates about religion, politics, and society among young people are taking place online outside the purview of mosques, imams, and parents. Meanwhile, Muslim youth are reminded in their daily lives that they are suspect and their religion is violent. Students are subjected to bullying at school. Mosques are vandalized in conjunction with racist messages. Workers face harassment at work. Muslim women wearing headscarves are assaulted in public spaces. 
Whether fear or bigotry drives the prejudice, government action and politicians’ rhetoric legitimize discrimination as an act of patriotism. Defending against these civil rights assaults is consuming Muslim Americans’ community resources and attention. Worried about their physical safety, their means of livelihood, and the well-being of their children in schools, many Muslim Americans experience the post-9/11 era as doubly victimized by terrorism. Their civil rights are violated by private actors and their civil liberties are violated by government actors—all in retribution for a criminal act about which they had no prior knowledge, which they had no power to prevent, and which was committed by a criminal with whom they had no relationship. To be sure, we should not sit back and allow another mass shooting to occur without a national conversation about the causes of such violence. But wasting time debating ineffective and racialized CVE programs is not constructive. Our efforts are better spent addressing gun violence, the rise of homophobic violence, and failed American foreign policy in the Middle East. We all have a responsibility to do what we can to prevent more madmen from engaging in senseless violence that violates our safe spaces. This article was originally published in the Huffington Post. Authors: Sahar Aziz. Publication: The Huffington Post. Image Source: © Jonathan Ernst / Reuters
The reimagination of downtown Los Angeles By webfeeds.brookings.edu Published On :: Fri, 08 Jul 2016 11:00:00 -0400 Los Angeles has long been a city associated with the common ills of urban excess: sprawl, homelessness, and congestion. More charitable descriptions paint it as West Coast paradise, boasting sunshine and celebrities in equal measure. A three-day visit to downtown Los Angeles exposed the nuances behind these stereotypes. Hosted by the Los Angeles Downtown Center Business Improvement District, which is focused on strengthening downtown as an innovation district, our visit began as a real estate tour but quickly revealed regeneration and innovation activity that confounded our expectations. Downtown LA (DTLA)’s innovation district focuses not just on tech firms but also on historic LA industry strengths like fashion, design, and real estate. LA may have sat in the shadow of the Silicon Valley tech boom, but it appears to be revitalizing in time for the convergence economy, in which tech is no longer a separate sector but ingrained in all forms of economic and creative activity. And at a time where firms are revaluing proximity, vibrancy, and authenticity, DTLA could not be in a better place. While a number of U.S. cities subjected their downtowns to a range of urban renewal initiatives, the urban fabric of DTLA is largely intact. Vibrant areas like South Broadway feature boutique hotels, a dozen theatres, and clothing stores and bars that exist in historic infrastructure like reclaimed theatres. There is an urban feel that is authentically LA. The initial renaissance of DTLA began in the late 1990s, after the residential units within its 65 blocks had dwindled to just 10,000. Along with transportation improvements, permissive planning policies such as adaptive reuse—which allowed commercial buildings to be converted into residential use—were instrumental in increasing DTLA’s residential population. 
Since 1999, the residential population and housing units have tripled. With new bars and restaurants springing up on every corner, it is no surprise that three-quarters of DTLA’s current residents are aged between 23 and 44. Building on this residential surge, an increasing number of businesses are now setting up or relocating downtown. DTLA office space has not always been an easy sell. Employers balk at the prospect of subjecting their workforce to the punishing commute. And Bunker Hill and the adjacent Financial District, the epicenter of the central business district, offer little more than unpopulated plazas and cubicled office space. DTLA has worked to serve its newfound residential population and attract more workers and companies by retrofitting buildings to modern aesthetic standards. The exposed brickwork and ceiling equipment of many DTLA offices like those of Nationbuilder, an online platform used for political and civic campaigns, is not just a statement of style but a conscious decision to make downtown office buildings feel hospitable to creative firms. The BLOC, a 1.9 million square foot retail development, is essentially a mall that has been turned inside out, with the roof removed to reveal an open air plaza, unrecognizable from the fortress-style building that once sat in the same spot. While downtown’s office blocks are a fantastic asset in attracting innovation activity, the area also boasts a vast amount of warehouse space. These larger footprints, most often used for textile or food production, are attracting a range of activities that require space or, in the case of Tesla’s Hyperloop, secrecy. Such industrial firms are interspersed with new art galleries and a historic knitting mill, proof of the area’s artistic heritage. 
The individuals leading the drive for a DTLA innovation district, such as Nick Griffin, director of Economic Development for the Downtown Center Business Improvement District, are realistic about challenges, such as the lack of quality public space, and proactive in leveraging existing assets, such as the large supply of creative office space. These efforts and LA’s distinctive industry strengths are combatting one of the biggest challenges to attracting businesses downtown: the strength of competing areas like Silicon Beach, which includes Santa Monica and Playa del Rey and offers an established tech ecosystem alongside an attractive location. Another challenge? Like many U.S. cities, LA bears the scars of suburban sprawl and a legacy of underinvestment in public transportation. Congestion is a constant complaint. But here too LA is making progress. In November, Angelenos will vote on an extension of Measure R—a 2008 ballot initiative raising the sales tax to fund core transportation projects—to provide sustainable funding for transportation infrastructure and improve access to the city center through the metro system. Other ambitious projects, such as the Regional Connector, a light rail subway through the middle of downtown, will have a profound effect on the area's connectivity. This project is not just about getting people to and from downtown—it will also have a transformative effect on public space. The city is working with Project for Public Spaces to redesign one of the Connector’s hubs, Pershing Square, with the aim of providing a public space where employees and residents can convene and collaborate. Connectivity will play a vital role in the continuing success of DTLA’s resurgence. But the DTLA innovation district’s main opportunity lies in better serving and connecting the people who make it work. With hometown authenticity and civic commitment, DTLA is on its way to creating a city center that is greater than the sum of its parts. 
DOWNTOWN LA IN NUMBERS
Size: approx. 8.6 sq. miles
Major districts: Civic Center, Bunker Hill, Financial District, South Park, Fashion District, Jewelry District, Historic Core, Little Tokyo, Exposition Park, Toy District, Central City East, Arts District, City West, Chinatown, and Central Industrial District
Residential population: 60,600
Residents between the ages of 23 and 44: 66%
Average median household income: $98,000
Education status: 79% of residents hold a college degree
Average workday population: 500,000
Photo Credit: Hunter Kerhart
Authors: Kat Hanna, Andrew Altman
Image Source: Hunter
The muni market in the post-Detroit and post-Puerto Rico bankruptcy era By webfeeds.brookings.edu Published On :: Tue, 12 Jul 2016 14:10:00 -0400 Event Information July 12, 2016, 2:10 PM - 4:00 PM EDT, Online Only (Live Webcast) Puerto Rico is the latest, but probably not the last, case of a local government confronting financial strains that call into question its ability to meet its obligations to bondholders while providing services to its taxpaying constituents. Puerto Rico is, of course, a special case because it is a territory, not a state or municipality. Will Puerto Rico’s problems have ripple effects for the $3.7 trillion U.S. municipal bond market? What about the resolution of Detroit's bankruptcy? How will state and local governments and the courts weigh the interests of pensioners, employees, taxpayers, and bondholders when there isn't enough money to go around? On Tuesday, July 12, the Hutchins Center on Fiscal and Monetary Policy at Brookings webcast the keynote address from the 5th annual Municipal Finance Conference, delivered by the sitting governor of Puerto Rico, Hon. Alejandro García Padilla. After Governor García Padilla’s remarks on Puerto Rico’s future, Hutchins Center Director David Wessel moderated a panel on the politics and practice of municipal finance in the post-Detroit and post-Puerto Rico era. Join the conversation and tweet questions for the panelists at #MuniFinance.
Video:
Keynote address by Alejandro García Padilla
Panel: The muni market in the post-Detroit and post-Puerto Rico era
Changing patterns in household ownership of municipal debt
Municipal borrowing costs and state policies for distressed municipalities
Municipal finance structure and Chapter 9 creditor priorities
Term limits and municipal borrowing costs
Why has regional income convergence in the U.S. declined?
State strategies for detecting fiscal distress in local governments
Pensions and other post-employment benefits
Transcript: Download the uncorrected transcript (.pdf)
Event Materials: Garcia Padilla Slides; 20160712_munifinance_puertorico_detroit_transcript
Cities as classrooms: The Urban Thinkscape project By webfeeds.brookings.edu Published On :: Thu, 21 Jul 2016 09:00:00 -0400 We’re just over midway through the hazy days of summer vacation, and children without access to high-quality enrichment opportunities are already slipping behind their wealthier peers. As noted in a recent New York Times article, in addition to the decrease in math proficiency that most kids experience over the break, low-income children also lose more than two months of reading skills—skills they don’t regain during the school year. This compounds the already deep educational disparities found among students of different socioeconomic groups, which can be observed as early as 18 months of age. Most efforts to address these gaps focus on improving our K-12 educational systems. Yet, children spend an average of 80 percent of their waking time outside of a classroom—a simple, yet startling statistic that highlights the need to explore a broader range of solutions. As we learned at a recent Brookings event, Urban Thinkscape, an ongoing project from developmental psychologists Kathy Hirsh-Pasek and Roberta Michnick Golinkoff, might be one of those solutions. Drawing on findings from their research on guided play—particularly from interventions like the Ultimate Block Party and The Supermarket Study—the project embeds playful learning activities, such as games and puzzles, into public places where children routinely spend time during non-school hours. Designed by architect Itai Palti, each installation is created with specific learning goals in mind and reflects best practices in psychological research. With a pilot led by researcher Brenna Hassinger-Das in progress in the West Philadelphia Promise Zone, the project is already revealing important lessons—not only for educators, but for urban planners and policymakers as well. The first involves the (often under-appreciated) need to work with local residents. 
Through meetings and focus groups with leaders of community organizations, neighbors, and Promise Zone stakeholders, the team gained a clearer understanding of resident needs, spurred interest in the project, identified potential sites, and improved designs. Residents were brought into the process early, empowered to offer suggestions at several stages, and will continue to be engaged as the project is implemented and assessed. The upshot? When community members are meaningfully involved—and local wisdom valued—from the outset, residents become invested in the project and feel a sense of ownership of it over the long haul. This not only improves the likelihood that the project will succeed, but also helps foster neighborhood trust and cohesion, and builds social capital that can be applied to future efforts. BRENNA HASSINGER-DAS - A community focus group gives feedback on the West Philadelphia Urban Thinkscape project, January 21, 2016. A second lesson is the extent to which a full scaling of the project could help transform distressed neighborhoods through what Project for Public Spaces often refers to as “lighter, quicker, cheaper” interventions. Many high poverty urban areas are challenged with large numbers of vacant or underutilized properties, as well as dull spaces (like bus stops) that serve only utilitarian functions. The Urban Thinkscape project aims to take such spaces and remake them into opportunities for interaction and learning—and by doing so create tangible improvements to the neighborhood’s physical fabric. While the West Philadelphia pilot has substantial long-term planning behind it, ideally the “playful” installations will be refined over time so they can be more easily and cheaply implemented in other urban neighborhoods. Finally, the Urban Thinkscape interventions have the potential to advance academic and spatial skills in children, reducing the gap in school readiness, and ultimately fostering better educational and life outcomes. 
Many families in high poverty neighborhoods can’t afford extracurricular enrichment activities, particularly during the summer. And even where they might be offered—via community centers, or through other nonprofit initiatives focused on the arts, STEM activities, or sports—children may only experience them at certain times of the week. Urban Thinkscape aims to supplement these activities by embedding learning opportunities into the everyday landscape through interventions that develop numeracy, literacy, and other skills necessary to succeed in school and eventually the workforce. From an urban planning and policy perspective, this individual development is critical to helping build family wealth and vibrant, healthy city neighborhoods. Though still nascent in its development, the Urban Thinkscape model appears to be a fun, innovative way to give children—and their caregivers—learning opportunities outside the classroom, while creating new gathering spaces and improved public places. In this way, the project is creatively employing the city itself as an agent of change. If the full vision of this work is realized, perhaps we can finally put the brakes on the “summer-slide” such that all kids can start the school year at the top of their game. Authors Jennifer S. Vey and Jason Hachadorian Full Article
the Making the Rescue Package Work: Asset and Equity Purchases By webfeeds.brookings.edu Published On :: Fri, 10 Oct 2008 12:00:00 -0400 Executive Summary If the main purpose of the Emergency Economic Stabilization Act of 2008 is to give banks confidence in each other, then enabling Treasury directly to bolster the capital positions of banks that need more capital may be an even more effective way of restoring confidence to the inter-bank market than the purchase of troubled assets. Whatever Congress may have intended about the pricing of the distressed assets, it also authorized a much more direct way to recapitalize the financial system and weak banks in particular: direct purchases by Treasury of securities that individual institutions may wish to issue to bolster their capital. At this writing, Treasury reportedly is considering ways to do this. In this essay, we outline a specific bank recapitalization plan for Treasury to consider. In particular, Treasury could announce its willingness to entertain applications for capital injections, using a set pricing formula. For publicly traded banks, Treasury could buy at the price as of a given date, such as the price one or more days before its plan was announced. For privately-owned banks, Treasury could use a price based on the average price-to-book value for publicly traded banks as of that date. To prevent government intrusion into the affairs of the banks, the stock should be non-voting. Treasury would make clear that it only would take minority positions. There should be no takeovers of more companies—AIG, Fannie and Freddie are quite enough. Treasury also should announce that it will dispose (or sell back to the bank) any stock acquired through these actions as soon as the financial system has stabilized and the bank is in sound financial condition (perhaps a time limit, such as three years, should be a working presumption). 
We believe Treasury can accommodate a systematic recapitalization plan within the funding it has been given – initially $350 billion and another $350 billion later upon request to Congress (unless it disapproves) – by using the required disclosures about its asset purchases as a way of jump-starting private sector pricing and trading of these securities. This should conserve resources that Treasury might otherwise use for asset purchases, and thus free up funds to recapitalize weak banks directly, but in an orderly fashion. Treasury will have to be careful when it buys distressed assets to guard against the possibility that banks will just dump their worst stuff on taxpayers. The Department will also have to be careful when buying equity in banks. There cannot be an open invitation for bank owners to move assets out of the bank and then, in effect, say: “We don’t want this bank, you buy it.” To avoid this problem, Treasury should work closely with the FDIC and other regulators to determine whether or not a particular bank is eligible for an equity injection. The Department also may need to limit the scope of the recapitalization program to larger national banks, if it becomes infeasible to allow smaller banks to participate. Making the Rescue Package Work: Asset and Equity Purchases [1] The unprecedented financial rescue plan – technically the Emergency Economic Stabilization Act of 2008 (“EESA,” the “Act,” or the “plan”) – has now been enacted by the Congress. One of the goals of the plan is to end the immediate panic in inter-bank lending markets, and on this basis several omens are not encouraging. The Dow Jones stock index has been dropping daily, by large amounts, since EESA was enacted. The TED spread measures the difference between the interest rate on short term Treasury bills and the interest rate banks pay to borrow from each other (the LIBOR) and is a widely accepted measure of perceived risk in the financial sector. 
For several years this spread had hovered around 50 basis points or half a percentage point, reflecting the fact that lending to other financial institutions was considered almost as safe as buying Treasury bills. However, the spread shot up to 2.4 percentage points in July 2007 as the financial crisis hit, and it fluctuated widely in subsequent months. Following passage of the plan it remains even more elevated than it was last July—it was 3.8 percentage points as of October 7 and broke 4 percent on October 8. Financial institutions simply do not trust each other’s credit worthiness. Some of the market worries, of course, reflect the fragile state of the U.S. and global economies, but clearly the passage of the rescue plan itself has not calmed markets. A second and related goal for the plan, according to media accounts, is to facilitate the recapitalization of the financial system, but the language of the bill is surprisingly coy about this. While the Act aims to “restore liquidity and stability to the financial system” it also directs the Treasury Secretary to prevent “unjust enrichment of financial institutions participating” in the asset purchase program. It is not yet clear whether Treasury will choose to recapitalize banks through its asset purchases – by buying them at prices above the values to which banks and other sellers have already written them down – or whether Treasury will simply use its purchases to stabilize prices for these securities and thus provide liquidity to the market, even if it may result in additional write-downs of their values (and thus additional reductions in capital). Whatever Congress may have intended about the pricing of the distressed assets, it also authorized a much more direct way to recapitalize the financial system and weak banks in particular: direct purchases by Treasury of securities that individual institutions may wish to issue to bolster their capital. 
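The spread arithmetic the essay relies on is simple enough to sketch in code. The fragment below is illustrative only and is not part of the original essay; the underlying rates are hypothetical stand-ins, since the text quotes only the spread itself:

```python
def ted_spread_bps(libor_3m_pct: float, tbill_3m_pct: float) -> float:
    """TED spread: 3-month LIBOR minus the 3-month T-bill yield, in basis points.

    One percentage point equals 100 basis points.
    """
    return (libor_3m_pct - tbill_3m_pct) * 100

# Pre-crisis norm cited in the text: roughly 50 bps (hypothetical underlying rates)
print(ted_spread_bps(5.75, 5.25))  # -> 50.0
# A spread in the neighborhood of the October 2008 readings cited in the text
print(ted_spread_bps(5.00, 1.25))  # -> 375.0
```

The point of the calculation is that a widening spread signals banks demanding a larger premium to lend to each other than to the Treasury, i.e., rising perceived counterparty risk.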
Of course, in normal times, such authority would be unnecessary because financial institutions would seek to tap private sources of capital first. But these are not normal times, to say the least. If the main purpose of the plan is to give banks confidence in each other, then enabling Treasury directly to bolster the capital positions of banks that need more capital may be an even more effective way of restoring confidence to the inter-bank market. Accordingly, we outline here a possible supplementary bank recapitalization plan that we believe Treasury should pursue, at the same time it purchases distressed assets. As this paper is being completed on October 9, 2008, The New York Times reports that the Treasury is now considering such a move. We are encouraged by this, and in this essay we provide both a rationale for doing so and some concrete suggestions for how such a direct recapitalization program might work. We do not support further nationalization of the banking system beyond what has already been done, but we believe that the crisis has become so severe that the asset purchase plan on its own will not be enough to turn the current situation around. Additional capital is urgently needed and could be supplied by Treasury purchases of minority, non-voting equity stakes, or by warrants. We believe Treasury can accommodate a systematic recapitalization plan within the funding it has been given – initially $350 billion and another $350 billion later upon request to Congress (unless it disapproves) – by using the required disclosures about its asset purchases as a way of jump-starting private sector pricing and trading of these securities. This should conserve resources that Treasury might otherwise use for asset purchases, and thus free up funds to recapitalize weak banks directly, but in an orderly fashion, as we describe below. Why Do Banks Need More Capital? 
Financial institutions make money by borrowing money on favorable terms, that is, at low interest rates, and then lending it out at higher rates or by buying assets that yield higher returns. They may make money in other ways too, but the state of their balance sheets of assets and liabilities is crucial. In order to create a viable financial institution that can accommodate requests by depositors to take money out, someone has to put up capital and typically this comes from the equity in the company. The owners of the company have an incentive to keep this equity capital low and to build a large volume of borrowing and lending off a small base of capital—to increase leverage. This is because the profits earned are divided among the equity owners and the less capital there is, the higher the return on equity. Governments for many years and in almost all countries have regulations in place setting capital requirements for banks in particular to stop them from taking too much risk in the pursuit of high returns and also protect any fund that insures their deposits against loss (the FDIC in this country). But some of our larger banks in recent years found a way around these rules by establishing “off-balance sheet” entities – Structured Investment Vehicles (“SIVs”) – to purchase mortgage-related and other asset-backed securities that the banks were issuing. In addition, large investment banks significantly increased their leverage in the years running up to the recent crisis, and were able to do so without mandated capital requirements. As a result, when the mortgage crisis hit, our financial system was weaker than was widely believed, and in the case of large banks in particular, than was officially reported.[2] The mortgage crisis, which first surfaced in 2006 and has escalated rapidly since then, has hit bank balance sheets severely. 
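The leverage incentive just described reduces to a one-line calculation: for a fixed return on assets, thinner equity mechanically raises the return on equity. A minimal sketch with hypothetical numbers (not figures from the essay):

```python
def return_on_equity(assets: float, equity: float, return_on_assets: float) -> float:
    """Profit as a share of owners' equity.

    Holding the return on assets fixed, less equity capital (more leverage)
    raises the return earned per dollar of equity.
    """
    profit = assets * return_on_assets
    return profit / equity

# The same $100 of assets earning 1%, funded with 10% vs. 5% equity capital:
print(return_on_equity(100, 10, 0.01))  # 10% ROE
print(return_on_equity(100, 5, 0.01))   # 20% ROE
```

This is why, absent capital regulation, owners prefer to run the institution on as small an equity base as creditors will tolerate.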
As banks were forced to recognize losses on the mortgages they held in their portfolio, and especially to write down the values of their mortgage securities to their “market values” (even though the prices in those “markets” reflected relatively few “fire-sale” trades), they suffered reductions of their capital. Furthermore, the large banks that had created SIVs to escape such events found they could not hide from them when the SIVs could no longer roll over the commercial paper they had issued to finance their holdings of mortgage securities. To avoid dumping these securities on the market to satisfy their creditors, the banks took the SIVs back on their balance sheets, only to suffer further losses to their capital. As we have seen, some of our largest banks – Washington Mutual and Wachovia, to name two – have not been able to survive all of this, and have been forced, or are being forced, into the hands of stronger survivors. Other banks have been doing their best to shore up their capital bases by issuing new equity to replace the losses they have absorbed on delinquent loans and declining prices of their asset-backed securities. According to media reports, financial institutions (largely banks) worldwide have suffered over $700 billion in such losses to date, of which they replaced approximately $500 billion by issuing new equity. But more losses are sure to come; indeed Secretary Paulson has said to expect further bank failures. Earlier this year, the International Monetary Fund projected that losses due to the credit crisis worldwide could hit $1 trillion. The IMF has recently upped that forecast to $1.4 trillion. If anything close to this latest forecast is realized, then many banks – here and abroad – will need to raise even more equity, but in a capital market that is now far more risk averse than only a few months ago. 
It is in this environment that banks have grown much less comfortable dealing with each other, even though they must to keep the financial system running. Every day, some banks have more cash on hand, or reserves, than they need to meet reserve requirements and ordinary demands for liquidity, while others are short of such funds. In the United States, banks thus trade with each other in the Federal Funds market while global banks borrow and lend to each other through the London Interbank market using the LIBOR rate of interest. The main objective of the Federal Reserve’s monetary policy is to stabilize the “Fed funds” rate around a target, now just lowered to 1.5%, down from 2% where it has been for some months (and down from 5.25% before the subprime mortgage crisis). To do so, the Fed has added a huge amount of liquidity to the financial system, even going so far this week as to buy up commercial paper issued by corporations, an unprecedented step. But the Fed does not and probably cannot control the longer term inter-bank market, in which banks lend to each other typically over a 3-month period. The steep jump in the 3-month inter-bank lending rate – well over 4 percent – reflects two fundamental facts that EESA is designed to address. One is that banks don’t trust each other’s valuations of the mortgage and possibly other asset-backed securities they are all holding, precisely because the “markets” in those securities are so thin and thus not generating reliable prices. The second problem is that banks either are short of capital themselves, or fear that their counterparties are. No wonder that banks are so unwilling to lend to each other for a period even as short as three months – which, in this environment, can seem like an eternity. The capital shortage in the banking system, in particular, has severe implications for the rest of the economy. 
An institution that is short of capital is forced to cut back on its lending, and this shows up in denials of lines of credit to companies and reductions in credit limits for consumers. Households cut back on spending; it is difficult to get a mortgage or a car loan; and companies reduce investment and curtail operations. And as we learn in any college course on banking, the impact of a loss of capital on bank lending can be multiplied. Each dollar of bank capital supports roughly ten dollars of overall lending in the economy. Each dollar of lost capital thus can result in ten dollars of lending contraction. The impact of an economy-wide bank contraction can be devastating for Main Street. The Great Depression was greatly exacerbated by the collapse of banks. The long stagnation in Japan was in large part the result of a failure to recapitalize the banks. How bad is the current problem? We do not know how many banks, insurance companies, or other financial institutions are in a weakened state, or, perhaps even more important, may become weakened as the overall economy deteriorates. The official data published so far don’t really help on this score. The FDIC compiles information on the number and collective assets held by “problem banks,” or those in danger of failing. As of the second quarter of 2008, there were 117 such banks with assets of $78 billion, up from 90 banks with assets of $28 billion in the first quarter. These figures did not include Washington Mutual, which would have failed had it not been bought by J.P. Morgan, or Wachovia, which, at this writing, looks like it will be acquired by Wells Fargo (but also was in danger of failing without being acquired by someone). Together these banks hold more than $500 billion in customer deposits. Furthermore, according to recent media reports, even some large insurance companies (beyond AIG) may be having capital problems, having suffered large losses on the securities they hold in reserve to meet future claims. 
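The textbook multiplier invoked above ("each dollar of capital supports roughly ten dollars of lending") can be made concrete. A minimal sketch, assuming a flat 10 percent capital ratio and a bank that restores its ratio by shrinking assets rather than raising new equity (a simplification; actual requirements are risk-weighted):

```python
def lending_contraction(capital_loss: float, capital_ratio: float = 0.10) -> float:
    """Lending that must be shed when capital falls and the bank
    restores its capital ratio by shrinking its balance sheet."""
    return capital_loss / capital_ratio

# One dollar of lost capital forces roughly ten dollars of lending off the books
print(lending_contraction(1.0))    # ~10
# A $200 billion system-wide capital hit could imply ~$2 trillion less lending
print(lending_contraction(200e9))  # ~2e12
```

The division is the inverse of the capital ratio, which is why a modest-sounding capital loss can translate into a macroeconomically significant credit contraction.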
Can the Asset Purchase Plan Succeed in Recapitalizing the Banks? In principle, there are two ways in which the original Treasury asset purchase plan would recapitalize the banks. The first method is premised on the view that private markets are unwilling to supply capital to the banks because investors do not know how much their assets are worth. The Treasury, it is argued, would use its asset purchase plan as a way of revealing the prices of the assets and once that information is known, the banks will be able to raise new capital again from private markets. But better pricing will only attract capital if there are investors out there who are willing to supply it. Given the dramatic downturn in equities markets, finding such willing investors will be difficult, to say the least. Those investors that provided capital to banks early on in the crisis have been hit hard by the subsequent decline in equity prices and are reluctant to get burned again. When Bank of America said it would raise $10 billion from the markets, for example, its stock price fell sharply, suggesting there is a lot of market resistance to be overcome before private investors are willing to recapitalize the banking system. Second, in principle, Treasury could recapitalize the banks by buying distressed assets at prices above those at which the securities are currently carried on the books of the institutions that sell them (original book or purchase value minus any write-offs).[3] In this case, the bank would be able to report a capital gain from its sale to the Treasury, a gain that would reverse, at least in part, the capital losses it had taken in the past and thereby add to its capital. Treasury has said it will use reverse auctions[4] when it buys assets, and it is possible that the Department will be able to construct some auctions that will enable some holders of troubled assets to sell them to the Treasury at prices that earn a capital gain. 
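The reverse-auction mechanism the essay describes (see footnote [4]) inverts the usual roles: the buyer announces what it wants, sellers compete on price, and the lowest offer wins. A minimal sketch in Python, with hypothetical sellers and prices:

```python
def reverse_auction(offers: dict) -> tuple:
    """Pick the winner of a reverse auction: the lowest asking price wins.

    `offers` maps seller name -> asking price (e.g., cents on the
    dollar of face value for a block of securities).
    """
    return min(offers.items(), key=lambda item: item[1])

# Hypothetical banks offering to sell similar mortgage-backed securities to Treasury
offers = {"Bank A": 62.00, "Bank B": 58.50, "Bank C": 60.25}
winner, price = reverse_auction(offers)
print(winner, price)  # -> Bank B 58.5
```

Competition among sellers pushes the clearing price down toward the sellers' own reservation values, which is exactly why the essay doubts such auctions will yield prices above already-written-down book values.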
But we are somewhat skeptical about how many securities will fall into this category. For one thing, asset-backed securities are not homogeneous, like traditional equity or bonds. In addition, it would be surprising in the current environment if reverse auctions would reveal prices that are above the written-down values of many of these securities. After all, an auction does not necessarily produce valuations that reflect the “hold to maturity” price rather than the “liquidation” price for the securities, as Fed Chairman Ben Bernanke suggested the purchase plan would accomplish. Accordingly, we strongly suspect that Treasury will have to purchase many securities in one-on-one deals rather than through auctions. But in doing this, it may be both legally and politically difficult for the Treasury to pay prices in negotiations that are above the valuations banks or other sellers already have given them. Section 101 (e) of EESA specifically requires the Treasury Secretary “to take such steps as may be necessary to prevent unjust enrichment” of participating financial institutions, and Congress could construe such language to preclude such sales.[5] Furthermore, even if there were not a specific prohibition in the EESA, Treasury may wish to avoid the public criticism it would face if it purchased assets at prices that would allow participating institutions to book gains. And, in the case of sales at prices below the explicit or implicit price of the securities carried on an institution’s books, the sales will trigger further accounting losses and thus additional deductions from reported capital. In short, we are not at all confident that the Treasury’s planned purchases of troubled securities, by themselves, will do much to recapitalize the banking system. This does not mean that the planned asset purchases will not deliver some needed help. 
Although at this writing the inter-bank lending market remains frozen even though EESA has been enacted and signed into law, one reason why banks and others may not yet have confidence that it will lead to a thaw in credit markets is that the guidelines for the asset purchases have not yet been issued. Once these guidelines are announced and the purchases begin, and the markets start to see real results, it is possible that some of the missing trust in the banking system will come back.[6] However, Treasury may not need to spend, and for reasons elaborated below we do not believe it should spend, anywhere near the full $700 billion, or perhaps even most of the initial $350 billion tranche in borrowing authority, to liquefy the markets for mortgage and other asset-backed securities. EESA requires Treasury to publish (within two days) information about each of these purchases. We urge the Department to include in such publications (presumably on its website) regular data on the defaults and delinquencies to date of the loans underlying each batch of securities it purchases. Such information should enable financial institutions that are still holding similar securities not only to price them more accurately, but also to give market participants enough confidence to begin trading these securities without further Treasury purchases. Husbanding its resources should be a prime objective for Treasury. In conducting its purchases of troubled assets, it should target first those asset categories that are the most illiquid. The main objective always should be jump-starting private sector activity or at least bringing greater clarity to the pricing of particular classes of securities. There is no need for Treasury, therefore, to make repeat purchases of similar securities (such as collateralized debt obligations issued within several months of each other, structured in roughly a similar way). 
Rather, the aim should be to make a market in as many different asset categories as are reasonably necessary to provide guidance to market participants, no more, no less. Yet no one can be confident at this point that asset purchases alone will give banks sufficient confidence to begin dealing with each other at much lower interest rates. If the asset purchases do the trick, fine. But if they don’t, Treasury should make sure it has enough financial ammunition to pursue a second, more direct, strategy for restoring banks’ confidence – the direct bank recapitalization strategy to which we now turn. Recapitalizing the Financial System Directly Having the government put capital into financial institutions directly is not a new idea. It is the approach followed in this crisis for Fannie and Freddie and has been used in other countries. Sweden recapitalized its banks by adding capital to them during its banking crisis in the early 1990s. Most recently, the British government has announced a sweeping bank recapitalization amidst the current crisis. And of more relevance to the U.S. situation, Congress specifically added authority in EESA for Treasury to make direct capital injections into banks. In recent days, Treasury Secretary Paulson has acknowledged that the Department may take advantage of this authority and thus use some of its funds to buy equity in troubled banks. This is a welcome development. Even if Treasury’s asset purchase program restores confidence in the pricing of troubled securities, many banks still believe that many other banks lack sufficient capital, and thus may still be reluctant to lend to them. The fact that the FDIC stands ready (especially with its new unlimited line of credit at the Treasury) to assist acquiring banks in taking over failing banks is probably not sufficient, even with a successful Treasury asset purchase program, to provide this confidence. 
Bank lenders to failed banks can still lose money in such transactions, or at the very least may have difficulty accessing their funds for some period, at times when all banks seem to want or need as much liquidity as they can get. How might such a capital injection program work? Treasury could announce its willingness to entertain applications for capital injections, using a set pricing formula. For publicly traded banks, Treasury could buy at the price as of a given date, such as the price one or more days before its plan was announced, as has been suggested by former St. Louis Federal Reserve Bank President William Poole.[7] For privately-owned banks, Treasury could use a price based on the average price-to-book value for publicly traded banks as of that date. To prevent government intrusion into the affairs of the banks, the stock should be non-voting. Treasury would make clear that it only would take minority positions. There should be no takeovers of more companies—AIG, Fannie and Freddie are quite enough. Treasury also should announce that it will dispose (or sell back to the bank) any stock acquired through these actions as soon as the financial system has stabilized and the bank is in sound financial condition (perhaps a time limit, such as three years, should be a working presumption). The Treasury will have to be careful when it buys distressed assets to guard against the possibility that banks will just dump their worst stuff on the taxpayers. The Department also will have to be careful when buying equity in banks, especially if it decides to go for a broad, nationwide program. There cannot be an open invitation for owners to move assets out of the bank and then, in effect, say: “We don’t want this bank, you buy it.” This problem suggests that Treasury would need to work closely with the FDIC and other regulators to determine whether or not a particular bank is eligible for an equity injection. 
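For privately owned banks, the pricing formula suggested above keys off publicly traded peers. A sketch of that calculation, with hypothetical multiples and book value (the essay specifies only the average price-to-book approach, not these figures):

```python
def private_bank_share_price(book_value_per_share: float,
                             peer_price_to_book: list) -> float:
    """Price an unlisted bank's new shares using the average price-to-book
    multiple of publicly traded banks as of the reference date."""
    avg_multiple = sum(peer_price_to_book) / len(peer_price_to_book)
    return book_value_per_share * avg_multiple

# Peers trading at 0.8x, 1.0x, and 1.2x book on the chosen date; $20 book value per share
print(private_bank_share_price(20.0, [0.8, 1.0, 1.2]))  # -> 20.0
```

Fixing the reference date in advance matters: it keeps the injection price from being contaminated by the market's reaction to the announcement of the program itself.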
Treasury also may need to limit the scope of the program to larger banks, if it becomes infeasible to allow smaller banks to participate. We presume that Treasury did not initially embrace the idea of a more systematic recapitalization of the banking system out of concern not to have any further government involvement in the banking system, especially on the heels of the Fannie/Freddie conservatorship and the Fed’s rescue of AIG. That Treasury is now considering direct capital injections indicates that this may no longer be a concern. In our view, limiting Treasury’s purchases to non-voting stock in any event would address this concern directly. Conclusion Ben Bernanke has compared the current financial crisis to a heart attack in the economy. For some heart attacks, it is enough to administer drugs and change diet and exercise habits. But in acute cases, major surgery is needed, and the current crisis is in the acute phase. Direct surgery in the form of capital injected into financial institutions, along with direct asset purchases, should help calm the inter-bank lending market. Based on recent monthly data it appears that GDP started to fall in mid-year and the economy is moving into recession, so the proposals made here will not change that. Nor can the proposals compel banks to make loans to their traditional customers – consumers and businesses – in the current climate of fear. But Treasury can do something to mitigate that fear and thus, along with the recent further easing of monetary policy, likely additional fiscal stimulus and further homeowner relief, the Department will help reduce the severity of the current recession if it uses all the tools in its financial arsenal. [1] Note: This is the second essay in a series on the financial crisis and how to respond. 
For the first essay, see http://www.brookings.edu/papers/2008/0922_fixing_finance_baily_litan.aspx [2] The government’s reported bank capital ratios, for example, did not take account of the off-balance sheet assets and liabilities of the SIVs, which large banks later had to take back on their balance sheets directly. [3] Some institutions holding these securities may not have fully marked them to “market” under current accounting rules, but instead simply have added to their reserves for possible future losses to reflect the likelihood of such write-downs. In the latter case, the securities may implicitly be marked down by a percentage reflecting the loan loss reserve attributable to them. If this latter percentage is not publicly stated, Treasury may require participating institutions to break it out for the Department as a condition for participating in the program (and if the Department does not do this, it may be compelled to do so either by the Executive branch Oversight authority or the Congressional oversight committee established under the Act). [4] A regular auction is where the seller puts an item out on the market and then potential buyers bid for it. The seller then takes the highest price. In a reverse auction, the buyer puts out a notice of what item he or she wants to buy and then sellers compete to supply this item. The buyer then chooses the lowest price. Reverse auctions are the way a lot of private companies and government entities manage their procurement processes. [5] The rest of this subsection includes as an example of such unjust enrichment the sale of a troubled asset to the Treasury at a higher price than what the seller paid to acquire it. But this language is not exclusive. Congress, the public or the media could construe unjust enrichment also to include sales of securities at prices above those implicitly or explicitly carried by the institution on its books. 
[6] The Treasury asset purchase plan would also provide a valuable service by speeding the de-leveraging process. As we described earlier, banks are leveraged and hold capital that is only a fraction of their assets or liabilities. When they take a hit to their capital base, they must either replenish the capital or scale back their balance sheets. When it became impossible to sell the assets except at fire-sale prices, they were not able to do this. Selling the asset to the Treasury will help them scale down. To get bank lending going again, however, we want them to be able to make new loans, not just to scale back. [7] Speech made at the National Association of Business Economists conference, Washington DC, October 6, 2008. Authors Martin Neil Baily and Robert E. Litan Full Article
the The Origins of the Financial Crisis By webfeeds.brookings.edu Published On :: Mon, 24 Nov 2008 09:34:00 -0500 SUMMARY The financial crisis that has been wreaking havoc in markets in the U.S. and across the world since August 2007 had its origins in an asset price bubble that interacted with new kinds of financial innovations that masked risk; with companies that failed to follow their own risk management procedures; and with regulators and supervisors that failed to restrain excessive risk taking. A bubble formed in the housing markets as home prices across the country increased each year from the mid-1990s to 2006, moving out of line with fundamentals like household income. As in traditional asset price bubbles, expectations of future price increases developed and were a significant factor in inflating house prices. As individuals witnessed rising prices in their neighborhood and across the country, they began to expect those prices to continue to rise, even in the late years of the bubble when it had nearly peaked. The rapid rise of lending to subprime borrowers helped inflate the housing price bubble. Before 2000, subprime lending was virtually non-existent; thereafter it took off exponentially. The sustained rise in house prices, along with new financial innovations, suddenly made subprime borrowers — previously shut out of the mortgage markets — attractive customers for mortgage lenders. Lenders devised innovative Adjustable Rate Mortgages (ARMs) — with low "teaser rates," no down-payments, and some even allowing the borrower to postpone some of the interest due each month and add it to the principal of the loan — which were predicated on the expectation that home prices would continue to rise. 
But innovation in mortgage design alone would not have enabled so many subprime borrowers to access credit without other innovations in the so-called process of "securitizing" mortgages — the pooling of mortgages into packages and then selling securities backed by those packages to investors, who receive pro rata payments of principal and interest made by the borrowers. The two main government-sponsored enterprises devoted to mortgage lending, Fannie Mae and Freddie Mac, developed this financing technique in the 1970s, adding their guarantees to these "mortgage-backed securities" (MBS) to ensure their marketability. For roughly three decades, Fannie and Freddie confined their guarantees to "prime" borrowers who took out "conforming" loans, or loans with a principal below a certain dollar threshold made to borrowers with a credit score above a certain limit. Along the way, the private sector developed MBS backed by non-conforming loans that had other means of "credit enhancement," but this market stayed relatively small until the late 1990s. In this fashion, Wall Street investors effectively financed homebuyers on Main Street. Banks, thrifts, and a new industry of mortgage brokers originated the loans but did not keep them, a departure from the "old" way of financing home ownership, in which the originator held the loan. Over the past decade, private sector commercial and investment banks developed new ways of securitizing subprime mortgages: by packaging them into "Collateralized Debt Obligations" (sometimes with other asset-backed securities), and then dividing the cash flows into different "tranches" to appeal to different classes of investors with different tolerances for risk. By ordering the rights to the cash flows, the developers of CDOs (and subsequently other securities built on this model) were able to convince the credit rating agencies to assign their highest ratings to the securities in the highest tranche, or risk class. 
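The tranche structure described above amounts to a payment "waterfall": pooled mortgage cash flows are paid to tranches in order of seniority, so the senior tranche absorbs losses last. A minimal sketch, with tranche sizes and cash flows invented for illustration, looks like this:

```python
# Hypothetical CDO cash-flow waterfall. Tranches are paid in seniority
# order; each receives at most its claim, and whatever is left flows down.
# Sizes and cash amounts are illustrative, not drawn from any real deal.

def waterfall(cash, tranches):
    """Distribute pool cash to tranches listed senior-first."""
    payouts = {}
    for name, claim in tranches:
        paid = min(cash, claim)
        payouts[name] = paid
        cash -= paid
    return payouts

tranches = [("senior", 70.0), ("mezzanine", 20.0), ("equity", 10.0)]

full = waterfall(100.0, tranches)   # no defaults: every tranche paid in full
short = waterfall(80.0, tranches)   # 20% shortfall: equity wiped out first
```

This ordering is why the senior tranche could plausibly earn a top rating even when the underlying loans were subprime: it only suffers after the junior tranches are exhausted — an assumption that failed when losses exceeded the junior cushions.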
In some cases, so-called "monoline" bond insurers (which had previously concentrated on insuring municipal bonds) sold protection to CDO investors that would pay off in the event that the underlying loans went into default. In other cases, especially more recently, insurance companies, investment banks and other parties did the near equivalent by selling "credit default swaps" (CDS), which were similar to monoline insurance in principle but different in risk, as CDS sellers put up very little capital to back their transactions. These new innovations enabled Wall Street to do for subprime mortgages what it had already done for conforming mortgages, and they facilitated the boom in subprime lending that occurred after 2000. By channeling the funds of institutional investors into the origination of subprime mortgages, they made many households previously unable to qualify for mortgage credit eligible for loans. This new group of eligible borrowers increased housing demand and helped inflate home prices. These new financial innovations thrived in an environment of easy monetary policy by the Federal Reserve and poor regulatory oversight. With interest rates so low and with regulators turning a blind eye, financial institutions borrowed more and more money (i.e., increased their leverage) to finance their purchases of mortgage-related securities. Banks created off-balance sheet affiliated entities such as Structured Investment Vehicles (SIVs) to purchase mortgage-related assets that were not subject to regulatory capital requirements. Financial institutions also turned to short-term "collateralized borrowing" like repurchase agreements, so much so that by 2006 investment banks were on average rolling over a quarter of their balance sheet every night. During the years of rising asset prices, this short-term debt could be rolled over like clockwork. 
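The passage's point that CDS sellers "put up very little capital to back their transactions" can be illustrated with hypothetical numbers. The notional amount, capital cushion, and recovery rate below are assumptions for the sketch, not figures from the paper.

```python
# Illustrative only: a protection seller backing a large notional amount
# of CDS with a thin capital cushion is insolvent at a modest default
# rate. All figures are invented.

def seller_solvent(notional, capital, default_rate, recovery=0.4):
    """Can the seller cover payouts of (1 - recovery) on defaulted notional?"""
    payout = notional * default_rate * (1 - recovery)
    return capital >= payout

notional = 1000.0   # protection sold (hypothetical)
capital = 15.0      # capital held against it, i.e. 1.5%

ok_low = seller_solvent(notional, capital, default_rate=0.02)   # survives 2% defaults
ok_high = seller_solvent(notional, capital, default_rate=0.05)  # fails at 5% defaults
```

A regulated monoline insurer would hold reserves against the same exposure; the thin capitalization sketched here is the "different in risk" part of the comparison in the text.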
This tenuous situation unraveled once panic hit in 2007, however, as sudden uncertainty over asset prices caused lenders to abruptly refuse to roll over their debts, and over-leveraged banks found themselves exposed to falling asset prices with very little capital. While ex post we can certainly say that the system-wide increase in borrowed money was irresponsible and bound for catastrophe, it is not shocking that consumers, would-be homeowners, and profit-maximizing banks would borrow more money when asset prices are rising; indeed, it is quite intuitive. What is especially shocking, though, is how institutions along each link of the securitization chain failed so grossly to perform adequate risk assessment on the mortgage-related assets they held and traded. From the mortgage originator, to the loan servicer, to the mortgage-backed security issuer, to the CDO issuer, to the CDS protection seller, to the credit rating agencies, and to the holders of all those securities, at no point did any institution stop the party or question the little-understood computer risk models, or the blatantly unsustainable deterioration of the loan terms of the underlying mortgages. A key point in understanding this system-wide failure of risk assessment is that each link of the securitization chain is plagued by asymmetric information – that is, one party has better information than the other. In such cases, one side is usually careful in doing business with the other and makes every effort to accurately assess the risk of the other side with the information it is given. However, this sort of due diligence that is to be expected in markets with asymmetric information was essentially absent in recent years of mortgage securitization. 
Computer models took the place of human judgment, as originators did not adequately assess the risk of borrowers, mortgage servicers did not adequately assess the risk of the terms of the mortgage loans they serviced, MBS issuers did not adequately assess the risk of the securities they sold, and so on. The lack of due diligence on all fronts was partly due to the incentives in the securitization model itself. With the ability to immediately pass off the risk of an asset to someone else, institutions had little financial incentive to worry about the actual risk of the assets in question. But what about the MBS, CDO, and CDS holders who did ultimately hold the risk? The buyers of these instruments had every incentive to understand the risk of the underlying assets. What explains their failure to do so? One part of the reason is that these investors — like everyone else — were caught up in a bubble mentality that enveloped the entire system. Others saw the large profits from subprime-mortgage related assets and wanted to get in on the action. In addition, the sheer complexity and opacity of the securitized financial system meant that many people simply did not have the information or capacity to make their own judgment on the securities they held, instead relying on rating agencies and complex but flawed computer models. In other words, poor incentives, the bubble in home prices, and lack of transparency erased the frictions inherent in markets with asymmetric information (and since the crisis hit in 2007, the extreme opposite has been the case, with asymmetric information problems having effectively frozen credit markets). In the pages that follow, we tell this story more fully. Downloads Download English Version Authors Martin Neil Baily, Matthew S. Johnson, Robert E. Litan Full Article
the Regulating Insurance After the Crisis By webfeeds.brookings.edu Published On :: Wed, 04 Mar 2009 12:00:00 -0500 EXECUTIVE SUMMARY Despite a long-standing policy debate, insurance remains the only major financial industry not to be regulated at the federal level, a tradition dating from the 19th century. However, recent financial turmoil has fundamentally changed the terms of this important discussion. Many contend that, as opposed to as many as 51 separate regulators, a single federal insurance regulator would: allow insurers to pass substantial savings on to their consumers; preempt market-distorting state regulation of rates; attract the expert talent needed to supervise increasingly complex industry products; improve competition between insurers and non-insurance financial institutions for insurance-like products; better position insurers to compete globally; and make national policy with respect to insurer solvency. However, state insurance regulators and some smaller insurers and insurance agents favor the current system, arguing that: they alone have the interest, expertise, and accessibility to consumers to best handle consumer complaints; insurance rates must be subject to oversight, if not outright control, to protect consumers; and state regulators have moved aggressively in recent years to improve their solvency regulation. After weighing these arguments, I conclude in this essay that insurers and agents operating in multiple states should have the option to operate under a more streamlined regulatory system, and in particular to choose between being chartered and thus regulated by individual state regulators, or by a new federal insurance regulator. Congress has considered but not yet enacted legislation establishing this “optional federal charter” system, analogous (although not identical) to the regulatory system that has long governed the U.S. banking industry. 
Further, the recent financial crisis and associated bailout of AIG make it clear that, in addition to the optional federal charter, the government should require federal solvency and consumer protection regulation of the largest insurers that are deemed to be “systemically important financial institutions.” Clearly, if the federal government is potentially needed as a source of debt or equity funds for certain insurers, there is a strong case for having the federal authorities actively oversee the financial safety and soundness of at least those firms that may benefit from federal, and thus national taxpayer, assistance. Downloads Download Authors Robert E. Litan Full Article
the The U.S. Financial and Economic Crisis: Where Does It Stand and Where Do We Go From Here? By webfeeds.brookings.edu Published On :: Mon, 15 Jun 2009 12:00:00 -0400 INTRODUCTION The Obama administration needs to focus on executing its existing financial rescue plans, keep the TARP focused on the banking sector, and create a contingency plan should the banking system destabilize again. Crystal balls are dangerous, especially when it comes to economic predictions, which is why it is important for the administration to chart a path forward. Public policy must remain focused on the very real possibility that the apparent easing in the economy’s decline may be followed by little or no growth for several quarters, and possibly by another negative turn. One of the risks is that the United States is very connected to the rest of the world, most of which is in severe recession. The global economy could be a significant drag on U.S. growth. Cautious optimism should be the order of the day. We fear that the recent reactions of the financial markets and of some analysts carry too much of the optimism without recognizing enough of the uncertainty. There remains a lot of uncertainty, and policymakers should not rest on their laurels or turn to other policies, even if they look more exciting. It is vital to follow through on the current financial rescue plans and to have well-conceived contingency plans in case there is another dip down. We propose three recommendations for the financial rescue plans: First, focus on execution of existing programs. The administration has created programs to deal with each of the key elements necessary to solve the financial crisis. All of them have significant steps remaining and some of them have not even started yet, such as the programs to deal with toxic assets. As has been demonstrated multiple times now since October 2008, these are complex programs that require a great deal of attention. 
It is time to execute rather than to create still more efforts. Second, resist the temptation to allocate money from the TARP to other uses—it is essential to maintain a reserve of Congressionally-authorized funds in case they are needed for the banks. It would be difficult to overemphasize the remaining uncertainties about bank solvency as banks navigate what will remain a rough year or more. The banks could easily need another $300 billion of equity capital and might need still more. It is essential that the administration have the ammunition readily available. Third, make sure there is a contingency plan to deal with a major setback for the banking system. The plan needs broad support within the administration and among regulators and, ideally, from key congressional leaders. We probably won’t need it, but there is too high a chance that we will for the country to remain without one. The country cannot afford even the appearance of the ad hoc and changing responses that were evident last fall. We also give the administration a thumbs-up for its bank recapitalization as well as the TALF program, while we are much more skeptical of the Treasury’s approaches to toxic assets. We also believe it is time to focus on the truly mind-blowing budget deficits, given the danger that markets will not be able to absorb the amount of government borrowing needed without triggering a rise in U.S. interest rates and perhaps an unstable decline in the value of the dollar; nor do we believe there should be another fiscal stimulus except under extreme circumstances. Downloads Download Full Paper - English Authors Martin Neil Baily, Douglas J. Elliott Full Article
the Strengthening and Streamlining Prudential Bank Supervision By webfeeds.brookings.edu Published On :: Thu, 06 Aug 2009 09:03:01 -0400 There are a number of causes of the financial crisis that has devastated the U.S. economy and spread globally. Weakness in financial sector regulation was one of them, and the proliferation of different regulators is, in turn, a cause of the regulatory failure. There is a bewildering alphabet-soup variety of regulators and supervisors for banks and other financial institutions that failed in their task of preventing the crisis and, at the same time, created an excessive regulatory burden on the industry because of overlapping and duplicative functions. We can do better. This paper makes the case for a single micro-prudential regulator, that is to say, one federal agency that has responsibility for the supervision and regulation of all federally chartered banks and all major non-bank financial institutions. There would still be state-chartered financial institutions covered by state regulators, but the federal regulator would share regulatory authority with the states. The Objectives Approach to Regulation The Blueprint for financial reform prepared by the Paulson Treasury proposed a system of objectives-based regulation, an approach that had been previously suggested and that is the basis for regulation in Australia. The White Paper prepared by the Geithner Treasury did not use the same terminology, but it is clear from the structure of the paper that its approach is essentially an objectives-based one, as it lays out the different elements of regulatory reform that should be covered. I support the objectives approach to regulation. There should be three major objectives of regulation, as follows. • To make sure that there is micro-prudential supervision, so that customers and taxpayers are protected against excessive risk taking that may cause a single institution to fail. 
• To make sure that the whole financial sector retains its balance and does not become unstable. That means someone has to warn about the buildup of risk across several institutions and perhaps take regulatory actions to restrain lending used to purchase assets whose prices are creating a speculative bubble. • To regulate the conduct of business. That means watching out for the interests of consumers and investors, whether they are small shareholders in public companies or households deciding whether to take out a mortgage or use a credit card. In applying this approach, it is vital for both the economy and the financial sector that the Federal Reserve have independence as it makes monetary policy. Experience in the United States and around the world supports the view that an independent central bank results in better macroeconomic performance and restrains inflationary expectations. An independent Fed setting monetary policy is essential. An advantage of objectives-based regulation is that it forces us to consider the “must haves” of financial regulation—those things absolutely necessary to reduce the chances of another crisis. Additionally, we can see the “must not haves”—the regulations that would have negative effects. It is much more important to make sure that the job gets done right, that there are no gaps in regulation that could contribute to another crisis and that there is no over-regulation that could stifle innovation and slow economic growth, than that the boxes of the regulatory system be arranged in a particular way. In turn, this means that the issue of regulatory consolidation is important, but only to the extent that it makes it easier or harder to achieve the three major objectives of regulation efficiently and effectively. For objectives-based regulation to work, it is essential to harness the power of the market as a way to enhance stability. 
It will never be possible to have enough smart regulators in place who can outwit private sector participants determined to get around regulations, whether because those regulations inhibit profit opportunities or because of the burdens they impose. A good regulatory environment is structured so that people who take risks stand to lose their own money if their bets do not work out. The crisis we are going through was caused by both market and regulatory failures, and the market failures were often the result of a lack of transparency (“asymmetric information” in the jargon of economics). Those who invested money and lost it often did not realize the risks they were taking. To the extent that policymakers can enhance transparency, they can make market forces work better and help achieve the goal of greater stability. Having a single micro-prudential regulator would help greatly in meeting the objectives of regulation, a point that will be taken up in more detail below. It is not a new idea. In 1993-94, the Clinton and Riegle proposals for financial regulation argued that a single micro-prudential regulator would provide the best protection for the economy and for the industry. The Blueprint developed by the Paulson Treasury likewise proposed a single micro-prudential regulator. Read the full paper » (pdf) Downloads Download Authors Martin Neil Baily Full Article
the Can the US sue China for COVID-19 damages? Not really. By webfeeds.brookings.edu Published On :: Wed, 29 Apr 2020 14:58:58 +0000 Full Article
the Webinar: Following the money: China Inc’s growing stake in India-China relations By webfeeds.brookings.edu Published On :: Thu, 30 Apr 2020 07:45:58 +0000 By Nidhi Varma https://www.youtube.com/watch?v=6BhEaetvl7M On April 30, 2020, Brookings India organised its first Foreign Policy & Security Studies webinar panel discussion, on the recent Brookings India report “Following the money: China Inc’s growing stake in India-China relations” by Ananth Krishnan, former Visiting Fellow at Brookings India. The panel featured Amb. Shivshankar Menon, Distinguished Fellow,… Full Article
the How is the coronavirus outbreak affecting China’s relations with India? By webfeeds.brookings.edu Published On :: Thu, 30 Apr 2020 12:02:00 +0000 China’s handling of the coronavirus pandemic has reinforced the skeptical perception of the country that prevails in many quarters in India. The Indian state’s rhetoric has been quite measured, reflecting its need to procure medical supplies from China and its desire to keep the relationship stable. Nonetheless, Beijing’s approach has fueled Delhi’s existing strategic and economic concerns. These… Full Article
the Technology competition between the US and a Global China By webfeeds.brookings.edu Published On :: Tue, 05 May 2020 09:00:59 +0000 In this special edition of the Brookings Cafeteria Podcast, Lindsey Ford, a David M. Rubenstein Fellow in Foreign Policy, interviews two scholars on some of the key issues in the U.S.-China technology competition, which is the topic of the most recent release of papers in the Global China series. Tom Stefanick is a visiting fellow… Full Article
the The Trump administration and the “Free and Open Indo-Pacific” By webfeeds.brookings.edu Published On :: Tue, 05 May 2020 19:29:31 +0000 EXECUTIVE SUMMARY The Trump administration rolled out a new “Free and Open Indo-Pacific” concept in late 2017. Since this point, the administration’s new strategy has generated as many questions as it has answers. Despite dramatic shifts in many aspects of U.S. foreign policy after the 2016 election, there are notable areas of continuity between the… Full Article
the Pandemic politics: Does the coronavirus pandemic signal China’s ascendency to global leadership? By webfeeds.brookings.edu Published On :: Wed, 06 May 2020 07:52:44 +0000 The absence of global leadership and cooperation has hampered the global response to the coronavirus pandemic. This stands in stark contrast to the leadership and cooperation that mitigated the financial crisis of 2008 and that contained the Ebola outbreak of 2014. At a time when the United States has abandoned its leadership role, China is… Full Article
the Africa in the news: South Africa looks to open up; COVID-19 complicates food security, malaria response By webfeeds.brookings.edu Published On :: Sat, 25 Apr 2020 11:30:28 +0000 South Africa announces stimulus plan and a pathway for opening up As of this writing, the African continent has registered over 27,800 COVID-19 cases, with over 1,300 confirmed deaths, according to the Africa Centers for Disease Control and Prevention. Countries around the continent continue to institute various forms of social distancing restrictions: For example, in… Full Article
the It’s time to help Africa fight the virus By webfeeds.brookings.edu Published On :: Wed, 29 Apr 2020 19:53:20 +0000 Full Article
the How the AfCFTA will improve access to ‘essential products’ and bolster Africa’s resilience to respond to future pandemics By webfeeds.brookings.edu Published On :: Thu, 30 Apr 2020 22:10:14 +0000 Africa’s extreme vulnerability to the disruption of international supply chains during the COVID-19 pandemic highlights the need to reduce the continent’s dependence on non-African trading partners and unlock Africa’s business potential. While African countries are right to focus their energy on managing the immediate health crisis, they must not lose sight of finalizing the Africa… Full Article