
COVID in the Maghreb: Responses and impacts

       





Arms Control Agreement With Russia Should Cover More Than Nuclear Weapons

With the Russia investigation and impeachment behind him, President Trump may finally feel empowered to engage with Russian President Vladimir Putin and pursue an arms control deal.





Breaking Down the Huawei v. Pentagon Dispute

If nothing else, the long-running Huawei situation shows the importance of considering the supply chain when it comes to cybersecurity. Huawei is the Chinese telecommunications equipment maker that has been effectively banned by the federal government. Bruce Schneier joins host Tom Temin on Federal Drive.





Spies Are Fighting a Shadow War Against the Coronavirus

Calder Walton describes four ways intelligence services are certain to contribute to defeating COVID-19, and explains why pandemic intelligence will become a central part of future U.S. national security.





How the Pentagon Is Struggling to Stay out of Politics

Gen. Mark A. Milley’s job is to provide sound military advice to the president. But at a deeper level, his responsibility is to safeguard the independence and integrity of the armed forces. The last thing the country needs is a military leadership that’s trying to curry favor with any commander in chief, particularly one who’s hungry for affirmation.





Moving to the Cloud: How the Public Sector Can Leverage the Power of Cloud Computing

Event Information

July 21, 2010
10:00 AM - 12:00 PM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


The U.S. government spends billions of dollars each year on computer hardware, software and file servers that may no longer be necessary. Currently, the public sector makes relatively little use of cloud computing, even though studies suggest substantial government savings from a migration to more Internet-based computing with shared resources.

On July 21, the Center for Technology Innovation at Brookings hosted a policy forum on steps to enhance public sector adoption of cloud computing innovations. Brookings Vice President Darrell West moderated a panel of experts, including David McClure of the General Services Administration, Dawn Leaf of the National Institute of Standards and Technology, and Katie Ratte of the Federal Trade Commission. West released a paper detailing the policy changes required to improve the efficiency and effectiveness of federal computing.

Audio

Transcript

Event Materials

     
 
 





Privacy and Security in the Cloud Computing Age


Event Information

October 26, 2010
10:00 AM - 11:30 AM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


Although research suggests that considerable efficiencies can be gained from cloud computing technology, concerns over privacy and security continue to deter government and private-sector firms from migrating to the cloud. By its very nature, storing information or accessing services through remote providers would seem to raise the level of privacy and security risks. But is such apprehension warranted? What are the real security threats posed to individuals, business and government by cloud computing technologies? Do the cost-saving benefits outweigh the dangers?

On October 26, the Brookings Institution hosted a policy forum on the privacy and security challenges raised by cloud computing. Governance Studies Director Darrell West moderated a panel of technology industry experts examining how cloud computing systems can generate innovation and cost savings without sacrificing privacy and security. West also presented findings from his forthcoming paper “Privacy, Security, and Innovation in Cloud Computing.”

After the program, panelists took audience questions.

Transcript

Event Materials

     
 
 





Addressing Export Control in the Age of Cloud Computing


Executive Summary

The move to the cloud is one of the defining information technology trends of the early 21st century. By providing businesses, universities, government agencies, and other entities with access to shared and often physically dispersed computing resources, cloud computing can simultaneously offer increased flexibility, reduced cost, and access to a wider array of services.

Cloud computing has also created a set of new challenges. For example, the issues of privacy and security in the cloud are well recognized and have been extensively discussed in the business and popular press. However, one critical issue that has received very little attention with respect to cloud computing is export control.

In the broadest sense, export control relates to regulations that the United States and many other countries have put in place to restrict the export of various sensitive items, information, and software.

There is an inherent tension between cloud computing and export control. While the concept of the cloud is centered on the premise of removing the need to track the details of data movement among various destinations, export control regulations are built largely around restrictions tied to those very movements.

If cloud computing is to reach its full potential, it is critical for providers and users of cloud services to address its implications with respect to export control. It is equally important to adapt the export control regulations to reflect the increasing prevalence of cloud computing in a manner that preserves the ability of American companies to benefit from the efficiencies of the cloud while also ensuring that American national security and foreign policy interests are adequately protected.

Downloads

Authors

Image Source: © Valentin Flauraud / Reuters
      
 
 





The World Bank steps up on fragility and conflict: Is it asking the right questions?

At the beginning of this century, about one in four of the world's extreme poor lived in fragile and conflict-affected situations (FCS). By the end of this year, FCS will be home to the majority of the world's extreme poor. Increasingly, we live in a "two-speed world." This is the key finding of a…

       





Addressing COVID-19 in resource-poor and fragile countries

Responding to the coronavirus as individuals, society, and governments is challenging enough in the United States and other developed countries with modern infrastructure and stable systems, but what happens when a pandemic strikes poor and unstable countries that have few hospitals, lack reliable electricity, water, and food supplies, don’t have refrigeration, and suffer from social…

       





Leaving all to younger hands: Why the history of the women’s suffragist movement matters

The campaign to win passage of the 19th Amendment guaranteeing women the right to vote stands as one of the most significant and wide-ranging moments of political mobilization in all of American history. Among other outcomes, it produced the largest one-time increase in voters ever. As important as the goal of suffrage was, the struggle…

       





Get rid of the White House Coronavirus Task Force before it kills again

As news began to leak out that the White House was thinking about winding down the coronavirus task force, it was greeted with some consternation. After all, we are still in the midst of a pandemic—we need the president’s leadership, don’t we? And then, in an abrupt turnaround, President Trump reversed himself and stated that…

       





The Hutchins Center Explains: Budgeting for aging America


For decades, we have been hearing that the baby-boom generation was like a pig moving through a python–bigger than the generations before and after.

That’s true. But that’s also a very misleading metaphor for understanding the demographic forces that are driving up federal spending: They aren’t temporary. The generation born between 1946 and 1964 is the beginning of a demographic transition that will persist for decades after the baby boomers die, the consequence of lengthening lifespans and declining fertility. Putting the federal budget on a sustainable course requires long-lasting fixes, not short-lived tweaks.  

First, a few demographic facts.

As the chart below illustrates, there was a surge in births in the U.S. at the end of World War II, a subsequent decline, and then an uptick as baby boomers began having children.

Although the population has been rising, the number of births in the U.S. the past few years has been below the peak baby-boom levels, possibly because many couples chose not to have children during bad economic times. More significant, fertility rates–roughly the number of babies born per woman during her lifetime–have fallen well below pre-baby-boom levels.

Meanwhile, Americans are living longer. In 1950, a man who reached age 65 could expect to live to 78 and a woman to 81. Social Security’s actuaries project that a man who turned 65 in 2010 will live, on average, to 84, and a woman to 86.

Put all this together, and it’s clear that a growing fraction of the U.S. population will be 65 or older.   

The combination of longer life spans and lower fertility rates means the ratio of elderly (over 65) to working-age population (ages 20 to 64) is rising. As the chart below illustrates, the ratio will rise steadily as more baby boomers reach retirement age–and then level off.
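
To make that ratio concrete, here is a minimal sketch of the calculation; the population counts are made-up placeholders, not Census figures.

```python
# Minimal sketch of the old-age dependency ratio described above.
# Population counts are hypothetical placeholders, not Census figures.

def dependency_ratio(pop_65_plus: float, pop_20_to_64: float) -> float:
    """People aged 65 and older per 100 people of working age (20-64)."""
    return 100 * pop_65_plus / pop_20_to_64

# Illustrative (made-up) populations, in millions.
print(dependency_ratio(pop_65_plus=45.0, pop_20_to_64=185.0))   # ~24 per 100
print(dependency_ratio(pop_65_plus=75.0, pop_20_to_64=195.0))   # ~38 per 100
```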

Simply put, this doesn’t look like a pig in a python.  

So what do these demographic facts portend for the federal budget?  In simple dollars and cents, the federal government spends more on the old than the young. More older Americans means more federal spending on Social Security and Medicare, the health insurance program for the elderly. On top of that, health care spending per person is likely to continue to grow faster than the overall economy.

The net result: 85 percent of the increase in federal spending that the Congressional Budget Office projects for the next 10 years, based on current policies, will go toward Social Security, Medicare and other major federal health programs, and interest on the national debt.

Restraining future deficits and the size of the federal debt means restraining spending on these programs or raising taxes–and probably both. One-time savings or minor tweaks won’t suffice. Nor will limiting the belt-tightening to annually appropriated spending.

The fundamental fiscal problem is not coping with the retirement of the baby boomers and then going back to budgets that resemble those of the past. The fundamental fiscal problem is that retirement of the baby boomers marks a major demographic transition for the nation, one that will require long-lived changes to benefit programs and taxes.


Editor's Note: This post originally appeared on The Wall Street Journal's Washington Wire on December 18, 2015.
     
 
 





Should Congress raise the full retirement age to 70?


No. We should exempt workers earning the lowest wages.

Social Security faces a serious funding problem. The program takes in too little money to pay all that has been promised to future beneficiaries. Government forecasters predict Social Security’s reserve fund will be depleted between 2030 and 2034. There are two basic ways we can eliminate the funding gap: cut benefits or increase contributions. A common proposal is to increase the age at which workers can claim full retirement benefits. For people nearing retirement today, the full retirement age is 66. As a result of a 1983 law, that age will rise to 67 for workers born after 1959.

When policymakers urge us to raise the retirement age, they are proposing to increase the full retirement age beyond 67, possibly to 70, for workers now in their 30s or 40s. This saves money, but it also cuts monthly retirement benefits by the same percentage for every worker, unless workers delay claiming benefits. The policy might seem fair if workers in future generations could all expect to share in gains in life expectancy. However, new research shows that gains in life expectancy have been very unequal, with the biggest improvements among workers who earn top incomes. Life expectancy gains for workers with the lowest incomes have been small or negligible.

If the full retirement age were raised, future retirees with high lifetime earnings could expect to receive some compensation when their monthly benefits are cut. Because they can expect to live longer than today’s retirees, they will receive benefits for a longer span of years after 65. For low-wage workers, there is no compensation. Since they are not living longer, their lifetime benefits will fall by the same proportion as their monthly benefits. Thus, “raising the retirement age” is a policy that cuts the lifetime benefits of future low-wage workers by a bigger percentage than it does those of future high-wage workers.
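
A rough worked example of that arithmetic is sketched below; the benefit levels, the size of the monthly cut, and the life expectancies are hypothetical placeholders, not actuarial estimates.

```python
# Hypothetical illustration of how a uniform monthly-benefit cut translates
# into unequal lifetime-benefit cuts when life expectancy gains are unequal.
# All numbers are placeholders, not Social Security actuarial values.

def lifetime_benefits(monthly_benefit: float, claim_age: int, death_age: float) -> float:
    """Total benefits collected from the claiming age until death."""
    return monthly_benefit * 12 * (death_age - claim_age)

cut = 0.20  # assume raising the full retirement age trims monthly checks ~20%

# High earner: monthly benefit falls 20%, but lives three extra years.
before_high = lifetime_benefits(2400, 65, 84)
after_high  = lifetime_benefits(2400 * (1 - cut), 65, 87)

# Low earner: same 20% monthly cut, but no gain in life expectancy.
before_low = lifetime_benefits(1200, 65, 80)
after_low  = lifetime_benefits(1200 * (1 - cut), 65, 80)

print(f"high earner lifetime change: {after_high / before_high - 1:+.1%}")  # about -7%
print(f"low earner lifetime change:  {after_low / before_low - 1:+.1%}")    # -20%
```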

The fact that low-wage workers have seen small or negligible gains in life expectancy signals that their health when they are past 60 is no better than that of low-wage workers born 20 or 30 years ago. This suggests their capacity to work past 60 is no better than it was for past generations. A sensible policy for cutting future benefits should therefore preserve current benefit levels for workers who have contributed to Social Security for many years but have earned low wages.

Editor's note: This piece originally appeared in CQ Researcher.

Authors

Publication: CQ Researcher
Image Source: © Lucy Nicholson / Reuters
      
 
 





What did ASEAN meetings reveal about US engagement in Southeast Asia?

Just back from Southeast Asia, Senior Fellow Jonathan Stromseth reports on the outcomes from the annual ASEAN (Association of Southeast Asian Nations) summit, including the continued delay of the Regional Comprehensive Economic Partnership, China's economic influence in the region, and how the Trump administration's rhetoric and actions are being perceived in the region. http://directory.libsyn.com/episode/index/id/11923064 Related…

       





A conversation with Somali Finance Minister Abdirahman Duale Beileh on economic adjustment in fragile African states

Fragile and conflict-affected states in Africa currently account for about one-third of those living in extreme poverty worldwide. These states struggle with tradeoffs between development and stabilization, the need for economic stimulus and debt sustainability, and global financial stewardship and transparency. Addressing fragility requires innovative approaches, the strengthening of public and private sector capacity, and…

       





Africa in the news: Nagy visits Africa, locust outbreak threatens East Africa, and Burundi update

Security and youth top agenda during US Assistant Secretary of State Nagy’s visit to Africa On January 15, U.S. Assistant Secretary of State for African Affairs Tibor Nagy headed to Africa for a six-nation tour that included stops in the Central African Republic, Ethiopia, Kenya, South Sudan, Sudan, and Somalia. Security was on the top of the agenda…

       





The day after: Enforcing The Hague verdict in the South China Sea

The U.N. arbitral tribunal's decision was an unequivocal rebuke of China’s expansive maritime claims and increasingly assertive posturing in adjacent waters. But, as Richard Heydarian argues, despite the Philippines' landmark victory, what is at stake is no less than the future of the regional security architecture.

      
 
 





The next stage in health reform


Health reform (aka Obamacare) is entering a new stage. The recent announcement by United Health Care that it will stop selling insurance to individuals and families through most health insurance exchanges marks the transition. In the next stage, federal and state policy makers must decide how to use broad regulatory powers they have under the Affordable Care Act (ACA) to stabilize, expand, and diversify risk pools, improve local market competition, encourage insurers to compete on product quality rather than premium alone, and promote effective risk management. In addition, insurance companies must master rate setting, plan design, and network management and effectively manage the health risk of their enrollees in order to stay profitable, and consumers must learn how to choose and use the best plan for their circumstances.

Six months ago, United Health Care (UHC) announced that it was thinking about pulling out of the ACA exchanges. Now, it is pulling out of all but a “handful” of marketplaces. UHC is the largest private vendor of health insurance in the nation. Nonetheless, the impact on people who buy insurance through the ACA exchanges will be modest, according to careful analyses from the Kaiser Family Foundation and the Urban Institute. The effect is modest for three reasons. First, in some states UHC focuses on group insurance rather than insurance sold to individuals, and it is not always a major presence in the individual market. Second, premiums of UHC products in individual markets are relatively high. Third, in most states and counties ACA purchasers will still have a choice of two or more other options. In addition, UHC’s departure may coincide with or actually cause the entry of other insurers, as seems to be happening in Iowa.

The announcement by UHC is noteworthy, however. It signals the beginning of a new stage in the development of the ACA exchanges, with challenges and opportunities different from, and in many ways more important than, those they faced during the first three years of operation, when the challenge was just to get up and running. From the time when HealthCare.Gov and the various state exchanges opened their doors until now, administrators grappled non-stop with administrative challenges—how to enroll people, help them make an informed choice among insurance offerings, compute the right amount of assistance each individual or family should receive, modify plans when income or family circumstances change, and perform various ‘back office’ tasks such as transferring data to and from insurance companies. The chaotic first weeks after the exchanges opened on October 1, 2013 have been well documented, not least by critics of the ACA. Less well known are the countless behind-the-scenes crises, patches, and work-arounds that harried exchange administrators used for years afterwards to keep the exchanges open and functioning.

The ACA forced not just exchange administrators but also insurers to cope with a new system and with new enrollees. Many new exchange customers were uninsured prior to signing up for marketplace coverage. Insurers had little or no information on what their use of health care would be. That meant that insurers could not be sure where to set premiums or how aggressively to try to control costs, for example by limiting the networks of physicians and hospitals enrollees could use. Some did the job well or got lucky. Some didn’t. United seems to have fallen into the second category. United could have stayed in the 30 or so state markets it is leaving and tried to figure out ways to compete more effectively, but since its marketplace premiums were often not competitive and most of its business was with large groups, management decided to focus on that highly profitable segment of the insurance market. Some insurers are seeking sizeable premium increases for insurance year 2017, in part because of unexpectedly high usage of health care by new exchange enrollees.

United is not alone in having a rough time in the exchanges. So did most of the cooperative plans that were set up under the ACA. Of the 23 cooperative plans that were established, more than half have gone out of business and more may follow. These developments do not signal the end of the ACA or even indicate a crisis. They do mark the end of an initial period when exchanges were learning how best to cope with clerical challenges posed by a quite complicated law and when insurance companies were breaking into new markets. In the next phase of ACA implementation, federal and state policy makers will face different challenges: how to stabilize, expand, and diversify marketplace risk pools, promote local market competition, and encourage insurers to compete on product quality rather than premium alone. Insurance company executives will have to figure out how to master rate setting, plan design, and network management and manage risk for customers with different characteristics than those to which they have become accustomed.

Achieving these goals will require state and federal authorities to go beyond the core implementation decisions that have absorbed most of their attention to date and exercise powers the ACA gives them. For example, section 1332 of the ACA authorizes states to apply for waivers starting in 2017 under which they can seek to achieve the goals of the 2010 law in ways different from those specified in the original legislation. Along quite different lines, efforts are already underway in many state-based marketplaces, such as the District of Columbia, to expand and diversify the individual market risk pool by expanding marketing efforts to enroll new consumers, especially young adults. Minnesota’s Health Care Task Force recently recommended options to stabilize marketplace premiums, including reinsurance, maximum limits on the excess capital reserves or surpluses of health plans, and the merger of individual and small group markets, as Massachusetts and Vermont have done.

In normal markets, prices must cover costs, and while some companies prosper, some do not. In that respect, ACA markets are quite normal. Some regional and national insurers, along with a number of new entrants, have experienced losses in their marketplace business in 2016. One reason seems to be that insurers priced their plans aggressively in 2014 and 2015 to gain customers and then held steady in 2016. Now, many are proposing significant premium hikes for 2017.

Others, like United, are withdrawing from some states. ACA exchange administrators and state insurance officials must now take steps to encourage continued or new insurer participation, including by new entrants such as Medicaid managed care organizations (MCOs). For example, in New Mexico, where Blue Cross Blue Shield withdrew from the state exchange in 2016, state officials now need to work with that insurer to ensure a smooth transition as it re-enters the New Mexico marketplace and to encourage other insurers to join it. In addition, state insurance regulators can use their rate review authority to benefit enrollees by promoting fair and competitive pricing among marketplace insurers. During the rate review process, which sometimes evolves into a bargaining process, insurance regulators often have the ability to put downward pressure on rates, although they must be careful to avoid underpricing of marketplace plans, which could compromise the financial viability of insurers and cause them to withdraw from the market. Exchanges have an important role in the affordability of marketplace plans too. For example, ACA marketplace officials in the District of Columbia and Connecticut work closely with state regulators during the rate review process in an effort to keep rates affordable and adequate to assure insurers a fair rate of return.

Several studies now indicate that in selecting among health insurance plans people tend to give disproportionate weight to premium price, and insufficient attention to other cost provisions—deductibles and cost sharing—and to quality of service and care. A core objective of the ACA is to encourage insurance customers to evaluate plans comprehensively. This objective will be hard to achieve, as health insurance is perhaps the most complicated product most people buy. But it will be next to impossible unless customers have tools that help them take account of the cost implications of all plan features and report accurately and understandably on plan quality and service. HealthCare.gov and state-based marketplaces, to varying degrees, are already offering consumers access to a number of decision support tools, such as total cost calculators, integrated provider directories, and formulary look-ups, along with tools that indicate provider network size. These should be refined over time. In addition, efforts are now underway at the federal and state level to provide more data to consumers so that they can make quality-driven plan choices. In 2018, the marketplaces will be required to display federally developed quality ratings and enrollee satisfaction information. The District of Columbia is examining the possibility of adding additional measures. California has proposed that starting in 2018 plans may only contract with providers and hospitals that have met state-specified metrics of quality care and promote safety of enrollees at a reasonable price. Such efforts will proliferate, even if not all succeed.
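
The total cost calculators mentioned above reduce to a simple comparison of premiums plus expected out-of-pocket spending. A simplified sketch follows; the plan names and figures are hypothetical, and real tools model coinsurance tiers, networks, and benefit details far more carefully.

```python
# Simplified sketch of a marketplace "total cost calculator": estimated annual
# cost = premiums + expected out-of-pocket spending, not premium alone.
# Plans and figures are hypothetical; real tools model benefits in more detail.

def annual_cost(monthly_premium, deductible, coinsurance, oop_max, expected_claims):
    """Estimate a consumer's yearly cost for a given level of expected claims."""
    if expected_claims <= deductible:
        out_of_pocket = expected_claims
    else:
        out_of_pocket = deductible + coinsurance * (expected_claims - deductible)
    out_of_pocket = min(out_of_pocket, oop_max)
    return 12 * monthly_premium + out_of_pocket

plans = {
    "low premium / high deductible": dict(monthly_premium=250, deductible=6000,
                                          coinsurance=0.3, oop_max=8000),
    "high premium / low deductible": dict(monthly_premium=420, deductible=1500,
                                          coinsurance=0.2, oop_max=6500),
}

# For a consumer expecting $7,000 in claims, the cheaper-premium plan is the
# costlier choice overall, which is exactly the comparison such tools surface.
for name, plan in plans.items():
    print(name, annual_cost(expected_claims=7000, **plan))
```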

Beyond regulatory efforts noted above, insurance companies themselves have a critical role to play in contributing to the continued success of the ACA. As insurers come to understand the risk profiles of marketplace enrollees, they will be better able to set rates, design plans, and manage networks and thereby stay profitable. In addition, insurers are best positioned to maintain the stability of their individual market risk pools by developing and financing marketing plans to increase the volume and diversity of their exchange enrollments. It is important, in addition, that insurers, such as UHC, stop creaming off good risks from the ACA marketplaces by marketing limited coverage insurance products, such as dread disease policies and short term plans. If they do not do so voluntarily, state insurance regulators and the exchanges should join in stopping them from doing so.

Most of the attention paid to the ACA to date has focused on efforts to extend health coverage to the previously uninsured and to the administrative stumbles associated with that effort. While insurance coverage will broaden further, the period of rapid growth in coverage is at an end. And while administrative challenges remain, the basics are now in place. Now, the exchanges face the hard work of promoting vigorous and sustainable competition among insurers and of providing their customers with information so that insurers compete on what matters: cost, service, and quality of health care.

Editor's note: This piece originally appeared in Real Clear Markets. Kevin Lucia and Justin Giovannelli contributed to this article with generous support from The Commonwealth Fund.

Authors

Image Source: © Brian Snyder / Reuters
       





How global cities are innovating to leverage foreign investment

Over the past 10 years, Portland, Ore. has seen its foreign direct investment (FDI) pipeline grow from 5% of the total share of regional investment to 30%. A deliberate effort by Greater Portland Inc., the regional public-private economic development organization (EDO) of Portland, led this progress through the integration of FDI strategy into mainstream economic…

       





Talent-driven economic development: A new vision and agenda for regional and state economies

Talent-driven economic development underscores a fundamental tenet of the modern economy: workforce capabilities far surpass any other driver of economic development. This paper aims to help economic development leaders recognize that the future success of both their organizations and regions is fundamentally intertwined with talent development. From that recognition, its goal is to allow economic…

       





International volunteer service and the 2030 development agenda


Event Information

June 14, 2016
9:00 AM - 12:50 PM EDT

Falk Auditorium
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036

A 10th anniversary forum


The Building Bridges Coalition was launched at the Brookings Institution in June 2006 to promote the role of volunteer service in achieving development goals and to highlight research and policy issues across the field in the United States and abroad. Among other efforts, the coalition promotes innovation, scaling up, and best practices for international volunteers working in development.

On June 14, the Brookings Institution and the Building Bridges Coalition co-hosted a 10th anniversary forum on the role of volunteers in achieving the United Nations’ Sustainable Development Goals for 2030 and on the coalition’s impact research. General Stanley McChrystal was the keynote speaker and discussed initiatives to make a year of civilian service as much a part of growing up in America as going to high school.

Afterwards, three consecutive panels discussed how to provide a multi-stakeholder platform for the advancement of innovative U.S.-global alliances with nongovernmental organizations, faith-based entities, university consortia, and the private sector in conjunction with the launch of the global track of Service Year Alliance.

For more information on the forum and the Building Bridges Coalition, click here.

Video

Audio

Transcript

Event Materials

      
 
 





Cities and states are on the front lines of the economic battle against COVID-19

The full economic impact of the COVID-19 pandemic came into sharp relief this week, as unemployment claims and small business closures both skyrocketed. Addressing the fallout will require a massive federal stimulus, and both Congress and the White House have proposed aid packages exceeding $1 trillion. But as we noted on Monday, immediate assistance to…

       





Thoughts on the Hagel Filibuster and its Political Implications


I’m late to the conversation about whether or not Republican efforts to insist on sixty votes for cloture on Chuck Hagel’s nomination as Secretary of Defense constitute a filibuster. Bernstein’s earlier piece ("This is what a filibuster looks like") and Fallows’ recent contribution provide good, nuanced accounts of why Republican tactics amount to a filibuster, even if some GOP senators insist otherwise. In short, the duck test applies: If it looks like a duck, swims like a duck and quacks like a duck, then …. it’s a filibuster!

Still, I think there’s more to be said about the politics and implications of the Hagel nomination. A few brief thoughts:

First, let’s put to rest the debate about whether insisting on sixty votes to cut off debate on a nomination is a filibuster or, at a minimum, a threatened filibuster. It is. Even if both parties have moved over the past decade(s) to more regularly insist on sixty votes to secure passage of major (and often minor) legislative measures and confirmation of Courts of Appeals nominees, we shouldn’t be fooled by the institutionalization—and the apparent normalization—of the 60-vote Senate. Refusing to consent to a majority’s effort to take a vote means (by definition) that a minority of the Senate has flexed its parliamentary muscles to block Senate action. I think it’s fair to characterize such behavior as evidence of at least a threatened filibuster—even if senators insist that they are holding up a nomination only until their informational demands are met.

Second, there’s been a bit of confusion in the reporting about whether filibusters of Cabinet appointees are unprecedented. There appear to have been no successful filibusters of Cabinet appointees, even if there have been at least two unsuccessful filibusters against such nominees. (On two occasions, Cabinet appointees faced cloture votes when minority party senators placed holds on their nominations—William Verity in 1987 and Dirk Kempthorne in 2006. An EPA appointee has also faced cloture, but the EPA is not technically a Cabinet department, even if its administrator now holds Cabinet rank.) Of course, there have been other Cabinet nominees who have withdrawn; presumably they withdrew, though, because they lacked even majority support for confirmation. Hagel’s situation will be unprecedented only if the filibuster succeeds in keeping him from securing a confirmation vote.

Third, using cloture votes as an indicator of a filibuster underestimates the Senate’s seeping super-majoritarianism. (Seeping super-majoritarianism?! Egads.) At least two other recent Cabinet nominations have been subjected to 60-vote requirements: Kathleen Sebelius in 2009 (HHS) and John Bryson (Commerce) in 2011. Both nominees faced threatened filibusters by Republican senators, preventing majority leader Reid from securing the chamber’s consent to schedule a confirmation vote—until Reid agreed to require sixty votes for confirmation. The Bryson unanimous consent agreement (UCA) appears on the right, an agreement that circumvented the need for cloture. Embedding a 60-vote requirement in a UCA counts as evidence of an attempted filibuster, albeit an unsuccessful one. After all, other Obama nominees (such as Tim Geithner) were confirmed after Reid negotiated UCAs that required only 51 votes for confirmation, an agreement secured because no Republicans were threatening to filibuster.

Finally, what are the implications for the Hagel nomination? If Republicans were insisting on sixty votes on Senator Cornyn’s grounds that “There is a 60-vote threshold for every nomination,” then I bet Reid would have been able to negotiate a UCA similar to Sebelius’s and Bryson’s. But Hagel’s opponents see the time delay imposed by cloture as instrumental to their efforts to sow colleagues’ doubts about whether Hagel can be confirmed (or at a minimum to turn this afternoon’s cloture vote into a party stand to make their point about Benghazi). Of course, it’s possible that the time delay will work to Democrats’ benefit if they can make headlines that GOP obstruction puts national security at risk. (Maybe Leon Panetta should have jetted to his walnut farm to make the point before the cloture vote.) Whatever the outcome, the Hagel case reminds us that little of the Senate’s business is protected from the intense ideological and partisan polarization that permeates the chamber and is amplified by the chamber’s lax rules of debate and senators’ lack of restraint. Filibustering of controversial Cabinet nominees seems to be on the road to normalization—even if Hagel is ultimately confirmed.

Authors

Publication: The Monkey Cage
Image Source: © Kevin Lamarque / Reuters
      
 
 





Chicago’s Regional Housing Initiative promotes regional mobility


Stephen was still a teenager on the north side of St. Louis when his dad, a police officer, was killed during a robbery in their neighborhood. Despite the trauma, Stephen later joined the police force to continue his dad’s legacy and commitment to safe and inclusive neighborhoods. But even before the fatal shooting of Michael Brown in Ferguson in 2014, Stephen (not his real name) yearned to right local wrongs through broader approaches. “The darkest forces weren’t necessarily the ones getting arrested,” he observed. “So I retired from the police force after 22 years, essentially to chase after a different type of perpetrator.” Wanting to focus on policies at multiple levels of government that “were causing the disparities that fueled increasing crime and violence in St. Louis,” Stephen pivoted to civil rights enforcement, tracking policy violations and innovations at a government agency in the St. Louis region.

I met Stephen in February while in St. Louis for a conference his agency organized on HUD’s recently strengthened Affirmatively Furthering Fair Housing (AFFH ) rule, which increases local accountability in promoting residential integration. He wasn’t a speaker at the event, but hearing his story reinforced the importance of combating the deeply entrenched and often invisible causes of segregation.

Recent events and new academic research, including landmark findings by Raj Chetty and colleagues testifying to the benefits of low-poverty neighborhoods for low-income kids, the updated AFFH rule, and the Supreme Court’s disparate impact decision upholding other tools to fight segregation, have brought renewed attention to these challenges. Meanwhile, underlying these developments, poverty has failed to decline since the recession and, as recent Brookings research shows, has become more concentrated in neighborhoods of extreme poverty.

How can regional leaders and practitioners respond to these challenges? I was in St. Louis to discuss one part of the solution—advancing more mixed-income neighborhoods. In the Chicago region, our housing and community development-focused firm, BRicK Partners, is collaborating with the Chicago Metropolitan Agency for Planning (CMAP), the Illinois Housing Development Authority (IHDA), and 10 metropolitan Chicago public housing authorities, with support and leadership from HUD, to develop and operate the Regional Housing Initiative (RHI).

RHI is a small, systemic, and potentially scalable “work around” of a very specific set of programs and policies that contribute inadvertently to regional inequities. A flexible and regional pool of resources working across the many traditional public housing authority (PHA) and municipal jurisdictions in the Chicago region, RHI increases quality rental housing in neighborhoods with good jobs, schools, and transit access and provides more housing options to households on Housing Choice Voucher (HCV) waiting lists. Recognizing that the federal formulas allocating HCVs to each individual PHA are not responsive to population, employment, or poverty trends, RHI partners convert and pool a small portion of their HCVs to provide place-based operating subsidies in support of development activity that advances local and regional priorities. RHI supports both opportunity areas with strong markets and quality amenities as well as revitalization areas where public and private sector partners are planning and investing toward that end. In both cases, the bulk of RHI investments are in the suburbs, where the PHAs are smaller and the rental stock more limited. 

RHI has committed over 550 RHI subsidies to nearly 40 mixed-income and supportive housing developments across Chicagoland, supporting more than 2,200 total apartments, over half of which are in opportunity areas. The pooling and transferring of subsidies has allowed RHI to support proposals that local jurisdictions wouldn’t be able to undertake otherwise.

Although a number of innovative programs around the country provide assistance to households moving to opportunity areas, RHI is unique in its focus on increasing the supply of housing in opportunity areas regionwide. Its approach is consistent with lessons learned from Brookings’ work on Confronting Suburban Poverty in America: With CMAP as a strong quarterback, RHI has addressed the shortage of rental housing in the suburbs by working across jurisdictions, developing shared priorities, metrics and selection criteria, and by working with IHDA and other stakeholders to leverage greater private sector investment.

This recipe for success is now being deployed in communities beyond Chicago. Baltimore is preparing to advertise for its first round of developer applicants under the leadership of the Baltimore Metropolitan Council, with regionwide PHAs, the State Housing Finance Agency, and a regional housing counselor lined up as supportive partners. In St. Louis, the regional planning and housing finance organizations both attended the February conference where I met Stephen, signaling the potential for greater collaboration for both these entities and the PHAs.

Like many housing advocates and professionals, my colleagues and I at BRicK Partners derive a lot of satisfaction from supporting communities like Baltimore and St. Louis and individuals like Stephen and his peers with replicable best practices. Given today’s political realities, we don’t expect major changes in the federal formulas and statutes behind some of the regional inequities, but “work arounds” such as RHI can still scale up. Nationwide, just a small percentage of HCVs have been converted for such flexible supply-side solutions, but there is reason to be hopeful that this will change. The Regional Mobility Demonstration proposed in the 2017 budget, as well as the federal public housing voucher legislation passed by the House of Representatives earlier this year, are signs that there is real momentum to advance regional strategies that increase access to opportunity for low-income residents and families.

Authors

  • Robin Snyderman
Image Source: © Jason Reed / Reuters
     
 
 





Solutions to Chicago’s youth violence crisis


Arne Duncan, former U.S. secretary of education during the Obama administration and now a nonresident senior fellow with the Brown Center on Education Policy, discusses the crisis of youth violence in Chicago and solutions that strengthen schools and encourage more opportunities for those who are marginalized to make a living in the legal economy.

“The best thing we can do is create hope, opportunity and jobs particularly on the South and West side for young and black men who have been disenfranchised, who have been on the streets. If we can give them some chances to earn a living in a legal economy, not selling drugs and not on street corners, I think we have a chance to do something pretty significant here,” Duncan says. “My fundamental belief is that the police cannot solve this on their own; we have to create opportunities for young people in communities who have been marginalized for far too long.”

Also in this episode, Bruce Katz, the Centennial Scholar, who discusses how European cities are addressing the refugee crisis in a new segment from our Refugee Series.

Thanks to audio engineer and producer Zack Kulzer, with editing help from Mark Hoelscher, plus thanks to Carisa Nietsche, Bill Finan, Jessica Pavone, Eric Abalahin, Rebecca Viser, and our intern Sara Abdel-Rahim.

Subscribe to the Brookings Cafeteria on iTunes, listen in all the usual places, and send feedback email to BCP@Brookings.edu 

Authors

Image Source: © Khaled Abdullah / Reuters
      
 
 





The reimagination of downtown Los Angeles


Los Angeles has long been a city associated with the common ills of urban excess: sprawl, homelessness, and congestion. More charitable descriptions paint it as West Coast paradise, boasting sunshine and celebrities in equal measure.

A three-day visit to downtown Los Angeles exposed the nuances behind these stereotypes. Hosted by the Los Angeles Downtown Center Business Improvement District, which is focused on strengthening downtown as an innovation district, our visit began as a real estate tour but quickly revealed regeneration and innovation activity that confounded our expectations. 

Downtown LA (DTLA)’s innovation district focuses not just on tech firms but also on historic LA industry strengths like fashion, design, and real estate. LA may have sat in the shadow of the Silicon Valley tech boom, but it appears to be revitalizing in time for the convergence economy, in which tech is no longer a separate sector but ingrained in all forms of economic and creative activity.

And at a time where firms are revaluing proximity, vibrancy, and authenticity, DTLA could not be in a better place. While a number of U.S. cities subjected their downtowns to a range of urban renewal initiatives, the urban fabric of DTLA is largely intact. Vibrant areas like South Broadway feature boutique hotels, a dozen theatres, and clothing stores and bars that exist in historic infrastructure like reclaimed theatres. There is an urban feel that is authentically LA.

The initial renaissance of DTLA began in the late 1990s, after the residential units within its 65 blocks had dwindled to just 10,000.

Along with transportation improvements, permissive planning policies such as adaptive reuse—which allowed commercial buildings to be converted into residential use—were instrumental in increasing DTLA’s residential population. Since 1999, the residential population and housing units have tripled. With new bars and restaurants springing up on every corner, it is no surprise that three-quarters of DTLA’s current residents are aged between 23 and 44.

Building on this residential surge, an increasing number of businesses are now setting up or relocating downtown.

DTLA office space has not always been an easy sell. Employers balk at the prospect of subjecting their workforce to the punishing commute. And Bunker Hill and the adjacent Financial District, the epicenter of the central business district, offer little more than unpopulated plazas and cubicled office space.

DTLA has worked to serve its newfound residential population and attract more workers and companies by retrofitting buildings to modern aesthetic standards. The exposed brickwork and ceiling equipment of many DTLA offices like those of Nationbuilder, an online platform used for political and civic campaigns, is not just a statement of style but a conscious decision to make downtown office buildings feel hospitable to creative firms. The BLOC, a 1.9 million square foot retail development, is essentially a mall that has been turned inside out, with the roof removed to reveal an open air plaza, unrecognizable from the fortress-style building that once sat in the same spot.

While downtown’s office blocks are a fantastic asset in attracting innovation activity, the area also boasts a vast amount of warehouse space. These larger footprints, most often used for textile or food production, are attracting a range of activities that require space or, in the case of Tesla’s Hyperloop, secrecy. Such industrial firms are interspersed with new art galleries and a historic knitting mill, proof of the area’s artistic heritage.

The individuals leading the drive for a DTLA innovation district, such as Nick Griffin, director of Economic Development for the Downtown Center Business Improvement District, are realistic about challenges, such as the lack of quality public space, and proactive in leveraging existing assets, such as the large supply of creative office space.

These efforts and LA’s distinctive industry strengths are combatting one of the biggest challenges to attracting businesses downtown: the strength of competing areas like Silicon Beach, which includes Santa Monica and Playa del Rey and offers an established tech ecosystem alongside an attractive location.

Another challenge? Like many U.S. cities, LA bears the scars of suburban sprawl and a legacy of under investment in public transportation. Congestion is a constant complaint.

But here too LA is making progress.

In November, Angelenos will vote on an extension of Measure R—a 2008 ballot initiative raising the sales tax to fund core transportation projects—to provide sustainable funding for transportation infrastructure and improve access to the city center through the metro system.

Other ambitious projects, such as the Regional Connector, a light rail subway through the middle of downtown, will have a profound effect on the area's connectivity. This project is not just about getting people to and from downtown—it will also have a transformative effect on public space. The city is working with Project for Public Spaces to redesign one of the Connector’s hubs, Pershing Square, with the aim of providing a public space where employees and residents can convene and collaborate.

Connectivity will play a vital role in the continuing success of DTLA’s resurgence. But the DTLA innovation district’s main opportunity lies in better serving and connecting the people who make it work. With hometown authenticity and civic commitment, DTLA is on its way to creating a city center that is greater than the sum of its parts.

DOWNTOWN LA IN NUMBERS

Size: Approx 8.6 sq. miles

Major districts: Civic Center, Bunker Hill, Financial District, South Park, Fashion District, Jewelry District, Historic Core, Little Tokyo, Exposition Park, Toy District, Central City East, Arts District, City West, Chinatown, and Central Industrial District

Residential population: 60,600
66% of residents are between the ages of 23 and 44

Average median household income: $98,000

Education status: 79% of residents hold a college degree

Average workday population: 500,000


Photo Credit: Hunter Kerhart

Authors

  • Kat Hanna
  • Andrew Altman
Image Source: Hunter Kerhart
      
 
 





Democrats and Republicans disagree: Carbon taxes


Editor’s note: This week the Democrats gather in Philadelphia to nominate a candidate for president and adopt a party platform. Given that there are no minority reports to the Democratic platform, it is likely that it will be adopted as-is this week. And so we can begin the comparison of the two major party platforms. For those who say there are no differences between the Republican and Democratic parties, just read the platforms side-by-side. In many instances, the differences are, as Donald Trump would say, yuuuge. But in one surprising instance, the two parties actually agree. This piece walks readers through one of the biggest contrasts, while an earlier piece by Elaine Kamarck detailed a striking similarity.

When it comes to Republicans and the environment, black is the new green. In addition to denouncing “radical environmentalists” and calling for dismantling the EPA, the platform adopted in Cleveland yesterday calls coal an “abundant, clean, affordable, reliable domestic energy resource” and unequivocally opposes “any” carbon tax.

Meanwhile, Democrats are moving in the opposite direction. By the time the party’s draft 2016 platform emerged from the final regional committee meeting in Orlando, it contained a robust section on environmental issues in general and climate change in particular. One of the many amendments adopted in Orlando contains the following sentence: “Democrats believe that carbon dioxide, methane, and other greenhouse gases should be priced to reflect their negative externalities, and to accelerate the transition to a clean energy economy and help meet our climate goals.” In plain English, there should be what amounts to a tax (whatever it may be called) on the atmospheric emissions principally responsible for climate change, including but not limited to CO2.

As Brookings’ Adele Morris pointed out in a recent paper, this proposal raises a host of design issues, including determining initial price levels, payers, recipients, and uses of revenues raised. It would have to be squared with existing federal tax, climate, and energy policies as well as with climate initiatives at the state level.

But these devilish details should not obstruct the broader view: To the best of my knowledge, this is the first time that the platform of a major American political party has advocated taxing greenhouse gas emissions. Many economists, including some with a conservative orientation, will applaud this proposal. Many supporters and producers of fossil fuels will be dismayed.

It remains to be seen how the American people will respond. In a survey conducted in 2015 by Resources for the Future in partnership with Stanford University and the New York Times, 67 percent of the respondents endorsed requiring companies “to pay a tax to the government for every ton of greenhouse gases [they] put out,” with the proviso that all the revenue would be devoted to reducing the amount of income taxes that individuals pay. Previous surveys found similar sentiments: public support increases sharply when the greenhouse gas tax is explicitly revenue-neutral and declines sharply if it threatens an overall increase in individual taxes.
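
A back-of-the-envelope sketch of the revenue-neutral design the surveys describe is below; the carbon price, covered emissions, and household count are illustrative assumptions, not figures from the platform or the surveys.

```python
# Back-of-the-envelope sketch of the revenue-neutral design discussed above:
# carbon-tax revenue is returned by lowering income taxes or as a dividend.
# The price, emissions, and household figures are hypothetical assumptions.

price_per_ton = 40.0        # assumed tax, in dollars per ton of CO2-equivalent
covered_emissions = 5.0e9   # assumed covered emissions, tons per year
households = 128e6          # rough count of U.S. households, for illustration

revenue = price_per_ton * covered_emissions     # gross receipts
rebate_per_household = revenue / households     # returned to keep the tax revenue-neutral

print(f"annual revenue:       ${revenue / 1e9:,.0f} billion")
print(f"per-household rebate: ${rebate_per_household:,.0f}")
```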

Once this plank of the Democratic platform becomes widely known, Republicans are likely to attack it as yet another example of Democrats’ propensity to raise taxes. The platform’s silence on the question of revenue-neutrality may add some credibility to this charge. Much will depend on the ability of the Democratic Party and its presidential nominee to clarify its proposal and to link it to goals the public endorses.

      
 
 





Making the Rescue Package Work: Asset and Equity Purchases

Executive Summary

If the main purpose of the Emergency Economic Stabilization Act of 2008 is to give banks confidence in each other, then enabling Treasury directly to bolster the capital positions of banks that need more capital may be an even more effective way of restoring confidence to the inter-bank market than the purchase of troubled assets. Whatever Congress may have intended about the pricing of the distressed assets, it also authorized a much more direct way to recapitalize the financial system and weak banks in particular: direct purchases by Treasury of securities that individual institutions may wish to issue to bolster their capital. At this writing, Treasury reportedly is considering ways to do this. In this essay, we outline a specific bank recapitalization plan for Treasury to consider.

In particular, Treasury could announce its willingness to entertain applications for capital injections, using a set pricing formula. For publicly traded banks, Treasury could buy at the market price as of a given date, such as the price one or more days before its plan was announced. For privately owned banks, Treasury could use a price based on the average price-to-book value for publicly traded banks as of that date. To prevent government intrusion into the affairs of the banks, the stock should be non-voting. Treasury would make clear that it would take only minority positions. There should be no takeovers of more companies—AIG, Fannie and Freddie are quite enough. Treasury also should announce that it will dispose of (or sell back to the bank) any stock acquired through these actions as soon as the financial system has stabilized and the bank is in sound financial condition (perhaps a time limit, such as three years, should be a working presumption).
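
A minimal sketch of how that pricing rule might work in practice appears below; the injection amounts, share prices, and price-to-book ratio are hypothetical illustrations, not figures from the essay.

```python
# Sketch of the pricing formula proposed above for Treasury capital injections.
# All inputs are hypothetical; the essay specifies the rule, not these numbers.

def shares_for_injection(injection: float, price_per_share: float) -> float:
    """Non-voting shares Treasury would receive for a given dollar injection."""
    return injection / price_per_share

def private_bank_price(book_value_per_share: float, avg_public_price_to_book: float) -> float:
    """Impute a share price for a privately held bank from the average
    price-to-book ratio of publicly traded banks as of the reference date."""
    return book_value_per_share * avg_public_price_to_book

# Publicly traded bank: use the market price as of the announcement reference date.
print(shares_for_injection(injection=2.0e9, price_per_share=18.50))

# Privately held bank: price the shares off peers' average price-to-book ratio.
imputed = private_bank_price(book_value_per_share=25.0, avg_public_price_to_book=0.9)
print(shares_for_injection(injection=5.0e8, price_per_share=imputed))
```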

We believe Treasury can accommodate a systematic recapitalization plan within the funding it has been given – initially $350 billion and another $350 billion later upon request to Congress (unless it disapproves) – by using the required disclosures about its asset purchases as a way of jump-starting private sector pricing and trading of these securities. This should conserve the resources Treasury might otherwise use for asset purchases, and thus free up funds to recapitalize weak banks directly, but in an orderly fashion.

Treasury will have to be careful when it buys distressed assets to guard against the possibility that banks will just dump their worst stuff on taxpayers. The Department will also have to be careful when buying equity in banks. There cannot be an open invitation for bank owners to move assets out of the bank and then, in effect, say: “We don’t want this bank, you buy it.” To avoid this problem, Treasury should work closely with the FDIC and other regulators to determine whether or not a particular bank is eligible for an equity injection. The Department also may need to limit the scope of the recapitalization program to larger national banks, if it becomes infeasible to allow smaller banks to participate.

Making the Rescue Package Work: Asset and Equity Purchases [1]

The unprecedented financial rescue plan – technically the Emergency Economic Stabilization Act of 2008 ("EESA," the "Act," or the "plan") – has now been enacted by the Congress. One of the goals of the plan is to end the immediate panic in inter-bank lending markets, and on this basis several omens are not encouraging.

The Dow Jones stock index has been dropping daily, by large amounts, since EESA was enacted. The TED spread measures the difference between the interest rate on short-term Treasury bills and the interest rate banks pay to borrow from each other (the LIBOR) and is a widely accepted measure of perceived risk in the financial sector. For several years this spread had hovered around 50 basis points, or half a percentage point, reflecting the fact that lending to other financial institutions was considered almost as safe as buying Treasury bills. However, the spread shot up to 2.4 percentage points in July 2007 as the financial crisis hit, and it fluctuated widely in subsequent months. Following passage of the plan it remains even more elevated than it was last July—it was 3.8 percentage points as of October 7 and broke 4 percent on October 8. Financial institutions simply do not trust each other’s creditworthiness. Some of the market worries, of course, reflect the fragile state of the U.S. and global economies, but clearly the passage of the rescue plan itself has not calmed markets.
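
For readers who want the arithmetic, the TED spread is simply LIBOR minus the short-term Treasury bill rate, usually quoted in basis points; the sketch below uses illustrative rates rather than the exact quotes for the dates mentioned above.

```python
# The TED spread described above: LIBOR minus the 3-month Treasury bill rate.
# The rates here are illustrative readings, not the exact quotes for those dates.

def ted_spread_bps(libor_pct: float, tbill_pct: float) -> float:
    """TED spread in basis points (1 percentage point = 100 basis points)."""
    return (libor_pct - tbill_pct) * 100

print(ted_spread_bps(libor_pct=5.30, tbill_pct=4.80))   # calm market: ~50 bps
print(ted_spread_bps(libor_pct=4.75, tbill_pct=0.85))   # stressed market: ~390 bps
```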

A second and related goal for the plan, according to media accounts, is to facilitate the recapitalization of the financial system, but the language of the bill is surprisingly coy about this. While the Act aims to “restore liquidity and stability to the financial system” it also directs the Treasury Secretary to prevent “unjust enrichment of financial institutions participating” in the asset purchase program. It is not yet clear whether Treasury will choose to recapitalize banks through its asset purchases – by buying them at prices above the values to which banks and other sellers have already written them down – or whether Treasury will simply use its purchases to stabilize prices for these securities and thus provide liquidity to the market, even if it may result in additional write-downs of their values (and thus additional reductions in capital).

Whatever Congress may have intended about the pricing of the distressed assets, it also authorized a much more direct way to recapitalize the financial system and weak banks in particular: direct purchases by Treasury of securities that individual institutions may wish to issue to bolster their capital. Of course, in normal times, such authority would be unnecessary because financial institutions would seek to tap private sources of capital first. But these are not normal times, to say the least.

If the main purpose of the plan is to give banks confidence in each other, then enabling Treasury to bolster directly the capital positions of banks that need it may be an even more effective way of restoring confidence in the inter-bank market. Accordingly, we outline here a possible supplementary bank recapitalization plan that we believe Treasury should pursue at the same time it purchases distressed assets. As this paper is being completed on October 9, 2008, The New York Times reports that the Treasury is now considering such a move. We are encouraged by this, and in this essay we provide both a rationale for doing so and some concrete suggestions for how such a direct recapitalization program might work. We do not support further nationalization of the banking system beyond what has already been done, but we believe that the crisis has become so severe that the asset purchase plan on its own will not be enough to turn the current situation around. Additional capital is urgently needed and could be supplied by Treasury purchases of minority, non-voting equity stakes or warrants.

We believe Treasury can accommodate a systematic recapitalization plan within the funding it has been given – initially $350 billion, and another $350 billion later upon request to Congress (unless it disapproves) – by using the required disclosures about its asset purchases as a way of jump-starting private sector pricing and trading of these securities. This should conserve resources Treasury might otherwise use for asset purchases, and thus free up funds to recapitalize weak banks directly, but in an orderly fashion, as we describe below.

Why Do Banks Need More Capital?

Financial institutions make money by borrowing money on favorable terms, that is, at low interest rates, and then lending it out at higher rates or by buying assets that yield higher returns. They may make money in other ways too, but the state of their balance sheets of assets and liabilities is crucial. In order to create a viable financial institution that can accommodate requests by depositors to take money out, someone has to put up capital and typically this comes from the equity in the company. The owners of the company have an incentive to keep this equity capital low and to build a large volume of borrowing and lending off a small base of capital—to increase leverage. This is because the profits earned are divided among the equity owners and the less capital there is, the higher the return on equity.
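
To make the leverage arithmetic concrete, consider a purely illustrative example (the figures are hypothetical): a bank earning $1 of profit on $100 of assets has a return on equity of

\[
\text{ROE} = \frac{\text{profit}}{\text{equity capital}} = \frac{\$1}{\$10} = 10\%
\]

if those assets are funded with $10 of equity, but 25 percent ($1/$4) if they are funded with only $4 of equity. The flip side of the higher return is a thinner cushion: that same $4 of equity is wiped out by a 4 percent decline in asset values.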

Governments in almost all countries have long had regulations in place setting capital requirements for banks, in particular to stop them from taking too much risk in the pursuit of high returns and also to protect any fund that insures their deposits against loss (the FDIC in this country). But some of our larger banks in recent years found a way around these rules by establishing “off-balance sheet” entities – Structured Investment Vehicles (“SIVs”) – to purchase mortgage-related and other asset-backed securities that the banks were issuing. In addition, large investment banks significantly increased their leverage in the years running up to the recent crisis, and were able to do so without mandated capital requirements. As a result, when the mortgage crisis hit, our financial system was weaker than was widely believed and, in the case of large banks in particular, than was officially reported.[2]

The mortgage crisis, which first surfaced in 2006 and has escalated rapidly since then, has hit bank balance sheets severely. As banks were forced to recognize losses on the mortgages they held in their portfolio, and especially to write down the values of their mortgage securities to their “market values” (even though the prices in those “markets” reflected relatively few “fire-sale” trades), they suffered reductions of their capital. Furthermore, the large banks that had created SIVs to escape such events found they could not hide from them when the SIVs could no longer roll over the commercial paper they had issued to finance their holdings of mortgage securities. To avoid dumping these securities on the market to satisfy their creditors, the banks took the SIVs back on their balance sheets, only to suffer further losses to their capital.

As we have seen, some of our largest banks – Washington Mutual and Wachovia, to name two – have not been able to survive all of this, and have been forced or are being forced into the hands of stronger survivors. Other banks have been doing their best to shore up their capital bases by issuing new equity to replace the losses they have absorbed on delinquent loans and on the declining prices of their asset-backed securities. According to media reports, financial institutions (largely banks) worldwide have suffered over $700 billion in such losses to date, of which they have replaced approximately $500 billion by issuing new equity.

But more losses are sure to come; indeed, Secretary Paulson has said to expect further bank failures. Earlier this year, the International Monetary Fund projected that worldwide losses due to the credit crisis could hit $1 trillion; it has recently raised that forecast to $1.4 trillion. If anything close to this latest forecast is realized, then many banks – here and abroad – will need to raise even more equity, but in a capital market that is now far more risk-averse than it was only a few months ago.

It is in this environment that banks have grown much less comfortable dealing with each other, even though they must do so to keep the financial system running. Every day, some banks have more cash on hand, or reserves, than they need to meet reserve requirements and ordinary demands for liquidity, while others are short of such funds. In the United States, banks thus trade with each other in the Federal Funds market, while global banks borrow and lend to each other through the London Interbank market at the LIBOR rate of interest. The main operating objective of the Federal Reserve's monetary policy is to stabilize the “Fed funds” rate around a target, now just lowered to 1.5%, down from the 2% where it had been for some months (and down from 5.25% before the subprime mortgage crisis). To do so, the Fed has added a huge amount of liquidity to the financial system, even going so far this week as to buy up commercial paper issued by corporations, an unprecedented step. But the Fed does not, and probably cannot, control the longer-term inter-bank market, in which banks typically lend to each other over a 3-month period.

The steep jump in the 3-month inter-bank lending rate – well over 4 percent – reflects two fundamental facts that EESA is designed to address. One is that banks don't trust each other's valuations of the mortgage and possibly other asset-backed securities they are all holding, precisely because the “markets” in those securities are so thin and thus not generating reliable prices. The second is that banks either are short of capital themselves or fear that their counterparties are. No wonder banks are so unwilling to lend to each other even for a period as short as three months – which, in this environment, can seem like an eternity.

The capital shortage in the banking system, in particular, has severe implications for the rest of the economy. An institution that is short of capital is forced to cut back on its lending and this shows up in denials of lines of credit to companies and reductions in credit limits for consumers. Households cut back on spending; it is difficult to get a mortgage or a car loan; and companies reduce investment and curtail operations. And as we learn in any college course on banking, the impact of a loss of capital on bank lending can be multiplied. Each dollar of bank capital supports roughly ten dollars of overall lending in the economy. Each dollar of lost capital thus can result in ten dollars of lending contraction. The impact of an economy-wide bank contraction can be devastating for Main Street. The Great Depression was greatly exacerbated by the collapse of banks. The long stagnation in Japan was in large part the result of a failure to recapitalize the banks.
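
The rough arithmetic behind that multiplier, treating the ten-to-one ratio cited above as illustrative rather than precise, is simply

\[
\Delta \text{Lending} \approx \frac{1}{\text{capital ratio}} \times \Delta \text{Capital} = 10 \times \Delta \text{Capital},
\]

so a hypothetical $50 billion loss of bank capital that is not replaced could, on this stylized reckoning, force a contraction of roughly $500 billion in lending capacity.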

How bad is the current problem? We do not know how many banks, insurance companies, or other financial institutions are in a weakened state or, perhaps even more important, may become weakened as the overall economy deteriorates. The official data published so far don't really help on this score. The FDIC compiles information on the number of, and collective assets held by, “problem banks,” or those in danger of failing. As of the second quarter of 2008, there were 117 such banks with assets of $78 billion, up from 90 banks with assets of $28 billion in the first quarter. These figures did not include Washington Mutual, which would have failed had it not been bought by J.P. Morgan, or Wachovia, which at this writing looks like it will be acquired by Wells Fargo (but was also in danger of failing without being acquired by someone). Together these two banks hold more than $500 billion in customer deposits. Furthermore, according to recent media reports, even some large insurance companies (beyond AIG) may be having capital problems, having suffered large losses on the securities they hold in reserve to meet future claims.

Can the Asset Purchase Plan Succeed in Recapitalizing the Banks?

In principle, there are two ways in which the original Treasury asset purchase plan would recapitalize the banks. The first method is premised on the view that private markets are unwilling to supply capital to the banks because investors do not know how much their assets are worth. The Treasury, it is argued, would use its asset purchase plan as a way of revealing the prices of the assets and once that information is known, the banks will be able to raise new capital again from private markets. But better pricing will only attract capital if there are investors out there who are willing to supply it. Given the dramatic downturn in equities markets, finding such willing investors will be difficult, to say the least. Those investors that provided capital to banks early on in the crisis have been hit hard by the subsequent decline in equity prices and are reluctant to get burned again. When Bank of America said it would raise $10 billion from the markets, for example, its stock price fell sharply, suggesting there is a lot of market resistance to be overcome before private investors are willing to recapitalize the banking system.

Second, in principle, Treasury could recapitalize the banks by buying distressed assets at prices above those at which the securities are currently carried on the books of the institutions that sell them (original book or purchase value minus any write-offs).[3] In this case, the bank would be able to report a capital gain from its sale to the Treasury, a gain that would reverse, at least in part, the capital losses it had taken in the past and thereby add to its capital.
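
A simple hypothetical illustrates the accounting: suppose a bank bought a mortgage security at par for $100 and has since written it down to $60 on its books. If Treasury buys it at $75, the bank books a gain; if Treasury instead pays $50, the bank must recognize a further loss:

\[
\$75 - \$60 = \$15 \text{ gain to capital}, \qquad \$50 - \$60 = -\$10 \text{ further reduction in capital}.
\]

The numbers are invented, but they show why the purchase price relative to the written-down book value, not the original purchase price, determines whether the program adds to or subtracts from capital.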

Treasury has said it will use reverse auctions[4] when it buys assets, and it is possible that the Department will be able to construct some auctions that enable some holders of troubled assets to sell them to the Treasury at prices that earn a capital gain. But we are somewhat skeptical about how many securities will fall into this category. For one thing, asset-backed securities are not homogeneous in the way traditional equities or bonds are. In addition, it would be surprising in the current environment if reverse auctions revealed prices above the written-down values of many of these securities. After all, an auction does not necessarily produce valuations that reflect the “hold to maturity” price rather than the “liquidation” price of the securities, as Fed Chairman Ben Bernanke suggested the purchase plan would accomplish.

Accordingly, we strongly suspect that Treasury will have to purchase many securities in one-on-one deals rather than through auctions. In doing so, however, it may be both legally and politically difficult for the Treasury to negotiate prices that are above the valuations banks or other sellers have already given the securities. Section 101(e) of EESA specifically requires the Treasury Secretary “to take such steps as may be necessary to prevent unjust enrichment” of participating financial institutions, and Congress could construe such language to preclude such sales.[5] Furthermore, even if there were not a specific prohibition in EESA, Treasury may wish to avoid the public criticism it would face if it purchased assets at prices that allowed participating institutions to book gains. And, in the case of sales at prices below the explicit or implicit values at which the securities are carried on an institution's books, the sales will trigger further accounting losses and thus additional deductions from reported capital.

In short, we are not at all confident that the Treasury’s planned purchases of troubled securities, by themselves, will do much to recapitalize the banking system. This does not mean that the planned asset purchases will not deliver some needed help. Although at this writing the inter-bank lending market remains frozen even though EESA has been enacted and signed into law, one reason why banks and others may not yet have confidence that it will lead to a thaw in credit markets is that the guidelines for the asset purchases have not yet been issued. Once these guidelines are announced and the purchases begin, and the markets start to see real results, it is possible that some of the missing trust in the banking system will come back.[6]

However, Treasury may not need to spend, and for reasons elaborated below we do not believe it should spend, anywhere near the full $700 billion, or perhaps even most of the initial $350 billion tranche in borrowing authority, to liquefy the markets for mortgage and other asset-backed securities. EESA requires Treasury to publish (within two days) information about each of these purchases. We urge the Department to include in such publications (presumably on its website) regular data on the defaults and delinquencies to date of the loans underlying each batch of securities it purchases. Such information should enable financial institutions that are still holding similar securities not only to price them more accurately, but also to give market participants enough confidence to begin trading these securities without further Treasury purchases.

Husbanding its resources should be a prime objective for Treasury. In conducting its purchases of troubled assets, it should target first those asset categories that are the most illiquid. The main objective always should be jump-starting private sector activity or at least bringing greater clarity to the pricing of particular classes of securities. There is no need for Treasury, therefore, to make repeat purchases of similar securities (such as collateralized debt obligations issued within several months of each other, structured in roughly a similar way). Rather, the aim should be to make a market in as many different asset categories as are reasonably necessary to provide guidance to market participants, no more, no less.

Yet no one can be confident at this point that asset purchases alone will give banks sufficient confidence to begin dealing with each other at much lower interest rates. If the asset purchases do the trick, fine. But if they don’t, Treasury should make sure it has enough financial ammunition to pursue a second, more direct, strategy for restoring banks’ confidence – the direct bank recapitalization strategy to which we now turn.

Recapitalizing the Financial System Directly

Having the government put capital into financial institutions directly is not a new idea. It is the approach followed in this crisis for Fannie and Freddie, and it has been used in other countries. Sweden recapitalized its banks with direct injections of capital during its banking crisis of the early 1990s. Most recently, the British government has announced a sweeping bank recapitalization amidst the current crisis. And of more relevance to the U.S. situation, Congress specifically added authority in EESA for Treasury to make direct capital injections into banks.

In recent days, Treasury Secretary Paulson has acknowledged that the Department may take advantage of this authority and thus use some of its funds to buy equity in troubled banks. This is a welcome development. Even if Treasury's asset purchase program restores confidence in the pricing of troubled securities, many banks will still believe that other banks lack sufficient capital, and thus may remain reluctant to lend to them. The fact that the FDIC stands ready (especially with its new unlimited line of credit at the Treasury) to assist acquiring banks in taking over failing banks is probably not sufficient, even with a successful Treasury asset purchase program, to provide this confidence. Bank lenders to a failed bank can still lose money in such transactions, or at the very least may have difficulty accessing their funds for some period, at a time when all banks seem to want or need as much liquidity as they can get.

How might such a capital injection program work? Treasury could announce its willingness to entertain applications for capital injections, using a set pricing formula. For publicly traded banks, Treasury could buy at the market price as of a given date, such as the price one or more days before the plan was announced, as has been suggested by former St. Louis Federal Reserve Bank President William Poole.[7] For privately owned banks, Treasury could use a price based on the average price-to-book ratio for publicly traded banks as of that date. To prevent government intrusion into the affairs of the banks, the stock should be non-voting, and Treasury would make clear that it would take only minority positions. There should be no takeovers of more companies—AIG, Fannie, and Freddie are quite enough. Treasury also should announce that it will dispose of (or sell back to the bank) any stock acquired through these actions as soon as the financial system has stabilized and the bank is in sound financial condition (perhaps with a time limit, such as three years, as a working presumption).
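
As a purely hypothetical illustration of the pricing formula for privately owned banks: if publicly traded banks were trading at an average of 0.8 times book value on the reference date, and a private bank had a book value of $25 per share, Treasury's purchase price would be

\[
0.8 \times \$25 = \$20 \text{ per share.}
\]

The multiple and the book value here are invented; the point is simply to anchor the price to an observable, pre-announcement benchmark rather than to case-by-case negotiation.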

The Treasury will have to be careful when it buys distressed assets to guard against the possibility that banks will just dump their worst stuff on the taxpayers. The Department also will have to be careful when buying equity in banks, especially if it decides to go for a broad, nationwide program. There cannot be an open invitation for owners to move assets out of the bank and then, in effect, say: “We don’t want this bank, you buy it.” This problem suggests that Treasury would need to work closely with the FDIC and other regulators to determine whether or not a particular bank is eligible for an equity injection. Treasury also may need to limit the scope of the program to larger banks, if it becomes infeasible to allow smaller banks to participate.

We presume that Treasury did not initially embrace the idea of a more systematic recapitalization of the banking system out of a desire to avoid any further government involvement in banking, especially on the heels of the Fannie/Freddie conservatorship and the Fed's rescue of AIG. That Treasury is now considering direct capital injections indicates that this may no longer be an obstacle. In any event, limiting Treasury's purchases to non-voting stock would, in our view, address this concern directly.

Conclusion

Ben Bernanke has compared the current financial crisis to a heart attack in the economy. For some heart attacks, it is enough to administer drugs and change diet and exercise habits. But in acute cases major surgery is needed, and the current crisis is in the acute phase. Direct surgery in the form of capital injected into financial institutions, along with direct asset purchases, should help calm the inter-bank lending market.

Based on recent monthly data, it appears that GDP started to fall at mid-year and that the economy is moving into recession; the proposals made here will not change that. Nor can the proposals compel banks to make loans to their traditional customers – consumers and businesses – in the current climate of fear. But Treasury can do something to mitigate that fear, and thus, along with the recent further easing of monetary policy, likely additional fiscal stimulus, and further homeowner relief, the Department will help reduce the severity of the current recession if it uses all the tools in its financial arsenal.



[1] Note: This is the second essay in a series on the financial crisis and how to respond. For the first essay, see http://www.brookings.edu/papers/2008/0922_fixing_finance_baily_litan.aspx

[2] The government’s reported bank capital ratios, for example, did not take account of the off-balance sheet assets and liabilities of the SIVs, which large banks later had to take back on their balance sheets directly.

[3] Some institutions holding these securities may not have fully marked them to “market” under current accounting rules, but instead simply have added to their reserves for possible future losses to reflect the likelihood of such write-downs. In the latter case, the securities may implicitly be marked down by a percentage reflecting the loan loss reserve attributable to them. If this percentage is not publicly stated, Treasury may require participating institutions to break it out for the Department as a condition for participating in the program (and if the Department does not do this, it may be compelled to do so either by the Executive branch oversight authority or by the Congressional oversight committee established under the Act).

[4] A regular auction is one in which the seller puts an item on the market and potential buyers bid for it; the seller takes the highest price. In a reverse auction, the buyer puts out a notice of the item it wants to buy and sellers compete to supply it; the buyer then chooses the lowest price. Reverse auctions are the way many private companies and government entities manage their procurement processes.
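
As a minimal sketch of the mechanics described in this footnote (purely illustrative; the names and prices are hypothetical, and this is not how Treasury would actually run its purchases), the following shows a buyer selecting the lowest ask among competing sellers:

```python
# Minimal reverse-auction sketch: the buyer announces what it wants to buy,
# sellers submit asking prices, and the buyer takes the lowest ask.
# All names and numbers below are hypothetical.

def reverse_auction(asks):
    """Return the (seller, price) pair with the lowest asking price."""
    if not asks:
        raise ValueError("no sellers submitted asks")
    return min(asks.items(), key=lambda item: item[1])

if __name__ == "__main__":
    # Hypothetical asking prices, in cents on the dollar, for a comparable
    # block of mortgage-backed securities offered by three sellers.
    asks = {"Bank A": 68.0, "Bank B": 63.5, "Bank C": 71.2}
    seller, price = reverse_auction(asks)
    print(f"{seller} wins the reverse auction at {price} cents on the dollar")
```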

[5] The rest of this subsection includes as an example of such unjust enrichment the sale of a troubled asset to the Treasury at a higher price than what the seller paid to acquire it. But this language is not exclusive. Congress, the public or the media could construe unjust enrichment also to include sales of securities at prices above those implicitly or explicitly carried by the institution on its books.

[6] The Treasury asset purchase plan would also provide a valuable service by speeding the de-leveraging process. As we described earlier, banks are leveraged and hold capital that is only a fraction of their assets or liabilities. When they take a hit to their capital base, they must either replenish the capital or scale back their balance sheets. When it became impossible to sell assets except at fire-sale prices, they were unable to do the latter. Selling assets to the Treasury will help them scale down. To get bank lending going again, however, we want them to be able to make new loans, not just to scale back.

[7] Speech made at the National Association of Business Economists conference, Washington DC, October 6, 2008.


Can the US sue China for COVID-19 damages? Not really.

A modern tragedy? COVID-19 and US-China relations

Executive Summary: This policy brief invokes the standards of ancient Greek drama to analyze the COVID-19 pandemic as a potential tragedy in U.S.-China relations and a potential tragedy for the world. The nature of the two countries' political realities in 2020 has led to initial mismanagement of the crisis on both sides of the Pacific.…


Contemplating COVID-19’s impact on Africa’s economic outlook with Landry Signé and Iginio Gagliardone


Debunking the Easterlin Paradox, Again


I’ve written here before about my research with Betsey Stevenson showing that economic development is associated with rising life satisfaction. Some people find this result surprising, but it’s the cleanest interpretation of the available data. Yet over the past few days, I’ve received calls from several journalists asking whether Richard Easterlin had somehow debunked these findings. He tried. But he failed.

Rather than challenge our careful statistical tests, he’s simply offered a new mishmash of statistics that appear to make things murkier.

For those of you new to the debate, the story begins with a series of papers that Richard Easterlin wrote between 1973 and 2005, claiming that economic growth is unrelated to life satisfaction. In fact, these papers simply show he failed to definitively establish such a relationship. In our 2008 Brookings Paper, Betsey and I systematically examined all of the available happiness data, finding that the relationship was there all along: rising GDP yields rising life satisfaction. More recent data reinforces our findings. Subsequently, Easterlin responded in papers circulated in early 2009. That's the research journalists are now asking me about. But in a paper released several weeks ago, Betsey, Dan Sacks, and I assessed Easterlin's latest claims and found little evidence for them.

Let’s examine Easterlin’s three main claims.

1. GDP and life satisfaction rise together in the short-run, but not the long-run. False. Here's an illustrative graph. We take the main international dataset — the World Values Survey — and, in order to focus only on the long run, compare the change in life satisfaction for each country from the first time it was surveyed until the last with the corresponding growth in GDP per capita. Typically, this is a difference taken over 18 years (although it ranges from 8 to 26 years). The graph shows that long-run rises in GDP are positively associated with growth in life satisfaction.

[Figure: Long-run change in life satisfaction plotted against growth in GDP per capita, by country (World Values Survey)]

This graph includes the latest data, and Dan generated it just for this blog post. In fact, Easterlin was responding to our earlier work, which showed each of the comparisons one could make between various waves of this survey: Wave 1 was taken in the early ‘80s; Wave 2 in the early ‘90s; Wave 3 in the mid-late ‘90s; Wave 4 mostly in the early 2000s. And in each of these comparisons, you see a positive association — sometimes statistically significant, sometimes not.

[Figure: Changes in life satisfaction and GDP per capita compared across successive World Values Survey waves]

What should we conclude from this second graph? Given the typically-significant positive slopes, you might conclude that rising GDP is associated with rising life satisfaction. It’s also reasonable to say that these data are too noisy to be entirely convincing. But the one thing you can’t conclude is that these data yield robust proof that long-run economic growth won’t yield rising life satisfaction. Yet that’s what Easterlin claims.

2. The income-happiness link that we document is no longer apparent when one omits the transition economies. Also false. One simple way to see this is to note that in the first graph the transition countries are shown in gray. Even when you look only at the other countries, it’s hard to be convinced that economic growth and life satisfaction are unrelated. To see the formal regressions showing this, read Table 3 of our response. (Aside: Why eliminate these countries from the sample?)

Or we could just look to another data source which omits the transition economies. For instance, the graph below shows the relationship between life satisfaction and GDP for the big nine European nations that were the members of the EU when the Eurobarometer survey started. Over the period 1973-2007, economic growth yielded higher satisfaction in eight of these nine countries. And while we’re puzzled by the ninth — the increasingly unhappy Belgians — we’re not going to drop them from the data! And if you think Belgium is puzzling, too, then we’ve done our job.

[Figure: Life satisfaction and GDP per capita, 1973-2007, for the nine original Eurobarometer EU countries]


3. Surveys show that financial satisfaction in Latin American countries has declined as their economies have grown. Perhaps true. But how are surveys of financial satisfaction relevant to a debate about life satisfaction? And why focus on Latin America, rather than the whole world? In fact, when you turn to the question we are actually debating — life satisfaction — these same surveys suggest that those Latin American countries which have had the strongest growth have seen the largest rise in life satisfaction. This finding isn't statistically significant, but that's simply because there's not a lot of data on life satisfaction in Latin America! (Given how sparse these data are, we didn't report them in our paper.)

What’s going on here?

Now it’s reasonable to ask how it is that others arrived at a different conclusion. Easterlin’s Paradox is a non-finding. His paradox simply describes the failure of some researchers (not us!) to isolate a clear relationship between GDP and life satisfaction.

But you should never confuse absence of evidence with evidence of absence. Easterlin’s mistake is to conclude that when a correlation is statistically insignificant, it must be zero. But if you put together a dataset with only a few countries in it — or in Easterlin’s analysis, take a dataset with lots of countries, but throw away a bunch of it, and discard inconvenient observations — then you’ll typically find statistically insignificant results. This is even more problematic when you employ statistical techniques that don’t extract all of the information from your data. Think about it this way: if you flip a coin three times, and it comes up heads all three times, you still don’t have much reason to think that the coin is biased. But it would be silly to say, “there’s no compelling evidence that the coin is biased, so it must be fair.” Yet that’s Easterlin’s logic.
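
To put a number on the coin-flip analogy: under the null hypothesis of a fair coin, the chance of three heads in a row is

\[
\left(\tfrac{1}{2}\right)^3 = \tfrac{1}{8} = 0.125,
\]

well above the conventional 0.05 threshold for statistical significance. The evidence is "insignificant," yet it plainly does not establish that the coin is fair; there is simply too little data to decide either way.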

There’s a deeper problem, too. The results I’ve shown you are all based on analyzing data only from comparable surveys. And when you do this, you find rising incomes associated with rising satisfaction. Instead, Easterlin and co-authors lump together data from very different surveys, asking very different questions. It’s not even clear how one should make comparisons between a survey (in the US) asking about happiness, a survey (in Japan) asking about “circumstances at home,” surveys of life satisfaction in Europe based on a four-point scale, and global surveys based on a ten-point scale. Easterlin’s non-result appears only when comparing non-comparable data.

If you want to advocate against economic growth — and to argue that it won’t help even in the world’s poorest nations — then you should surely base such radical conclusions on findings rather than non-findings, and on the basis of robust evidence.

A final thought

Why not look at the levels of economic development and satisfaction? The following graph does this, displaying amazing new data coming from the Gallup World Poll. There’s no longer any doubt that people in richer countries report being more satisfied with their lives.

[Figure: Levels of life satisfaction and GDP per capita across countries (Gallup World Poll)]

Is this relevant? Easterlin argues it isn't — that he's only concerned with changes in GDP. But the two are inextricably linked. If rich countries are happier countries, this raises the question: How did they get that way? We think it's because, as their economies developed, their people became more satisfied. While we don't have centuries' worth of well-being data to test our conjecture, it's hard to think of a compelling alternative.


Publication: The New York Times Freakonomics blog
Image Source: © Omar Sobhani / Reuters

Happy Peasants and Frustrated Achievers? Agency, Capabilities, and Subjective Well-Being

Abstract

We explore the relationship between agency and hedonic and evaluative dimensions of well-being, using data from the Gallup World Poll. We posit that individuals emphasize one well-being dimension over the other, depending on their agency. We test four hypotheses including whether: (i) positive levels of well-being in one dimension coexist with negative ones in another; and (ii) individuals place a different value on agency depending on their positions in the well-being and income distributions. We find that: (i) agency is more important to the evaluative well-being of respondents with more means; (ii) negative levels of hedonic well-being coexist with positive levels of evaluative well-being as people acquire agency; and (iii) both income and agency are less important to well-being at the highest levels of the well-being distribution. We hope to contribute insight into one of the most complex and important components of well-being, namely, people's capacity to pursue fulfilling lives.


Publication: Human Capital and Economic Opportunity Global Working Group

This Happiness & Age Chart Will Leave You With a Smile (Literally)


In "Why Aging and Working Makes us Happy in 4 Charts," Carol Graham describes a research paper in which she and co-author Milena Nikolova examine determinants of subjective well-being beyond traditional income measures. One of these is the relationship between age and happiness, a chart of which resembles, remarkably, a smile.


As Graham notes:

There is a U-shaped curve, with the low point in happiness being at roughly age 40 around the world, with some modest differences across countries. It seems that our veneration of (or for some of us, nostalgia, for) youth as the happiest times of our lives is overblown, the middle age years are, well, as expected, and then things get better as we age, as long as we are reasonably healthy (age-adjusted) and in a stable partnership.

The new post has three additional charts that showcase other ways to think about factors of happiness.


Graham, the author of The Pursuit of Happiness: An Economy of Well-Being, appeared in a new Brookings Cafeteria Podcast.

Authors

  • Fred Dews

Ivy League Degree Not Required for Happiness


Editor’s Note: Admission rates this year are at an all-time low, while anxiety about the college admission process remains high. Carol Graham and Michael O’Hanlon write that an Ivy League degree does not necessarily determine happiness or success.

This year's college admission process in the United States was by most measures tougher than ever. Only about 5 percent of applicants were accepted at Stanford and many admission rates at other schools were comparably daunting. Meanwhile, our nation's teenagers are exposed to a background of noise about America's supposed economic decline, which would seem only to increase the pressure to get a head start on that declining pool of available high-paying and highly satisfying careers. In the Washington, D.C. area, this sense of malaise was compounded this year by a spate of suicides at a prestigious local high school, with the common thread reportedly being a sense of anxiety about the future among the teenagers.

Of course, some of this story is timeless, and reflects the inevitable challenges of growing up in a competitive society. But much of it is over-hyped or simply wrong. We need to help our college-bound teenagers maintain a sense of perspective and calm as they face what is among life's most exciting but also most stressful periods. As two proud Princeton grads, we recognize the value of a high-quality education and the social and professional networks that come with an Ivy League degree. But we also know from intuition and experience that a similar kind of experience is achievable in many, many other places in our country, fielding as it does the best ecosystem of higher education institutions in the history of the planet. And increasingly, there is a strong body of research to back this claim up.

Higher Education Is Important

First, though, it is worth noting one incontrovertible fact: higher education is important. Sure, there can be exceptions, and some people may not have the opportunity at a given point in life to pursue either a two-year or four-year college degree or graduate education. But it is a reality in America's modern economy, due to trends with globalization and automation. Those with college degrees continue to do better than previous generations in this country; those without have seen their incomes stagnate or even decline on average for a generation now, as our colleague Belle Sawhill has shown. Another Brookings colleague, Richard Reeves, cites evidence that college graduates have higher marriage rates, higher wages, better health, greater job security, more interesting work and greater personal autonomy.

However, by any number of measures, where you go to college matters less than whether you go. This is not to say it is unimportant. But whether you are interested in happiness while in college, satisfaction later in life, or even raw monetary income, the correlation between gaining a Harvard degree and achieving nirvana is weaker than many 18-year-olds may be led to believe.

Begin with the question of happiness--a new and scientifically measurable arena of social science. It turns out you can learn a lot about how happy people are by asking them, and then applying common-sense statistical methods to a pool of data. For one of us, this has been the focus of research for over a decade. While money matters to happiness, after a certain point more money does not increase many dimensions of well-being (such as how people experience their daily lives), and in general, it is less important than good health or fulfillment at the workplace, on the home-front and in the community. Happier people, meanwhile, tend to care less about income but are more likely to value learning and creativity. And they are also likely to have more positive outlooks about their own futures, outlooks which in turn lead to better labor market and health outcomes on average.

An Atmosphere For Success

Yale or Amherst graduates are no more likely to find happiness than those who attended less prestigious schools. A new Gallup poll, inspired largely by Purdue president Mitch Daniels, finds that the most important enduring effects of the college experience on human happiness relate to personal bonds with professors and a sense of ongoing intellectual curiosity, not to GPA or GRE scores.

America can provide this kind of stimulation and this kind of experience at thousands of its institutions of higher learning. To be sure, elite universities, with their higher percentage of dedicated and outstanding students, create an atmosphere that can be more motivating. Yet it can also be much more stressful. Students at somewhat less notable institutions may need a bit more self-motivation to excel in certain cases, but they may also find professors who are every bit as committed to their education as any Ivy Leaguer and perhaps more available on average.

It is true that networks of fellow alums from the nation's great universities are often hugely helpful to one's career prospects. But a surprising number of institutions in our country have such networks of committed graduates, professors and other patrons. And while Harvard grads may be a dime a dozen in a place like D.C., those hailing from somewhat less known or prestigious places arguably watch out for each other even more, compensating to a large extent for their smaller numbers.

Even on the narrower subject of financial success, the issue is not cut and dried. Sure, the big and prestigious universities tend to be richer, and their graduates on average make more money. But much of that is because the more motivated and gifted students tend to choose the elite schools in the first place, driving up the average regardless of the quality of education. For the 18-year-old who was just turned down by his or her top couple of college choices and having to settle for a "safety" school, it is not clear that this turn of fate really matters for long-term financial prospects. Assuming comparable degrees of drive and motivation, students appear to do just as well elsewhere. In 2004, Mathematica economist Stacy Dale compared students who chose to attend less prestigious schools with their cohorts at the most prestigious universities and showed little discernible income differential.

America is blessed by a wonderful new generation of young people; as parents of five of them, we see this every day. Maybe those of us who have been through some of life's ups and downs need to work harder to help them take down the collective stress level a notch or two. No graduating child should be unhappy because they are going to their second or third choice of college next fall. With the right attitude and encouragement, they will likely do well—and be happy—wherever they go.

Image Source: © Eduardo Munoz / Reuters

What does “agriculture” mean today? Assessing old questions with new evidence.


One of global society's foremost structural changes underway is its rapid aggregate shift from farm-based to city-based economies. More than half of humanity now lives in urban areas, and more than two-thirds of the world's economies have a majority of their population living in urban settings. Much of the gradual movement from rural to urban areas is driven by long-term forces of economic progress. But one corresponding downside is that city-based societies become increasingly disconnected—certainly physically, and likely psychologically—from the practicalities of rural livelihoods, especially agriculture, the crucial economic sector that provides food to fuel humanity.

The nature of agriculture is especially important when considering the tantalizingly imminent prospect of eliminating extreme poverty within a generation. The majority of the world’s extremely poor people still live in rural areas, where farming is likely to play a central role in boosting average incomes. Agriculture is similarly important when considering environmental challenges like protecting biodiversity and tackling climate change. For example, agriculture and shifts in land use are responsible for roughly a quarter of greenhouse gas emissions.

As a single word, the concept of “agriculture” encompasses a remarkably diverse set of circumstances. It can be defined very simply, as at dictionary.com, as “the science or occupation of cultivating land and rearing crops and livestock.” But underneath that definition lies a vast array of landscape ecologies and climates in which different types of plant and animal species can grow. Focusing solely on crop species, each plant grows within a particular set of respective conditions. Some plants provide food—such as grains, fruits, or vegetables—that people or livestock can consume directly for metabolic energy. Other plants provide stimulants or medication that humans consume—such as coffee or Artemisia—but have no caloric value. Still others provide physical materials—like cotton or rubber—that provide valuable inputs to physical manufacturing.

One of the primary reasons why agriculture’s diversity is so important to understand is that it defines the possibilities, and limits, for the diffusion of relevant technologies. Some crops, like wheat, grow only in temperate areas, so relevant advances in breeding or plant productivity might be relatively easy to diffuse across similar agro-ecological environments but will not naturally transfer to tropical environments, where most of the world’s poor reside. Conversely, for example, rice originates in lowland tropical areas and it has historically been relatively easy to adopt farming technologies from one rice-growing region to another. But, again, its diffusion is limited by geography and climate. Meanwhile maize can grow in both temperate and tropical areas, but its unique germinating properties render it difficult to transfer seed technologies across geographies.

Given the centrality of agriculture in many crucial global challenges, including the internationally agreed Sustainable Development Goals recently established for 2030, it is worth unpacking the topic empirically to describe what the term actually means today. This short paper does so with a focus on developing country crops, answering five basic questions: 

1. What types of crops does each country grow? 

2. Which cereals are most prominent in each country? 

3. Which non-cereal crops are most prominent in each country? 

4. How common are “cash crops” in each country? 

5. How has area harvested been changing recently? 

Readers should note that the following assessments of crop prominence are measured by area harvested, and therefore do not capture each crop’s underlying level of productivity or overarching importance within an economy. For example, a local cereal crop might be worth only $200 per ton of output in a country, but average yields might vary across a spectrum from around 1 to 6 tons per hectare (or even higher). Meanwhile, an export-oriented cash crop like coffee might be worth $2,000 per ton, with potential yields ranging from roughly half a ton to 3 or more tons per hectare. Thus the extent of area harvested forms only one of many variables required for a thorough understanding of local agricultural systems. 
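
Working through the illustrative figures above, revenue per hectare spans very different ranges for the two examples:

\[
\$200/\text{ton} \times (1 \text{ to } 6 \text{ tons/ha}) = \$200 \text{ to } \$1{,}200 \text{ per hectare}, \qquad
\$2{,}000/\text{ton} \times (0.5 \text{ to } 3 \text{ tons/ha}) = \$1{,}000 \text{ to } \$6{,}000 \text{ per hectare},
\]

which is precisely why area harvested alone cannot capture a crop's economic weight.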

The underlying analysis for this paper was originally conducted for a related book chapter on “Agriculture’s role in ending extreme poverty” (McArthur, 2015). That chapter addresses similar questions for a subset of 61 countries still estimated to be struggling with extreme poverty challenges as of 2011. Here we present data for a broader set of 140 developing countries. All tables are also available online for download.


International volunteer service and the 2030 development agenda


Event Information

June 14, 2016
9:00 AM - 12:50 PM EDT

Falk Auditorium
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036

A 10th anniversary forum


The Building Bridges Coalition was launched at the Brookings Institution in June 2006 to promote the role of volunteer service in achieving development goals and to highlight research and policy issues across the field in the United States and abroad. Among other efforts, the coalition promotes innovation, scaling up, and best practices for international volunteers working in development.

On June 14, the Brookings Institution and the Building Bridges Coalition co-hosted a 10th anniversary forum on the role of volunteers in achieving the United Nation’s Sustainable Development Goals for 2030 and on the coalition’s impact research. General Stanley McChrystal was the keynote speaker and discussed initiatives to make a year of civilian service as much a part of growing up in America as going to high school.

Afterwards, three consecutive panels discussed how to provide a multi-stakeholder platform for the advancement of innovative U.S.-global alliances with nongovernmental organizations, faith-based entities, university consortia, and the private sector in conjunction with the launch of the global track of Service Year Alliance.

For more information on the forum and the Building Bridges Coalition, click here.


Encrypted messaging apps are the future of propaganda

In recent years, propaganda campaigns utilizing disinformation and spread on encrypted messaging applications (EMAs) have contributed to rising levels of offline violence in a variety of countries worldwide: Brazil, India, Mexico, Myanmar, South Africa, Sri Lanka, the United States, and Venezuela. EMAs are quickly becoming the preferred medium for complex and covert propaganda campaigns in…


Passages to India: Reflecting on 50 years of research in South Asia


Editors' Note: How do states manage their armed forces, domestic politics, and foreign affairs? Stephen Cohen, senior fellow with the India Project at Brookings, has studied this and a range of other issues in South Asia since the 1960s. In a new book, titled "The South Asia Papers: A Critical Anthology of Writings," Cohen reflects on more than a half-century of scholarship on India, describing the dramatic changes he has personally witnessed in the field of research. The following is an excerpt from the book's preface.

[In the 1960s, questions about how states manage their armed forces] were not only unasked in the South Asian context by scholars; they were also frowned on by the Indian government. This made preparation both interesting and difficult. It was interesting because a burgeoning literature on civil–military relations in non-Western states could be applied to India. Most of it dealt with two themes: the “man on horseback,” or how the military came to power in a large number of new states, and how the military could assist in the developmental process. No one had asked these questions of India, although the first was relevant to Pakistan, then still governed by the Pakistani army in the form of Field Marshal Ayub Khan.

***

During my first and second trips [in the 1960s] my research was as a historian, albeit one interested in the army’s social, cultural, and policy dimensions. I discovered, by accident, that this was part of the movement toward the “new military history.” Over the years I have thus interacted with those historians who were interested in Indian military history, including several of my own students. 

While the standard of historians in India was high in places like the University of Calcutta, military history was a minor field, just as it was in the West. Military historians are often dismissed as the “drums and trumpets” crowd, interested in battles, regiments, and hardware, but not much else. My own self-tutoring in military history uncovered something quite different: a number of scholars, especially sociologists, had written on the social and cultural impact of armed forces, a literature largely ignored by the historians. While none of this group was interested in India, the connection between one of the world’s most complicated and subtle societies, the state’s use of force, and the emergence of a democratic India was self-evident. 

***

A new generation of scholars and experts, many of them Indians (some trained in the United States) and Indian Americans who have done research in India, have it right: this is a complex civilizational-state with expanding power, and its rise is dependent on its domestic stability, its policies toward neighbors (notably Pakistan), the rise of China, and the policies of the United States. 

The literature that predicts a conflict between the rising powers (India and China), and between them and America the "hegemon," is misguided: the possession of nuclear weapons by all three states, plus Pakistan, ensures that, barring insanity, any rivalries between rising and established states will be channeled into "ordinary" diplomatic posturing, ruthless economic competition, and the clash of soft power. In this competition, India has some liabilities and many advantages, and the structure of the emerging world suggests a closer relationship between the United States and India, without ruling out much closer ties between China and India. 

There remain some questions: Can the present Indian leadership show magnanimity in dealing with Pakistan, and does it have the foresight to look ahead to new challenges, notably environmental and energy issues that require new skills and new international arrangements? Importantly, some of the best work on answering these questions is being done in India itself, and the work of Kanti Bajpai, Amitabh Mattoo, Harsh Pant, C. Raja Mohan, Rajesh Basrur, and others reveals the maturity of Indian thinking on strategic issues. It has not come too soon, as the challenges that India will face are growing, and those of Pakistan are even more daunting.


What might the drone strike against Mullah Mansour mean for the counterinsurgency endgame?


An American drone strike that killed the leader of the Afghan Taliban, Mullah Akhtar Mohammed Mansour, may seem like a fillip for the United States' ally, the embattled government of Afghanistan's President Ashraf Ghani. But as Vanda Felbab-Brown writes in a new op-ed for The New York Times, it is unlikely to improve Kabul's immediate national security problems—and may create more difficulties than it solves.

The White House has argued that because Mansour became opposed to peace talks with the Afghan government, removing him became necessary to facilitate new talks. Yet, as Vanda writes in the op-ed, “the notion that the United States can drone-strike its way through the leadership of the Afghan Taliban until it finds an acceptable interlocutor seems optimistic, at best.”

[T]he notion that the United States can drone-strike its way through the leadership of the Afghan Taliban until it finds an acceptable interlocutor seems optimistic, at best.

Mullah Mansour's death does not inevitably translate into substantial weakening of the Taliban's operational capacity or a reprieve from what is shaping up to be a bloody summer in Afghanistan. Any fragmentation of the Taliban to come does not ipso facto imply stronger Afghan security forces or a reduction of violent conflict. Even if Mansour's demise eventually turns out to be an inflection point in the conflict and the Taliban does seriously fragment, such an outcome may only add complexity to the conflict. A lot of other factors, including crucially Afghan politics, influence the capacity of the Afghan security forces and their battlefield performance.

Nor will Mansour's death motivate the Taliban to start negotiating. That did not happen when it was revealed last July that the group's previous leader and founder, Mullah Mohammad Omar, had died in 2013. To the contrary, the Taliban's subsequent military push has been its strongest in a decade—with its most violent faction, the Haqqani network, striking the heart of Kabul. Mansour had empowered the violent Haqqanis following Omar's death as a means to reconsolidate the Taliban, and their continued presence portends future violence. Mansour's successor, Mawlawi Haibatullah Akhundzada, the Taliban's former minister of justice who loved to issue execution orders, is unlikely to be in a position to negotiate (if he even wants to) for a considerable time, as he seeks to gain control and create legitimacy within the movement.

The United States has sent a strong signal to Pakistan, which continues to deny the presence of the Afghan Taliban and the Haqqani network within its borders. Motivated by a fear of provoking the groups against itself, Pakistan continues to show no willingness to take them on, despite the conditions on U.S. aid.

Disrupting the group’s leadership by drone-strike decapitation is tempting militarily. But it can be too blunt an instrument, since negotiations and reconciliation ultimately depend on political processes. In decapitation targeting, the U.S. leadership must think critically about whether the likely successor will be better or worse for the counterinsurgency endgame.


Mr. Modi goes to Washington (again)


Next week, Americans will be looking westward to the Tuesday Democratic primary in California. Meanwhile, in Washington, President Obama and then the U.S. Congress will host someone very familiar with electoral politics: Indian Prime Minister Narendra Modi.

This will be the third Modi-Obama summit since the Indian prime minister took office two years ago. Since their first phone call on May 16, 2014, the two leaders have also met multiple times at regional and global gatherings or on the sidelines of those summits. This frequency has been a departure from the past and has even led some—particularly in the Indian media—to ask: why is Modi visiting the United States again? A simple answer would be “because he was invited,” and there are a few reasons why the White House extended that invitation and why Modi accepted.

At a time when [Obama] is being criticized for not having done enough or for doing the wrong thing on foreign policy, he can point to the U.S.-India relationship as a success.

Achievements logged

For President Obama, there’s the legacy issue. At a time when he is being criticized for not having done enough or for doing the wrong thing on foreign policy, he can point to the U.S.-India relationship as a success, particularly in the context of the rebalance to the Asia-Pacific. U.S. popularity is up in India according to polls and three-quarters of those surveyed in India last year expressed confidence in Obama on world affairs. 

President George W. Bush left office having signed the historic civil nuclear deal with India. Obama can claim to have put quite a few more runs on the board. At a recent Senate Foreign Relations Committee hearing, Assistant Secretary of State for South and Central Asian Affairs Nisha Biswal laid out some key developments in the relationship during the Obama era: 

  • the launch of the annual U.S.-India Strategic Dialogue (now the U.S.-India Strategic and Commercial Dialogue); 
  • the long list of functional and regional issues on which the two countries now have dialogues or working groups; 
  • the signing of the Joint Strategic Vision for the Asia-Pacific and Indian Ocean Regions and the deepening cooperation under that framework; 
  • the increase in trade from $60 billion in 2009 to $107 billion in 2015; 
  • the number of jobs that American exports to India have created in the United States; 
  • the tripling of foreign direct investment from India into the United States; and 
  • the growth of U.S. defense sales to India from $300 million less than a decade ago to $14 billion today. 

Strengthening friendships

For Prime Minister Modi and the Indian government, the visit represents another chance to strengthen India’s partnership with a country that Modi has called “a principal partner in the realization of India’s rise as a responsible, influential world power.” The United States is India’s largest trading partner and a crucial source of capital, technology, knowledge, resources, remittances, and military equipment. It can also help ensure multi-polarity in Asia, which is a key goal for Indian policymakers. 

The visit is also an opportunity for Modi to engage with legislators and the American private sector—two key constituencies that can help determine the pace of progress in the relationship. House Speaker Paul Ryan has invited the Indian leader to address a joint session of the U.S. Congress, and Modi will be the fifth Indian prime minister to do so (India’s first prime minister, Jawaharlal Nehru, gave back-to-back speeches to the House and Senate separately in 1949). The address will likely hold special significance for the prime minister and his supporters, given that from 2005 to 2014, then Gujarat Chief Minister Modi was denied entry into the United States. 

A busy calendar

Modi has a packed schedule in Washington. On June 6, he’ll visit Arlington National Cemetery, meet with the heads of think tanks, and participate in an event marking the recovery and return of stolen Indian antiquities. On June 7, he’ll meet with President Obama, who will also host a lunch for him, and then with Defense Secretary Ashton Carter. That will be followed by meetings with business leaders and an address to the U.S.-India Business Council. Expect to see Modi highlight and defend his government’s two-year record on the economy and make a pitch for U.S. businesses to increase their involvement in India—particularly in some of his flagship initiatives such as Make in India and Digital India.

June 8 will be devoted to Congressional engagement, including the joint address, a lunch hosted by Speaker Ryan, and a reception hosted by the House Foreign Affairs and Senate Foreign Relations Committees, as well as the India Caucus. Modi will acknowledge the legislature’s role and significance in developing the U.S.-India relationship, and will likely highlight the democratic values the two countries share, as well as how India and Indians have contributed to the United States, global growth, and the international order. Importantly, in an election year, Modi will likely note the bipartisan nature of the relationship—there’s no indication yet that he will or wants to meet any of the presidential candidates on this visit, though the sessions potentially offer opportunities to do so. Republican members of Congress will also seek to highlight their role in the development of the partnership. The interactions on Capitol Hill will also be a chance for Modi to address some Congressional concerns—such as human rights, Iran, non-proliferation, and the investment climate—and to call for the two countries to “accommodat[e] each other’s concerns.”

Do not, however, expect to hear the word “Pakistan”—the Indian government wants to avoid hyphenation and get Americans to think of India in terms beyond India-Pakistan. Nor should you expect to hear the word “China,” though there might be subtle attempts to note the contrast with that other Asian giant and to make the case for the United States to support the rise of a large Asian democracy that can demonstrate that democracy and development are not mutually exclusive.

Parting glance between Modi and Obama

And what’s on the agenda for the Modi-Obama meeting? In one sense, the last few years have seen a regularization of U.S.-India leader-level summits, with bilateral meetings in 2013, 2014, 2015, and 2016. Over the last two years, high-level meetings have been effective as action-forcing events. This time, officials have been managing expectations, broadly describing the visit as “part of consolidating and celebrating the relationship.” So this is a chance to recognize the steps that the other side has taken to increase the run-rate of the relationship—particularly on the defense and security fronts—and to tie up some loose ends with an eye toward sustaining momentum into the next administration (without necessarily tying its hands). 

In terms of focus areas, the governments have emphasized (to varying degrees) economic ties, energy and climate change, and defense and security cooperation. The Obama administration would like India to ratify the Paris agreement, for instance—unlike in the United States, ratification in India doesn’t require legislative approval. Indian officials recognize the importance of this issue to Obama, but are also concerned about U.S. policy continuity given the presumptive Republican nominee’s stand on the issue. Delhi, in turn, is partly using the shared desire for India to meet its clean energy commitments to make the case for an American full-court press to facilitate Indian membership in the Nuclear Suppliers Group (NSG)—similar to the Bush administration’s efforts to help India get an NSG waiver in 2008. The U.S. position has been that India is ready for NSG membership and meets the requirements for membership in the Missile Technology Control Regime, and it has supported India’s application and eventual membership in both, as well as in two other nonproliferation and export control regimes. Asked if Modi would ask Obama to “go to bat for India” with others on this, the Indian foreign secretary didn’t answer directly but noted: “countries that feel we’re doing the right thing...if they take it upon themselves to…articulate their positions and talk to others, this is what friends do for each other.” Modi himself will visit two other NSG members (Switzerland and Mexico) just before and after the U.S. visit, partly to make the case for India’s membership.

The visit will also be a chance to cement and highlight cooperation in and on the Indo-Asia-Pacific region. In addition, observers will be watching to see whether the two countries sign the Logistics Exchange Memorandum of Agreement (LEMOA)—the logistics support agreement that the Indian defense minister said in April he and Secretary Carter had “agreed in principle to conclude”—or whether there’ll be further announcements with regard to the Defense Technology and Trade Initiative. There’ll also be interest in whether the two countries can restart serious talks on a Bilateral Investment Treaty, and whether Westinghouse and the Nuclear Power Corporation of India can finalize an agreement to set up reactors in India. Overall, there is a desire to take the relationship to the “next level,” but not necessarily in the form of a big deal; rather, there is a search for ways to deepen, operationalize, and institutionalize cooperation—such as through arrangements to share information in the counterterrorism space—and to facilitate interaction among an increasing number of stakeholders.

While highlighting areas of convergence, both sides will likely also discuss the divergences that remain—perhaps including the east-west divergence related to Pakistan, the north-south divergence related to Russia, the security-economic divergence with more progress in the partnership on the former than the latter, and the potential expectations-reality divergence. And while the direction of the U.S.-India relationship is likely to remain the same in the near future, how the two countries deal with these divergences will determine the trajectory and the pace of the relationship.


Gayle Smith’s agenda for USAID can take US development efforts to the next level


The development community breathed a collective sigh of relief last week when the U.S. Senate, after a seven-month delay, finally confirmed a new Administrator of the United States Agency for International Development (USAID). In addition to dealing with many pressing global development issues, Gayle Smith now has the task of making good on the Obama administration’s commitment to make USAID a preeminent 21st-century development agency.

While a year might seem a short time for anyone to make a difference in a new government position, Gayle Smith’s assumption of the lead at USAID should be seen more as the capstone of a seven-year tenure guiding U.S. global development policy. She led the interagency process that produced the 2010 Presidential Policy Directive on Global Development (PPD), and she has been involved in every administration development policy initiative since, including major reforms inside USAID.

The five items below suggest how Smith can institutionalize, and take to the next level, the reforms and initiatives that have been part of a development agenda of which she has been a principal architect.

Accountability: Transparency and evaluation

The PPD lays out key elements for making our assistance programs more accountable, including “greater transparency” and “more substantial investment of resources in monitoring and evaluation.”

USAID staff have designed a well-thought-out Cost Program Management Plan to advance the public availability of the agency’s data and to fulfill the U.S. commitment to the International Aid Transparency Initiative (IATI). What this plan needs is a boost from the new administrator: her explicit endorsement and energy, and maybe the freeing-up of more resources, so that phases two and three, which get more and better USAID data into the IATI registry, can be completed by the end of 2016 rather than slipping into the next administration. In addition, the fourth and final phase of the plan needs to be approved so that data transparency is integrated into the planned Development Information Solution (DIS), which will provide a comprehensive integration of program and financial information. 

Meanwhile, in January 2011 USAID adopted an evaluation policy that was praised by the American Evaluation Association as a model for other government agencies. In FY 2014, the agency completed 224 evaluations. The new administrator could provide leadership in several areas that would raise the quality and use of USAID’s evaluations. First, she should weigh in on the sometimes theological debate over what type of evaluation works best by being clear that there is no single, all-purpose type: evaluations need to fit the context and the question to be addressed, from most-significant-change analysis (focusing solely on the most significant change generated by a project), to performance evaluation, to impact evaluation.

Second, evaluation is an expertise that is not quickly acquired. Some 2,000 USAID staff have been trained, but mainly through short-term courses. The training needs to be broadened to all staff and deepened in content. This will contribute to a cultural change whereby USAID staff learn not just how to conduct evaluations, but how to value and use the findings.

Third, evaluations need to be translated into learning. The E3 Bureau (Bureau for Economic Growth, Education, and Environment) has set the model by analyzing and incorporating evaluation findings into its policies and programs, and a few missions have built evaluations into their program cycle. This needs to be done throughout the agency. Further, USAID should use its convening power to share its findings with other U.S. government agencies, other donors, and the broader development community.

Innovation and flexibility

Current USAID processes are considered rigid and time-consuming. This is not uncommon in large institutions, but in recent years the agency has been seeking more innovative, flexible instruments. The USAID Global Development Lab is experimenting with what is alternatively referred to as the Development Innovation Accelerator (DIA) or Broad Agency Announcement (BAA), whereby it invites ideas on a specific development problem and then selects the authors of the best, most relevant submissions to join USAID staff in co-creating solutions—the kind of involvement at the beginning of problem-solving that the corporate sector has been calling for. Similarly, the Policy, Planning, and Learning Bureau is in the midst of redesigning the program cycle to introduce adaptive management, allowing for greater collaboration and real-time response to new information and evolving local circumstances, as well as more customized approaches and learning based on local context.

Again, the PPD calls for “innovation.” As with accountability, an expression of interest and support from the new administrator, and an articulation of the need to inculcate innovation into the USAID culture, could move these endeavors from tentative experiment to practice.

The New Deal for Fragile States

Gayle Smith has been immersed in guiding U.S. policy in unstable, fragile states. She knows the territory well and cares about it. The U.S. has been an active participant and leader in the New Deal for Fragile States. The New Deal framework is a thoughtful, comprehensive structure for moving fragile states to stability, but recent analyses indicate that neither the G7+ countries nor donors are following its explicit steps. They are not dealing with national and local politics, which are the essential levers for bringing stability to a country, and they are not adequately including civil society. Maybe the New Deal structures are too complicated for a country with minimal governance. Certainly, there has been insufficient senior-level leadership from donors and insufficient buy-in from G7+ leaders and stakeholders. With her deep knowledge of the dynamics in fragile states, Smith could bring sorely needed U.S. leadership to this arena.

Policy and budget

The PPD calls for “robust policy, budget, planning, and evaluation capabilities.” USAID moved quickly on these objectives, not just in restoring its former capabilities in evaluation, but also in policy and budget, through the resurrection of the planning and policy function (the Policy, Planning, and Learning Bureau, or PPL) and the budget function (the Office of Budget and Resource Management, or BRM). PPL has reestablished USAID’s former policy function, but USAID’s budget authority has only been partially restored.

Gayle Smith needs to take the next obvious step. Budget is policy, and the integration of policy and budget is an essential foundation of evidence-based policymaking. The two functions need to be joined so they can support each other rather than operate in isolated silos. Budget deliberations are not just about numbers: policies get set by budget decisions, so budget decisions need to be informed by strategy and policy knowledge.

I go back to the model of the late 1970s, when Alex Shakow was head of the Bureau for Program and Policy Coordination (PPC), which encompassed both policy and budget. Here you had in one senior official someone who was knowledgeable about policy and budget and understood how the two interact. He was the go-to person the agency sent to Capitol Hill. He could deal with the range of issues that always unexpectedly arise during congressional committee hearings and markups. He could deal effectively with the State Department and in interagency meetings on a broad sweep of policy and program matters. He could represent the U.S. globally, such as at the Development Assistance Committee (DAC) and other international development meetings.

With the expansion of the development agenda and the frequency of interagency and international meetings, such a person is needed even more today. USAID requires three or four senior officials—administrator, deputy administrator, associate administrator, and the head of a joined-up policy/budget function—to cover the demand, domestically and internationally, for senior USAID leadership with deep knowledge of the broad scope of USAID programs.

Food aid reform

The arguments for reforming U.S. food assistance programs are incontrovertible and have been hashed out hundreds of times, so there is no need to repeat them here. But it is clearly in the interest of the tens of millions of people who face hunger and starvation each year for the U.S. to maximize the use of its resources by moving its food aid from an antiquated 1950s model to current market realities. There is leadership for this on the Hill in the Food for Peace Reform Act of 2015, introduced by Senators Bob Corker and Chris Coons. Gayle Smith could help build momentum for this bill and contribute to an important Obama legacy, whether enactment happens in 2016 or under a new administration and Congress in 2017.

Gayle knows the Obama development agenda better than anyone. These ideas are humbly presented as an outside observer’s suggestions for how to solidify key administration aid effectiveness initiatives. 


USAID's public-private partnerships: A data picture and review of business engagement


In the past decade, a remarkable shift has occurred in the development landscape. Specifically, acknowledgment of the central role of the private sector in contributing to, even driving, economic growth and global development has grown rapidly. The data on financial flows are dramatic, indicating a reversal of the relative roles of official development assistance and private financial flows. This shift is also reflected in the way development is framed and discussed, never more starkly than in the Addis Ababa Action Agenda and the new set of Sustainable Development Goals (SDGs). The Millennium Development Goals (MDGs), which the SDGs follow, focused on official development assistance. The new global goals do not ignore official development assistance, but they reorient attention to the role of the business sector (and to mobilizing host country resources).

The U.S. Agency for International Development (USAID) has been in the vanguard of donors in recognizing the important role of the private sector in development, most notably via the agency’s launch in 2001 of a program targeted on public-private partnerships (PPPs) and the estimated 1,600 USAID PPPs initiated since then. This paper provides a quantitative and qualitative picture of USAID’s public-private partnerships and of business sector participation in those PPPs. The analysis offered here is based on USAID’s PPP data set covering 2001-2014 and on interviews with executives of 17 U.S. corporations that have engaged in PPPs with USAID.

The genesis of this paper lies in the considerable discussion by USAID and the international development community of USAID’s PPPs, coupled with the dearth of information on what these partnerships actually entail. USAID’s 2014 release (updated in 2015) of a data set describing nearly 1,500 USAID PPPs since 2001 offers an opportunity to analyze the nature of those partnerships.

On a conceptual level, public-private partnerships are a win-win, even a win-win-win, as they often involve three types of organizations: a public agency, a for-profit business, and a nonprofit entity. PPPs use public resources to leverage private resources and expertise to advance a public purpose. In turn, non-public sectors—both businesses and nongovernmental organizations (NGOs)—use their funds and expertise to leverage government resources, clout, and experience to advance their own objectives, consistent with a PPP’s overall public purpose. The data from the USAID data set confirm this conceptual mutual reinforcement of public and private goals.

The arguments regarding “why” PPPs are an important instrument of development are well established. This paper presents data on the “what”: what kinds of PPPs have been implemented, and in what countries, sectors, and income contexts. Other research and publications cover the “how” of partnership construction and implementation. What remains missing are hard data and analysis, beyond the anecdotal, as to whether PPPs make a difference—in short, whether the trouble of forming these sometimes complex alliances is worth the impact that results from them.

The goal of this paper is not to provide commentary on impact, since those data are not currently available on a broad scale. Similarly, the paper does not recommend replicable models or case studies (which can be found elsewhere), though these are important and can help new entrants join and grow the field. Rather, the goal is to utilize USAID’s recently released data set to draw conclusions on the nature of PPPs and the level of business sector engagement, and, through interviews, to describe corporate perspectives on partnership with USAID.

The decision to target this research on business sector partners’ engagement in PPPs—rather than on the civil society, foundation, or public partners—is based on several factors. First, USAID’s references to its PPPs tend to focus on the business sector partners, sometimes to the exclusion of other types of partners; we want to understand the role of the partners that USAID identifies as so important to PPP composition. Second, in recent years much has been written and discussed about corporate shared value, and we want to assess the extent to which shared value plays a role in USAID’s PPPs in practice.

The paper is divided into five sections. Section I is a consolidation of the principal data and findings of the research. Section II provides an in-depth “data picture” of USAID PPPs drawn from quantitative analysis of the USAID PPP data set and is primarily descriptive of PPPs to date. Section III moves beyond description and provides analysis of PPPs and business sector alignment. It contains the results of coding certain relevant fields in the data set to mine for information on the presence of business partners, commercial interests (i.e., shared value), and business sector partner expertise in PPPs. Section IV summarizes findings from a series of interviews of corporate executives on partnering with USAID. Section V presents recommendations for USAID’s partnership-making.


USAID’s public-private partnerships and corporate engagement


Brookings today releases the report USAID’s Public-Private Partnerships: A Data Picture and Review of Business Engagement, which will be the subject of a public discussion on March 8 featuring a panel of Jane Nelson (Harvard University), Ann Mei Chang (U.S. Agency for International Development), Johanna Nesseth Tuttle (Chevron Corp.), and Sarah Thorn (Wal-Mart Stores Inc.).

The report is based on USAID’s database of 1,481 public-private partnerships (PPPs) from 2001 to 2014 and a series of corporate interviews.

The value of those partnerships totals $16.5 billion, two-thirds of it from non-U.S. government sources: private companies, nongovernmental organizations (NGOs), foundations, and non-U.S. public institutions. Over 4,000 organizations have served as resource partners in these PPPs. Fifty-three percent are business entities, 32 percent are from the non-profit world, and 25 percent are public institutions. Eighty-five organizations have participated in five or more PPPs, led by Microsoft (62), Coca-Cola (36), and Chevron (33).

The partnerships are relatively evenly distributed among three major regions—Africa, Latin America/Caribbean, and Asia—but 36 percent of the value of all PPPs comes from partnerships that are global in reach.

In analyzing the data, the researchers found that 77 percent of PPPs included one or more business partners, and that 83 percent of these partnerships are connected to a business partner’s commercial interest (either shared value or a more indirect strategic interest). In almost 80 percent of those PPPs, the business partner contributes some form of corporate expertise to the partnership.

The purpose of the March 8 panel discussion is to examine the report, but also to go beyond it by addressing outstanding questions: How should the impact of public-private partnerships be identified, measured, and evaluated? Is shared value the Holy Grail linking corporate interest to public goods and achieving sustainable results? Where do public-private partnerships fit in USAID’s strategy for engaging the private sector in development, particularly in light of the emphasis on the role of business in advancing the new set of Sustainable Development Goals?

We hope you can join us for what should prove to be an engaging discussion.


What the EU-Turkey agreement on migrants doesn’t solve


The EU and Turkey have reached agreement on the broad outlines of a coordinated strategy to respond to the migration crisis. According to the plan, discussed at an emergency summit on Monday in Brussels, all migrants crossing from Turkey to the Greek islands would be returned. For every migrant Turkey readmits, the EU would resettle one registered refugee from a U.N.-administered camp, effectively establishing a single legal migration pathway.

The deal, which has not been finalized, includes a pledge to speed up disbursement of a 3-billion-euro ($3.3 billion) fund aimed at helping Turkey shelter the roughly 2.5 million Syrian refugees currently on its soil, and to decide on additional support. Turkish Prime Minister Ahmet Davutoğlu has requested that Europe double its funding to 6 billion euros ($6.6 billion) over three years. He also called on European leaders to speed up the timetable for lifting visa requirements for Turkish citizens and to kick-start stalled accession talks.

Rough road ahead

Establishing a framework is an important step forward in the effort to forge a common approach to the mounting crisis. German Chancellor Angela Merkel—facing discontent at home over her open-door policy—welcomed the tentative deal as a potential breakthrough, as did Britain’s Prime Minister David Cameron.

However, key details remain unresolved. First, it is not clear that all EU countries would agree to take part in such a relocation scheme, given strong opposition to compulsory migrant quotas. On Monday night, Hungarian Prime Minister Viktor Orbán vowed to veto any commitment to resettle asylum seekers. 

Second, Ankara’s demands regarding EU membership and visa waivers are likely to be contested. Turkey’s bid for accession has long been controversial, and will only be made more so by the court-ordered seizure of the opposition newspaper Zaman late last week. Visa-free access for Turkish citizens is likewise contentious. Already, leaders of Germany’s conservative Christian Social Union party have vowed “massive resistance” to any such measure.

Third, human rights groups have called into question the plan’s legality. The U.N. High Commissioner for Refugees raised concerns about its legitimacy under EU and international law, expressing unease over the blanket return of foreigners from one country to another. Amnesty International called the proposal a “death blow” to refugee rights. While Europe believes the legal questions can be resolved by declaring Turkey a “safe third country,” Amnesty has cast doubt on the concept. 

And so?

Talks will continue ahead of the EU migration summit, which will take place on March 17 and 18. Meanwhile, NATO will begin carrying out operations in the territorial waters of Greece and Turkey to locate migrant boats. According to Secretary General Jens Stoltenberg, those efforts will focus on “collecting information and conducting monitoring” with the aim of stopping the smuggling.

In recent weeks, as many as 2,000 migrants a day have been arriving on Greece’s shores. They join more than 35,000 migrants already stranded there, unable to travel north due to border closures along the Western Balkans route. Those closures cast doubt on the future of the continent’s open-border regime—and with it, the unity of Europe.


Can the US sue China for COVID-19 damages? Not really.





Encouraging transformations in Central Asia

Nearly 30 years ago, the countries of Central Asia emerged from decades of Soviet domination. The rapid disintegration of production and trade linkages established in the Soviet Union led to deep recessions, with per capita incomes falling to about half of their pre-independence levels by the middle of the 1990s. In 1997, the private sector…





Tackling the Mortgage Crisis: 10 Action Steps for State Government

Introduction

During 2006, the United States saw a considerable upswing in the number of new mortgage defaults and foreclosure filings. By 2007, that upswing had become a tidal wave. Today, national homeownership rates are falling, and more than a million American families have already lost their homes to foreclosure. Across the country, boarded-up houses are appearing on once-stable blocks. Some of the hardest-hit communities are in older industrial cities, particularly Midwestern cities such as Cleveland, Detroit, and Indianapolis.

Although most media attention has focused on the role of the federal government in stemming this crisis, states have the legal powers, financial resources, and political will to mitigate its impact. Some state governments have taken action, negotiating compacts with mortgage lenders, enacting state laws regulating mortgage lending, and creating so-called “rescue funds.” Governors such as Schwarzenegger in California, Strickland in Ohio, and Patrick in Massachusetts have taken the lead on this issue. State action so far, however, has only begun to address a still-unfolding, multidimensional crisis. If the issue is to be addressed successfully and at least some of its damage mitigated, better-designed, comprehensive strategies are needed.

This paper describes how state government can both tackle the immediate problems caused by the wave of mortgage foreclosures and prevent the same thing from happening again. After a short overview of the crisis and its effect on America’s towns and cities, the paper outlines the options available to state government and offers ten specific action steps, representing the most appropriate and potentially effective strategies for coping with the varying dimensions of the problem.

Authors

• Alan Mallach





Class Notes: Unequal Internet Access, Employment at Older Ages, and More

This week in Class Notes: The digital divide—the correlation between income and home internet access—explains much of the inequality we observe in people's ability to self-isolate. The labor force participation rate among older Americans and the age at which they claim Social Security retirement benefits have risen in recent years. Higher minimum wages lead to a greater prevalence…