
Is the World Bank Retreating from Protecting People Displaced by its Policies?


Over 30 years ago, the World Bank began to develop policies to safeguard the rights of those displaced by Bank-financed development projects. The safeguard policy on involuntary resettlement in turn initiated a series of follow-up policies designed to safeguard other groups and sectors affected by Bank investments, including the environment and indigenous peoples. Since its adoption in 1980, the Bank’s operational policy on involuntary resettlement has been revised and strengthened in several stages, most recently in 2001. The regional development banks – the African Development Bank, the Asian Development Bank, the Inter-American Development Bank, and the European Bank for Reconstruction and Development – as well as the International Finance Corporation (IFC), have all followed the World Bank’s lead and developed policies on involuntary resettlement caused by the development projects these multilateral banks finance.

While the policies are complex, the basic thrust of these safeguard policies on involuntary resettlement has been to affirm:

  • Involuntary resettlement should be avoided where feasible.
  • Where it is not feasible to avoid resettlement, the scale of displacement should be minimized, and resettlement activities should be conceived and executed as full-fledged sustainable development programs in their own right, relying on commensurate financing and the informed participation of the populations to be displaced.
  • Displaced persons should be assisted to improve, or at least restore, their livelihoods and living standards to the levels they enjoyed before the displacement.[1]

Even with these safeguards policies, people displaced by development projects risk – and very large numbers have actually experienced – a sharp decline in their standards of living.[2] Michael Cernea’s Impoverishment Risks and Reconstruction model identifies the most common and fundamental risks of such displacement and resettlement processes: landlessness, joblessness, homelessness, marginalization, food insecurity, increased morbidity and mortality, loss of access to common property, and social disintegration.[3] If insufficiently addressed, these embedded risks convert into actual processes of massive impoverishment. And particular groups may be especially affected, as noted in the World Bank’s Operational Policy: “Bank experience has shown that resettlement of indigenous people with traditional land-based modes of production is particularly complex and may have significant adverse impacts on their identity and cultural survival.” (OP 4.12, para.9)

These safeguard policies are an important instrument for minimizing and overcoming the harm suffered by those displaced by development projects. It should be noted, however, that there have always been problems in the implementation of these policies, due to evasive implementation by borrowers or incomplete application by World Bank staff. The Bank’s interest in researching the impacts of compulsory resettlement triggered by its projects has been sporadic. In particular, the World Bank has not carried out and published a comprehensive evaluation of the displacements caused by its massive project portfolio for the last 20 years; the last full resettlement portfolio review was conducted two decades ago, in 1993-1994. In 2010, with the approval of the Bank’s Board, the Bank’s Independent Evaluation Group (IEG) undertook a broad review of how not only the policy on involuntary resettlement, but all of the social safeguard policies, have or have not been implemented. Reporting on its findings, the IEG publicly faulted World Bank management for not even keeping basic statistics on the number of people displaced and for not making such statistics available for evaluation.[4] Similar analytical syntheses are missing from other multilateral development agencies, such as the IADB and the EBRD. There is a strong sense within the community of resettlement specialists that successful cases are the exception, not the norm. In sum, projects predicated on land expropriation and involuntary resettlement have not only forcibly uprooted large numbers of people, but have left them impoverished, disenfranchised, disempowered, and in many other respects worse off than before the Bank-financed project.

While the Bank’s safeguard policies were in need of review, and many argued for a more explicit incorporation of human rights language into them, the Bank took a different approach. The Bank’s team tasked with “reviewing and updating” the safeguards eliminated many robust and indispensable parts of the existing policies, watered down other parts, and failed to incorporate important lessons from the Bank’s own experience as well as relevant new knowledge from the social, economic, and environmental sciences.

At the end of July 2014, the Bank published a “draft” of the revised safeguard policies that was not based on consultation with civil society organizations (CSOs), as had been promised. Rather, the newly proposed policies were held close and stamped “strictly confidential.” The numerous CSOs and NGOs involved for two years in what they thought was a consultative process learned of Bank management’s proposals to the Bank’s Board and its Committee on Development Effectiveness (CODE) only through a leak. Because of this secrecy, the Bank’s Board and CODE itself were not made aware of civil society’s views about the draft Environmental and Social Safeguards policy before CODE had to decide whether to endorse it and release it for a new round of “consultation.”

As is well known, the process shapes the product. These bizarre distortions in the way the World Bank conducted what should have been a transparent process of genuine consultation have resulted in deep flaws in the product, as is manifest in the current draft ESS.

The backlash was inevitable, strong, and broad, coming from an extensive array of constituencies: from CSOs, NGOs, and various other groups representing populations adversely affected by Bank-financed projects, to professional communities, all the way to various bodies of the United Nations. More than 300 civil society organizations issued a statement opposing the Bank’s plans, and at the World Bank meetings in mid-October 2014, civil society organizations walked out of a World Bank “consultative meeting” on the revised policies. The statement argued that the consultative process had been inadequate and that the safeguards were being undercut at the very time the Bank is seeking to expand its lending to riskier infrastructure and mega-project schemes. While the Review and Update exercise was expected to strengthen the provisions of existing policies, the policies were instead redrafted in a way that weakened them. The civil society statement notes that the revised draft “eliminates the fundamental development objective of the resettlement policy and the key measures essential to preventing impoverishment and protecting the rights of people uprooted from their homes, lands, productive activities and jobs to make way for Bank projects.”[5] Not only does the revised policy fail to strengthen protections for displaced people, but each of its “standards” represents a step backwards compared with existing policies. Under the draft revised policies, the Bank could now finance projects that displace people without requiring a sound reconstruction plan and budget to “ensure adequate compensation, sound physical resettlement, economic recovery and improvement.” Moreover, the application of some safeguard policies would become optional. Although the regional development banks have not – so far – begun to take actions to weaken their own safeguard policies, there is fear that they will follow the Bank’s lead.

Just as humanitarian response to internally displaced persons seems to be sliding backward, so too the actions of development agencies – or at least the World Bank – seem to be reversing gains made over the past three decades.


[1] This is from the Introduction by James Wolfensohn to Operational Policy OP 4.12, Involuntary Resettlement, World Bank Operational Manual, Washington, DC: World Bank, p. 1.
[2] See, for example, Michael M. Cernea, “Compensation and Investment in Resettlement: Theory, Practice, Pitfalls, and Needed Policy Reform,” in Compensation in Resettlement: Theory, Pitfalls, and Needed Policy Reform, ed. M. Cernea and H.M. Mathur, Oxford: Oxford University Press, 2008, pp. 15-98; and T. Scudder, The Future of Large Dams: Dealing with Social, Environmental, Institutional and Political Costs, London and Sterling, VA: Earthscan, 2005.
[3] Michael M. Cernea, “Risks, Safeguards and Reconstruction: A Model for Population Displacement and Resettlement,” in M. Cernea and C. McDowell, eds., Risks and Reconstruction: Experiences of Resettlers and Refugees, Washington, DC: World Bank, 2000, pp. 11-55; and Michael M. Cernea, Public Policy Responses to Development-Induced Population Displacements, Washington, DC: World Bank Reprint Series No. 479, 1996.
[4] Independent Evaluation Group, “Safeguards and Sustainability Policies in a Changing World: An Independent Evaluation of World Bank Group Experience,” Washington, DC: World Bank, 2010, p. 21. The report states verbatim: “IEG was unable to obtain the magnitude of project-induced involuntary resettlement in the portfolio from WB sources and made a special effort to estimate this magnitude from the review sample.” The resulting estimates, however, were based on a small sample and have been met with deep skepticism by many resettlement researchers. The IEG report itself has not explained why the World Bank stopped, for many years, keeping the necessary data and statistics on the results of its projects on such a sensitive issue, although more than three years have passed between the date of the IEG report and the writing of the present paper. Astonishingly, the World Bank’s senior management has not taken an interest in producing, for itself as well as for the public, the bodies of data that the IEG signaled as missing and indispensable. Nor has the Bank’s management accounted for how it responded to the IEG’s sharp criticisms, or for whether it took specific corrective measures to overcome the multiple weaknesses signaled by the IEG report.
[5] Civil society statement, p. 2
Image Source: © Nathaniel Wilder / Reuters
     
 
 





Is the United States positioned to lead in the Arctic?


As the United States prepares to assume the chairmanship of the Arctic Council today, it is timely to assess where it stands in terms of its ability and commitment to lead in the region. While there are many important elements of Arctic leadership outlined in the U.S. National Arctic Strategy, the ultimate metric of state leadership comes not from policy alone but also from the willingness to commit the resources needed to advance national interests and shape favorable global norms for peace, stability, and responsibility. In this context, the United States has yet to demonstrate a strong commitment to 21st century Arctic leadership. Nowhere is this more apparent than in the decaying state of the U.S. heavy icebreaking “fleet”—currently consisting of just one operational heavy icebreaker—and the lack of a credible national strategy to expand, much less sustain, this capacity.

Although the Arctic Council framework focuses specifically on shared, non-military interests, it would be a mistake to assume the region will be immune from future incidents, whether from eventual increases in tourism and shipping, energy development, or even limited geopolitical conflict. The United States must sustain heavy icebreaking capability to assure year-round access to the region and to be ready to respond in the event of a safety, security, or environmental threat.

Commercial activity in the Arctic

While commercial activity in the Arctic remains limited today, signs of increased economic investment are on the rise, including Royal Dutch Shell’s announcement of its intent to resume Arctic drilling later this year and Crystal Cruises’ planned 2016 traverse of the Northwest Passage with its 820-foot, 1,000-passenger cruise ship Crystal Serenity. The Arctic’s vast untapped resources and opening sea lanes are beginning to drive previously unheard-of levels of human activity.

Some have suggested that companies like Shell can and will invest in their own icebreaking and emergency response capabilities for Arctic drilling, rendering a U.S. government asset superfluous. This is a shortsighted view that fails to recognize the fundamental risks of ceding prevention and response responsibilities entirely to the private sector.

While a single icebreaker obviously has neither the capacity nor capability to clean up a large oil spill in the Arctic, or anywhere else for that matter, in certain scenarios it could help prevent a spill from happening in the first place, mitigate the severity of a spill, and provide a means to ensure on-scene government oversight and command of any incident.

In the case of Arctic tourism, it is important to recognize that a mass rescue operation involving hundreds of passengers on a cruise ship—already one of the most difficult scenarios for search and rescue professionals—becomes exponentially more difficult in the remote and harsh Arctic environment.

Finally, although unlikely in the near-term, a future scenario can also be envisioned in which U.S. Navy surface ships need access to the Arctic, and icebreaking capacity is necessary to execute the mission. This is perhaps a distant possibility in the context of today’s Arctic but is a contingency for which the nation should be prepared in the future.

Access to the polar regions

The Coast Guard’s nearly 40-year-old and recently reactivated Polar Star is the only U.S. icebreaker with the size and horsepower to provide unfettered access to the polar regions. The reactivation of this vessel, built in the 1970s, cost nearly $60 million and is estimated to have extended its service life by only 7 to 10 years. This presents a difficult and unique challenge in an emergency: if, for example, the aging Polar Star has a machinery failure and gets stuck in the ice, the United States does not have the means to extract it and may have to resort to assistance from a foreign country. Coast Guard Commandant Admiral Paul Zukunft recently put it bluntly, saying the Coast Guard “has no self-rescue for its Arctic mission, for its Antarctic mission.”

While dozens of small and medium-sized icebreakers operate successfully in other parts of the world, heavy icebreakers—generally classified as those exceeding 45,000 horsepower—are needed to assure unrestricted access to the Arctic at any time of the year. For the United States, heavy icebreaking capacity is also needed for missions like the annual resupply of McMurdo Station in Antarctica, an operation sponsored by the National Science Foundation and executed by the Coast Guard.

Sustaining the capability to access any region of the globe has been a fundamental tenet of U.S. national security policy for decades, and the Arctic should be no exception. The United States is falling behind other Arctic nations, like Russia, that have demonstrated an enduring commitment to maintaining access to the Arctic with heavy icebreakers.

These investments may be considered consistent with the size of Russia’s Arctic coastline and associated Exclusive Economic Zone, both of which are substantially larger than those of the United States or any other Arctic nation. Indeed, there is certainly room to debate how many heavy icebreakers the United States will ultimately need. A 2011 Coast Guard study concluded that meeting the tenets of the 2010 Naval Operations Concept—which calls for constant, year-round presence in both polar regions—would require six heavy and four medium icebreakers. The study also indicated that three heavy and three medium icebreakers are needed for Arctic presence. Putting the debate in perspective, the Obama administration’s special representative for the Arctic, retired Coast Guard Commandant Admiral Robert Papp, recently gave a keynote address at the Brookings Institution stressing that “we should at least build one,” acknowledging the critical state of U.S. capability.

Replacing the Polar Star presents a unique challenge. Such vessels have not been built in the United States in four decades, and most estimates suggest a 10-year, $1 billion program to build just one in a U.S. shipyard under the federal government’s arduous acquisition process. This places delivery of a new heavy icebreaker beyond the Polar Star’s remaining service life and adds to the urgency of the current situation.

U.S. engagement in the Arctic

In short, the United States must have the ability to access and engage in the polar regions on its own terms. No entity is better positioned to fulfill this national security imperative than the United States Coast Guard, which has the authority and organizational ethos to advance high latitude safety, security, and environmental interests without a corresponding threat of excessive militarization. The Coast Guard also remains one of few governmental entities capable of collaborative engagement with the Russians, built on years of maritime cooperation with their border guard.

While the case for icebreaker investment is clear, the Coast Guard lacks the resources to move forward on its own. For the Coast Guard, a new icebreaker is at best a distant runner-up to other recapitalization imperatives within the chronically underfunded service. The Coast Guard’s Medium Endurance Cutters are the cornerstone of the service’s offshore presence in the Western Hemisphere and are even older than the icebreakers. Replacing these 1960s-era cutters is justifiably the service’s top acquisition priority. The question here is not whether the Coast Guard wants new icebreaking capability, but rather how a new icebreaker stacks up against other, more urgent priorities in the context of current budget constraints.

The most appropriate funding solution is one that reflects the full breadth of inherently governmental interests in the Arctic, including safety, security, environmental protection, facilitation of maritime commerce and responsible economic development, national defense, and scientific research. In other words: funding from across the government to deliver a national, multi-mission asset.

The United States is considered an “Arctic Nation,” a term proudly used by policymakers to highlight our intrinsic national interests in the region and a profoundly basic yet important acknowledgement that Alaska and its associated territory above the Arctic Circle are indeed part of the United States. Unfortunately, the United States has yet to advance from this most basic construct of high latitude stakeholder to a proactive leadership and investment posture for the future. This is not for lack of “skin in the game,” since the United States has a legacy of well-documented interests in the Arctic, but for lack of consensus to make the region a national priority in the context of the current budget environment.

Whether via national crisis or a comprehensive budget deal, polar icebreakers must eventually become the subject of serious resource discussions, and should ultimately garner broad bipartisan support. At that time, additional funding should be appropriated to the Coast Guard to support the acquisition of the much-needed heavy icebreakers, but not at the expense of its other, more pressing recapitalization programs. Until then, let’s be more realistic about our ability and commitment to lead in the Arctic.

Authors

  • Jason Tama
  • Heather Greenley
  • David Barata
Image Source: © STR New / Reuters
      
 
 





Obama walking a razor’s edge in Alaska on climate change


In the summer of 1978, my grandfather George Washington Timmons, my cousin George, and I took the train from the Midwest across Canada and the ferry up the Pacific coast to Alaska. There we met up with my brother Steve, who was living in Anchorage. It was the trip of a lifetime: hiking, and fishing for grayling, salmon and halibut in Denali park, on the Kenai peninsula, Glacier Bay, and above the Arctic Circle in a frontier town called Fort Yukon, camping everywhere, and cooking on the back gate of my brother’s pickup truck. 

That Gramps had a Teddy Roosevelt moustache and a gruff demeanor gave the adventure a “Rough Riders” flavor. Like Teddy, the almost-indomitable GWT had given me a view of how experiencing a majestic land was a crucial part of becoming a robust American man. When we got home, he was diagnosed with lung cancer and died just a few months later.

We project all kinds of cultural images and values on the green screen of the American landscape. Those endless late June sunsets in the Crazy Mountains and the sun on the ragged peaks of the Wrangell Mountains represent for me a sense of the vastness of the state of Alaska and the need to balance preservation there with the needs of its people for resources and income. Certainly there is enough space in Alaska to drill for oil and protect large swaths in wildlife refuges and national parks. As leaders of the Inupiat Eskimo corporation put it in a letter to Obama, “History has shown us that the responsible energy development, which is the lifeblood of our economy, can exist in tandem with and significantly enhance our traditional way of life.”

Unfortunately, this view is outdated: that may once have been the case in Alaska, but there is a new, global problem that changes the calculus. As President Obama wraps up his historic visit to Alaska and his meeting with the Arctic climate resilience summit (the GLACIER Conference), he is walking a razor’s edge, delivering a delicately crafted message to two audiences. Each version is coherent by itself, but together they are contradictory, reflecting the cognitive dissonance of this administration on climate change.

Balancing a way of life with the future

For the majority of Alaskans, for businesses, and for more conservative audiences, Obama is proclaiming that Alaskan resources are part of our energy future. With oil providing 90 percent of state government revenues, that’s the message many Alaskans most ardently want to hear.

For environmentalists and for the nations of the world, Obama is making another argument. His stops were chosen to provide compelling visual evidence, now written across Alaska’s landscape, that climate change is real, it is here, Alaskans are already suffering, and we must act aggressively to address it. “Climate change is no longer some far-off problem; it is happening here, it is happening now … We’re not acting fast enough.”

This is a razor’s edge to walk: the Obama administration is criticized by both sides for favoring the other. Those favoring development of “all of the above” energy sources say that Obama’s Clean Power Plan has restricted coal use in America and that its future stages will make fossil fuel development even tougher. These critics believe Obama is driving up energy costs and hurting America’s economic development, even as oil prices drop to their lowest levels in years.

“Climate hawks,” on the other hand, worry that we are already venturing into perilous territory by dumping gigatons of carbon dioxide and other greenhouse gases into the atmosphere. The scientific consensus has shown for a decade that raising global concentrations of CO2 over 450 parts per million would send us over 3.6 degrees F (2 degrees C) of warming and into “dangerous climate change.” The Arctic is warming twice as fast as the global average, and though we are still below 1.8 degrees F of warming globally, many systems may already be reaching tipping points.

Melting permafrost in Alaska is already releasing the potent greenhouse gas methane and wreaking havoc on communities adapted to the cold. Foundations collapse, and roads sink and crumble. The melting of offshore ice leaves coastal communities more vulnerable to erosion and allows sunlight to warm the darker water below, leading to further warming.

The difficulty is that there is a limit to how much greenhouse gas we can pump into the atmosphere before we surpass the “carbon budget” and push the system over 3.6 degrees F. Which fossil reserves can be exploited, and how much of which ones must be kept in the ground, if we are to stay within that budget? Realistic and credible plans have to be advanced to limit extraction and combustion of fossil fuels until we have legitimate means of capturing and sequestering all that surplus carbon somewhere safe. It is a dubious and risky proposition to say that we can continue to expand production here in America while only other countries and regions cap their extraction.

Obama was elected partly because he did not reject natural gas and even coal development. He kept quiet about climate change during his entire first term, and he and Mitt Romney maintained a virtual compact of silence on the issue during the 2012 campaign. In his second term, however, Obama has become a global leader on the issue, seeking to inspire other countries to make and keep commitments to sharply reduce emissions. This work has yielded fruit, with major joint announcements with China last November and with Mexico in March, and a series of other nations coming in with pledges. The administration has been seeking to push the pledging process far enough to keep total global warming below 3.6 degrees F.

However, a just-released UNEP report shows that all the pledges so far—representing 60 percent of global emissions—add up to a reduction of only 4 to 8 gigatons of carbon from what would otherwise have been emitted. That is progress, but the report goes on to show that we are still 14 gigatons short of where we need to be to stay under 3.6 degrees F. Indeed, Climateactiontracker.org reports that we are still headed for 5.5 degrees F (3.1 degrees C) of warming with these pledges, down from 7 degrees without them.
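To make the arithmetic behind that gap explicit (a rough back-of-the-envelope reading of the figures quoted above, not a calculation taken from the UNEP report itself), the total cut needed below business-as-usual emissions is roughly the pledged reductions plus the remaining shortfall:

$$
\underbrace{4\ \text{to}\ 8\ \text{Gt}}_{\text{pledged cuts}} \;+\; \underbrace{14\ \text{Gt}}_{\text{remaining gap}} \;\approx\; 18\ \text{to}\ 22\ \text{Gt below business-as-usual.}
$$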

Each on their own climate change razor’s edge

This puts the administration and U.N. officials in the position of having to decide which message to put out there—the hopeful message that emissions are being reduced, or the more frustrating one that they are not being reduced nearly enough. Environmentalists are in a similar position with Obama in Alaska—do they criticize him for allowing Shell to drill in the Arctic, or praise him for being generally constructive in this year’s effort to reach a meaningful treaty in Paris in December? Is it possible to kiss Obama on one cheek while slapping him on the other?

This is the delicate political moment in which we find ourselves. Fossil fuel projects continue to be built that will lock us into carbon emissions for decades to come. They will certainly push us over the “carbon budget” we know exists, beyond which human civilization may be untenable on this planet. But these projects are advanced by extremely strong economic actors with mighty lobbying and public relations machines, and flatly opposing them is likely to get one portrayed as a Luddite seeking to send humanity back to the Stone Age. Clean energy alternatives exist, and they are increasingly affordable and reliable. Logically, we need to spend the remaining carbon budget on making the transition to a net zero emissions economy, not on continuing the wasteful one we have now.

Players on both sides of this debate will seek to deploy Alaska’s majestic landscape to win their case. I’m fairly sure on which side my grandfather George Washington Timmons would have stood: he was a building contractor and would sometimes estimate the number of 2x4s one could harvest from a giant tree. But he didn’t know about the global carbon budget—he loved his children and grandchildren, and I think he would have supported living within our means had he been fully aware of this problem. The original Rough Rider, Teddy Roosevelt, himself went from avid hunter to devoted conservationist as he learned of the damage over-cutting was causing American forests. As Obama said in Alaska, “Let’s be honest; there’s always been an argument against taking action … We don’t want our lifestyles disrupted. The irony, of course, is that few things will disrupt our lives as profoundly as climate change.”

That is the political razor’s edge the president—and all of us—have to walk today, as we make the inevitable transition away from fossil fuel development.






The U.S. still needs Arctic energy


Editors' Note: America has fallen behind its economic competitors—namely Russia and China—in Arctic resource and infrastructure investment. Charles Ebinger argues that the United States must better define its resource development policies and priorities in order to ensure U.S. leadership in the Arctic. This piece was originally published on Forbes.

The recent decision by the United States to allow energy exploration drilling to re-commence in the Alaskan Arctic’s Chukchi Sea this summer is a welcome development. Here’s why: Federal waters in offshore Alaska are estimated to hold roughly 27 billion barrels of oil and 132 trillion cubic feet of natural gas, the vast majority of which is located in the Arctic. Experts believe that the Chukchi in particular, which holds more resources than any other undeveloped U.S. energy basin, may represent one of the world’s largest sources of untapped oil and gas.

Until now America has regrettably been on the sidelines of Arctic resource and infrastructure investment while our economic competitors—Russia and China included—have moved forward. This policy vacuum was highlighted in a recent National Petroleum Council (NPC) report to the U.S. Secretary of Energy in which I participated and which warned that if we effect no policy changes on an urgent basis we will not stay ahead of or even keep pace with our foreign rivals, remain globally competitive, or provide global leadership and influence in this critical region.

America is more energy self-sufficient than it has ever been

The report comes at a time when the U.S. has drastically cut imports and transformed itself into the biggest producer of oil and natural gas by tapping huge reserves in shale rock formations across the country. As such, America is more energy self-sufficient than it has ever been. Even so, as evidenced by strong public support for Arctic offshore development in states ranging from Alaska to Iowa, South Carolina, and New Hampshire, the American people recognize that we cannot rely solely on shale oil and gas to meet our energy needs.

To that point, as the NPC noted, if we fail to develop the enormous trove of reserves in Arctic waters off Alaska, the U.S. risks a renewed reliance on overseas energy in the future and will have missed a prime opportunity to keep domestic production high and imports and consumer costs low.

As President Obama rightly stated shortly after the Chukchi drilling plan was conditionally approved in May, “When it can be done safely and appropriately, U.S. production of oil and natural gas is important. I would rather us—with all the safeguards and standards that we have—be producing our oil and gas, rather than importing it, which is bad for our people, but is also potentially purchased from places that have much lower environmental standards than we do.”

We have to take actions that allow exploration to commence now 

Indeed, given the long lead time necessary to develop resources in this region, the NPC study stressed that it is vital for the U.S. to take actions now that allow exploration in Alaskan Arctic waters to commence. In that regard, the recent approval for Arctic offshore drilling to occur this summer was a win both for Alaska, which depends on the petroleum industry to fund approximately 90 percent of its budget, and for the country at large, which leans on Alaskan energy to meet our daily needs, especially on the West Coast.

To ensure the long-term feasibility of offshore development in the region, Interior Department regulations for the U.S. Arctic in part must facilitate the use of proven technologies and also encourage innovation by providing the flexibility to incorporate future technologies as advances occur and their capacities are demonstrated. In addition, and all the more significant given our accession to chairmanship of the Arctic Council in May, U.S. policies governing natural resource development in the Arctic must be defined and streamlined.

Questions Washington has to answer if the U.S. wants to ensure its leadership in the Arctic

For example, what is the country’s official position on the development of oil, gas, mineral and fishery resources in the Arctic? Does it align with Alaska’s policies? How will resource development affect standards of living for those residing in the region?

In addition to resource development policies, other important questions must be addressed to ensure U.S. leadership in the Arctic. With Prudhoe Bay production in serious decline and the Trans-Alaska Pipeline System running at historically low throughput levels, how will the U.S. ensure access to new sources like Alaska’s Arctic offshore that can help all Americans? With just one heavy icebreaker in operation, and the cost of another tallying at least $700 million, what actions are we prepared to take to build a fleet capable of meeting the demands in an increasingly active region?

These are just a few of the questions and concerns that Washington, D.C. will have to answer soon if the U.S. stands a chance of catching up to or surpassing other nations that have so far leapt ahead to the front of the Arctic line. Will President Obama rise to the occasion and make the right decisions?

      
 
 





With Russia overextended elsewhere, Arctic cooperation gets a new chance


Can the United States and Russia actually cooperate in the Arctic? It might seem like wishful thinking, given that Russian Prime Minister Dmitry Medvedev asserted in a speech at the Munich Security Conference that there is in fact a “New Cold War” between the two countries. Many people—at that conference and elsewhere—see the idea as far-fetched. Sure, Russia is launching air strikes in what has become an all-out proxy war in Syria, continues its aggression against Ukraine, and has increased its military build-up in the High North. To many observers, the notion of cooperating with Russia in the Arctic was a non-starter as recently as mid-2015. There have, however, been significant changes in Russia’s behavior in the last several months—so maybe it is possible to bracket the Arctic out of the evolving confrontation.

These and other matters were the subject of discussion at a recent conference at the Harriman Institute of Columbia University in New York, in which we had the pleasure of taking part last week.

Moscow learns its limitations

Russia steadily increased its military activities and deployments in the High North until the autumn of 2015, including by creating a new Arctic Joint Strategic Command. There have been, however, indirect but accumulating signs of a possible break from this trend. Instead of moving forward with building the Arctic brigades, the Russian top brass now aims at reconstituting three divisions and a tank army headquarters on the “Western front” in Russia. News from the newly reactivated airbases on Novaya Zemlya and at other remote locations is primarily about workers’ protests over non-payment and non-delivery of supplies. The snap exercises that used to be so worrisome for Finland and Norway are now conducted in the Southern military district, which faces acute security challenges. Russia’s new National Security Strategy, approved by President Vladimir Putin on the last day of 2015, elaborates at length on the threat from NATO and the chaos of “color revolutions,” but says next to nothing about the Arctic.

The shift of attention away from the Arctic coincided with the launch of Russia’s military intervention in Syria, and was strengthened by the sharp conflict with Turkey. Deputy Prime Minister Dmitri Rogozin—who used to preside over the military build-up in the High North—is these days travelling to Baghdad instead. Sustaining the Syrian intervention is a serious logistical challenge on its own; add low oil prices into the mix, which have thrown the Russian state budget and funding for major rearmament programs into disarray, and it’s clear that Russia is in trouble.


The government is struggling to allocate painful cuts in cash flow, and many ambitious projects in the High North are apparently being curtailed. In the squabbles over dwindling resources, some in the Russian bureaucracy point to the high geopolitical stakes in the Arctic—but that argument has lost its persuasive power. The threats to Russian Arctic interests are in fact quite low, and Russia’s claim to expanded control over the continental shelf (presented at the U.N. earlier this month) depends upon consent from its Arctic neighbors.

Let’s work together

Chances for cooperation in the Arctic are numerous, as we and our colleagues have described in previous studies. The current economic climate (falling oil prices, which make additional energy resource extraction in most of the Arctic a distant-future scenario), the geopolitical climate (sanctions on Russia targeting, among other things, Arctic energy extraction), and budget constraints on both ends (Russia for obvious reasons, the United States because it chooses not to prioritize Arctic matters) urge us to prioritize realistically.

  • Improving vessel emergency response mechanisms. Though many analysts like to focus on upcoming resource struggles in the Arctic, the chief concern of naval and coast guard forces there is actually increased tourism. Conditions are very harsh most of the year and can change dramatically and unexpectedly. Given the limited capacity of all Arctic states to navigate Arctic waters, a tourist vessel in distress is probably the main nightmare scenario for the short term. Increased cooperation to optimize search and rescue capabilities is one way to prepare as much as possible for such an undesirable event. 
  • Additional research on climate change and methane leakage. Many questions remain regarding the changing climate, its effects on local flora and fauna, and the long-term consequences for indigenous communities. An elephant in the room, increasingly appreciated in the scientific community, is the methane trapped in permafrost layers: as the Arctic ice thaws, significant amounts of methane may be released into the atmosphere, further exacerbating global warming.
  • Expanding oil emergency response preparedness. The current oil price slump has likely put the brakes on most Arctic exploration in the short term. We also believe that, unless all long-term demand forecasts are false, an additional 15 million barrels of oil per day will be needed by 2035 or so—the Arctic is still viewed as one of the last frontiers where this precious resource may be found. At the moment, Arctic states are ill-prepared to deal with a future oil spill, and more has to be learned about, for instance, oil recovery on ice and in snow. The Agreement on Cooperation on Marine Oil Pollution Preparedness and Response in the Arctic was an important first step.
  • Preparing Bering Strait for increased sea traffic. As the Arctic warms, increased sea traffic is only a matter of time. The Bering Strait, which is only 50 miles wide at its narrowest point, lacks basic communication infrastructure, sea lane designation, and other critical features. This marks another important and urgent area of cooperation between the United States and Russia, even if dialogue at the highest political level is constrained. 

Can the Arctic be siloed?

There is no doubt that the current cooled climate between Russia and the other Arctic states, in particular the United States, complicates an ongoing dialogue. It is even true that it may prohibit a meaningful conversation about certain issues that have already been discussed. 

Skeptics will argue that it is unrealistic to isolate the Arctic from the wider realm of international relations. Though we agree, we don’t think leaders should shy away from political dialogue altogether. To the contrary, in complicated political times the stakes are even higher: leaders should continue existing dialogues where possible and go the extra mile to preserve what can be preserved. Russia’s desire to expand its control over the Arctic shelf is entirely legitimate—and opens promising opportunities for conversations on issues of concern to many states, including, for that matter, China.

Realists in the United States prefer to focus on expanding American military capabilities, their prime argument being that Russia has significantly more capacity in the Arctic. While we would surely agree that America’s current Arctic capabilities are woefully poor, as our colleagues have described, an exclusive focus on that shortcoming may send the wrong signal. 

We would therefore argue in favor of a combined strategy: making additional investments in U.S. Arctic capabilities while doubling down on diplomatic efforts to preserve the U.S.-Russian dialogue in the Arctic. That may not be easy, but given the tremendous success of a constructive approach in the Arctic in recent years, this is something worth fighting for. Figuratively speaking, that is.

      
 
 





Rewarding Work: The Impact of the Earned Income Tax Credit in Chicago

The federal Earned Income Tax Credit (EITC) will boost earnings for over 18 million low-income working families in the U.S. by more than $30 billion this year. This survey finds that the EITC provided a $737 million boost to the Chicago regional economy in 1998, and lifted purchasing power in the city of Chicago by an average of $2 million per square mile. Large numbers of low-income working families lived not only in inner-city Chicago neighborhoods, but also in smaller cities throughout the region such as Aurora, Joliet, Elgin, and Waukegan. The survey concludes by describing steps that state and local leaders could take to build on existing efforts to link working families to the EITC, such as increasing resources for free tax preparation services, helping EITC recipients open bank accounts, and expanding and making refundable the Illinois state EITC.
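For a rough sense of scale at the household level (a back-of-the-envelope average implied by the figures above, not a statistic reported in the survey itself):

$$
\frac{\$30\ \text{billion}}{18\ \text{million families}} \approx \$1{,}670\ \text{per family per year.}
$$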

 

EITC National Report
Read the national analysis of the Earned Income Tax Credit in 100 metropolitan areas. It finds that the EITC provided a $17 billion stimulus to these metro areas in 1998, and that the majority of EITC dollars flowed to the suburbs.
National Report 10/01
EITC Regional Reports
Read the local analysis of the Earned Income Tax Credit in 29 metropolitan areas. Using IRS data to analyze the spatial distribution of working poor families, the surveys find that the EITC is a significant federal antipoverty investment in cities and their regions.
29 Metro Area Reports  6/01

 


      
 
 





COVID-19 is a health crisis. So why is health education missing from schoolwork?

Nearly all the world’s students—a full 90 percent of them—have now been impacted by COVID-19 related school closures. There are 188 countries in the world that have closed schools and universities due to the novel coronavirus pandemic as of early April. Almost all countries have instituted nationwide closures with only a handful, including the United States, implementing…

       





Coronavirus and challenging times for education in developing countries

The United Nations recently reported that 166 countries closed schools and universities to limit the spread of the coronavirus. One and a half billion children and young people are affected, representing 87 percent of the enrolled population.  With few exceptions, schools are now closed countrywide across Africa, Asia, and Latin America, putting additional stress on…

       





Why Boko Haram in Nigeria fights western education

The terrorist group Boko Haram has killed tens of thousands of people in Nigeria, displaced millions, and infamously kidnapped nearly 300 schoolgirls in 2014, many of whom remain missing. The phrase “boko haram” translates literally as “Western education is forbidden.” In this episode, the author of a new paper on Boko Haram talks about her research…

       





Mexico’s COVID-19 distance education program compels a re-think of the country’s future of education

Saturday, March 14, 2020 was a historic day for education in Mexico. Through an official statement, the Secretariat of Public Education (SEP) informed students and their families that schools would close to reinforce the existing measures of social distancing in response to COVID-19 and in accordance with World Health Organization recommendations. Mexico began to implement…

       





Adapting approaches to deliver quality education in response to COVID-19

The world is adjusting to a new reality that was unimaginable three months ago. COVID-19 has altered every aspect of our lives, introducing abrupt changes to the way governments, businesses, and communities operate. A recent virtual summit of G-20 leaders underscored the changing times. The pandemic has impacted education systems around the world, forcing more…

       





The fundamental connection between education and Boko Haram in Nigeria

On April 2, as Nigeria’s megacity Lagos and its capital Abuja locked down to control the spread of the coronavirus, the country’s military announced a massive operation — joining forces with neighboring Chad and Niger — against the terrorist group Boko Haram and its offshoot, the Islamic State’s West Africa Province. This spring offensive was…

       





Trade Policy Review 2016: Russian Federation

Each Trade Policy Review consists of three parts: a report by the government under review, a report written independently by the WTO Secretariat, and the concluding remarks by the chair of the Trade Policy Review Body. A highlights section provides an overview of key trade facts. 15 to 20 new review titles are published each […]

      
 
 





India Policy Forum 2004 - Volume 1: Editors' Summary

This inaugural issue of the India Policy Forum, edited by Suman Bery, Barry Bosworth and Arvind Panagariya, includes papers on the trade policies that would do the most to enhance India’s future growth prospects, analyses of recent developments in India’s balance of payments and an examination of the performance of the Indian banking system. The editors' summary appears below, and you can download a PDF version of the volume, purchase a printed copy, or access individual articles by clicking on the following links:

Download India Policy Forum 2004 - Volume 1 (PDF) »
Purchase a printed copy of India Policy Forum 2004 - Volume 1 »

Download individual articles:


 

EDITORS' SUMMARY

The India Policy Forum (IPF) is a new journal, jointly promoted by the National Council of Applied Economic Research (NCAER), New Delhi, and the Brookings Institution, Washington, D.C., that aims to present high-quality empirical analysis on the major economic policy issues that confront contemporary India. The journal is based on papers commissioned by the editors and presented at an annual conference. The forum is supported by a distinguished advisory panel and a panel of active researchers who provide suggestions to the editors and participate in the review and discussion process. The need for such real-time quantitative analysis is particularly pressing for an economy like India’s, which is in the process of rapid growth, structural change, and increased involvement in the global economy. The founders of the IPF hope it will contribute to enhancing the quality of policy analysis in the country and stimulate empirically informed decisionmaking. The style of the papers, this editors’ summary, and the discussants’ comments and general discussions are all intended to make these debates accessible to a broad nonspecialist audience, inside and outside India, and to present diverse views on the issues. The IPF is also intended to help build a bridge between researchers inside India and researchers abroad, nurturing a global network of scholars interested in India’s economic transformation.

The first India Policy Forum conference took place at the NCAER in Delhi on March 26–27, 2004. In addition to the working sessions, the occasion was marked by a public address given by Stanley Fischer, vice chairman with Citigroup International and a member of the IPF advisory panel. This inaugural issue of the IPF includes the papers and discussions presented at that conference. The papers focus on several contemporary policy issues. The first two papers provide alternative perspectives on the trade policies that would do the most to enhance India’s future growth prospects in the context of ongoing developments in the global trading system. The three papers that follow are devoted to an analysis of recent developments in India’s balance of payments and their implications for the future exchange rate regime, the integration of exchange rate policy with other aspects of macroeconomic policy, and capital account convertibility, respectively. The sixth paper is devoted to an examination of the performance of the Indian banking system and the implications of the dominant role of government-run banks.

India's Trade Reform, by Arvind Panagariya

The first paper, by Arvind Panagariya, provides a broad review of India’s external sector policies; the impact of these policies on trade flows, efficiency, and growth; and the future direction trade policies should take. Since trade policies are a means to an end, namely faster growth and improved efficiency, and since trade policies support other domestic policies, Panagariya’s review necessarily ranges into these areas as well. Finally, to place India’s performance in perspective, Panagariya makes extensive comparisons throughout between Indian and Chinese outcomes over the past two decades (1980–2000), a period when both economies have chosen to reintegrate into the world economy.

India’s growth experience since 1950 falls in two phases. The first thirty years were characterized by steady growth of around 3.5 percent; thereafter growth has tended to stay in the 5 to 6 percent range. Panagariya links this differential growth performance with the imposition and subsequent relaxation of microeconomic controls, particularly in the external sector. In turn he divides these external sector policies into three phases. Between 1950 and 1975 the trend was toward virtual autarky, particularly after a balance of payments crisis in 1956–57. This was succeeded by a period of “ad hoc liberalization” starting around 1976, when reform of quantitative restrictions on trade was complemented by deregulation of industrial licensing in certain sectors. A further balance-of-payments crisis in the period from late 1990 to early 1991, concurrent with a general election, provided the background for a switch to deeper and more systematic liberalization, which, in fits and starts, continues today.

In the merchandise trade area the focus of reform has been to reduce tariff levels, particularly on nonagricultural goods. This has been done by gradually reducing the peak rate and reducing the number of tariff bands. In 1990–91 the peak rate stood at 355 percent, while the simple average of all tariff rates was 113 percent. By early 2004 the peak rate on individual goods was down to 20 percent, though there were notable exceptions, such as chemicals and transport equipment. Similarly, there has been less than ideal progress in reducing end-user and other exemptions. In nonindustrial areas there has been substantial liberalization of trade (and investment) in services, but following the OECD example, less in agriculture.

Panagariya next reviews the impact of this liberalization on trade flows, on efficiency, and on growth, in many cases using China as a benchmark. India’s share in world exports of goods and services—which had declined from 2 percent at Indian independence in 1947 to 0.5 percent in the mid-1980s—bounced back to 0.8 percent in 2002, implying that for roughly twenty years India’s trade has grown more rapidly than world trade. In addition, the deeper reforms of the 1990s yielded a pick-up of almost 50 percent over the previous decade, from 7.4 percent to 10.7 percent. Encouraging though these numbers are in light of India’s past performance, they pale in comparison with the Chinese record over the same period. Aside from any issues that may arise in the measurement of Chinese GDP at a time of rapid institutional and economic change, the combined share of exports and imports of both goods and services rose in China from 18.9 percent in 1980 to 49.3 percent in 2000, according to World Bank data. For India, the comparable numbers were 15.9 percent (in 1980) and 30.6 percent (in 2000).
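A quick check of the “almost 50 percent” figure (assuming, as the phrasing suggests, that 7.4 percent and 10.7 percent are the decade-average growth rates being compared):

$$
\frac{10.7 - 7.4}{7.4} \approx 0.45,
$$

that is, a pick-up of roughly 45 percent, in line with “almost 50 percent.”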

The increase in India’s trade intensity has been accompanied by significant shifts in composition. The most dramatic has been the increased share of service exports in the 1990s. Within industry, exporting sectors with above-average growth tended to be skill- or capital-intensive rather than labor-intensive, while on the import side the share of capital goods imports declined sharply. In the area of services, rapid growth was exhibited by software exports and recorded remittances from overseas Indians. However, tourism receipts remain below potential. With regard to trade partners, the main shift over the 1990s was a move away from Russia toward Asia, particularly developing Asia. An interesting recent development has been the rapid expansion of India’s trade with China.

Panagariya then reviews the evidence on the impact of liberalization on static efficiency and on growth. One common approach is to use a computable general equilibrium (CGE) model to estimate the effects of the removal of trade distortions. The one study cited estimates the impact as raising GDP permanently by 2 percentage points. Additional domestic liberalization could raise this figure to 5 percentage points. Panagariya argues, however, that such models miss some key sources of gains. He cites two in particular: the disappearance of inefficient sectors and improvements in product quality. In addition, disaggregated analysis at the five-digit SITC level reveals far more dynamism in product composition of both exports and imports than is revealed at the two-digit level. This suggests greater gains from trade and improved welfare from enhanced choice than is captured in more aggregate models.

The links between liberalization and aggregate growth—or growth in total factor productivity (TFP)—have been controversial both in India and elsewhere in the emerging economies of Asia. In the case of India, the focus has been almost exclusively on manufacturing. After reviewing several studies, which admittedly differ in methodology and data quality, Panagariya judges that the weight of the evidence indicates that trade liberalization has led to productivity gains. Notwithstanding this reasonably positive assessment, Panagariya reminds us that overall, Indian industry’s performance in the 1980s and 1990s has been pedestrian, particularly compared with that of services.

The poor performance of Indian industry and the stronger growth performance of Chinese industry form the backdrop for Panagariya’s final section, on future policy. He discusses four issues: domestic policies bearing on trade; autonomous liberalization; regional trade agreements; and India’s participation in multilateral negotiations. With regard to the first, the central question for Panagariya is why Indian industry’s response to liberalization has been more sluggish than China’s. Panagariya attributes this in part to differences in economic structure but also to differences in the two countries’ domestic policies. He argues that it is easiest to expand trade in industrial products, and it is easier to do so if the industrial sector represents a large share of national value added. As far back as 1980, the share of industry in China was 48.5 percent, while in India it was half that, at 24.2 percent. Two decades later things are not very different. Panagariya makes a further interesting point: a relatively small industrial sector also reduces the capacity of the economy to absorb imports, leading to a tendency toward exchange rate appreciation (although even China has not been immune from this tendency). He concludes that it is imperative to stimulate industrial growth and cites reform in three areas as being essential: reduction of the fiscal deficit; reduction and ultimately elimination of the list of manufactured products “reserved” for small-scale industry; and reform of the country’s labor laws, which make reassignment or retrenchment of workers prohibitively difficult in the so-called formal or organized sector.

Turning next to autonomous trade reform, Panagariya is critical of the view, widely held in India, that the tariff structure ought to favor final goods over intermediates. He also notes that the current tariff structure remains riddled with complexity. He urges the authorities to move quickly to a single uniform tariff of 15 percent for nonagricultural goods and then to a uniform 5 percent tariff by the end of the decade. With regard to agriculture, Panagariya points out that India stands to gain from autonomous tariff liberalization given its potential as an agricultural exporter. He also addresses the issue of "contingent protection," noting that India's liberal use of antidumping measures has clearly had protectionist intent. Panagariya urges changes in the antidumping procedures currently in place and also greater use of safeguard measures, since these are applied on a nondiscriminatory basis to all trading partners.

While India has traditionally taken comfort in a multilateral rule-based system of international trade, it has more recently embarked on an ambitious program of regional trade negotiations. It has signed free trade area (FTA) agreements with Sri Lanka and Thailand and is in the advanced stages of negotiating an FTA with Singapore. Panagariya analyzes the global, regional, and domestic factors that have brought about this shift in strategy—essentially the weakening of the U.S. commitment to multilateral negotiations, together with political imperatives. Panagariya observes that for a relatively protected economy, trade diversion and the associated revenue loss should be important concerns. He is also concerned that preoccupation with FTAs diverts attention from both unilateral liberalization and multilateral negotiations, each of which yields greater return for the effort expended. However, Panagariya concedes that there is a strategic case for FTAs, both to exert leverage in the multilateral sphere and to create a template that reflects India’s interests in future bilateral and multilateral negotiations. In this context he is critical of the template developed in the agreement on the South Asian Free Trade Area (SAFTA), which, in his view, is cluttered with many nontrade issues. In the specific case of a U.S.-India FTA, he believes that there is a strong case for an agreement in services, with mutually beneficial exchange of market access.

The paper ends with a discussion of India's interests in ongoing multilateral trade negotiations. Panagariya's main point is that India has a strong interest in a successful conclusion of the Doha Round and could agree to the U.S. proposal aimed at eliminating tariffs on industrial goods by 2015. As noted before, India also has interests in improved market access in agriculture; given the considerable "water" in its bound tariffs (the gap between bound and applied rates), some concessions should be possible, particularly if accompanied by reductions in subsidies by rich countries.

Should a U.S.-India FTA Be Part of India's Trade Strategy? by Robert Z. Lawrence and Rajesh Chadha

The 1990s and the new millennium have seen a massive proliferation of preferential trade arrangements (PTAs), which typically lead to free trade among two or more countries, as, for example, under the North American Free Trade Agreement (NAFTA). Until recently, Asian countries had more or less stayed away from these arrangements, but this is changing rapidly, with many countries in the region now forging free trade areas. In their paper, Robert Lawrence and Rajesh Chadha assess the likelihood and benefits of the negotiation of a free trade area between India and the United States. Like Panagariya, Lawrence also embeds his discussion of India’s trade policy within the framework of the larger Indian reform effort.[1] Following Ahluwalia, he characterizes Indian reform since 1991 as incremental, not radical.[2] While there has been deepening consensus about the broad direction of reform within the policy elite, excessive clarity on endpoints and on the pace of transition is seen to be politically risky. Trade policy reform has been an important part of this liberalization effort, and it has been similarly characterized by a clear direction but fitful implementation and shifting promises as to endpoints.

Lawrence accepts that this strategy has been relatively successful in producing steady growth without major policy reversals or financial crises over the last decade. Yet, like Panagariya, he notes that trade reform is a job only half done. India’s tariff rates remain among the world’s highest, and there remain significant barriers to foreign investment. Within India, there continues to be political resistance to liberalization. Lawrence asks what the best trade and reform strategy for India is now, given the tasks yet to be accomplished.

Lawrence articulates three options available to India at this time: continued incremental unilateralism dictated, as in the past, by domestic concerns and feasibility; more active engagement with multilateral negotiations through the World Trade Organization (WTO); and what he calls a multitrack approach, whereby deeper bilateral free trade agreements complement the first two channels. Within this larger context the specific question he explores in depth is what role might be played by an FTA between India and the United States. He recognizes that consideration of such an FTA is at best at a nascent stage in official circles and that it is far from being an idea whose time has come. Nonetheless, his core thesis is that given India’s domestic reform goals, a multitrack approach centered on a U.S.-India FTA would be superior to excessive reliance on the WTO, given likely outcomes under the ongoing Doha Round. This is the argument that the paper attempts to substantiate.

Lawrence first considers a purely defensive motive for such an FTA. From this perspective, the key issue is to establish a legal and institutional framework for keeping trade in information technology (IT) services free. Noting the rapid growth in India’s export of such services, Lawrence cites studies that suggest that this trade is still in its infancy. Given that the United States is currently the destination of two-thirds of India’s IT services exports—and that this share could well be maintained—trade between the United States and India has the potential to become one of the most dynamic examples of trade in global commerce.

Will this growth be allowed to take place? Protectionist pressures in the United States already are strong. Outsourcing is headline news in the United States, and federal and state governments are taking politically visible stands to restrict the practice under government contracts. While some of this is undoubtedly election year politics, preserving access for India in the U.S. market is a genuine challenge. Lawrence explores various options available to India to preserve its access, including through the General Agreement on Trade in Services (GATS) within the WTO. He notes that GATS operates on a positive list approach, which can create some ambiguity as to what forms of market access have been bound. By contrast, services liberalization in U.S. bilateral agreements already uses a negative list approach: trade is allowed unless it has specifically been excluded.

Lawrence then explores the possibility, from the U.S. perspective, of an FTA with India. He notes that the United States first moved away from exclusive reliance on multilateral negotiations as far back as the 1980s, when it signed FTAs with Canada and Israel, followed by NAFTA, which took effect in 1994. Under the Bush administration the pace of negotiation of bilateral agreements has accelerated dramatically. Agreements with Chile, Singapore, and Jordan have been implemented; those involving the Central American Free Trade Agreement (CAFTA), Morocco, and Australia have been completed; and numerous others are either under active negotiation or planned.

In this environment Lawrence believes that an FTA with India would be seen by the U.S. authorities as being of great strategic interest in the larger U.S. negotiating strategy but also politically difficult to achieve, given the current mood in Congress. But he is skeptical of the possibility that such an agreement could be restricted to services alone—as proposed, for example, by Panagariya and by a recent task force of the Council on Foreign Relations. The United States is unlikely to forgo the opportunity of obtaining preferential access for the exports of its goods to the Indian market. In addition, omitting goods trade entirely from an agreement with India would create a difficult precedent for the United States in its other FTA negotiations, in which, with few exceptions, there have not been sectoral opt-outs.

Accordingly, in his discussion Lawrence deals with the case for a comprehensive U.S.-India FTA with most of the features of those that the United States already has concluded. These include a negative list for services; investment provisions with a few sectoral exclusions; full national treatment for U.S. companies; intellectual property rules that might be more comprehensive than those in the WTO; and additional provisions relating to labor, environmental standards, technical barriers, and government procurement. While the phase-in periods may differ for the two sides, once the agreement was fully implemented (generally in fifteen years), the obligations would be symmetric.

Lawrence readily concedes that willingness to sign an FTA of this scope with the United States would be a radical departure for India in a number of respects. While much Indian trade liberalization has been unilateral, India has so far been a strong advocate of multilateral trading rules; even there, however, its efforts have concentrated on obtaining special and differential treatment for developing countries. As Panagariya has also noted, India has only lately entered the game of bilateral FTAs, so far with countries in Asia, but even in terms of goods trade these have not been comprehensive. A U.S.-India FTA would have major implications for India’s trade and domestic policies. It is the positive (or offensive) case for such a radical shift that Lawrence next examines.

He starts by offering some hypotheses on the political economy of liberalization. At the beginning, an opportunistic and piecemeal approach may be necessary to create constituencies for liberalization. But unilateralism carries the risk of reversal, and such policy uncertainty can inhibit the private investment decisions needed to shift the economy in the direction of its comparative advantage. Trade agreements, whether bilateral, regional, or multilateral, can impart credibility to commitments by the home government, making it more likely that liberalization will be successful. Such enhanced credibility is not costless, however. In contrast to an incremental approach, a comprehensive agreement means that many political battles have to be conducted simultaneously. This drawback can be offset by reciprocity, which can be used to build coalitions of exporters who favor the trade reform. A further set of allies is provided by proponents of domestic reform, who can argue that the reforms necessary for growth at home can also deliver improved access to international markets. Lawrence believes that such a strategy was followed by the Chinese in connection with their accession to the WTO.

If these are some of the benefits of comprehensive reciprocal agreements, the question of what type of reciprocal agreements, multilateral or bilateral, remains. This is the choice addressed by Lawrence in the remainder of the paper. In making his assessment, Lawrence uses as a yardstick the impact of each of the two routes in assisting India to undertake changes in its own interest while avoiding constraints that have the potential to damage its welfare.

In order to assess the impact of a U.S.-India FTA, Lawrence examines some of the FTAs that the United States has recently negotiated. His review makes it clear that the institutional changes needed in the Indian economy would indeed be deep, but in most areas they would prod Indian policymakers to move in directions that are inherently desirable. A particular concern of Indian policymakers is the introduction of labor and environmental standards through an FTA, and Lawrence clears up several misconceptions in this area. Recent bilateral agreements place the emphasis on each government enforcing its own domestic environmental and labor laws and not weakening those laws or reducing protections to encourage trade or investment. While these obligations are backed by the dispute settlement provisions of the agreements, trade measures may not be used to retaliate. On balance, implementing a U.S.-India FTA at this time would probably help to bolster and accelerate many dimensions of economic reform, but Lawrence notes that the benefits depend crucially on taking a range of complementary actions. Failure to do so could leave conditions worse than before.

Lawrence then examines whether a successful conclusion to the Doha Round could deliver equivalent benefits to the cause of Indian reform. In so doing he notes that those who argue for exclusive reliance on multilateral liberalization compare actual FTAs with an idealized version of multilateral liberalization. But actual achievement under multilateral liberalization is heavily conditioned by the specific rules of trade negotiations, which may not result in significant domestic liberalization at all. As a developing country, India benefits from the “special and differential treatment” provisions of the General Agreement on Tariffs and Trade (GATT) while also enjoying the most-favored-nation guarantees of the multilateral system. An additional institutional feature is the gap between applied and bound tariffs, which is particularly large where agricultural goods are concerned. A final feature is what Lawrence (following Jagdish Bhagwati) calls “first difference” reciprocity, where the offers made by each nation are measured against their protection levels at the beginning of the round.

Taking these elements into account and reviewing the actual performance of past rounds in reducing industrial tariffs, Lawrence comes to the strong conclusion that the current WTO system actually makes it difficult for a developing country like India to use WTO agreements to support meaningful liberalization; he also believes that the diffuse reciprocity involved in the most-favored-nation system is not a strong catalyst for rallying exporter interests in favor of import liberalization.

Having provisionally concluded that an FTA would be of greater assistance than exclusive reliance on multilateral negotiations, Lawrence then explores the benefits to India of blending the two approaches in what he calls a multitrack approach. In his view, a U.S.-India FTA would certainly make India a more attractive negotiating partner for third countries hoping to match the access obtained by U.S. firms. Equally, assuming that it preceded the conclusion of the Doha Round, willingness to sign an FTA with the United States would also improve India’s negotiating credibility in the multilateral sphere. India could then challenge developed countries to improve their own offers dramatically by indicating a willingness to engage in extensive multilateral liberalization itself. A comprehensive FTA with India would also be of strategic importance to the United States in its current policy of competitive liberalization. This would strengthen India’s hand in its negotiations with the United States, while strengthening the U.S. hand in negotiating with other significant but reluctant partners.

The paper ends with some quantitative welfare simulations undertaken by Lawrence’s coauthor, Rajesh Chadha of the NCAER, using a computable general equilibrium model of world production and trade developed by the NCAER and the University of Michigan. The simulations deal only with the impact of liberalization on trade in goods. The model is designed to capture the long-run impact of an agreement. More crucially, it is a real model that holds employment and the trade balance constant; as such, it reflects outcomes once the second-round adjustments needed to restore full employment after an initial trade shock have worked themselves out.

A U.S.-India FTA is compared first with the current situation and then with a number of counterfactuals. The results reveal that aggregate welfare gains are greatest under multilateral liberalization, next greatest under unilateral liberalization in each country, and least under a bilateral FTA, but the authors note that even in the last case the effects are positive. The results also point to asymmetries between the United States and India in unilateral and multilateral liberalization, given the differences in the openness of the two economies. Indian and world welfare both rise significantly when India liberalizes unilaterally, while for the United States the greatest welfare gains flow from multilateral liberalization.

Lawrence concludes that the more difficult decision facing India today is whether to opt for reciprocal approaches in lieu of the unilateral approach that it has traditionally pursued. There are gains in credibility to be achieved, but these could entail reduced policy space and require a significant agenda of complementary reform to achieve their full effect. Should India choose to pursue the reciprocal route, he suggests a U.S.-India FTA as worthy of serious consideration, precisely because of its comprehensive and deep character.

Foreign Inflows and Macroeconomic Policy in India, by Vijay Joshi and Sanjeev Sanyal

India has had a turnaround in its balance of payments in recent years, with a swing in the current account from a deficit to a surplus and rapid growth in the capital account surplus. It has used those inflows to build up substantial holdings of foreign exchange reserves that now stand at $120 billion. While the initial reserve accumulation was welcome insurance against the risk of unanticipated future outflows, the current level is adequate to meet any foreseeable challenge, and policymakers need to develop an exchange rate policy that goes beyond simple reserve accumulation. Should India accelerate the process of capital account liberalization, perhaps allowing the export of capital by residents? Should it allow an appreciation of the exchange rate or speed up the liberalization of the trade regime? Above all, how should exchange rate policy be integrated with the broader concerns of domestic economic policy?

In their paper, Vijay Joshi and Sanjeev Sanyal provide a broad review of the external aspects of Indian macroeconomic policy over the past decade. They use that review as the backdrop for a discussion of the policy options open to India in the future, posing the question of how economic policy should respond to the continuation of the strong balance-of-payments position of recent years. In their answer, they argue in favor of a combination of accelerated import liberalization on the external side and domestic fiscal consolidation. In particular, they view trade liberalization, which provides a means of absorbing continued capital inflows without constraining the competitiveness of the export sector, as an alternative to exchange rate appreciation.

In reviewing the economic events of the 1990s, they emphasize the degree to which India relied on an extensive system of capital controls. Foreign direct investment and portfolio investment inflows were gradually liberalized and foreign investors could freely repatriate their investments, but capital outflows by residents were prohibited. Offshore borrowing and lending by Indian companies and banks were also strictly limited. The capital controls allowed Indian monetary policy to maintain a relatively fixed exchange rate regime with minimal conflict with domestic economic policy. India’s restrictive measures on the capital account, reluctance to permit short-term foreign borrowing, and strong accumulation of foreign exchange reserves allowed it to escape any serious consequences from the Asian financial crises.

By accumulating foreign reserves over the decade, India passed up the opportunity to use capital inflows to finance a larger current account deficit. Joshi and Sanyal argue that this policy imposed relatively small costs in terms of forgone investment and growth. The reserve accumulation averaged 1.2 percent of GDP annually, and even if all of the accumulation had been used alternatively to purchase investment goods, the incremental impact on economic growth would have been small. This conclusion is in sharp contrast to the claims of others that foreign reserve accumulation imposed large costs in terms of forgone growth.
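
A rough sense of why this cost is small comes from a back-of-the-envelope calculation using an incremental capital-output ratio (ICOR); the ICOR value below is an illustrative assumption, not a figure taken from the paper:

\[
\Delta g \;\approx\; \frac{\Delta (I/Y)}{\text{ICOR}} \;=\; \frac{0.012}{4} \;=\; 0.003 .
\]

That is, even if the entire 1.2 percent of GDP accumulated as reserves each year had instead been invested, an ICOR of around 4 implies a growth rate only about 0.3 percentage point higher per year.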

Overall, Joshi and Sanyal believe that the external aspects of Indian economic policy were well executed during the 1990s. However, the ample level of foreign exchange reserves and the continuation of strong capital inflows present a more difficult policy choice going forward. The current policy of sterilized intervention in exchange markets has outlived its usefulness, and further additions to reserves will impose rising fiscal costs with few benefits. At the same time, the authors oppose exchange rate appreciation because of its negative impact on export competitiveness. An intermediate policy of continued intervention in the foreign exchange market but without any attempt at sterilization would translate into an easing of domestic monetary policy and higher growth in the short run. However, they fear that it would quickly lead to increased inflationary pressures, and the resulting rise in the real exchange rate would be as unattractive from the export perspective as outright nominal appreciation.

Instead, Joshi and Sanyal argue for a mixed strategy that combines a faster rate of import liberalization on the external side with domestic fiscal consolidation. A rise in imports would provide a means of absorbing the excess capital inflows with no loss of export competitiveness. Since India’s tariffs remain among the world’s highest, the policy would also intensify the competitive pressures on the import-competing industries and strengthen incentives to raise productivity. The constraining factor is the loss of public revenue from tariff reductions, but that loss could be offset by greater reliance on an expanded value-added tax to meet the revenue needs of both the central government and the states.

They stress the importance of action on the fiscal side because of fear that maintaining the large deficit will crowd out investment and slow the pace of growth in future years. A combination of fiscal contraction and monetary expansion would produce lower interest rates with strong incentives for growth. The greater foreign and public saving would provide the resources necessary to support the higher rate of investment and growth.

Finally, Joshi and Sanyal reflect a strong shift in professional sentiment in their lack of enthusiasm for further liberalization of the capital account. They argue against relaxing the restrictions on capital outflows by residents, given the risks such outflows would pose in the event of adverse future shocks. Indeed, they conclude with a willingness to use Chilean-type taxes on inflows should foreign capital inflows intensify.

India's Experience with a Pegged Exchange Rate, by Ila Patnaik

In a paper that is largely devoted to a positive analysis of the experience with exchange rate management in India, Ila Patnaik examines the reactions of the monetary authority to the changing external environment. The exchange rate plays a central role in the economic policy of most emerging economies, as monetary policy is torn between a focus on stabilizing the domestic economy and maintaining an exchange rate that is consistent with export competitiveness. In a world of capital controls, it is possible to manage both of these goals simultaneously, but once the economy is fully open to the free inflow and outflow of capital, monetary policy must choose between the external and the internal balance. Over the 1990s, Indian monetary policy operated in a transitional phase, as it only gradually reduced its restrictions on capital account transactions. Since 1993, the external value of the rupee has been determined by market forces, but the central bank intervenes extensively to maintain a stable rate vis-à-vis the U.S. dollar. The continuation of partial controls on capital flows provides some room for an independent monetary policy.

Patnaik focuses on two periods of substantial net capital inflows that necessitated large-scale intervention by the central bank to prevent currency appreciation. The first was a relatively short episode extending from June 1993 to November 1994; the second lasted from August 2001 until at least the middle of 2004. Despite official protestations to the contrary, Patnaik’s empirical analysis demonstrates that India is best characterized as operating a tightly pegged exchange rate over the full period. Her paper explores the extent to which the focus on the exchange rate limited the operation of a monetary policy directed at stabilizing the domestic economy.

The first period began with an easing of the restrictions on inflows of portfolio capital in early 1993. The result was a sharp surge of capital inflows and private expectations of a rise in the exchange rate. However, the Reserve Bank of India (RBI) chose to purchase a large portion of the inflow to prevent appreciation. The bank also acted to sterilize a portion of the inflow, financing some purchases through the sale of government debt. However, the lack of liquidity in the bond market restricted the efforts at sterilization and led the bank to finance much of its purchases through an expansion of reserve money. It attempted to offset the inflationary effects of a rapid growth in the monetary base through a series of increases in the cash reserve ratio. However, the net result was still a significant acceleration of growth in the money supply and, at least in the early months, a decline in interest rates. Despite the small size of the external sector and the limited openness of the capital account, the episode represented India’s first experience with the partial loss of monetary policy autonomy, dictated by the need to intervene in the currency market.
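
The mechanics at work in both episodes can be summarized with the standard central bank balance sheet identity; the notation below is a generic textbook formulation rather than the paper's own:

\[
\Delta M_0 \;=\; \Delta \mathit{NFA} \;+\; \Delta \mathit{NDA},
\]

where \(M_0\) is reserve money, \(\mathit{NFA}\) net foreign assets, and \(\mathit{NDA}\) net domestic assets of the central bank. Fully sterilizing purchases of foreign currency (\(\Delta \mathit{NFA} > 0\)) requires an offsetting sale of domestic assets such as government debt (\(\Delta \mathit{NDA} < 0\)) so that reserve money is unchanged. In 1993–94 the shallow bond market limited that offset, so reserve money grew and the cash reserve ratio was raised instead.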

The second episode, beginning in the summer of 2001, was triggered by a swing in the current account from deficit to surplus. Increased capital inflows played a significant role only in later years. Again, the RBI intervened to prevent appreciation, and the exchange rate actually depreciated slightly up to mid-2002. This time around, the market for debt was considerably more developed. The bank was able to finance nearly all of its purchases of foreign currency through the sale of government debt instruments, avoiding use of the cash reserve ratio. There was little or no acceleration of growth in reserve money, and the growth of a broad-based measure of the money supply (M3) actually slowed. However, the RBI did not attempt to hold the exchange rate completely fixed after the summer of 2002, opting instead for a small but steady appreciation. Capital inflows also began to accelerate at the same time, perhaps motivated by currency speculation.

The two episodes differ in the extent to which the RBI was able to engage in sterilizing interventions to avoid any conflict with its policies for domestic stabilization. Patnaik’s review suggests that controls on the capital account are still sufficient to permit considerable discretion in the conduct of domestic monetary policy. To date, Indian policymakers have opted to prevent the capital inflow from translating into a current account deficit. However, the sustainability of the bank’s interventions in future years is debatable because the fiscal costs of accumulating additional reserves are rising.

Liberalizing Capital Flows in India: Financial Repression, Macroeconomic Policy, and Gradual Reforms, by Kenneth Kletzer

The paper by Kenneth Kletzer offers a third perspective on India’s exchange rate regime, focusing on the issue of capital account convertibility. Should India accelerate the pace of its liberalization of capital account transactions? Kletzer views this as a particularly critical decision in light of a history of severe repression of domestic financial markets. He points to numerous international examples in which liberalization led to large financial inflows followed by equally abrupt outflows and financial crisis. In his paper, he lays out the conditions necessary to achieve a successful policy for capital account liberalization.

Kletzer begins with a review of the potential benefits and costs of capital mobility. On the benefits side, he points to five factors. First, there are gains from trade in commodities across time, just as there are gains from contemporaneous trade in goods and services. Second, international financial integration, which brings direct foreign investment, may raise the growth rate by raising productivity growth. Third, such integration allows the sharing of risk between savers and investors. Domestic residents are able to diversify risk, which may raise the saving rate. Fourth, the presence of these flows may reduce output and consumption volatility. Finally, capital account liberalization may provide a means for forcing an end to financially repressive policies. The ability of resources to move across borders in response to unsustainable fiscal or financial policies may impose discipline on public authorities.

The principal cost of an open capital account is the possibility that a crisis may occur in the form of capital flight, leading to large depreciation, large-scale bank failures, or both. For example, under a pegged exchange rate regime, actual or expected monetization of public sector budget deficits that is inconsistent with the pegged rate of currency depreciation sooner or later forces abandonment of the peg in a sudden outflow of international reserves. Such depreciations may then spill over into bank failures if the banks have large, unhedged foreign currency–denominated liabilities and home currency–denominated assets.

To date, the international empirical evidence on the growth effects of capital account liberalization for emerging markets is inconclusive. The bottom line is that countries tend to benefit from liberalization when they can better absorb capital inflows by having higher levels of human capital, more developed domestic financial markets, and greater transparency in financial and corporate governance and regulation. On the other hand, the opening of the capital account in the presence of significant macroeconomic imbalances reduces net gains and raises the prospects of subsequent crisis.

Turning to India, Kletzer notes that India had a relatively unrestricted financial system until the 1960s. Starting in the 1960s, interest rate restrictions and liquidity requirements were adopted and progressively tightened. The government established the State Bank of India, a public sector commercial bank, and went on to nationalize the largest private commercial banks toward the end of the decade. Through the 1970s and into the 1980s, credit directed to “priority” sectors constituted a rising share of domestic lending and interest rate subsidies became common for targeted industries. With the start of economic reforms in 1985, steps were taken toward internal financial liberalization, mainly in banking. The government began to reduce financial controls by partially deregulating bank deposit rates, though that step was partially reversed in 1988. In later years, however, the government also began to relax ceilings on lending rates of interest. Progressive relaxation of restrictions on both bank deposit and lending rates of interest and the reduction of directed lending were under way by 1990.

Liberalization accelerated after the 1991 crisis, when important steps were taken toward external liberalization. Specifically, both direct foreign investment and portfolio investment were progressively opened. A major development was full current account convertibility of the rupee under IMF Article VIII in August 1994. In the subsequent years, sectoral caps on direct foreign investment and restrictions on portfolio borrowing and foreign equity ownership were relaxed. Currently, foreign investment income is fully convertible to foreign currency for repatriation. External commercial borrowing has been relaxed, but it is regulated with respect to maturities and interest rate spreads. Effective restrictions continue on the acquisition of foreign financial assets by residents and on currency convertibility for capital account transactions.

According to Kletzer, there remain four macro-cum-financial vulnerabilities that must be considered in evaluating the case for full capital account convertibility: high public debt and fiscal deficit; financial repression; weakness in the banking sector; and a tendency to peg the exchange rate. India’s external debt is low in relation to its foreign exchange reserves, so there is less to fear on that front.

Using two alternative measures of the real interest rate, Kletzer evaluates the sustainability of the current public debt as a proportion of GDP and concludes that without a major reduction in the primary deficit (the fiscal deficit minus interest payments on the debt) it cannot be stabilized at its current level of 82 percent of GDP. Based on one measure, the current primary deficit of 3.6 percent of GDP must be turned into a primary surplus of 0.8 percent for the debt to be sustained at its current level. On the deficit, Kletzer points out that the combined central and state government budget balances understate total public sector liabilities. Unfunded pension liabilities, various contingent liabilities, and guarantees on the debt issued by loss-making public enterprises (most notably state electricity boards) must also be taken into account.
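
The arithmetic behind this conclusion follows the standard debt dynamics condition; the formula below is the textbook version, shown only to make the orders of magnitude transparent:

\[
s^{*} \;=\; \frac{r-g}{1+g}\, b,
\]

where \(s^{*}\) is the primary surplus (as a share of GDP) needed to stabilize the debt ratio \(b\) when the real interest rate is \(r\) and the real growth rate is \(g\). With \(b \approx 0.82\), a required primary surplus of 0.8 percent of GDP implies an interest-growth differential of roughly one percentage point; moving from the actual primary deficit of 3.6 percent to that surplus would require an adjustment of more than four percentage points of GDP.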

High levels of public debt and deficits have been sustained partially through financial repression, which has been a central aspect of the Indian fiscal system for decades. Capital controls provide the public sector with a captive capital market and allow lower-than-opportunity rates of interest for government debt. Kletzer estimates that the implicit subsidy to the government averaged 8.2 percent of GDP from 1980 to 1993 and 1.6 percent from 1994 to 2002. Thus the liberalization of the 1990s is clearly reflected in the substantial reversal, though not elimination, of financial repression. In the same vein, the government collected seigniorage revenues that averaged 2 percent of GDP over the entire 1980–2002 period but only 1.4 percent from 1997 to 2002. The decrease in public sector revenue from financial repression is large, indicating significant progress in financial policy reform.
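
Estimates of this kind are typically computed along the following lines; the formulas are a generic illustration of the method rather than Kletzer's exact specification:

\[
\text{subsidy}_t \;\approx\; \frac{\left(i^{m}_t - i^{g}_t\right) D_t}{Y_t},
\qquad
\text{seigniorage}_t \;\approx\; \frac{\Delta M_{0,t}}{Y_t},
\]

where \(i^{m}\) is a market (opportunity-cost) interest rate, \(i^{g}\) the average rate actually paid on domestically held government debt \(D\), \(M_0\) reserve money, and \(Y\) nominal GDP. On this reading, the narrowing of the gap between \(i^{m}\) and \(i^{g}\) after liberalization is what drives the fall in the estimated subsidy from 8.2 percent to 1.6 percent of GDP.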

Policies of financial repression hamper domestic financial intermediation and raise the vulnerability of the banking system to crisis as international financial integration increases. At the end of March 2003, according to the Reserve Bank of India, the gross nonperforming assets of the commercial banks were 9.5 percent of bank advances; taking provisions into account, this figure drops to around 4.5 percent. Directed credit to priority sectors accounted for 31 percent of commercial bank assets but about 40 percent of nonperforming assets of the banks. At 2 percent of GDP, nonprovisioned and nonperforming assets are not large. But some researchers estimate that the actual figure may be twice as large as the official one. Banks also suffer from unhedged interest rate exposure arising from the large holdings of government debt (currently 40 percent of their total assets) and the liberalization of deposit rates.

Finally, capital controls allow policymakers to manage the nominal exchange rate and influence domestic rates of interest as independent objectives of monetary policy. Past exchange rate management in India displays resistance to currency appreciation. The adoption of a floating exchange rate, albeit one managed relatively tightly, reduces crisis vulnerability. The government can resist exchange rate movements while not offering any exchange parity guarantee, as under a pegged exchange rate (or crawling peg or narrow target zone). The uncertainty that is induced, especially over short-term movements in the exchange rate, could lead to private sector hedging against currency risk. A possible source of concern is the government’s revealed tendency to lean against exchange rate movements, which under an open capital account could result in sudden losses of reserves and capital account reversals.

Kletzer concludes that the initial conditions for capital account convertibility in India are strong, with the exception of public finance. India’s very low short-maturity foreign debt exposure, low overall foreign debt, large stock of foreign reserves, and flexible exchange rate place the Indian economy in a strong position by international standards. The average maturities of foreign and public debt could be expected to fall with international financial integration, but a prospective rise in short-term debt does not in itself justify capital controls. The stock of foreign reserves exceeds the current level of short-term external debt severalfold. Liberalization and further opening of the banking system require regulatory improvement, but the present level of nonperforming assets in the banking system is not excessive in comparison with other emerging markets.

In concluding, Kletzer notes two aspects of fiscal vulnerability relevant to financial integration. First, the primary deficit and the need to amortize public debt constitute the government borrowing requirement that would need to be financed on international terms under an open capital account. Second, the banking system holds the overwhelming majority of the public debt; with international financial integration, these become risky assets. Any gain to the government from currency depreciation or rising interest spreads on public debt would be matched by losses by the banks. These holdings pose a threat to the banking system, and a capital account crisis could begin with the exit of domestic depositors. In this case, deposit insurance could reduce the exposure of the banking system to crisis. Limiting the contingent liability of the government created by deposit insurance so that it just offsets public sector capital gains requires institutional reform to ensure successful prudential regulation.

Banking Reform in India, by Abhijit Banerjee, Shawn Cole, and Esther Duflo

The final paper, by Abhijit Banerjee, Shawn Cole, and Esther Duflo, addresses some of the concerns raised above about India’s domestic financial system. In comparison with its peers at similar stages of development, India has an advanced and extensive banking system, with branches throughout rural and urban areas, providing credit not only to industry but also to a significant number of farmers. As in many other developing countries, publicly held banks are by far the largest players, and financial sector reforms have become major policy goals. The authors evaluate the performance of India’s banking sector in terms of its provision of financial intermediation and its contribution to the achievement of a variety of “social goals.” They also offer a comparison of the performance of public and private sector banks.

The paper begins with an overview of banking in India, including the two episodes of bank nationalization in 1969 and 1980. Because the Indian government used a strict policy rule (based on the asset base of banks) to determine which banks were nationalized and which were left in the private sector, India offers an ideal case study in the relative performance and behavior of public and private sector banks.

A primary rationale for bank nationalization was to increase the flow of credit, both in general and to targeted “priority sectors” such as agriculture and small-scale industry. In the first section of the analysis, Banerjee and colleagues use detailed records from a public sector bank to determine whether there is “under-lending” to priority sector firms in the Indian financial system. They define under-lending as a situation in which the marginal product of capital for a firm is higher than the rate of interest it is currently paying. A change in lending regulations that increased the amount of credit issued by banks to one group of firms but not another allowed them to estimate the effect of additional credit on output and profits. They find a strong, positive effect of the change, suggesting that the firms are indeed credit constrained.
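
In symbols, the under-lending test asks whether, for a firm \(i\) with production function \(F_i\), the marginal product of capital exceeds the interest rate it pays; the notation here is ours, not the authors':

\[
\frac{\partial F_i(K_i, L_i)}{\partial K_i} \;>\; r_i .
\]

If this inequality holds, an exogenous expansion of credit to the firm should raise both its output and its profits, which is the pattern the authors find following the regulatory change.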

Enhancing the supply of credit was a primary goal of nationalization: the performance of this public sector bank was not impressive, but did private sector banks fare even worse? Using a regression discontinuity approach, the authors compared the propensity of public and private banks to lend to borrowers in several sectors of the economy: agriculture, small-scale industry, and the composite sector called trade, transport, and finance. They find that public sector banks did lend substantially more to agricultural borrowers than did private sector banks. Contrary to popular wisdom, however, they find that once bank size is taken into account, public sector banks lend no more to small-scale industry than do private sector banks.
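
The comparison exploits the fact that nationalization followed a strict asset-size rule, so banks just above and just below the cutoff should otherwise be similar. A minimal sketch of how such a regression discontinuity comparison could be run is given below; the file name, variable names, cutoff, and bandwidth are illustrative assumptions, not the authors' code or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bank-level data: pre-nationalization asset size (the running
# variable), an indicator for nationalization, and the share of lending
# going to agriculture.
df = pd.read_csv("banks.csv")  # columns: assets_1980, nationalized, agri_share

CUTOFF = 2.0        # illustrative asset-size threshold used by the policy rule
BANDWIDTH = 1.0     # illustrative window around the cutoff

df["dist"] = df["assets_1980"] - CUTOFF
local = df[df["dist"].abs() <= BANDWIDTH].copy()

# Local linear regression with separate slopes on either side of the cutoff.
# The coefficient on `nationalized` estimates the jump in agricultural lending
# attributable to public ownership.
rd = smf.ols("agri_share ~ nationalized * dist", data=local).fit(cov_type="HC1")
print(rd.summary())
```

The same specification can be rerun with the small-scale industry or trade, transport, and finance lending shares as the outcome to reproduce the sectoral comparisons described above.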

Nor does bank nationalization appear to have increased the overall speed of financial development. The authors find that in the period 1980–91, nationalized and private banks of similar asset size grew at about the same rate. However, in the more liberalized period of 1992–2000, old private sector banks grew 8 percent more than public sector banks. (The lack of attention to new private sector banks is explained by the fact that there are simply not enough data at this stage to allow meaningful analysis.)

To gain further insight into under-lending and a low level of financial development, the authors again study the loan information from the same public sector bank. Under government regulations, loan officers are required to calculate credit limits on the basis of firm size (as measured by turnover) rather than profitability; though the rules do allow for some flexibility on the part of the loan officer, the authors find that in most cases loan officers simply reapproved the previous year’s limit. Because of inflation, real credit thus typically shrinks. Firms that are growing rapidly or that have profitable opportunities are not rewarded with additional credit, nor are poorly performing firms cut off.
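
The quantitative force of this point is easy to see; the inflation rate below is purely illustrative:

\[
\frac{L^{\text{real}}_{t+3}}{L^{\text{real}}_{t}} \;=\; \frac{1}{(1+\pi)^{3}} \;\approx\; 0.86
\quad \text{for } \pi = 0.05,
\]

so a nominal limit simply rolled over for three years at 5 percent inflation supplies roughly 14 percent less credit in real terms.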

The authors then turn to potential explanations for the reluctance of loan officers to lend. Public employees are subject to strict anticorruption legislation, and bank officers have expressed concern that if they issue a new loan that subsequently goes bad, they could be charged with corruption, denied promotion, fired, or even put in jail. The authors test this hypothesis by examining whether a corruption charge against a bank employee in a specific bank led to a reduction in overall lending by all loan officers in that bank. They find that it did: corruption charges led to a reduction in lending of approximately 3 percent compared with lending of other banks. That decline lasted approximately twenty-four months.

Critics of public enterprises are quick to point out that since employees tend not to have a stake in the performance of the enterprise, they may tend to exert less effort. For public bankers, this may mean making guaranteed safe loans to the government rather than spending time and energy on screening new clients and monitoring existing ones. To test this possibility, the authors compare how public sector banks in low- and high-growth states responded to a change in the spread between lending rates and the rate at which the government was willing to borrow. They find that banks in low-growth states were more inclined to make “low-effort” loans to the government when the spread increased.

The final exercise was to examine the contentious issue of nonperforming assets, bank failures, and bailouts. The official rates of nonperforming loans in public sector banks tend to be higher than those in private sector banks, but because those numbers are notoriously unreliable, the authors instead compare the fiscal costs of bailing out failed private banks with the costs of recapitalizing poorly performing public sector banks. Using data starting from the first nationalization, they identify twenty-one cases of bank failure between 1969 and 2000 and compute the costs imposed on the government in rupees at 2000 prices. That sum is compared with the substantial cost of recapitalization of public sector banks in the 1990s. Controlling for size, the cost of the bank failures appears to be slightly higher than that of recapitalization, implying a small advantage for public sector banks. However, since recapitalization expenses are recurring, in all likelihood the public sector banks represent a greater cost to the treasury.

The authors conclude by arguing that the evidence suggests a tentative case for privatizing public sector banks. Privatization is not a panacea, however, and both public and private sector banks could benefit from significant internal reform. Liberalization and privatization should be accompanied by strong regulation to ensure the continued existence of social banking. But in net terms, the reduction in agency problems, the increased flexibility, and the reliance on private rather than public incentives to limit corruption and nonperforming assets should make for a more dynamic banking system that is more responsive to borrowers’ needs.



FOOTNOTES

[1] As indicated in the paper, Rajesh Chadha is responsible primarily for measuring the quantitative aspects of a possible U.S.-India free trade arrangement and is not responsible for the qualitative views expressed in the paper. Accordingly, in this summary only Lawrence is referred to, except when the simulations are discussed.
[2] M. S. Ahluwalia, “Economic Reforms in India since 1991: Has Gradualism Worked?” Journal of Economic Perspectives 16, no. 3 (2002): 67–88.

Publication: The Brookings Institution and National Council of Applied Economic Research


India Policy Forum 2005/06 - Volume 2: Editors' Summary

The second volume of the India Policy Forum, edited by Suman Bery, Barry Bosworth, and Arvind Panagariya, addresses issues of government fiscal and monetary policy, reviews developments in labor markets and the distribution of income since the initiation of large-scale economic reforms in 1991, and contains a critical assessment of policies aimed at promoting universal access to telecommunications services. The editors' summary appears below.

EDITORS' SUMMARY

This is the second volume of the India Policy Forum. The journal is jointly promoted by the National Council of Applied Economic Research (NCAER) in New Delhi and the Brookings Institution in Washington, D.C., with the objective of presenting high-quality empirical research on the major economic policy issues that confront contemporary India. The forum is supported by a distinguished advisory panel and a group of active researchers who participate in the review and discussion process and offer suggestions to the editors and the authors. Our objective is to make the policy discussion accessible to a broad nonspecialist audience inside and outside India. We also hope that it will assist in the development of a global network of scholars interested in India’s economic transformation.

The five individual papers included in this volume were selected by the editors and presented at a conference in Delhi on July 25–26, 2005. In addition to the working sessions, John Williamson, a member of the advisory panel, gave a public address on the topic “What Follows the Era of the USA as the World’s Growth Engine?” The papers focus on several issues of great relevance to India’s current economic situation. The first three papers involve issues of government fiscal and monetary policy: the implications of a large and sustained fiscal budget deficit, India’s experience with tax reform, and the relevance of the inflation-targeting framework for Indian monetary policy. The fourth paper provides a detailed review of developments in labor markets and the distribution of income since the initiation of large-scale economic reforms in 1991. The last paper provides a critical assessment of policies aimed at promoting universal access to telecommunications services.

Excessive Budget Deficits, a Government-Abused Financial System, and Fiscal Rules, by Willem H. Buiter and Urjit R. Patel

In their paper, Willem Buiter and Urjit Patel explore the mechanisms by which India’s continuing high fiscal deficits (at both the federal and state levels) affect the sustainable growth of the economy. In their view, the abuse of a financial system heavily dominated by the government represents a key channel by which the fiscal position influences economic growth and vulnerability; accordingly, their paper also extends to an examination of the financial system.

Following the crisis of 1991, India has witnessed a turnaround on many indicators of macroeconomic performance. It has moved from an onerous trade regime to a market-friendly system encompassing both trade and current payments. The sum of external current payments and receipts as a ratio to gross domestic product (GDP) has doubled from about 19 percent in 1990–91 to around 40 percent currently. There has also been some liberalization of cross-border capital account transactions, although significant constraints remain in place on cross-border intertemporal trade and risk trading.

Although average annual real GDP growth over the postreform period has been only modestly higher than in the previous decade (6.2 percent from 1992–93 to 2004–05 compared with 5.7 percent from 1981–82 to 1990–91), India continues to be one of the fastest-growing economies in the world. India’s balance of payments has been strong and inflation has been moderate.

After a sharp initial adjustment in the early 1990s, India’s net public debt has risen steadily as a share of GDP, although at about 70 percent of GDP, it remains below the levels recorded at the time of the 1991 crisis. Following custom, Buiter and Patel consolidate the central bank into these estimates, but not the publicly owned commercial banks, on the grounds that to do so would be to assume that the (implicit) guarantee of liabilities in such banks is certain to be called. In addition to public debt of this magnitude, recognized and explicit guarantees in 2003 amounted to a further 11.3 percent of GDP.

By the standard of most emerging markets, including several that have experienced crisis, India’s public and publicly guaranteed debt is very high. The composition of this debt has changed significantly in the fifteen years since the crisis of 1991. Net external debt has declined sharply, shifting the burden of public debt onto the domestic market. This domestic debt is rupee-denominated. In addition, India continues to maintain selective (discretionary) capital controls, particularly those that keep arbitrage-type flows (external borrowing by domestic financial intermediaries, investment by foreign institutional investors in fixed-income securities, and short-term borrowing by practically anyone) in check. While India faced a combined internal (fiscal) and external (foreign exchange) transfer problem during the years leading up to the crisis of 1991, the weakening of the fiscal position since the late 1990s represents an exclusively internal resource transfer problem.

Given repeated and costly crises in several emerging markets associated with possible public debt default, Buiter and Patel first conduct formal fiscal sustainability tests, revisiting an analysis they undertook a decade earlier. Although their fiscal sustainability tests are not conclusive, they find that government solvency may not be a pressing issue at this juncture. The reason India has been able to remain solvent despite the sustained fiscal deficits of the past twenty years is the combination of fast GDP growth and financial repression.

They note that globally, the level of risk-free interest rates at all maturities and credit-risk spreads are extraordinarily low at present. Continuation of the pattern of recent years—a steady increase in the debt–GDP ratio—will sooner or later raise the public debt to unsustainable levels. Political pressure to enhance government expenditure on social sectors and improve public (infrastructure or utility) services has increased in the aftermath of the 2004 general election.

Buiter and Patel then examine two potential channels for the impact of the government on the quantity and quality of capital formation in India. The first is financial crowding out—the negative effect of public borrowing on aggregate (private and public) saving. The second is the effect of government institutions, policies, actions, and interventions, including public ownership, regulation, taxes, subsidies, and other forms of public influence on private savers, private investors, and the financial markets and institutions that intermediate between them. A simple growth accounting framework is constructed to compare India’s investment efficiency with that of selected large countries. They find Indian investment inefficiency to be relatively high, China’s to be even higher.
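
A growth accounting comparison of this kind typically gauges efficiency by how much output growth a given investment rate buys; the decomposition below is the standard textbook form, not necessarily the exact framework used in the paper:

\[
g_Y \;=\; \alpha\, g_K \;+\; (1-\alpha)\, g_L \;+\; g_A,
\qquad
g_K \;\approx\; \frac{I/Y}{K/Y} \;-\; \delta,
\]

where \(g_Y\), \(g_K\), and \(g_L\) are the growth rates of output, capital, and labor, \(\alpha\) is the capital share, \(g_A\) is total factor productivity growth, \(I/Y\) the investment rate, \(K/Y\) the capital-output ratio, and \(\delta\) the depreciation rate. On this metric, an economy that needs a larger investment rate to generate the same output growth is using its investment less efficiently.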

Across the world, from the European Union’s (ill-fated) Stability and Growth Pact to the United Kingdom’s Golden Rule and Sustainable Investment Rule, there have been attempts to bind governments to fiscal rectitude through formal legal or constitutional devices. In September 1994 an agreement was reached between the Reserve Bank of India and the Central Exchequer to phase out ad hoc treasury bills, which hitherto facilitated automatic monetization of the budget deficit. The Indian Parliament, in August 2003, voted for the Fiscal Responsibility and Budget Management Act (FRBMA), which required that the central government’s fiscal deficit not exceed 3 percent of GDP and that the deficit on the revenue (current) account be eliminated.

The fiscal rules that India has embraced—perhaps in recognition of the serious systemic inefficiency that the fiscal stance has engendered—are evaluated. The requirement that the revenue budget be in balance or surplus is very likely to be the binding constraint on the central government. Even if the gross investment version of the golden rule (limiting debt issues to capital financing) is the operative one, the Indian central government’s gross capital formation program amounted to no more than 1.5 percent of GDP in 2003–04. Net central government capital formation is even less than that and may well be negative in years in which economic depreciation is high. The authors judge that a great deal of current expenditure would be reclassified as capital expenditure if the golden rule were ever enforced seriously. Regarding the likelihood of the rules being enforced, they point to the absence of any features of the FRBMA that compel governments to act countercyclically during periods of above-normal economic activity or (as in India during these past three to four years) exceptionally low interest rates. Furthermore, the fiscal rules under the FRBMA do not address the key distortions imposed by the Indian state on the private sector through financial repression, misguided regulations, and inefficient ownership and incentive structures.

Trends and Issues in Tax Policy and Reform in India, by M. Govinda Rao and R. Kavita Rao

Tax reform has been a major component of the economic reform agenda in India during the last twenty years. In their contribution on this subject, Govinda Rao and Kavita Rao offer a comprehensive treatment of the evolution of direct and indirect taxes in India, their shortcomings relative to an ideal tax system, the reforms undertaken so far, and their future course. They note that according to the theory of optimal taxation, revenue-raising taxes should consist exclusively of consumption taxes, with the rates of taxation depending on various demand elasticities. In turn, the ideal consumption tax can be mimicked by a value-added tax (VAT) that taxes output at the desired rate but rebates the tax paid on inputs, thereby taxing only the extra value added at each stage of production. In practice, the information on the demand elasticities required to implement the optimal VAT is rarely available. Moreover, such a differentiated rate structure is administratively complex, gives rise to tax disputes and tax evasion, and allows lobbying pressures to become the main determinant of the tax structure. Therefore, a system characterized by greater uniformity in tax rates has gained popularity with policy analysts and policymakers in recent years.
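
A simple two-stage example, with numbers chosen purely for illustration, shows how the input rebate confines the tax to value added:

\[
\begin{aligned}
\text{Stage 1 (input producer):}\quad & \text{sells inputs for } 100,\ \text{VAT at } 10\% = 10;\\
\text{Stage 2 (final producer):}\quad & \text{sells output for } 180,\ \text{gross VAT} = 18,\ \text{input credit} = 10,\ \text{net VAT} = 8;\\
\text{Total tax collected:}\quad & 10 + 8 = 18 = 10\% \times 180.
\end{aligned}
\]

The chain as a whole thus pays exactly 10 percent of the final value, with each stage taxed only on its own value added, in contrast to a cascading excise with no input rebate, under which the tax paid at earlier stages would be taxed again downstream.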

Since the 1950s, India has relied on both direct and indirect taxes to raise revenue. Direct taxes include both the personal income tax and the corporate profit tax. Indirect taxes include domestic commodity taxation and customs duties. Domestic commodity taxation initially took the form of excise duties that taxed output up to the manufacturing stage with no tax rebates on inputs, and of the sales tax levied by the states. In recent years, a modified value added tax (MODVAT) that rebates the tax paid on inputs at each stage of production up to the manufacturing stage has progressively replaced the excise tax. Customs duty revenues have principally been a by-product of import protection, and their share in total revenue increased especially rapidly in the 1980s, when the government decided to replace the previous system of import quotas with enhanced input tariff rates. With the decline in protection after 1990, the importance of this source of revenue has been declining.

The reforms during the last two decades have focused on both the design as well as the administration of taxes. Marginal tax rates on personal income, which had reached near 100 percent levels in the early 1970s, have now been brought down to around 30 percent (with occasional surcharges). Simultaneously, the number of tax slabs has been reduced to three, and some progress has also been made toward eliminating numerous ad hoc exemptions. Similar steps have been taken in the area of corporate taxation.

The big push in the area of domestic commodity taxation has been toward the development of a genuine VAT and unification of the tax rates. Considerable success has been achieved in both tasks. Customs duties have been brought down substantially, and their dispersion has been considerably reduced. Improvement in tax administration has been more pronounced in direct than indirect taxation.

Rao and Rao observe that the ratio of personal income tax to GDP has increased from 2.1 percent in 1985–86 to 4.3 percent in 2004–05. Reductions in indirect tax revenues as a proportion of GDP have more than offset this gain, however. Central government domestic indirect tax collection declined by 1.6 percentage points of GDP, and customs duty collection by 1.8 percentage points, over the same period.
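
The arithmetic implied by these reported figures is, in percentage points of GDP,

$$ (4.3 - 2.1) - (1.6 + 1.8) \approx -1.2, $$

so the combined contribution of these three components to the tax-to-GDP ratio over the period was a decline of roughly 1.2 percentage points.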

It is tempting to argue that the increase in the income tax–GDP ratio represents the operation of the so-called Laffer curve whereby reduced rates by themselves lead to increased revenue. Rao and Rao offer evidence to the contrary, however, and argue that the increase in the revenues from the personal income tax resulted from a more rapid growth of the organized industrial sector that is covered by the tax net; deepening of the financial sector, which makes transactions easier to track; and administrative measures including the spread of tax deduction at source.

Rao and Rao also find that contrary to suggestions in some of the recent literature, personal income tax reform has resulted in increased equity. Granted, the reduction in the dispersion of effective tax rates has led to the richest individuals being subject to lower tax rates. But the reform has also brought into the tax net many relatively rich individuals who previously did not pay taxes. This is reflected in a significant increase in the number of income tax payers and the doubling of revenues from the personal income tax.

Despite substantial rationalization of various components of the tax system, indirect tax revenues remain highly concentrated in terms of commodities. Just five groups of commodities—petroleum products, chemicals, basic metals, transport vehicles, and electrical and electronic goods—contribute 75 percent of the total central domestic commodity tax revenue. Petroleum products alone, which have tripled their share over a thirteen-year period, contribute over 40 percent. Almost 60 percent of customs duty is collected from just three commodity groups: machinery (26.6 percent), petroleum products (21 percent), and chemicals (11 percent). This concentration exceeds the concentration of output or of imports across commodities.

Rao and Rao recommend further rationalization of central taxes through a reduction in the number of tax rates and the elimination of exemptions. In the area of corporation tax, they argue in favor of reducing the depreciation allowance to more realistic levels. They also point to a need for aligning the corporate profit tax rate with the highest marginal tax rate on personal income tax. With regard to import duties, the authors recommend a minimum tariff of 5 percent on all imports as a step toward harmonizing duty rates across commodities.

In the area of domestic commodity taxation, the goal must be a single, unified goods and services tax. The achievement of this goal has several components. All specific duties must be converted into ad valorem rates and the tax on services must be widened substantially. The sales tax must be harmonized across states and, for collection purposes, integrated with the central VAT, which should eventually cover all goods and services. This unification will also allow the adoption of the destination-based sales tax on all interstate trade. Keeping in view revenue needs, Rao and Rao recommend that the total burden of taxation on goods and services should be 20 percent. Of this, 8 percent should be borne by the center and 12 percent by the states.

The poor state of tax administration, stemming partly from the virtual absence of data on both direct and indirect taxes, has been a major reason for low levels of compliance and high compliance costs. The absence of information has also led to the evolution of a compliance system in which tax payments are negotiated between the payer and the government. Recent initiatives for administrative reform, including the development of a computerized information system and procedural changes such as expanded coverage of tax deduction at source and systematized audit procedures, have alleviated this problem to some degree. Within direct taxes, efforts include the outsourcing of the issuance of permanent account numbers; a tax information network established by the National Securities Depository Limited, with special focus on tax deduction at source; and the Online Tax Accounting System. Within indirect taxes, a few examples of new information systems are the customs e-commerce gateway, known as ICEGATE, and the Customs Electronic Data Interchange system. Further initiatives are under way, including a systematic approach to compiling relevant data from a variety of sources. Rao and Rao believe that, as a part of this initiative, it is critical that mechanisms be set up for data sharing between direct and indirect tax authorities, as well as between central and state tax authorities.

Inflation targeting has emerged as one of the most significant developments in the theory and practice of monetary policy. Disenchantment with the outcomes of the activist monetary policies of the 1970s and 1980s led many economists and policymakers to advocate a simplified and more rules-based approach to monetary policy, one in which attaining and sustaining price stability is given a clear priority. Many countries, however, have experienced difficulties in attempting to use the growth in monetary aggregates or the exchange rate as a guide to such a policy. An inflation-targeting framework (ITF), which consists of setting an inflation target and aligning monetary policy to ensure its attainment in a transparent and accountable manner, is increasingly advocated as a best-practice approach to controlling inflation.

In the long run, the inflation rate is the only outcome that monetary policy can influence. However, because there is a short-run cost of disinflation, a trade-off between inflation and unemployment, the optimum path of future inflation implies a gradual return to the desired rate. At the heart of the ITF is a specific view of the inflation-generating process as a largely demand-determined phenomenon, a conviction that the most efficient way of dealing with inflation is through an interest rate rule, and the belief that the public’s inflation expectations can be managed. From this follows the prescription that the central bank, as the custodian of interest-rate policy, should play a dedicated and dominant role in promoting the inflation objective. Initially, inflation targeting was adopted by several industrial countries, but it has recently spread to some emerging markets. At present, much of the focus of monetary policy in India is on credit growth, not interest rates. Is the ITF practical in the absence of a large role for market-determined interest rates?

How Applicable Is the Inflation-Targeting Framework for India?, by Sheetal K. Chand and Kanhaiya Singh

In their paper, Sheetal Chand and Kanhaiya Singh ask whether such a framework might be applicable to developing economies. In particular, is the ITF suitable for guiding the monetary policy of India? Earlier discussions focused on the difficulties that developing countries would have in adopting a policy rule that assigns absolute priority to the control of inflation. They often have less-developed financial institutions (requiring a more nurturing approach by the central bank), an aversion to large exchange rate fluctuations, or a need to be accommodative of some changes in fiscal policy. Widespread public knowledge of these constraints implies that a policy based on inflation targeting would lack credibility.

Chand and Singh examine the issue from a different perspective, however, arguing that the inflation process in India differs in significant respects from that commonly assumed to hold for the industrial economies. The paper first tests a standard formulation of the ITF, relying on a paper by Lars Svensson. This formulation explicitly incorporates a short-run tradeoff between inflation and the deviation of output from full employment (a Phillips-curve type relationship). In their tests of the Indian experience from 1970–71 to 2002–03, Chand and Singh find that the output gap is not a significant determinant of inflation. Thus, they argue that Svensson’s derivation of the optimal policy rule is not satisfactory in the Indian context.
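
A common textbook rendering of a Svensson-type setup (the paper’s exact specification may differ) couples a Phillips curve, in which next period’s inflation responds to the current output gap, with a quadratic loss around the inflation target:

$$ \pi_{t+1} = \pi_t + \alpha\, y_t + \varepsilon_{t+1}, \qquad L = \sum_{j \ge 0} \beta^{\,j}\big[(\pi_{t+j} - \pi^*)^2 + \lambda\, y_{t+j}^2\big], $$

where $y_t$ is the output gap and $\pi^*$ the target. The finding that the coefficient corresponding to $\alpha$ is statistically insignificant in Indian data is what undermines the optimal interest-rate rule derived from this structure.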

However, this does not necessarily imply that demand factors have negligible effects on inflation. The authors develop an alternative specification that defines excess demand as the difference between the nominal GDP growth rate and the growth rate of potential output valued at the preceding year’s rate of inflation. They find that this alternative version accords better with conditions in India. However, the demand-side effects are supplemented by a substantial role for variations in input prices. In the final model, the coefficients on the measures of demand conditions indicate some effect, but the dominant role is that of supply-side factors.
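
One plausible algebraic reading of this excess-demand variable (an interpretation of the verbal description, not the authors’ own notation) is

$$ ED_t = g^{\,\mathrm{nominal}}_t - \big(g^{\,\mathrm{potential}}_t + \pi_{t-1}\big), $$

that is, nominal GDP growth less the growth of potential output valued at the preceding year’s inflation rate, so that inflationary pressure registers when nominal spending outruns capacity priced at inherited inflation.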

The authors interpret the large role for supply-side shocks in the generation of inflation as arguing against reliance on the ITF approach. In addition, the nominal interest rate appears to be a less powerful instrument with which to influence the inflation rate. They are also concerned about the potential for undesirable side effects that might result from large variations in interest rates, such as large and persistent swings in exchange rates or asset values.

Chand and Singh favor a more balanced approach that employs both monetary and fiscal policy as instruments to control inflation and that takes account of supply-side phenomena. The more active role for fiscal policy is justified by their finding of a shorter transmission lag between an expenditure stimulus and the inflation rate than is typical for the advanced countries. However, they agree that more research is needed to establish fully the role that fiscal policy should play.

Within the monetary policy sphere, they advocate the use of multiple instruments rather than relying solely on interest rates. Examples would be adjustments in liquidity requirements to regulate the supply of credit that finances investment expenditures and direct controls on capital inflows. They perceive these measures as having fewer adverse effects on asset valuations. With regard to interest rate policy, the Reserve Bank of India might seek to maintain a desired real interest rate, with the nominal interest rate being adjusted whenever the underlying inflation rate deviates from target. From time to time, shifts in liquidity preference will result in asset transactions that push interest rates above or below the target long-term level. Accommodating liquidity preference shifts through appropriate open market operations would help keep interest rates stable. All this implies that it may be more prudent and welfare enhancing for India to pursue a strategy other than the standard ITF to control inflation.

The performance of the Indian economy following the initiation of an economic reform program in 1991 has been a subject of intense intellectual debate. There are sharp differences of view on whether the economic situation of Indian workers improved in the postreform years. Some commentators characterize the postreform period as a largely jobless expansion with a marked slowing of real wage growth, particularly in rural areas.

Pre- and Post-Reform India: A Revised Look at Employment, Wages, and Inequality, by Surjit S. Bhalla and Tirthatanmoy Das

Surjit Bhalla and Tirthatanmoy Das undertake a detailed review of the available survey data on employment, unemployment, agricultural wages, and income inequality over the past thirty years to examine several of these controversial propositions. Much of the evaluation of the effects of the economic reforms is confounded by the low frequency of detailed survey data on the economic situation of Indian workers. The discussion has centered on the results from large-scale quinquennial surveys of their employment status conducted in 1983, 1987–88, 1993–94, and 1999–2000. Bhalla and Das construct a more expansive time series of available data by including two surveys from the 1970s and twelve smaller annual surveys from the 1980s and 1990s. The major advantage of the additional data is that it allows a better alignment of the data on labor market conditions with the initiation of the reforms in 1991. Because 1991 was also a year of economic crisis in India, the precise dating of the end of the prereform period and the beginning of the reform era plays a crucial role.

On the employment front, Bhalla and Das conclude that employment growth slowed between 1991 and 2003 to 1.7 percent a year, compared with a 2.6 percent rate in the 1983–91 period. They attribute a large portion of the slowdown during the 1990s to a slower rate of growth of the population of labor force age and to a decline in the labor force participation rate related in part to a rise in the proportion of persons who remained out of the labor force while enrolled in educational institutions. They argue that the slow employment growth of the 1990s is not therefore a reflection of weak labor market conditions.

Labor market surveys in India produce three alternative measures of employment status. First, usual status classifies individuals as employed, unemployed, or not in the workforce on the basis of their principal activity over the prior 365 days. Second, current weekly status follows international conventions, classifying those who worked at least one hour in the prior week as employed and distinguishing between the unemployed and those out of the workforce on the basis of whether they were available for work in the prior week. A third concept, “current daily status,” is also determined in the quinquennial surveys. Individuals are asked to report their activities over a seven-day period and to distinguish half days in determining activity status. Those who work four or more hours are counted as employed for the full day, and those who work at least one but fewer than four hours are counted as employed for half a day. Similarly, persons who did not work but were available for work for four or more hours are counted as unemployed for the full day, and those who were available for at least one but fewer than four hours are reported as unemployed for half a day.
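
As a hedged illustration of the current daily status rules just described, the classification of a single day can be sketched as follows; the actual NSS coding records person-days in half-day units and handles more categories than this toy function.

```python
def daily_status(hours_worked: float, hours_available: float) -> str:
    """Classify one day under the current daily status rules sketched above.
    Illustrative only; the NSS schedules are considerably more detailed."""
    if hours_worked >= 4:
        return "employed (full day)"
    if hours_worked >= 1:
        return "employed (half day)"
    # Did not work at all: classify by availability for work.
    if hours_available >= 4:
        return "unemployed (full day)"
    if hours_available >= 1:
        return "unemployed (half day)"
    return "out of the labor force"

print(daily_status(6, 0))   # employed (full day)
print(daily_status(2, 3))   # employed (half day)
print(daily_status(0, 5))   # unemployed (full day)
```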

Bhalla and Das point to a general perception that unemployment has increased in the postreform years as the primary rationale for a new government program aimed at providing job guarantees for rural families. They argue, however, that the measures of unemployment based on usual and weekly status show significantly lower rates of unemployment in the years after 1991 relative to the experience of the 1970s and 1980s. This conclusion also accords with their earlier interpretation that the slowing of employment growth in the 1990s was not indicative of a weak labor market. They also point out that the educational level of the unemployed is high; this is consistent with a view that much of the unemployment is the result of the more skilled members of the workforce spending longer in search of better job matches.

Third, the authors examine the patterns of real wage change in the postreform era. That analysis faces a severe shortage of high-frequency surveys of wage developments. The quinquennial surveys provide the only information on economywide wages, and annual measures are available only for agricultural wages. The quinquennial surveys do suggest an acceleration of real wage growth after 1993, from an annual rate of 2.5 percent between 1983 and 1993–94 to 4.5 percent between 1993–94 and 1999–2000. That pattern is apparent in the wage data for both urban and rural workers.

Bhalla and Das undertake a more detailed analysis of the annual data on the wages of agricultural workers, a subgroup of the rural workforce. This is also the group for which wage growth is alleged to have slowed sharply after the introduction of economic reforms in 1991. They compare two basic measures: the Survey of Agricultural Wages in India (AWI) and wage data from a lesser-used Survey on the Cost of Cultivation (CoC) of major crops. The AWI survey was terminated after 1999–2000, and the last available year for the CoC is 2000–01. They use a new survey to extend these wage measures through 2004–05. The two measures of real wages do grow at different rates over some subperiods, and the year-to-year changes are erratic; but neither the AWI nor the CoC measure supports the notion of a significant deceleration of real wage growth after 1991.

Finally, the trend in income inequality during the 1990s is a subject that has generated great controversy among the group of researchers who have written on the subject. The analysis is largely limited to a comparison of data from the quinquennial surveys, and it is complicated by some changes in the survey methodology. Bhalla and Das believe that there may have been some increase in inequality after 1993–94 but that the change is small and largely limited to a widening of inequality at the very top of the distribution. It is also difficult to match the timing of the change with the introduction of economic reforms. In summary, Bhalla and Das maintain that the frequent assertion that the economic reforms have not helped Indian workers is not supported by the data.

Though telecommunications reform in India began in the 1980s, it achieved at best limited success in the initial decade. Beginning in the early 1990s, technological change and new government policies exhibited greater promise, with dramatic gains made in the quality of service as well as its availability in the new millennium. Telecommunications reforms represent a major success of the economic reforms in India in the last decade. Unsurprisingly, however, telecommunications access has increased more rapidly for wealthy and urban consumers than for poor and rural consumers. To address this gap, India has adopted so-called “universal service” policies, especially targeting rural villages. The philosophy behind universal service is that certain services, such as electricity, water, and telecommunications, should be available to everyone.

Universal Telecommunications Service in India, by Roger G. Noll and Scott J. Wallsten

In their paper, Roger Noll and Scott Wallsten remind us that universal service policies are typically justified on three grounds. First, the presence of economies of scale may lead to the underprovision of the service. At best, the firm will price the service at the average cost, which is higher than the marginal cost when scale economies are present. If, in addition, the market turns imperfectly competitive due to a single supplier or a handful of suppliers, the service may be further undersupplied. Second, the government may view some services as “merit goods” that everyone should have, regardless of their willingness to pay. Finally, politics or regional development goals may induce government to transfer resources to rural or low-income constituents.

The “merit good” argument is easier to justify for universal access to some types of infrastructure than to others. Water and sewerage, for example, involve large health externalities, and bringing these services to everyone can yield large social benefits. The provision of universal telecommunications service is more difficult to justify along these lines. Given the presence of a large proportion of the poor in the population, it can be argued that the government revenues are better spent on direct poverty alleviation programs. The issue of economies of scale points to the need for regulatory measures rather than universal service. It is true that the scale economy may take the form of an externality in the sense that the addition of new customers may lower the cost of supplying the service to the existing customers. But firms, which are capable of calculating costs at various levels of supply, can readily internalize such externalities. Nevertheless, perhaps because of its political appeal, most countries in the world pursue the goal of universal access to telecommunications services in some form.

Noll and Wallsten also argue that the case for subsidizing the incumbent wire-line carrier, whether privatized or state-owned, to achieve the universal service objective is weak since it offers relatively little service in the poor areas in the initial equilibrium. In the era of state-owned monopolies, the telecom provider had little incentive to invest in telecommunications services in general, as witnessed by the long waiting period to obtain connections and the poor quality of service following installation. Telephone penetration and usage were low, even considering developing countries’ low incomes, with service to poor and rural areas virtually absent.

India’s first official universal service program was introduced as a part of the 1994 National Telecom Policy. That policy set the goal of providing certain “basic telecom services at affordable and reasonable prices” to all citizens. This policy was revised under the New Telecom Policy of 1999, which made the provision of telecom services in remote rural areas a higher priority and set certain specific goals to be achieved by 2002. When those goals were not met, the Department of Telecommunications adopted two objectives: providing public telephones in villages and providing household telephones in rural areas. The first objective was given higher priority.

A universal service fund was created based on the implicit assumption that competition among private providers would not generate adequate service in rural areas. The government also took the view that it could minimize the magnitude of the subsidy necessary to provide universal service by opting for only one firm in any given area. The government finances the subsidy through two taxes. The first, the universal service levy, which goes into the Universal Service Fund (USF), is a tax of 5 percent of adjusted gross revenues on all telecommunications providers except “pure value added service providers” such as Internet service providers. The second consists of access deficit charges (ADCs), which are incorporated into interconnection charges and are paid directly to the incumbent state-owned enterprise Bharat Sanchar Nigam Limited (BSNL) to compensate it for providing below-cost service in rural areas.

The USF is intended to reimburse the net cost (total cost minus revenues) of providing rural telecom service. Telecommunications firms bid for subsidies to be received in return for providing service in rural areas in an auction. The firm bidding the lowest subsidy, subject to the bid being no higher than a benchmark established by information from the incumbent wire-line monopoly, is eligible to be reimbursed that amount from the fund. Any firm with a license to provide basic or cellular service in the relevant service area is eligible to bid. The winner receives a subsidy for seven years, subject to review after three years.
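
The selection rule amounts to a reverse auction with the benchmark acting as a reserve price. A minimal sketch of that rule (hypothetical bids and names, not the regulator’s actual procedure) is:

```python
def select_usf_winner(bids: dict, benchmark: float):
    """Return the firm asking for the smallest subsidy, provided its bid does
    not exceed the benchmark derived from the incumbent's cost data.
    Illustrative sketch only; eligibility rules are simplified."""
    eligible = {firm: bid for firm, bid in bids.items() if bid <= benchmark}
    if not eligible:
        return None, None                     # no admissible bid
    winner = min(eligible, key=eligible.get)  # lowest requested subsidy wins
    return winner, eligible[winner]           # winner is reimbursed its bid

# When only the incumbent bids, it can simply bid the benchmark itself.
print(select_usf_winner({"BSNL": 100.0}, benchmark=100.0))  # ('BSNL', 100.0)
```

This makes plain why, as described below, a lone bidder captures the maximum subsidy the government is prepared to pay.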

In nearly all service areas, only one firm bid: the incumbent BSNL. Not surprisingly, BSNL bid exactly the benchmark amount, which was the maximum subsidy the government was prepared to provide. The failure to create genuine competition for rural public service arose from three problems. First, the benchmark subsidy was based on data provided by BSNL, whose accounts are aggregated in a way that makes it impossible to separate costs of its various operations. Second, BSNL receives nearly all of the ADC cross-subsidies. The incumbent has potential gains from manipulating how cost information is aggregated across service categories and across high-cost and low-cost areas, because these data determine not only the benchmark subsidy but also the magnitude of the net deficit for all local access service. Allocating some ambiguous cost elements to subsidized areas can increase both the public telephone subsidy and the ADC subsidy. Third, the auction allowed only basic service operators already providing rural service in the area to bid. Given that the existing service was in any case quite limited, there was no advantage to choosing the provider from among the existing operators; the exclusion of firms not already present therefore deterred new entry into rural service without yielding any commensurate advantage from selecting an incumbent operator.

ADCs, the second major source of universal service funding, are paid by private entrants to the incumbent based on the premise that basic access providers face unprofitable social service obligations and should therefore be compensated for them by entrants who are free to seek out profitable customers. The assumption underlying the expectation of these losses is that regulated price ceilings on basic monthly access service charges applying to a large number of customers are below the cost of service.

The ADC fee structure is highly inefficient for two reasons. First, the price elasticity of demand is much greater for usage than for access. Hence, taxing usage to finance access substantially distorts the former for a relatively small gain in the latter. Second, applying the tax to only some calls creates another distortion. The regulatory authority had intended to impose ADC charges for five years and has recently reduced the fee so that it now represents about 10 percent of the sector’s revenue, rather than the 30 percent levied when it was first introduced.

Noll and Wallsten argue that India’s universal service policies may unfortunately have had the unintended consequences of deterring investment in precisely the areas they had hoped to target. The subsidies discourage competition, and the most efficient operators are taxed to support the least efficient operator. Fortunately, most of the telecommunications market in India is sufficiently competitive and dynamic that growth may not have been hampered significantly by these inefficient policies. Nonetheless, because telecommunications is such an important industry, it is crucial to minimize inefficiencies. Noll and Wallsten conclude that India’s best approach for achieving universal service is to ensure that its policies promote competition and do not favor any single firm over another.

Publication: National Council of Applied Economic Research and the Brookings Institution

India Policy Forum 2006/07 - Volume 3: Editors' Summary

This third issue of the India Policy Forum, edited by Suman Bery, Barry Bosworth, and Arvind Panagariya, covers India’s economic growth performance over the past quarter century and the impact of trade liberalization on the distribution of income and poverty; the distressingly poor performance of India’s elementary schools; the role of economic factors in the decline of the Indian birth rate; and the link between economic growth and environmental change, assessed through the interaction between local living standards and forest degradation in the Indian mid-Himalayas. The editors' summary appears below.


EDITORS' SUMMARY

This is the third volume of the India Policy Forum. The journal is jointly promoted by the National Council of Applied Economic Research (NCAER) in New Delhi and the Brookings Institution in Washington, D.C., with the objective of presenting high-quality empirical research on the major economic policy issues that confront contemporary India. The forum is supported by a distinguished advisory panel and a group of active researchers who participate in the review and discussion process and offer suggestions to the editors and the authors. Our objective is to make the policy discussion accessible to a broad nonspecialist audience inside and outside India. We also hope that it will assist in the development of a global network of scholars interested in India’s economic transformation.

The five individual papers included in this volume were selected by the editors and presented at a conference in Delhi on July 31 and August 1, 2006. In addition to the working sessions, Pranab Bardhan, a member of the advisory panel, gave a public address on the topic of “Governance Matters in Economic Reform.” The papers cover a diverse set of macro and microeconomic topics of relevance to policymakers. The first two papers focus on India’s economic growth performance over the past quarter century and the impact of trade liberalization on the distribution of income and poverty. The third paper highlights the distressingly poor performance of India’s elementary schools. The fourth paper examines the role of economic factors on the decline of the Indian birth rate. The last paper explores the link between economic growth and environmental change by assessing the interaction between local living standards and forest degradation in the Indian mid-Himalayas.

Sources of Growth in the Indian Economy, by Barry Bosworth, Susan M. Collins, and Arvind Virmani

During the first three decades of its development, the Indian economy grew at the so-called Hindu rate of growth of 3 to 4 percent. But India has now turned a corner, growing at a much higher rate of 6 to 7 percent during the last two decades. How has this transition been achieved and what implications does it have for the future transformation from a primarily rural and agricultural economy to a more modern one? These are the key questions Bosworth, Collins, and Virmani address in their paper.

Bosworth et al. observe that answering these questions requires analyses of both the evolution of productivity in the three key sectors—agriculture, industry and services—and the implications for aggregate productivity growth of the reallocation of resources out of agriculture to more productive activities in industry and services. Consequently, they use a growth accounting framework to examine empirically the acceleration in economic growth that India has achieved over the past two decades. The analysis focuses on two dimensions in which India’s experience differs from that of China and other parts of Asia. First, instead of strong growth in the manufacturing sector and in exports, India’s success reflects rapid expansion of service-producing industries. Second, it has been associated with relatively modest levels of human and physical capital accumulation.

The authors construct accounts at the sectoral level, and identify the residual gains from resource reallocation across sectors. They then undertake further analysis of the role of capital accumulation—providing estimates of the returns to schooling for human capital, and reporting on trends in sectoral saving and investment in physical capital. The paper concludes with a discussion of some of the important issues for India’s growth experience and prospects for the future.

Throughout the analysis, the authors focus on the quality of the available data. The updated growth accounts incorporate recent data revisions, some of which are quite large. Extensive examination of the relevant underlying data series helps to clarify a number of issues related to how the data are constructed. In particular, the discussion highlights challenges faced by the Indian statistical agencies in preparing measures of output and employment, primarily because much of the non-agricultural workforce operates outside of standard reporting programs. Thus, India’s national accounts depend on quinquennial surveys (conducted in 1973, 1983, 1987, 1993, 1999, and 2004) for information on households and small enterprises. Researchers should have a reasonable degree of confidence in the GDP estimates for benchmark years that incorporate results from the surveys. However, for non-benchmark years, annual output data are based on interpolation and extrapolation of the labor input data required to construct output measures for India’s large unorganized sector. The lack of reliable annual series makes it impossible to pin down the precise timing of India’s growth acceleration.

A key finding of the paper is that services have shown very substantial productivity growth since the early 1980s—a result in sharp contrast to that obtained for other countries at a similar stage of development. Productivity gains in agriculture and industry have been modest, which is consistent with both the findings of prior studies of India and those for other comparable countries such as Korea and Taiwan in the 1960s and 1970s. What distinguishes the Indian case is the relatively small output growth in industry: the sector has not played a major role in reallocating workers out of agriculture where they are underutilized.

Considerable attention has been focused on the role of services—especially high-tech services—as the source of India’s growth. The growth accounts attribute 1.3 percentage points of the 3.8 percent per annum growth in GDP per worker during 1980–2004 to growth in total services productivity (versus 0.7 percentage points each to agriculture and industry and 1 percentage point to reallocation).
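
The decomposition reported here is, in percentage points per year over 1980–2004,

$$ 3.8 \;\approx\; \underbrace{1.3}_{\text{services}} + \underbrace{0.7}_{\text{agriculture}} + \underbrace{0.7}_{\text{industry}} + \underbrace{1.0}_{\text{reallocation}}, $$

with the small gap between the components and the total due to rounding.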

However, the authors argue that the frequent emphasis on business services as the driving force behind India’s economic expansion may be overblown. Despite its extraordinary growth, the industry comprises only a small share of India’s GDP and employment. Business services provide jobs primarily for the relatively small proportion of the workforce that is highly educated, and recent increases in the returns to higher education suggest that high-skill services industries are encountering labor shortages. Furthermore, the strong gains in service sector TFP are puzzling. One might expect this in sub-sectors such as finance and business services, but these sectors remain small—just 17 percent of total services output in 2004. In fact, the growth acceleration is quite widely dispersed across service sub-sectors and rapid productivity growth seems unlikely in the biggest, which are trade, transportation and community services. Though difficult to verify, the authors express concern that an underestimate of services price inflation, particularly in the more traditional sectors, may imply an overestimate of output growth. The available measures of employment suggest a less dramatic acceleration of overall growth and a somewhat smaller focus on services.

In any case, India’s growth expansion is not creating adequate job growth for the bulk of the population that is not particularly well-educated. Thus, it is important that India broaden the base of the current expansion by promoting programs that would increase India’s attractiveness as a source of manufactured goods for the world market. Growth of the manufacturing sector would also provide a strong match for the skills of India’s workforce.

The paper also offers additional discussion of education and physical investment, both of which have an important bearing on growth and productivity. The accounting decomposition finds that the growth contribution from increases in education has been quite modest. The paper also examines the evolution of India’s saving behavior. The authors conclude that saving is not constraining India’s growth. However, there is room for increased public and foreign savings.

Pulling together the findings of their analysis, the authors draw a number of implications for India’s growth in the coming decade. A key message is that India needs to broaden the base of its economic growth through the expansion of the industrial sector—especially manufacturing. In this context, China provides a useful model, in its emphasis on exports of manufactured goods as a primary driver of growth.

To accomplish this, India needs to create a more attractive economic environment for doing business—a location able to compete effectively with China. This will require strengthening its infrastructure, remedying in particular a weak and unreliable power system and poor land transportation in many states. However, India already enjoys relatively good institutions and is strong in the areas of finance and business services.

The liberalization of the international trade regime is believed to reduce poverty through its impact on both efficiency and distribution. Expansion of trade lowers the cost of goods and services consumed by the poor, and freer trade should lead to an increased demand for, and higher returns to, unskilled labor in poor countries. However, those gains may not emerge if workers are not able to move to the sectors and areas of expanding demand. Thus, the ultimate effect of trade expansion on poverty is ambiguous and must be determined empirically.

Trade Liberalization, Labor-Market Institutions, and Poverty Reduction: Evidence from Indian States, by Rana Hasan, Devashish Mitra, and Beyza P. Ural

In their paper, Hasan, Mitra, and Ural examine the impact of India’s trade liberalization on poverty reduction using state and regional level data from the National Sample Survey (NSS) of households. Their measure of trade policy includes changes in both tariffs and non-tariff barriers (NTBs). They weight tariffs (and alternatively NTBs) by sectoral employment to arrive at a state-level measure of the trade exposure of the labor force, and they construct a second version that is based on a principal-components aggregation of the two policy instruments. They then allow the impact of trade policy on poverty to differ across states according to the flexibility of labor-market institutions. The classification of states with flexible and inflexible labor markets is based largely on a prior study by Besley and Burgess. To obtain a clearer picture of the effects on poverty, they also investigate the impact of another important, complementary component of economic reforms, namely product market deregulation, and look also at its interaction with labor-market institutions.
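
One plausible formal reading of the state-level exposure measure (an interpretation of the verbal description, not the authors’ exact notation) is

$$ \tau_{s,t} = \sum_{i} \omega_{i,s}\, \tau_{i,t}, \qquad \omega_{i,s} = \frac{L_{i,s}}{\sum_{j} L_{j,s}}, $$

where $\tau_{i,t}$ is the tariff (or, alternatively, the NTB coverage) for industry $i$ in year $t$ and $\omega_{i,s}$ is industry $i$'s share of state $s$'s employment; the principal-components version collapses the tariff-based and NTB-based indexes into a single score.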

The measures of poverty are drawn from the NSS surveys of 1987–88, 1993–94, and 1999–2000, and are largely based on a methodology developed by Deaton and Drèze and their approach for adjusting the poverty estimates for a change in the design of the household survey in 1999–2000. However, Hasan et al. also check the robustness of their results with two alternative measures: one based on the official Government of India (GOI) estimates of poverty, and a longer time series of state-level poverty rates created by Ozler, Datt, and Ravallion. Another innovation in the paper is that they allow the transmission of changes in protection rates to domestic prices to vary across states since distance and the quality of the transportation system should influence the extent of change in local prices.

Their principal finding is that states whose workers are more exposed to foreign competition tend to have lower rural, urban and overall poverty rates (and poverty gaps), and this beneficial effect of greater trade openness is more pronounced in states that have more flexible labor market institutions. Trade liberalization has led to poverty reduction to a greater degree in states that are more exposed to foreign competition by virtue of their industrial composition. The results hold, at varying strengths and significance, for overall, urban and rural poverty.

For example, controlling for state as well as time fixed effects, they conclude that the reduction in tariff rates over the 1990s was associated with a reduction in poverty rates ranging from 16 percent to 40 percent. Reductions in tariff rates also were associated with a decline of about 15 percent in urban poverty in states with flexible labor market institutions relative to other states. They find some evidence that industrial delicensing has had a more beneficial impact on poverty reduction in states with flexible labor institutions.

Hasan et al. contrast their evidence on the linkages between trade and poverty with a prior study by Petia Topalova, whose investigation utilized district-level data. Topalova concluded that trade liberalization slowed the pace of poverty reduction in rural districts, with the strength of this effect being inversely related to the flexibility of labor-market institutions. She found that the linkage between trade liberalization and poverty reduction was also negative in urban areas, but that result was not statistically significant. The authors provide some reasons for the differences. First, Topalova restricted her analysis to one measure of employment-weighted tariffs. The current paper includes NTBs and a principal-components aggregate of tariffs and NTBs. Second, there are significant differences between the two studies in the methods used to construct the overall employment-weighted indexes of average tariffs. Topalova included nontradable goods industries, which are explicitly excluded from the measures used in the current study. Third, the Topalova paper did not allow for the effects of changes in trade protection on domestic prices to vary across districts. Finally, the authors explored the robustness of their own results by incorporating a greater variety of poverty measures and by extending the analysis to the regional level.

India’s public elementary education system faces enormous problems. Although enrollments have increased, a recent survey of rural areas found shockingly low levels of learning achievement, confirming the cumulating evidence of a dysfunctional system. There are many other indicators of distress—high levels of dissatisfaction of parents and students with teachers, the massive and on-going shift into private schooling, and the unhappiness of the public sector teachers themselves.

Teacher Compensation: Can Decentralization to Local Bodies Take India from the Perfect Storm Through Troubled Waters to Clear Sailing?, by Lant Pritchett and Rinku Murgai

In their paper, Pritchett and Murgai argue that the current system of teacher compensation in the public sector is at the heart of many of these problems. They argue that the system of compensation within any high performance organization should be designed to attract, retain and motivate workers who, on a day-to-day basis, pursue the goals of the organization. All four elements of a system of compensation (durability of the employment relationship, structure of pay across states of the world, assignment of workers to tasks, and cash versus benefits) should work together towards this goal.

Their paper highlights the extraordinary extent to which India’s system of teacher compensation departs from this norm. While there are many variations across states, the current system can aptly be described as a combination of high pay and zero accountability. The paper documents four facts about the system of teacher compensation: (1) there is little or no ability to terminate the employment of teachers—for any cause; (2) the average pay of public sector teachers is very high relative to alternatives (both private teaching and other private sector jobs); (3) the degree of overpayment is higher for public sector teachers at the early stages of a career; and (4) the pay of public sector teachers has very little variance even potentially related to performance—much less than either private sector teachers or other private sector salaried workers.

Each of these elements of the system of compensation reinforces the lack of accountability. There is nothing in the present system to attract people well matched to teaching, to retain the best and most committed teachers, or to motivate the performance of good teachers (or, for that matter, to prevent good teachers from becoming disillusioned, cynical, and embittered while staying on until they are 60 years old). Moreover, the institutional context of basic schooling—all the other relationships of accountability—is also weak.

Pritchett and Murgai argue that this system of compensation plays a large role in producing the current “perfect storm” in public schooling: (a) the learning achievement of students is low, (b) absenteeism of teachers is very high, (c) the treatment by teachers of students is often abysmal, (d) parents and students are dissatisfied with government schools, and (e) families are voting with their feet and pocketbooks to move their children into private schools. Perhaps worst of all, the potentially good teachers within the public system are disenchanted, overburdened, and feel disrespected by parents and managements. The authors argue that any reform of teacher compensation needs to be pro-teacher in contrast with the current system which is dramatically anti-teacher.

In one study of schools in New Delhi, teachers in government schools were compensated at a rate seven times that of teachers in unregistered schools, they were present less than half the time, and their students consistently scored far below students in the unregistered schools in all subject areas. Parents and students expressed higher levels of displeasure with teacher performance in the public schools. Even so, government teachers were dissatisfied with nearly every element of their jobs.

While accepting the common view that there is no possibility of significant reform of the compensation system under the present circumstances, Pritchett and Murgai argue that the devolution of education to Panchayati Raj Institutions (PRIs) provides a unique opportunity to restructure the system to be consistent with an accountable and performance-oriented public sector. Decentralization to PRIs, if done well, has the potential to break the political impetus behind business as usual by combining a reallocation of functions across tiers of government (states and PRIs) with allowing PRIs to develop systems of compensation that are aligned with the realities of public employment and the particularities of the practice of teaching.

Pritchett and Murgai suggest that the development of a future cadre of teachers should take place within a new system under district control. They propose a system with three phases for teachers’ careers, ranging from an initial apprentice phase up to a masters level, with each stage corresponding to increased pay and prestige. Promotion from one phase to another would be based on performance reviews with input from the local school, peers, and technical reviews. The objective is to develop a professional teacher cadre at the district level, but to leave control of school administration and the actual hiring of teachers from the eligible pool with the local authorities.

Rapid population growth has been a central concern of policymakers in developing countries with large populations, such as India and China. Reductions in fertility have been seen as an important means to achieve rapid and sustained economic growth, and many countries have adopted policies ranging from offering incentives for fertility reduction to outright restrictions on family size. The advocates of such direct measures to reduce fertility are skeptical that economic growth alone can deliver the necessary reduction in fertility without at least a major expansion of education among women.

At one level, the controversy over the positive role of economic growth in driving down fertility would seem surprising. After all, richer, more developed economies have uniformly lower fertility rates than do poorer, less developed ones. Over time many formerly poor countries have become richer and simultaneously achieved sustained fertility declines.

But there also exist examples and patterns supporting the view that fertility responds to declining mortality and a transition in cultural perspective that need not be related to growth. For example, we have countries such as Cuba, Costa Rica, and Sri Lanka with traditionally high levels of education and health and correspondingly low levels of fertility. Likewise, China has lowered fertility through direct intervention at a relatively low level of income. There also exists evidence that the timing of a first sustained decline in fertility is not connected to a particular threshold level of economic development.

Does Economic Growth Reduce Fertility? Rural India 1971–99, by Andrew D. Foster and Mark R. Rosenzweig

In their paper, Andrew Foster and Mark Rosenzweig employ a newly available panel data set to assess the impact of economic factors on fertility. The data set offers a representative sample of rural India over the period 1971–99, and it allows an examination of the main factors responsible for the rural fertility decline that occurred in India in the 1980s and 1990s. The authors first construct a simple dynamic model of fertility choice that incorporates the opportunity cost of time, the trade-off between investments in the human capital of children and family size (the so-called quality-quantity trade-off), and increased access to health and family planning services as determinants of fertility. The model yields testable hypotheses relating the fertility decision to its various determinants.

The authors then go on to use the data set to test the hypotheses so derived. A key feature of the data is that it links the households across different rounds of the survey. This permits the elimination of the influence of time-persistent cultural and preference differences across Indian states and households that may be correlated with economic change. When these cultural and preference differences are ignored, the empirical results lead to the conclusion that neither agricultural productivity growth nor changes in the value of time matter for fertility change. Cross-sectional variations in fertility decisions depend only on the spatial differences in maternal education. This analysis supports the advocates of direct intervention to influence fertility decisions.

But once the authors take the cultural and preference differences into account, the results change dramatically. The corrected results show that increases in the opportunity cost of women’s time, as reflected in female wages and increased investments in child schooling, explain the lion’s share of the fertility decline. The results leave very little role for parental schooling, male or female.

The results show that the areas of high agricultural productivity growth experience not only declines in fertility but also increases in the schooling of children and in the time devoted by married women to non-household work. The quantitative estimates suggest that aggregate wage changes, dominated by increases in the value of female wages, explain 15 percent of the decline in fertility over the 1982–99 period. In combination, changes in agricultural productivity and agricultural wage rates explain fully 61 percent of the fertility decline. Health centers are found to have had a significant effect on fertility as well, but aggregate increases in the diffusion of health centers in villages explain only 3.4 percent of the fall, because there was little change in the distribution of such centers during the period. The results thus suggest that the process of economic growth has had a major impact on fertility in India over the last two decades. The authors conclude that, given sustained economic growth that continues to raise wages and increase returns to human capital, the fall in fertility in India will continue for the foreseeable future.

Managing the Environmental Consequences of Growth: Forest Degradation in the Indian mid-Himalayas, by Jean-Marie Baland, Pranab Bardhan, Sanghamitra Das, Dilip Mookherjee, and Rinki Sarkar

Given their enormous populations, the rapid, sustained growth of India and China has heightened concerns about the environmental consequences of such growth. Yet there is no accepted professional consensus on the nature and intensity of these links. For some economists, growth is seen as continuing to raise the demand for the earth’s energy resources. For others, poverty is seen as the root cause, implying that growth is itself at least part of the solution. The so-called ‘environmental Kuznets curve’ hypothesis represents an intermediate view: economic development may initially aggravate environmental problems, but beyond a threshold of economic development environmental conditions improve. Yet another viewpoint stresses the importance of local institutions such as monitoring systems and community property rights. Particularly where deforestation is concerned, it is argued that assigning local communities effective control of forest resources would substantially reduce environmental pressures, leaving little need for external policy interventions.

Despite these different perspectives, there is remarkably little systematic micro-empirical evidence on their relative validity. Efforts to test these hypotheses have been cast mainly on the basis of macro cross-country regressions, with only a few recent efforts to use micro evidence concerning behavior of households and local institutions governing use of environmental resources. The paper by Baland and others attempts to fill this gap through a careful analysis of the determinants of firewood and fodder collection, the chief causes of forest degradation in the mid-Himalayan region of India. The study seeks to predict the deforestation implications of future growth in the region, assess the likely impact on future livelihoods of local residents, and evaluate some specific policies to arrest forest degradation.

The analysis is based on a stratified random sample of 3,291 households in 165 mid-Himalayan villages in the Indian states of Uttaranchal (recently renamed Uttarakhand) and Himachal Pradesh, complemented by detailed measurement of forest conditions in surrounding areas used for collection of firewood and for livestock grazing. Prior accounts of the state of these forests suggest significant externality problems at both local and transnational levels. The local externality problem arises from the dependence of the livelihood of local inhabitants on neighboring forests. The forests are important for the collection of firewood (the principal source of household energy), fodder for livestock rearing, leaf-litter for generation of organic manure, timber for house construction, and collection of herbs and vegetables. Sustainability of the Himalayan forest stock also has significant implications for the overall ecological balance of the South Asian region. The Himalayan range is among the most unstable of the world’s mountains and therefore inherently susceptible to natural calamities. There is evidence that deforestation aggravates the ravaging effects of regular earthquakes, and induces more landslides and floods. This affects the Ganges and Brahmaputra river basins, contributing to siltation and floods as far away as Bangladesh.

On the basis of contemporary recall, the paper finds considerable evidence of forest degradation (though not deforestation) over the last quarter century in forest areas accessed by villagers. Such degradation is evident in the presence of over-lopped trees and low rates of forest regeneration, and a 60 percent increase in the average time needed to collect a bundle of firewood—approximately six additional hours per week per household. Against this background, the first part of the paper assesses the likely impact of growth in household incomes and assets on firewood collection. Such growth gives rise both to wealth effects (which raise collections by increasing household energy demand) and to substitution effects (which lower collections by raising the value of households’ time; almost all firewood is collected directly by the consuming households, with negligible amounts purchased in markets). The econometric analysis shows that the substitution and wealth effects offset each other, so that firewood and fodder collection is inelastic with respect to improvements in living standards. The paper finds no evidence for any effects of poverty or growth on forest pressure, nor any Kuznets-curve patterns.

In contrast, the effects of growth in population are likely to be adverse: rising population will cause a proportional rise in collections at the level of the village, while leaving per capita collections almost unchanged. To the extent that household fragmentation induces a shift to smaller household sizes, the resulting loss of economies of scale within households will raise per capita collections even further. Hence anthropogenic pressures on forests are likely to be aggravated by demographic changes, rather than economic growth. Unless there is substantial migration out of the Himalayan villages, the pressure on forests is likely to continue to grow in the future.

The paper next estimates the effect of such further projected forest degradation on the future livelihoods of affected villagers, mainly via a further increase in collection times for firewood. This is done by estimating the effects of a one-hour increase in collection times, a plausible projection for the next decade or two. The welfare impact of this externality turns out to be surprisingly low: the loss is less than 1 percent of household income across the entire spectrum of households. Moreover, there are no significant effects on child labor, nor on the total labor hours worked by adults. This indicates that the magnitude of the local externality involved in use of the forests is negligible, providing a possible explanation for the lack of effort among local communities to conserve neighboring forests. The argument for external policy interventions then rests on the larger ecological effects of forest degradation, which are beyond the scope of the paper.

Should the ecological effects demand corrective action, the paper surveys the available policy options. The authors find that the principal fuel alternative to firewood, somewhat surprisingly, is LPG (liquefied petroleum gas); kerosene and electricity are still secondary (despite the region’s abundant hydropower potential). Household firewood use exhibited considerable substitution with respect to the price and accessibility of LPG cylinders, suggesting that LPG subsidies could induce households to reduce their dependence on forests for firewood. The authors estimate the effectiveness and cost of a Rs 100 and a Rs 200 subsidy for each gas cylinder. The latter is expected to induce a rise in households using LPG from 7 percent to 78 percent, reduce firewood use by 44 percent, and cost Rs 120,000 per village annually (about 4 percent of annual consumption expenditure). A Rs 100 subsidy per cylinder would be half as effective in reducing wood consumption, but would have a substantially lower fiscal cost (Rs 17,000 per village annually, approximately 0.5 percent of annual consumption).
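
The relative cost-effectiveness of the two subsidy levels can be checked with simple arithmetic on the figures reported above. The sketch below (Python) takes "half as effective" literally as a 22 percent reduction in firewood use; it is an illustrative calculation, not part of the authors' analysis.

```python
# Rough cost-effectiveness comparison of the two LPG subsidy levels,
# using the per-village figures reported in the summary.

subsidies = {
    # subsidy per cylinder: (firewood reduction, annual fiscal cost per village, Rs)
    "Rs 200": (0.44, 120_000),  # 44 percent reduction, ~4 percent of consumption
    "Rs 100": (0.22, 17_000),   # "half as effective", ~0.5 percent of consumption
}

for label, (reduction, cost) in subsidies.items():
    # fiscal cost per percentage point of firewood reduction
    cost_per_point = cost / (reduction * 100)
    print(f"{label} per cylinder: {reduction:.0%} less firewood, "
          f"Rs {cost:,} per village per year, "
          f"Rs {cost_per_point:,.0f} per percentage point of reduction")
```

On these numbers the smaller subsidy delivers each percentage point of firewood reduction at a much lower fiscal cost, which is consistent with the summary's observation that the Rs 100 option is substantially cheaper.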

The econometric estimates also show that firewood use was moderated when local forests were managed by the local community (van panchayats) in Uttaranchal. However, this effect is limited to those community-managed forests that were judged by local villagers to be moderately or fairly effectively administered, which constituted only half of all van panchayat forests. It is not clear how the government can induce local communities to take the initiative to organize themselves to manage the neighboring forests effectively, when they have not done so in the past. Moreover, the authors conclude that, even if all state-protected forests could be converted to van panchayat forests, firewood use would fall by only 20 percent, which is comparable to the effect of a Rs 100 subsidy per LPG cylinder.

Publication: National Council of Applied Economic Research and the Brookings Institution

India Policy Forum 2007/08 - Volume 4: Editors' Summary

The fourth volume of the India Policy Forum features papers on schooling inequality, the duration of microfinance groups, sub-national fiscal flows, and reform of the power sector, land policies, and higher education. Suman Bery, Barry Bosworth, and Arvind Panagariya edited the volume. The editors' summary appears below.


EDITORS' SUMMARY

The India Policy Forum held its fourth conference on July 17 and 18 of 2007 in New Delhi. This issue of the journal contains the papers and the discussions presented at the conference. The first paper examines the fiscal relationship between the Central Government and the states of India. The next two papers focus on the Indian educational system, specifically the social implications of government policies governing access to primary and secondary schools, and the challenges facing the country’s system of higher education. The fourth paper evaluates the performance of an important component of India’s microfinance system. Finally, the fifth paper provides an assessment of recent efforts to reform the distribution segment of the electric power industry. In addition to the working sessions of the conference, T.N. Srinivasan of Yale University, a member of the advisory panel, delivered a public lecture on “Economic Reforms, External Opening and Growth: China and India.”

The Political Economy of the Indian Fiscal Federation

Despite massive unfulfilled need and repeated rhetorical commitment to increase public spending, public expenditure in India on education and health has never exceeded 3.3 and 1.3 percent of GDP, respectively. Implementing such spending, and to a large degree paying for it, is the responsibility of India’s states. In her paper, Indira Rajaraman argues that an important explanation for this persistently low level of spending lies in the nature of fiscal transfer arrangements in India’s federal structure, particularly the unpredictable and discretionary nature of significant components of these transfers.

The assignment of expenditure responsibilities and revenue rights in India gives rise to a vertical fiscal gap at the sub-national (state) level. The closure of this gap is provided for by the appointment, every five years, of a constitutional body called the Finance Commission. The report of each Commission, once accepted by the government, prospectively defines the formula for statutory flows from the national government (the “Center”) for the succeeding quinquennium. Such statutory flows from the Center to the states are predictable in relation to the underlying tax base, are pre-defined both in aggregate and in their distribution between states, and are unconditional. In Rajaraman’s view, these are all desirable properties to permit states to make multi-year expenditure commitments of the kind needed for provision of primary education and health.

However, such statutory flows represent only part of the story. In the years before 2005, statutory flows never exceeded 60 percent of the total flow. The remaining Center–state transfers took place under a range of nonstatutory mechanisms, largely under the control of an extra-constitutional body called the Planning Commission, and were unpredictable in aggregate from year to year.

While initially entirely discretionary, in 1969–70 the inter-state allocation of a portion of these “Plan” transfers was in turn subjected to a periodically revised formula (commonly referred to as the “Gadgil Formula”). However, this formulaic distribution was accompanied by a shift from a full grant basis to one comprising 70 percent loans and 30 percent grant. The shift from grants to borrowed funds implicitly tilted incentives away from state-level spending on health and education, sectors unable to bear the ensuing interest burden. This disincentive, associated with the loan component, led to a gradual reduction in the share of this formulaic component in overall non-statutory flows.

Against this policy and institutional background, the paper performs three empirical exercises examining the year-to-year changes in the share of aggregate Central grants to the states that was not subject to formula and was therefore open to bargaining by the states. The first exercise quantifies this non-formulaic bargaining margin within aggregate flows for each year of the period 1951–2007, and estimates that it varied inversely with an index of political fractionalization in the federation: as fractionalization increased, the formulaic share rose. The system thus fluctuated in response to changes in the political situation, an instability ill-suited to the funding requirements of basic developmental services.

The second exercise tests whether the control over aggregate state borrowing from the financial markets (constitutionally vested at the national level, and an important force for macroeconomic stability) represents opportunistic behavior influenced by the national electoral cycle. The difference between the consolidated fiscal imbalance, or deficit (aggregated across national and state levels), and the imbalance for the Central Government alone provides a proxy for the extent of sub-national borrowing from financial markets.

The consolidated fiscal imbalance is shown to have risen in years preceding Parliamentary elections. This is in contrast to the fiscal imbalance at the Center, which was not dictated by the electoral cycle. Taken together, the two sets of specifications strongly suggest that aggregate Central limits on state borrowing from financial markets were raised in pre-election years.
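
A minimal sketch of this exercise, under hypothetical data: the borrowing proxy is the gap between the consolidated and Central deficits, and each year is flagged according to whether it precedes a Parliamentary election. The column names and all figures below are illustrative, not taken from the paper.

```python
# Illustrative construction of the sub-national borrowing proxy:
# proxy = consolidated fiscal deficit (Center + states) - Central deficit.
# Figures and election years are made up for illustration.

import pandas as pd

df = pd.DataFrame({
    "year": [2000, 2001, 2002, 2003, 2004],
    "consolidated_deficit_pct_gdp": [9.2, 9.8, 9.6, 10.1, 8.9],
    "central_deficit_pct_gdp": [5.6, 6.0, 5.9, 5.8, 5.2],
    "parliamentary_election_year": [0, 0, 0, 0, 1],
})

# Proxy for aggregate state borrowing from financial markets
df["state_borrowing_proxy"] = (
    df["consolidated_deficit_pct_gdp"] - df["central_deficit_pct_gdp"]
)

# Flag years immediately preceding a Parliamentary election
df["pre_election_year"] = (
    df["parliamentary_election_year"].shift(-1).fillna(0).astype(int)
)

# Compare the average proxy in pre-election years versus other years
print(df.groupby("pre_election_year")["state_borrowing_proxy"].mean())
```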

This inter-temporal variability, together with the spatial distortions implicit in the opaque system for allocating borrowing entitlements across the states in all years, further adds to the fiscal uncertainty faced by states, and inhibits orderly and sustained planning.

The third empirical exercise deals with a major initiative that commenced in 2005 to reduce the accumulated debt burden of the states. The proposal to reduce this debt originated from the Finance Commission, and addressed debt owed by the states to the Center arising from the loan component of Plan transfers mentioned earlier. The debt relief was to be granted in exchange for promises of fiscal adjustment.

The Finance Commission took the view, later endorsed by Parliament, that the differences in initial conditions across states should be taken into account in setting such conditionality. However, the conditionality actually imposed by executive action at the Center envisaged a common terminal-year deficit level for all states, implying a difference in the magnitude of adjustment that varies by as much as 10 percent of state GDP, with presumed adverse consequences, once again, for the stable provision of essential state-level developmental services.

Starting in 2005–06, there has been a regime change: direct Central lending to states for Plan expenditure has been replaced by a more inflexible system of caps on state borrowing, imposed as part of the conditionality for the above-mentioned debt concessions. Whether the kinds of uncertainties and patterns previously seen in aggregate borrowing limits on states persist under the new regime will therefore not be visible for a while longer.

Rajaraman further notes that there has been a fall over the last ten years in the share of state expenditure in overall public spending on health and education because of the huge new Central expenditures on primary education and mid-day meals in schools, which are not routed through states. Thus, the policy response has been to alter the pattern of functional responsibility rather than to restore to the states their constitutionally assigned functions and to correct the adverse incentives that became embedded in the de facto structure of sub-national funding.

Finally, Rajaraman also uses the empirical exercises to draw implications for the nature of dialogue between the Center and the states regarding fiscal matters. She notes the absence of a dispute-resolution forum where the de facto functioning of fiscal arrangements can be subjected to continual examination and monitoring by all partners to the federation. Within such a forum, major issues spanning Central transfers, revenue rights, expenditure externalities, and unfunded mandates could be resolved in a participatory framework. The need for such a forum is likely to become even more pressing as India moves to an integrated nation-wide goods and services tax (GST), under which the direct role of the states in revenue collection would be further restricted and a broad review of fiscal federal arrangements would become all the more urgent.

Can Schooling Policies Affect Schooling Inequality? An Empirical Evaluation of School Location Policies in India

Over the past several decades, a primary tool used by the Government of India to improve school enrollments, particularly those of the Scheduled Castes (SCs), has been the expansion of access to schools. To this end, the government has long embraced the objective of providing a school within easy walking distance from each rural household. In her paper, Anjini Kochar argues that in implementing this policy, scant attention was paid to the fact that targeting access to schools as a primary objective may constrain the government in addressing other critical aspects of schools, particularly those related to school quality. This is because decisions regarding the location of schools determine more than just access to schools; they combine with the residential structure of a society to define the school community, and hence school characteristics known to affect schooling attainment.

According to Kochar, it is the nature of residential communities in rural India that makes this trade-off between access and quality likely. Rural India resides in habitations—distinct residential settlements within a village—which vary in size but are, on average, fairly small. Because habitations are generally organized along caste lines, the rural economy is characterized by a considerable degree of caste-based segregation. The stated policy objective of providing a school within easy walking distance of each household, in conjunction with the geographic distances between habitations, therefore requires the government to provide schools to relatively small habitations and frequently results in multiple schools within a village.

Therefore, the paper argues that the current school location policy does not permit an optimal allocation of schools based upon enrollment or size. Because school enrollment determines the availability of inputs such as the number of teachers, there is a corresponding variation in the number of teachers per school. To the extent that this attribute of schools affects schooling attainment, Kochar argues that the policy generates schooling inequality across regions, with schools in smaller habitations being of generally lower quality than those in larger habitations.

School location policies also affect the caste composition of the student population. When schools are provided in SC habitations as well as in the other habitations of a village, the residential segregation that characterizes the village gets translated into a corresponding system of de facto schooling segregation. The corresponding difference in the caste composition of students across village schools is also likely to affect schooling attainment. 

The paper explores these hypotheses empirically, examining the relationship between school enrollments and availability of schools within habitations, as well as the effect of the number of teachers and the prevalence of schooling segregation. To identify the effect of these school attributes, Kochar uses the policy rules that determine whether a school can be placed in a habitation and the number of teachers assigned to a school. These rules are specified at the district level, and are implemented by the government based on district level data on habitations collected in the All India Education Surveys (AIES). The paper uses this same data that guides policy decisions, and relates it to household data from the Government of India’s National Sample Surveys. The use of policy rules specific to the attributes in question, and the availability of the data that guides current policy decisions, provides a compelling source of identification. To assess the effects of school segregation, Kochar uses the insight that schooling segregation exists only when schools are provided in the SCs/STs (Scheduled Castes/Scheduled Tribes) habitations. Because the AIES data also provide information on the size distribution of SC/ST habitations, it is possible to identify the probability of schools being located in SC/ST habitations (a proxy for schooling segregation) separately from the overall effect of school availability.

The paper has two principal findings. First, based on the size distribution of habitations within a district, the author finds that the current policy rules do affect access, but they also affect teacher numbers and schooling segregation. The regression analysis shows that schools with two or fewer teachers experience reduced enrollments. The results on teacher availability suggest that the decision to provide schools even to relatively small habitations generates a source of schooling inequality: children who reside in small habitations with schools attend schools of poorer quality than those who reside in larger habitations.

Second, the author finds that school location policies also perpetuate caste-based inequalities. Since the SC habitations are generally smaller than others, this means that SC schools are of lower quality, as measured in terms of the availability of teachers. The empirical results show an asymmetric effect of schooling segregation by caste: children of upper castes benefit significantly while segregation has little effect on the SCs. The benefits of living in districts with widespread access to schools therefore vary by caste.

The results of the paper suggest that improvements in school quality cannot be effected without reconsidering the government’s school location policies. Kochar admits, however, that improving school quality along the dimensions considered in the paper is no easy task. She suggests an alternative policy that consolidates habitation schools to provide one school in each village, which would enable an optimal number of teachers in each school and thereby improve schooling attainment. While the greater distance to school implied by such a consolidation, particularly for children from the SC/ST habitations, may reduce access, the paper argues that the savings generated by the consolidation could be used to implement a system of cash transfers to children from the SCs and STs conditional on their school attendance records. The positive effects from having more teachers per school and from economies of scale are enough to provide cause for a reconsideration of school location policy in India.

Mortgaging the Future? Indian Higher Education

The higher education system in India also faces troubling distortions and suboptimal outcomes. In their paper, Devesh Kapur and Pratap Bhanu Mehta argue that the vast majority of institutions of higher learning are incapable of producing students with adequate skills and knowledge. Attendance does not serve as a screening system for the vast bulk of students, nor does it prepare students to be productive and responsible citizens. The current system is highly centralized and politicized, and militates against the production of general intellectual virtues. It may come as no surprise, then, that the last few years have witnessed a rapid rise in skill premiums in India despite the country’s huge population.
 
Kapur and Mehta maintain that the poor state of the sector and the recent rise in skill premiums can be largely explained by the regulatory bottlenecks facing Indian higher education. Despite impressive reforms elsewhere, Indian higher education remains one of the last bastions of the “license control raj”—with troubling implications for India’s future. The paper argues that the result is a state of crisis in Indian higher education notwithstanding the success of a few professional schools. That the system produces a noticeable number of high-quality students owes largely to Darwinian selection mechanisms and very little to pedagogic achievement.

According to the authors, the most acute weakness plaguing India’s higher education system is a crisis of governance, both of the system as a whole and of individual institutions. Because the prevailing political ideological climate views elite institutions as anti-democratic, there is a natural impulse in political circles to influence admissions policies, internal organization, and the structure of courses and funding. The paper provides data to show that there has been a massive increase in both private higher education and the flight of elites to foreign educational institutions. However, the private sector also suffers from regulatory obstacles and governance weaknesses, raising doubts as to its ability to address the huge latent demand for quality higher education in the country.
 
From the perspective of the three key suppliers of Indian higher education—markets, the state, and civil society (philanthropy)—the authors elaborate on six significant distortions. First, the process of regulatory approvals diminishes the capacity of private investment to respond to market needs. Second, the regulatory process produces an adverse selection in the kind of entrepreneurs that invest since the success of a project depends less upon the pedagogic design of the project and more on the ability to manipulate the regulatory system. Third, there are significant market failures in acquiring physical assets that are necessary for educational institutions, especially land. Fourth, regulatory approvals are extremely rigid with regard to infrastructure requirements (irrespective of costs or location) and academic conformity to centrally mandated course outlines, degree structures, and admissions policies. Fifth, a key element of a well-functioning market—competition—is distorted by restricting foreign universities from setting up campuses in India, which limits benchmarking to global standards. Sixth, another central element of a well-functioning market, informational transparency, is woefully inadequate.

The university system in India is the collateral damage of Indian politics. As the paper demonstrates, the dismal educational outcomes are not the result of limited resources. For politicians, the benefits of the license-control raj extend beyond old-fashioned rent seeking (manipulating contracts, appointments, admissions, and grades in government-run colleges and universities) to the use of higher education for vote-bank and partisan politics and as a source of new entrepreneurial activity.

The authors identify three key variables that help to clarify the political economy of India’s higher education: the structure of inequality in India, the principal cleavages in Indian politics, and the nature of the Indian state. India is an outlier in the extreme degree of educational inequality, which has led to a populist redistributive backlash. However, the specific redistributive mechanisms are conditioned by the principal cleavages in Indian politics and the nature of the Indian state. The growth of identity politics has sharply enhanced political mobilization around two key cleavages in Indian society: caste and religion. Consequently, redistributive measures follow these two cleavages rather than other possibilities such as income, region (urban–rural), or gender. Thus, the focus on redistribution helps explain why Indian politicians have obsessed over reservations (that is, quota-based affirmative action) in elite institutions of higher education rather than improvements in the quality of primary and secondary schooling, and the thousands of colleges of abysmal quality.

The consequences of the preceding political economy are onerous. One, a diminished signaling effect of higher education; two, an ideological entrapment between what the authors call half-baked socialism and half-baked capitalism, with the benefits of neither; and three, a pathology of statism wherein higher education policy is driven foremost by the state’s own interest (or perhaps its own ideological whims). Much of what passes for education policy is a product of the one overriding commitment of the education bureaucracy—namely, state control in as many ways as possible.

The paper also highlights the role of the Indian judiciary in higher education reforms, arguing that it has done as much to confuse as to clarify the existing regulatory framework. Although there has been a distinct shift in the Supreme Court’s stance in the past decade, its primary response does not always center on what will enable the education system to adequately respond to demands. Rather, it has uneasily and often confusingly attempted to reconcile disparate principles, be it the dichotomy between education being a charitable or commercial enterprise, or the inherent tension between institutional autonomy and equitable access in higher education.

Kapur and Mehta conclude with a few options for change moving forward. Market failure in higher education means that substantial public investment will continue to be critical in this sector. However, since there are few clear analytical criteria to address the central question of what is “good” higher education, the paper argues that a regulatory system that emphasizes diversity, flexibility, and experimentation is in the long run most likely to succeed. Such a system will also need a different conception of accountability than the one currently prevailing in the Indian system, where resource allocation decisions are centralized to an extreme degree in the Planning Commission, the Ministry of Human Resource Development, and the University Grants Commission. Its quality depends entirely upon the informational resources of a very small group of decision makers and presumes an omniscience that few decision makers can have. Instead India needs to move to a regulatory system with increased horizontal accountability that empowers students to make better informed decisions. Finally, Indian policy makers need to recognize that the competition for talent is now global and that only a combination of a flexible and supple state system that enlists the energies of the market as well as a committed non-profit sector will be able to meet the challenges and the vast scale of demand for higher education in India.

Microfinance Lifespans: A Study of Attrition and Exclusion in Self-Help Groups in India

The expansion of rural credit through the “formal” financial system has been a major goal of Indian policy since independence. While a number of initiatives (including nationalization of the country’s major commercial banks) have been taken over the years, success of these initiatives has been only partial.

In 1992, the Reserve Bank of India (RBI), India’s central bank and banking regulator, issued guidelines to the public sector commercial banks (which still dominate Indian banking) encouraging them to lend to small preformed groups called “self-help groups” (SHGs). These groups are almost always composed of rural women, and are often assisted by non-governmental organizations (NGOs) in their formation and their subsequent growth and development.

While the scheme, sometimes called the “commercial bank–SHG linkage scheme”, was in part inspired by the success of Bangladesh’s Grameen Bank in sustainably widening access to financial services in that country, the Indian SHG scheme differs in several respects from the Bangladesh model, and therefore needs to be assessed in its own right. One such difference is the provision of subsidized refinancing to the commercial banks by the National Bank for Agriculture and Rural Development (NABARD), a publicly owned affiliate of the RBI. The RBI reports that over 2.5 million of such groups have borrowed from commercial banks since 1992, and loan disbursements by commercial banks to SHGs were 29 percent of all direct bank credit to small farmers in 2004–05.

However, in spite of the growing importance of SHGs as a source of credit to the poor, there is little systematic evidence on their internal functioning. The paper by Baland and Somanathan attempts to fill this informational gap by using survey data on SHGs created during the period 1998–2006. It does so by describing the survival of groups and members within groups, documenting group activities, and estimating the determinants of group and member duration using an econometric survival model.
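
The duration analysis described here can be pictured with a small sketch. The example below fits a proportional hazards model of group survival on a few covariates of the kind discussed in the paper, using the lifelines library; the data frame, column names, and tiny sample are invented for illustration, and the specification is not the authors' own.

```python
# Minimal sketch of a duration (survival) model for SHG lifespans,
# assuming one row per group. Data and column names are hypothetical.

import pandas as pd
from lifelines import CoxPHFitter

groups = pd.DataFrame({
    "duration_months":     [14, 36, 60, 22, 48, 72, 9, 30],   # observed lifespan
    "dissolved":           [1, 1, 0, 1, 0, 0, 1, 1],          # 1 = group ended, 0 = still active (censored)
    "max_education_yrs":   [5, 8, 10, 4, 7, 12, 3, 6],        # highest education attained in group
    "other_shgs_nearby":   [0, 1, 1, 0, 1, 1, 0, 0],          # presence of other SHGs in the area
    "caste_heterogeneity": [0.6, 0.3, 0.2, 0.7, 0.4, 0.1, 0.8, 0.5],  # fractionalization index
})

# Small ridge penalty only because this illustrative sample is tiny
cph = CoxPHFitter(penalizer=0.1)
cph.fit(groups, duration_col="duration_months", event_col="dissolved")

# Hazard ratios below 1 mean the covariate is associated with longer-lived groups
cph.print_summary()
```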

The data come from a survey of 1,102 rural SHGs and the 16,800 women who were members of these groups at some point during the period 1998–2006. It considers all groups formed by PRADAN (an NGO that has actively promoted SHGs since the start of the NABARD program) in the districts of Keonjhar and Mayurbhanj in northern Orissa, and the Raigarh district in the newly formed state of Chhattisgarh in central India. Although the group members are engaged in a variety of collective activities, saving and credit appear to be the most important. Almost all groups surveyed had made small loans to their members and 68 percent of them had received at least one loan from a commercial bank.

For those members who do borrow from the group, the average loan, provided from internal group funds, is Rs 2,200 per year. For groups with at least one bank linkage, 83 percent of members in the group received some part of this loan, and the average amount received by these members is Rs 2,189 per year. Although loan sizes provided by some specialized microfinance institutions are often larger, these SHG loans are sizable as a fraction of local earnings; for women who received both group loans and bank loans, the combined amount corresponds to roughly two months of labor earnings at the minimum wage in these areas.

The group members in many SHGs appear to be collectively involved in activities not directly related to credit. About 10 percent of the surveyed groups are involved in the preparation of school meals, 3 percent administer state programs that distribute subsidized foodgrains, and about half of them get involved in family or village conflicts or help members during periods of personal distress. These groups therefore seem to play a role in promoting solidarity networks in the community.
 
The paper then estimates models of both group and member duration. It finds that the factors behind group survival are quite different from those affecting member longevity. With respect to group survival, the highest level of education attained within the group matters, perhaps because some educated members are needed to facilitate transactions and ensure that group accounts are accurate. The presence of other SHGs in the area also has a positive effect on group duration. It may be that a dense cluster of groups allows for the sharing of costs, provides each group with ideas for successful activities, or simply instills in members the desire to survive, compete, and be part of a larger network.

Drawing on a large literature pointing to the importance of social heterogeneity in collective action, the paper then explores whether such heterogeneity matters for the average duration of groups and of the members within groups. For each member surveyed, the paper records both the individual caste group (or jati) and the “official” caste category to which she belongs—ST, SC, Other Backward Castes (OBC), and a residual category often termed General Castes that the paper refers to as Forward Castes (FC).

The particular question explored is whether heterogeneity matters for group functioning when members belong to different jatis in the same official caste category. The paper finds that commonly used measures of fractionalization and social heterogeneity based on these classifications do not have systematic effects on group survival, but that they do help explain the departure of individuals from groups. Even within broad caste categories, heterogeneity matters. This suggests that the “official” classifications fail fully to capture the relevant social hierarchy.

The members from traditionally disadvantaged groups, especially from the ST, are more vulnerable to group heterogeneity. In addition to group heterogeneity, lower levels of education, lower landholdings, and fewer relatives within the SHG are also associated with higher rates of member exit.

The paper also finds that the bulk of the difference in the duration of membership in a SHG observed between Chhattisgarh and Orissa can be attributed to characteristics of groups in these areas; the authors find that state-level variations in performance are negligible once these characteristics are incorporated in their model.

The results suggest that it is problematic to evaluate the success of microfinance interventions based on conventionally reported coverage figures because they do not account for attrition. The authors’ concern is not with overall attrition rates but with the selectivity they exhibit. It is predominantly the poorer and socially marginalized communities that leave the SHG network and this makes it unlikely that women moving out of SHGs enter individual contracts with lending institutions. It also means that some of those in desperate need of credit cannot obtain it from within this sector. To arrive at concrete policy prescriptions for this sector, more information is needed about the financial opportunities available to members once they leave this sector and the extent to which SHG lending crowds out other types of lending to the poor. Although the duration of membership is only one, admittedly crude, measure of the performance of the microfinance sector, the study suggests that survey data which follows members and groups in this sector is critical to an assessment of Indian microfinance.

The Power Sector in India: An Inquiry into the Efficacy of the Reform Process

Electricity supply constitutes the most important infrastructure constraint on overall economic growth in India. While the telecommunications sector has gone through a revolution of increased service and lower prices, and signs of progress are visible in virtually all areas of transportation, progress in improving the performance of the electricity sector has been painfully slow. The paper by Saugata Bhattacharya and Urjit R. Patel examines the sources of the inefficiencies and undertakes an evaluation of the efforts to reform the industry’s distribution segment, which is dominated by state governments.

The electricity sector can be divided into three segments: the generation of electricity using a variety of fuels; the transmission of electricity from generating plants over high voltage towers and lines to the major distribution points; and the distribution of electricity from distribution points to consumers whether industrial or residential. While both the Central Government and the states have the constitutional right to legislate in areas of generation and transmission, distribution is entirely under the jurisdiction of the states. Reform in the electricity sector is made far more difficult than in the telecommunications sector because it requires active participation from the states, which often lack the necessary technical, legal, and administrative talent as well as motivation.

By the early 1960s, the electricity sector had become a vertically integrated monopoly in each state with generation, transmission, and distribution coming under a single umbrella known as the State Electricity Boards (SEBs). Recent reforms have resulted in the unbundling of these segments in many but not all states, and distribution has been delegated to autonomous distribution companies (discoms). With rare exceptions, the latter remain in the public sector.

A key problem facing the electricity sector is the large magnitude of aggregate technical and commercial (ATC) losses. In effect, ATC losses reflect that fraction of power generation for which there is no remuneration. Nationally, they amounted to 37.2 percent of electricity generated in 2001–02. Electricity shortages could be considerably alleviated if these losses could be brought down to normal international levels. Bhattacharya and Patel analyze the success achieved in this area through a variety of reform efforts beginning in the early 2000s. They emphasize the state-by-state variation in performance as a means of identifying the most successful reform measures. The authors identify three specific reforms. First, SEBs, which buy electricity from central public sector generation companies, have traditionally accumulated large arrears with the latter. The Central Government offered them a one-time settlement (OTS) scheme provided they undertook a set of efficiency-enhancing steps. Second, the Central Government followed up the OTS with the Accelerated Power Development and Reform Program (APDRP) under which incentives were offered to undertake a variety of reforms. Finally, the government introduced the landmark Electricity Act of 2003 to bring about nation-wide systemic reforms in the sector.
 
The authors study revenues and cash flows of discoms and SEBs to explain the connection between the reform initiatives and financial performance across states. They also devise a composite index of commercial orientation, which they call the Index of Revenue Orientation (IRO), and rank utilities according to it. The authors examine several years of data on outcomes for a consistent group of SEBs/discoms, along with the key economic and financial parameters that indicate the effect of the reform steps undertaken by these SEBs/discoms.

The analysis yields a number of provisional findings. First, at an aggregate level, the deterioration in the power sector has been arrested. The financial situation of the sector has eased and state government subsidies as a ratio to GDP have declined. The sector, nevertheless, is still far from financial viability. The key performance indicators, after having improved significantly in the immediate aftermath of the reform measures, seem to have stagnated after 2003–04. The ATC losses, while having dipped slightly from the 2000–01 crisis levels, remain very high. The basic problem is that although the sector is expected to have made a small cash profit at an all-India level in 2005–06, there are simply not enough resources in the state government-owned system to add capacity (and/or buy excess capacity from other systems) on any appreciable scale, let alone that which is required to power India’s economic growth.

Second, there are significant differences across states and utilities in performance and related indicators (including average revenue realization, collection efficiency, composition of demand, power units input, cost of supply, and physical losses). Also, the variability in performance among states and among utilities has increased between 2001–02 and 2004–05. The outcomes and many of the underlying explanatory variables have exhibited even greater unevenness after the reform measures than in 2001–02. Some states have improved significantly and some have deteriorated sharply. Five utilities account for 80 percent of the total cash losses and another five utilities contribute 78 percent of the cash profits.
 
Finally, using their IRO, the authors note that the spread of performance between utilities increased in 2004–05, compared to the situation in 2001–02. While the average index value increased from 1.14 in 2001–02 to 1.3 in 2004–05, the associated standard deviation rose from 0.9 to 1.2. In other words, utilities were more homogeneous in their revenue orientation in 2001–02 than in 2004–05. The authors also show that the strongest influence on the extreme ends of the rankings in the IRO was the relative amount of power supplied to the subsidizing (industry) segment versus the subsidized (agriculture and residential) segment.
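
The widening spread can be seen directly in the reported figures: the standard deviation rises faster than the mean, so the coefficient of variation increases. A quick check using only the numbers in the summary:

```python
# Dispersion of the Index of Revenue Orientation (IRO) across utilities,
# using the mean and standard deviation reported in the summary.

iro = {
    "2001-02": {"mean": 1.14, "std": 0.9},
    "2004-05": {"mean": 1.30, "std": 1.2},
}

for year, stats in iro.items():
    cv = stats["std"] / stats["mean"]  # coefficient of variation
    print(f"{year}: mean {stats['mean']:.2f}, std {stats['std']:.1f}, CV {cv:.2f}")

# CV rises from roughly 0.79 to 0.92, i.e. utilities became less homogeneous.
```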

What implications do these findings have for policy? Various utilities have placed emphasis on different strategies for enhancing revenues. The fragmented information indicates that there is significant progress in many of the basic inputs of utilities. These, however, do not seem to be translating rapidly into higher revenues and cash flows. The unevenness in performance among discoms suggests that there would be large gains from setting tariffs at the level of discoms rather than states, or even at the level of the distribution circle or city. This would attract reliable suppliers to discoms or circles that pay their bills, and lead to lower tariffs in areas with low ATC losses. The variation of improvements across states is also a warning sign of the increasing disparities in the ability of states to attract investments and foster growth.
Publication: The Brookings Institution and National Council of Applied Economic Research

India Policy Forum 2008/09 - Volume 5: Editors' Summary

The fifth issue of the India Policy Forum, edited by Suman Bery, Barry Bosworth, and Arvind Panagariya, includes papers on India’s financial sector (capital account liberalization, currency appreciation, and capital reserves) as well as on growth and employment in Indian manufacturing and the impact of private education. The editors' summary appears below.

EDITORS' SUMMARY

The fifth annual conference of the India Policy Forum was held on July 15 and 16 of 2008 in New Delhi. This issue of the journal contains the papers and discussion presented at the conference. A total of five papers were presented. The first paper examines the growth of private schools in India and their influence on school quality, extending work in recent issues of this journal that has evaluated the performance of India’s education system. The second paper addresses the question of why the growth of manufacturing output and employment in India has been disappointingly low. The final three papers share a common focus on India’s external financial relations. The third paper analyzes the process of capital account liberalization and the integration of India’s financial institutions into the global financial system. The fourth paper measures the evolution of prices in the nontradable and tradable sectors of the Indian economy and seeks explanations for the rise in the relative price of nontradables. The last paper addresses the adequacy of India’s current foreign exchange reserves.

Private Schooling in India: A New Educational Landscape

Although private schooling has spread throughout India, even in rural areas, the contours and implications of this change remain poorly understood, partially due to data limitations. Official statistics often underestimate private school enrollment, and our understanding of the effectiveness of private education in India is also limited. If we assume that parents know what is best for their children and that what is beneficial privately is also beneficial socially, their growing tendency to opt for private schools would suggest that private schools are superior to public ones.

In their paper, Sonalde Desai, Amaresh Dubey, Reeve Vanneman, and Rukmini Banerji point out, however, that this is not a foregone conclusion. The vast body of research on school quality, especially that relating to the United States, suggests that much of the observed difference in school outcomes results from differences in parental background and levels of parental involvement among children going to different schools. In the Indian context, there is the additional risk that many private schools are poorly endowed with resources, lack recognition (accreditation), and rely on untrained teachers. A proper empirical examination is essential to arrive at an informed assessment.

The authors use data generated from a new survey, the India Human Development Survey 2005 (IHDS), jointly conducted by researchers from the University of Maryland and the National Council of Applied Economic Research. These data allow them to explore some of the links between private school growth and school quality in India. They begin by providing a description of public and private schools in India as well as some of the considerations that guide parents in selecting private schools. They then examine whether private school enrollment is associated with superior student performance and whether this relationship is concentrated in certain sections of the population.
 
The IHDS data show considerably higher private school enrollment, particularly in rural areas, than documented in other studies. The authors place private school enrollment (including in schools receiving grants-in-aid from the government) among children aged 6–14 years at 58 percent in urban and 24 percent in rural areas. Private school enrollment is particularly high in India’s most populous state, Uttar Pradesh. In terms of outcomes, based on specially designed reading and arithmetic tests administered to children aged 8–11 years, those in private schools exhibit better reading and basic arithmetic skills than their counterparts in government schools.

But since these children also come from higher-income households and have parents who are better educated and more motivated to invest in their children’s education, it is important to control for selectivity bias. The paper utilizes a variety of techniques (including multivariate regression, switching regression, and family fixed effects) to examine the relationship between private school enrollment and children’s reading and arithmetic skills. While no model is able to completely eliminate possible biases—a different source of bias remains in each case—taken together, the results strongly indicate that private school enrollment is associated with higher achievement in reading and arithmetic skills. The magnitude of the gain from private school enrollment varies from one-fourth to one-third of a standard deviation of the scores.

The paper also distinguishes the relative magnitudes of the benefits from private schooling to children with rich versus poor economic backgrounds. It finds that the benefits to private school enrollment for children from lower economic strata are far greater than those for children from upper economic strata; at upper income levels, the difference between private and government school narrows considerably. This seems plausible since at upper income levels, students are likely to have better access to alternative educational resources including well-educated parents.

While the results of the paper point to positive benefits from private schools, especially for the underprivileged, the authors emphasize that their analysis does not imply that private schooling is the elixir that will cure the woes of primary education for children from poor families. They argue that both empirical results based on the IHDS data and theoretical considerations point to the need for caution.

Empirically, the paper finds that while private school students perform better than their counterparts in government schools, these effects are modest in comparison to other factors influencing the outcomes. For example, the results show substantial inter-state variation in the scores of both government and private school students. Controlling for parental characteristics, government school students in states as diverse as Kerala, Himachal Pradesh, Chhattisgarh, and West Bengal perform at a higher level than private school students in many other states. More importantly, the private school advantage seems to be concentrated in states such as Bihar, Uttar Pradesh, Uttarakhand (formerly Uttaranchal), and Madhya Pradesh—states known for poorly functioning public institutions as well as high rates of poverty or low per capita incomes.

These results suggest that before a blanket embrace of private schooling, it may be worthwhile to understand why some government schools function well and others do not. Blaming teacher absence is superficially appealing, but theoretical considerations suggest that the complete story may be more complex. If the classroom environment in private schools is favorably impacted by the demands made by paying middle-class parents, a voucher program that brings a large number of poorer parents to the schools may dilute this effect. But this argument would seem to be undermined by the fact that the authors themselves find the private school effect to be significant in poor states with many students coming from poor families.

Nevertheless, the authors are correct in noting that it will be useful to further examine the processes that give rise to different classroom environments as between government and private schools before jumping to wholesale voucher programs leading to privatization of education. We must know, for example, whether children from poor households in private schools benefit because their parents are able to prevent teachers from resorting to physical punishment. And if so, would this benefit be diluted when vouchers rather than parents pay for the tuition? Can we devise mechanisms to ensure that government school teachers do not resort to discriminatory behavior when dealing with students from poor families? To date, the discourse on the benefits of private schooling in a developing country context has focused on teacher absence, lack of accountability, and lower costs of private schooling. While these are important issues, perhaps future research could try to shed additional light on other processes that establish different environments in private and public schools.

Big Reforms But Small Payoffs: Explaining the Weak Record of Growth and Employment in Indian Manufacturing

The promotion of manufacturing, particularly for export, has been a key pillar of the growth strategy employed by many successful developing countries, especially those with abundant labor. India’s recent experience is puzzling on two counts. First, while India’s economy has grown rapidly over the last two decades, the growth momentum has not been based on manufacturing; the main contributor to growth has been the services sector. Second, the relatively lackluster performance of Indian manufacturing cannot be ascribed to a lack of policy initiatives. India introduced substantial product market reforms in its manufacturing sector starting in the mid-1980s, but the sector has never taken off as it did in other high-growth countries. Moreover, insofar as subsectors within manufacturing have performed well, these have been the relatively capital- or skill-intensive industries, not the labor-intensive ones as would be expected for a labor-abundant country like India.

One of the main components of reforms in India was the liberalization of the industrial licensing regime, or “delicensing.” Under the Industries Development and Regulation Act of 1951, every investor over a very small size needed to obtain a license before establishing an industrial plant, adding a new product line to an existing plant, substantially expanding output, or changing a plant’s location.

Over time, many economists and policymakers began to view the licensing regime as generating inefficiencies and rigidities that were holding back Indian industry. The process of delicensing started in 1985 with the dismantling of industrial licensing requirements for a group of manufacturing industries. Delicensing reforms accelerated in 1991, and by the late 1990s, virtually all industries had been delicensed. Large payoffs were expected in the form of higher growth and employment generation with this policy reform.
 
However, the payoffs to date have been limited. It could be argued that a lag between the announcement and implementation of the policy, and also a lag between implementation and the payoffs may be responsible. However, as many as 20 years have passed since the first batch of industries was delicensed, and the last batch of industries was delicensed almost a decade ago; the view that payoffs would occur with a lag is no longer easy to sustain.
 
What then could be the reasons for the rather lackluster performance of the industrial sector? The following factors are usually cited: (a) strict labor laws have hindered growth, especially of labor-intensive industries; (b) infrastructure bottlenecks have prevented industries from taking advantage of the reforms; and (c) credit constraints due to weaknesses in the financial sector may be holding back small- and medium-sized firms from expanding. More recently, two other factors have also been raised. First, it has been pointed out that the evolution of Indian industry may be influenced by path dependence or hysteresis so that despite the reforms of the mid-1980s and the early 1990s the relative profitability of capital and skill-intensive activities remains higher than that of labor-intensive activities. Second, the major reform initiatives undertaken so far—focused mainly on product market reforms—have been national ones. However, the working of product markets in a federal democracy such as India is influenced not only by regulations enacted by the Central Government, but also by those enacted by individual state governments. Moreover, much of the authority on administration and enforcement of regulation also rests with state governments. Accordingly, it has been pointed out that regulatory and administrative bottlenecks at the state level may be blunting the impact of reforms undertaken at the central level.

Using the Annual Survey of Industries (ASI) data at the three-digit level for major Indian states over the period 1980–2004, the paper by Poonam Gupta, Rana Hasan, and Utsav Kumar analyzes the effects of delicensing reforms on the performance of what in India is called registered manufacturing. (The portion of manufacturing in the so-called unorganized sector is not covered by the ASI data and is therefore not analyzed in the paper; this component was, however, also unlikely to have been affected by the licensing controls when these were in effect.) The paper utilizes variations in industry and state characteristics in order to identify how factors such as labor regulations, product market regulations, availability of physical infrastructure, and financial sector development may have influenced the impact of delicensing on industrial performance.
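
One way to picture this identification strategy is a state-industry panel regression in which a delicensing indicator is interacted with state and industry characteristics. The sketch below uses statsmodels with invented variable and file names; it illustrates the general form of such a specification under those assumptions, not the authors' exact model.

```python
# Illustrative state-industry panel regression: the effect of delicensing
# is allowed to vary with state labor regulation and industry labor
# intensity via interaction terms. Variable names and data are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# One row per state-industry-year, e.g.: log_output, delicensed (0/1),
# flexible_labor_reg (0/1 state indicator), labor_intensive (0/1 industry
# indicator), plus state, industry, and year identifiers.
df = pd.read_csv("asi_state_industry_panel.csv")  # hypothetical file

model = smf.ols(
    "log_output ~ delicensed * flexible_labor_reg"
    " + delicensed * labor_intensive"
    " + C(state) + C(industry) + C(year)",       # fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})  # cluster by state

print(model.summary())
```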

The main findings of the paper are as follows:

1. The impact of delicensing has been highly uneven across industries. Industries that are labor intensive, use unskilled labor, depend on infrastructure, or are energy dependent have experienced smaller gains from reforms.

2. Regulation at the state level matters. States with less competitive product market regulations have experienced slower growth in the industrial sector post-delicensing, as compared to states with competitive product market regulations. States with relatively inflexible labor regulations experience slower growth of labor-intensive industries and slower employment growth.

3. Infrastructure availability and financial sector development are important determinants of the benefits that accrued to states from reforms.

If supportive regulatory conditions prevailed and infrastructure availability allowed it, businesses responded to delicensing by expanding their capacity and growing; thus hysteresis does not seem to matter. The authors acknowledge that their approach is subject to a few caveats. First, several other major reforms that affect Indian manufacturing have been introduced, including reductions in barriers to trade and the dismantling of the policy of reserving particular industries for production by small-scale enterprises; these are not systematically examined and might interact with the impact of delicensing. Second, the neglect of the unorganized sector noted above means that the interactions between the “registered” and the “unorganized” sectors in adjusting to policy change are not systematically explored. Finally, regulations can affect firms and industries in many different ways. For example, they may create incentives for firms to operate in the informal sector, stay relatively small, or adopt particular types of techniques. While the analysis of aggregate data can shed (indirect) light on some of these effects, a more complete analysis would require a micro-based approach utilizing plant-level data.
 
The authors conclude that the agenda of reforms to promote manufacturing is not yet complete. Areas for additional action include further reform of labor market regulations, improvement of the business environment, provision of infrastructure, and further development of the financial sector. In addition, in a federal democracy like India, reforms at the Center (especially those related to labor) need to be complemented by reforms at the state level.

Some New Perspectives on India's Approach to Capital Account Liberalization

Capital account liberalization remains a highly contentious issue. Proponents argue that rising cross-border flows of financial capital allow for a more efficient allocation of financial resources across countries and also permit countries to share their country-specific income risk more efficiently. Detractors have blamed capital account liberalization as being the root cause of the financial crises experienced by many emerging market countries. Their case has been strengthened by the lack of clear evidence of the presumed benefits of financial globalization. This debate has again become topical as many emerging market economies and even some low-income countries are coping with volatile capital inflows, with major economies like China and India contemplating further opening of their capital accounts.
 
A common argument in the literature in favor of openness from the viewpoint of the developing economies has been that access to foreign capital helps increase domestic investment beyond domestic saving. The recent literature has revived another older argument emphasizing the indirect benefits of openness to foreign capital, including the development of domestic financial markets, enhanced discipline on macroeconomic policies, and improvements in corporate governance.
 
In his paper, “Some New Perspectives on India’s Approach to Capital Account Liberalization,” Eswar S. Prasad argues that a major complication in considering capital account convertibility is that economies with weak initial conditions in certain dimensions experience worse outcomes from their integration into international financial markets in terms of both lower benefits and higher risks. For countries below these “threshold” conditions, the benefit–risk tradeoff becomes complicated and a one-shot approach to capital account liberalization may be risky and counter-productive. This perspective points to a difficult tension faced by low and middle-income countries that want to use financial openness as a catalyst for the indirect benefits mentioned above.
 
The author, nevertheless, maintains that the practical reality is that emerging market countries are being forced to adapt to rising financial globalization. In his view, capital controls are being rendered increasingly ineffective by the rising sophistication of international investors, the sheer quantity of money flowing across national borders, and the increasing number of channels (especially expanding trade flows) for the evasion of these controls. Hence, concludes the author, emerging market economies like China and India are perforce grappling with the new realities of financial globalization, wherein capital controls are losing their potency as a policy instrument (or at least as an instrument that creates more room for monetary and other macro policies). Against this background, the author provides a critical analysis of India’s approach to capital account liberalization through the lens of the promised indirect benefits from such liberalization. In recent years, the Reserve Bank of India (RBI) has taken what it calls a calibrated approach to capital account liberalization, with certain types of flows and particular classes of economic agents being prioritized in the process of liberalization. The result of these policies is that, in terms of overall de facto financial integration, India has come a long way, experiencing significant volumes of inflows and outflows. Although foreign investment flows crossed 6 percent of GDP in 2007–08, in the author’s view the flows are modest, placing India at the low end of the distribution of de facto financial integration measures in an international comparison across emerging market economies.

The RBI’s cautious and calibrated approach to capital account liberalization has resulted in a preponderance of FDI and portfolio liabilities in India’s stock of gross external liabilities. The author agrees that this is a favorable outcome in terms of improving the benefit–risk tradeoff of financial openness and has reduced India’s vulnerability to balance of payments crises. But he goes on to argue that the limited degree of openness has, nevertheless, hindered the indirect benefits that may accrue from financial integration, particularly in terms of broad financial sector development.

Against the backdrop of recent global financial turmoil, the author sees merit in a high level of caution in further opening the capital account. He states, however, that excessive caution may be holding back financial sector reforms and reducing the independence and effectiveness of monetary policy. He goes on to argue that increasing de facto openness of the capital account implies that maintaining capital controls perpetuates some distortions without the actual benefit in terms of reducing inflows. Flows of different forms are ultimately fungible and it is increasingly difficult, given the rising sophistication of investors and financial markets, to bottle up specific types of flows. In the author’s view, rising de facto openness in tandem with de jure controls may lead to the worst combination of outcomes—new complications to domestic macroeconomic management from volatile capital flows with far fewer indirect benefits from financial openness.

The author takes the view that a more reasonable policy approach would be to accept rising financial openness as a reality and to manage, rather than resist (or even try to reverse), the process of fully liberalizing capital account transactions. Dealing with and benefiting from the reality of an open capital account will require improvements in other policies—especially in monetary, fiscal, and financial sector regulations. This approach could in fact substantially improve the indirect benefits to be gleaned from integration into international financial markets.

In terms of specific steps, the author suggests that this may be a good time to allow foreign investors to invest in government bonds as an instrument of improving the liquidity and depth of this market. A deep and well-functioning government bond market can serve as a benchmark for pricing corporate bonds, which could in turn allow that market to develop. By providing an additional source of debt financing, it would create some room for the government to reduce the financing burden it currently imposes on banks through the statutory liquidity ratio—the requirement that banks hold a certain portion of their deposits in government bonds.

The author also recommends an “opportunistic approach” to liberalization whereby outflows are liberalized during a period of surging inflows. He suggests that if undertaken in a controlled manner, it could generate a variety of collateral benefits—sterilization of inflows, securities market development, and international portfolio diversification for households. The RBI has recently adopted such an approach by raising ceilings on external commercial borrowings in order to compensate for capital outflows. According to the author, these are steps in the right direction. But one potential problem he sees is that when taken in isolation rather than as part of a broader and well articulated capital account liberalization agenda, these measures are subject to reversal and unlikely to be very productive.

Despite this enthusiasm for capital account liberalization, the author goes on to suggest that none of this implies that the remaining capital controls should be dropped at one fell swoop. What it does imply is that there are some subtle risks and welfare consequences that can arise from holding monetary and exchange rate policies as well as financial sector reforms hostage to the notion that the capital account should be kept relatively restricted for as long as possible. It may seem reasonable to maintain whatever capital controls still exist in order to get at least some protection from the vagaries of international capital flows. However, in the author’s view, not only is this an unrealistic proposition, it could also detract from many of the potential indirect benefits of financial integration. He sees steady progress toward a more open capital account as the most pragmatic policy strategy for India.

What Explains India's Real Appreciation?

India’s rapidly evolving economic landscape during the past two decades has elicited broad discussion of how changing economic factors will influence the future of India’s growth and prosperity. Often overlooked in the discussion are the effects of India’s changing economic structure on relative price dynamics, which have consequential effects on the allocation of resources in the economy. A host of recent developments would likely induce a change in relative prices, including the shift in economic policies beginning in 1991, the acceleration in economic growth, a rapid increase in exports, and rising per capita incomes and productivity growth. Taken together, these factors amount to the “catch-up” process that typically leads to an increase in the relative price of nontradables in developing economies.

In their paper, Renu Kohli and Sudip Mohapatra trace relative price developments in a two-sector, two-good (tradable and nontradable) framework for the Indian economy over the period 1980–2006. In line with their a priori expectations, the ratio of nontradable to tradable prices, also called the internal real exchange rate, rises consistently over the past one-and-a-half decades. Their empirical analysis confirms that this rise, or real appreciation, is driven by both demand and supply factors. A later section uses the results of the study to illuminate the evolution of past macroeconomic policies. Finally, using India’s recent robust economic performance as a guide, the paper concludes with a discussion on an appropriate macroeconomic policy mix for the future.

The authors construct the relative price of nontradables from the national accounts statistics using the degree of participation in trade as a criterion for classifying the economy into traded and nontraded sectors; the tradable–nontradable price series are derived as the respective deflators for the two sectors. They find that the tradable and nontradable sectors are characterized by divergent inflation rates with the relative price of nontradables accelerating after 1991; on average, the difference exceeds 1 percentage point per year during 1991–2006. There are two competing explanations for such a divergent acceleration in prices: (a) the Balassa–Samuelson hypothesis posits that real exchange rates tend to appreciate as countries develop and (b) other demand-side explanations originate from changes in government spending and/or a shift in consumer preferences toward services (nontradable) as incomes rise. The preliminary analysis presented in the paper indicates a role for both factors in explaining the real exchange rate appreciation. A puzzle posed by the data, however, is the increase in the relative price of nontradables in conjunction with an expansion of the tradable sector, which suggests an offsetting role might have been played by economic reforms like import liberalization and exchange rate correction, leading to the emergence of new tradables through an increase in competitiveness.

The paper examines the determinants of this divergence in an integrated framework, exploring the role of both demand and supply side determinants. The relative price of nontradables is modeled as a function of the labor productivity growth gap between the tradable and nontradable sectors, real government expenditure as a share of gross domestic product, real per capita income, and a measure of import tariffs. The labor productivity growth gap and the import tariff rates capture the supply-side influences due to technological change (the Balassa–Samuelson effect) and the impact of trade liberalization, which accelerated after 1991. The fiscal and income growth variables summarize the demand side impact upon relative prices. The regression results reveal a significant influence of both demand and supply factors. A percentage point rise in the relative price of nontradables is associated with a 5 percent increase in the labor productivity growth gap, a 4 percent increase in per capita income growth, and a 3 percent increase in fiscal growth; the estimated impact of a fall in import prices upon the relative nontradables’ inflation rate is 0.04. The results are robust to a number of sensitivity checks, including different estimation methods, stability, specification, omission, and inclusion of variables as well as alternate definitions of the variables.
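
Read as an equation, the specification described above can be sketched as follows (the notation is mine and the exact functional form may differ from the paper's):

$$ \ln\!\left(\frac{P^{N}_{t}}{P^{T}_{t}}\right) = \alpha + \beta_{1}\big(\hat a^{T}_{t}-\hat a^{N}_{t}\big) + \beta_{2}\,\frac{G_{t}}{Y_{t}} + \beta_{3}\,\ln y_{t} + \beta_{4}\,\tau_{t} + \varepsilon_{t} $$

where $P^{N}_{t}/P^{T}_{t}$ is the relative price of nontradables, $\hat a^{T}_{t}-\hat a^{N}_{t}$ is the labor productivity growth gap between the tradable and nontradable sectors, $G_{t}/Y_{t}$ is real government expenditure as a share of GDP, $y_{t}$ is real per capita income, and $\tau_{t}$ is the import tariff measure.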

A decomposition of the relative price change over the sample period indicates that demand factors accounted for almost three-fourths of the average relative price increase over the sample period. In contrast, the supply-side influence stemming from the labor productivity growth differential between the two sectors accounted for only 35 percent of the mean of the dependent variable. Noting the rapid decline in import tariffs after 1991, the authors argue that this result underscores the role of convergence in tradable prices and its contribution to the divergence in sectoral inflation rates in liberalizing economies.

Kohli and Mohapatra link their results to macroeconomic policy by tracing the past evolution of exchange rate and fiscal policies in India. They argue that the fiscal expansion of the 1980s ending in the 1991 crisis led to a rise in the inflation rate of the nontradable sector, while the exchange rate policy favored steady depreciation in order to retain competitiveness and boost growth. Noting India’s recent and potential economic performance, its buoyant exports, and strong per capita income growth, they observe that the pressures upon real exchange rate appreciation, internal as well as external, are likely to continue—and indeed, accelerate—in the future. Under the circumstances, an appropriate macroeconomic policy mix would be to continue with the gradual increase in exchange rate flexibility so as to absorb the equilibrium shifts in the economy. This could be complemented with fiscal consolidation to offset competitiveness losses arising from the nominal and real exchange rate appreciation.

Finally, the paper raises a number of critical data issues, not the least of which is the absence of a services price index in India. The implicit price series developed in the paper strongly suggests that the current inflation indicator, the wholesale price index (WPI), understates generalized inflation and can therefore be misleading. It also identifies gaps in the data on sectoral employment shares, emphasizing the need for sufficiently disaggregated information to enable fruitful analysis and informed policymaking.

The Cost of Holding Excess Reserves: Evidence from India
 
The Asian financial crisis of 1997–98 served as a startling revelation to emerging economies of the drawbacks of financial integration. Neither the International Monetary Fund nor reliance on more flexible exchange rate regimes succeeded in preventing—or indeed, adequately combating—such a systemic crisis. Moreover, even countries practicing sound macroeconomic policies realized they were not immune to such crises as they can be hit by contagion and financial panic from other countries, regardless of their proximity. As a result, many countries have decided that they need to protect themselves against a speculative currency attack, and further, that the key to self-protection is the accumulation of substantial holdings of liquid foreign exchange. Over the past decade, developing countries, and particularly those in East and South Asia, have greatly expanded their foreign currency reserves. By the middle of 2008, the reserves of China, South Korea, Russia, and India alone amounted to over US$2.85 trillion. In the case of India, reserve accumulation has increased five-fold since 2001–02.
 
The security that results from high reserves does come at a price, however. The magnitude of reserves being held combined with the fact that most reserves are held as low-yield government bonds suggests that the opportunity cost of reserve holdings can be substantial. In his paper, Abhijit Sen Gupta employs a new empirical methodology to evaluate the factors influencing the demand for international reserves in emerging markets, and he estimates the costs incurred in the process for India in particular. Sen Gupta argues that the traditional analysis of the costs of reserve holdings, which considers a single adequacy measure (namely, import cover), does not reflect the multitude of factors influencing demand for international reserves in a financially integrated world. In addition to the desire to meet potential imbalances in current account financing, a central bank may also hold reserves to defuse a potential speculative run on its currency or to cover its short-term debt obligations.

The author first introduces a simple empirical model to highlight the principal determinants of reserve holding in emerging countries. Using the results of this model, one can create an “international norm” of reserve holding, and thereby calculate a measure of “excess reserves” which is the difference between actual reserve holdings and this international norm. Next, Sen Gupta provides a brief discussion of the history of reserve accumulation in India. As the bulk of India’s reserves are held in the form of highly liquid securities or deposits with foreign central banks and international organizations, the real return on these assets in recent years has been largely negative. In the final section, Sen Gupta estimates the cost of holding reserves in India by considering three alternative uses of the resources currently held in excess of the international norm described earlier.

The empirical section of the paper employs a sample of 167 countries over the period 1980–2005 and a regression framework that identifies the principal determinants of cross-country variation in the level of international reserves. In this context, reserves are defined as total reserves minus the country’s holdings of gold. The dependent variable is this measure of reserves scaled by Gross Domestic Product (GDP). The results of this regression accord well with the a priori expectations. The log of per capita GDP and a proxy for trade openness (measured as the ratio of imports to GDP) both record positive and significant coefficients for reserve holding, implying that richer countries and more open countries tend to have higher reserves. In addition, the regression results reveal that countries with less flexible exchange rate regimes and more capital account openness tend to accumulate greater reserves.
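
A stylized version of this cross-country specification, together with the excess-reserve measure built on it, might look as follows (the notation is mine, not the paper's exact equation):

$$ \left(\frac{R}{Y}\right)_{ct} = \alpha + \beta_{1}\,\ln y_{ct} + \beta_{2}\left(\frac{M}{Y}\right)_{ct} + \beta_{3}\,\mathrm{Flex}_{ct} + \beta_{4}\,\mathrm{KAopen}_{ct} + \varepsilon_{ct}, \qquad \mathrm{Excess}_{ct} = \left(\frac{R}{Y}\right)_{ct} - \widehat{\left(\frac{R}{Y}\right)}_{ct} $$

where $R/Y$ is non-gold reserves relative to GDP for country $c$ in year $t$, $y$ is per capita GDP, $M/Y$ is the import-to-GDP ratio, $\mathrm{Flex}$ measures exchange rate flexibility, and $\mathrm{KAopen}$ measures capital account openness; the fitted value serves as the “international norm.”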

Next, the author uses the above framework for the period 1998–2005 to predict the demand for international reserves for various emerging countries. The difference between actual reserves and the reserve level predicted by the equation is interpreted as a measure of excess reserves. As illustrations of his results, Sen Gupta finds that by 2005, Indonesia, the Philippines, and Argentina had reserves close to the amount predicted by the model, while Brazil’s reserve accumulation fell significantly short of the predicted value. In contrast, China, India, Korea, Russia, and Malaysia all exhibit significantly more reserves than what could be interpreted as an “international norm.”

In his discussion of India’s experience in reserve accumulation, Sen Gupta identifies several distinct episodes of significant reserve buildup in India: April 1993 to July 1995, November 2001 to May 2004, and November 2006 to February 2008. These three episodes account for more than US$ 220 billion worth of India’s current stock of reserve accumulation of US$ 300 billion. In each of these episodes, the author discusses the role that both the government and the Reserve Bank of India (RBI) played in the decision to accumulate reserves. Sen Gupta estimates that by the end of 2007, India had more than US$ 58 billion of excess reserves. In order to impute the costs of holding these excess reserves, he considers three alternative uses of the resources: financing physical investment, reducing the private sector’s external commercial borrowing, and lowering public sector debt. The cost is substantial across all specifications, both in terms of actual income foregone and as a percentage of GDP. The author estimates the annual cost of keeping excess reserves in the form of low-yielding bonds rather than employing the resources to increase the physical capital of the economy to be approximately 1.6 percent of GDP. Alternatively, if the resources were instead used to reduce private sector external commercial borrowing or public sector debt, India could gain more than 0.23 percent of GDP.
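
To make the mechanics of the cost calculation concrete, the back-of-the-envelope sketch below shows how an excess-reserve stock and a return differential translate into a share of GDP. Only the excess-reserve figure of roughly US$58 billion comes from the text; the GDP level and the two rates of return are hypothetical placeholders, not Sen Gupta's estimates.

```python
# Illustrative opportunity-cost calculation for excess reserves.
# Assumptions (placeholders, not the paper's numbers): GDP level and both rates of return.

excess_reserves_bn = 58.0      # excess reserves, US$ billion (figure cited in the text)
gdp_bn = 1200.0                # assumed nominal GDP, US$ billion (placeholder)
return_alternative = 0.10      # assumed return on the alternative use of funds (placeholder)
return_on_reserves = 0.02      # assumed yield on reserve assets (placeholder)

annual_cost_bn = excess_reserves_bn * (return_alternative - return_on_reserves)
cost_share_of_gdp = annual_cost_bn / gdp_bn

print(f"Annual opportunity cost: about US${annual_cost_bn:.1f} billion, "
      f"or {100 * cost_share_of_gdp:.2f} percent of GDP")
```

With a wider return gap, as for the physical-investment alternative discussed above, the implied cost rises correspondingly.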

Publication: The Brookings Institution and National Council of Applied Economic Research

India Policy Forum 2009/10 - Volume 6: Editors' Summary

The sixth annual India Policy Forum conference convened in New Delhi from July 14-15, 2009. This sixth volume of the India Policy Forum, edited by Suman Bery, Barry Bosworth and Arvind Panagariya, covers the global financial crisis and the implications for India. The editors' summary appears below, and you can download a PDF version of the volume, or access individual articles by clicking on the following links:

Download the India Policy Forum 2009-2010 agenda »
Download India Policy Forum 2009-2010 - Volume 6 » 

EDITORS' SUMMARY

The sixth annual conference of the India Policy Forum was held on July 14 and 15, 2009 in New Delhi. The meeting was dominated by considerations of the global financial crisis and its implications for India. The events of 2009 provided evidence of India’s growing integration with the global economy, an illustration of the resilience of the country’s economic growth, and its emergence as a major participant in an expanded system of governance for the global economic system. This issue of the journal includes four papers and the associated discussion from the conference, and a fifth paper that was originally presented at the 2007 conference.

Indian Equity Markets: Measures of Fundamental Value

Beginning in 2005, the Indian equity market underwent a period of explosive growth, rising from a valuation equal to about 50 percent of GDP to a peak of 150 percent by early 2008. Growth of this magnitude raised concerns that the market was hugely overvalued, and it was often characterized as an example of an asset market bubble. The market valuation subsequently fell back to about 70 percent of GDP during the global financial crisis. This experience stimulated interest in India in the question of what would constitute a reasonable or fair value for equities that could be used as a standard for evaluating market fluctuations. In “Indian Equity Markets: Measures of Fundamental Value,” Rajnish Mehra examines this question by comparing corporate valuations in India over the period of 1991–2008 relative to three key market fundamentals: the corporate capital stock, after-tax corporate cash flows, and net corporate debt.

Mehra’s model builds on the idea of a link between the market value of the capital stock and the debt and equity claims on that stock—a concept known as Tobin’s q. He extends the existing framework using some prior work by McGrattan and Prescott on US equity valuations, and he incorporates both intangible capital and key features of the tax code. It is a multi-period model in which firms maximize shareholder value subject to a production function with labor and two kinds of capital—tangible and intangible—as the inputs. Wages, intangible investment and depreciation of tangible capital are treated as tax-deductible expenses. It yields an equilibrium representation of the relationship between the market value of equity and the reproduction value of tangible and intangible capital in the corporate sector. All of the nominal values are normalized by GDP and the result is a framework that can be used to evaluate the effect on equity prices of a range of different policy actions, such as changes in the taxation of corporate dividends.
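
In stylized form, the Tobin's-q idea the model builds on can be written as (the notation is mine; Mehra's equilibrium relation additionally incorporates the tax features described above):

$$ q = \frac{V_{\text{equity}} + B_{\text{net debt}}}{K_{\text{tangible}} + K_{\text{intangible}}} $$

so that, absent distortions, the market value of the debt and equity claims on the corporate sector should track the reproduction value of its tangible and intangible capital.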

The model is calibrated to the Indian situation with respect to the capital stock, tax rates, and the characteristics of economic growth in the nonagricultural sector. Mehra also develops his own estimates of the valuation of intangible capital using three different methodologies. The first method is that used by McGrattan and Prescott and is based on the assumption that tangible and intangible capital earn the same rate of return along a balanced growth path. That assumption allows him to derive the equilibrium ratio of tangible and intangible capital. The alternative methods are based on recent work in the United States by Corrado, Hulten, and Sichel that involves cumulating investment flows into stocks. Mehra uses two different methods to calibrate the Indian data with information from the United States, and he estimates the stock of intangible capital for two periods, 1991–2004 and 2005–08. The focus on two sub-periods is designed to capture a structural break in the data: Indian equity valuations as a fraction of GDP were fairly constant over the period 1991–2004, rising sharply starting in 2005. The two estimates of the stock of intangibles based on the comparison with the United States are very similar, but they are significantly lower than the estimates obtained with the McGrattan and Prescott methodology.
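
As a minimal illustration of what cumulating investment flows into a stock involves, the sketch below applies the standard perpetual-inventory recursion; the investment series and depreciation rate are arbitrary placeholders, not the paper's data or the exact Corrado, Hulten, and Sichel procedure.

```python
# Perpetual-inventory sketch: cumulate investment flows into end-of-period stocks.
# All numbers below are illustrative placeholders.

def perpetual_inventory(investment, delta, k0=0.0):
    """Return end-of-period capital stocks implied by an investment series."""
    stocks = []
    k = k0
    for inv in investment:
        k = (1.0 - delta) * k + inv   # depreciate last period's stock, add new investment
        stocks.append(k)
    return stocks

intangible_investment = [10, 11, 12, 13, 15, 17]   # hypothetical flows, arbitrary units
print(perpetual_inventory(intangible_investment, delta=0.15))
```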

His analysis suggests that an optimistic estimate of the fundamental value of the current Indian equity market is about 1.2 times GDP, considerably lower than the 1.6 value observed in 2008, but close to the average over the full period. One effect on equity prices that the study does not account for is a change in investor demand from foreign institutional investors. If the effect of this is a change in the characteristics of the marginal investor, the relevant marginal rate of substitution will change, and with it market valuations. Thus, Mehra suggests that the extension of the model to include foreign investors should be a major objective for future research.

Mehra’s paper generated an active discussion that centered on the difficulties of accurately measuring some of the values, such as the rate of technological change and real interest rates, required to calibrate the model to India’s situation. Several commentators also emphasized the important role of foreign investors. Others pointed to the difficulties of applying a model based on equilibrium conditions to the highly transitional nature of the Indian economy.

Why India Choked when Lehman Broke

In “Why India Choked when Lehman Broke,” Ila Patnaik and Ajay Shah analyze the rapid transmission of the impact of the Lehman bankruptcy into Indian financial markets. The authors propose an explanation that revolves around the treasury operations of Indian multinational corporations (MNCs). Such MNCs are less subject to the capital controls imposed on purely domestic Indian companies.

The developments that emerged within Indian financial markets in September and October following the bankruptcy of Lehman Brothers on September 15, 2008 were quite extraordinary. First, there was a sudden change in conditions in the money market. Call money rates shot up immediately after September 15. Despite swift action by the Reserve Bank of India (RBI), the tightness persisted through the month of October. The operating procedures of monetary policy broke down in unprecedented fashion and interest rates were persistently above the RBI’s target range. The call rate consistently breached the 9 percent ceiling for the repo rate and attained values beyond 15 percent. There was a huge amount of borrowing from the RBI. On some days, the RBI lent an unprecedented Rs 90,000 crore through repos. These events are surprising given the extent of India’s de jure capital controls that were expected to isolate its financial markets from global developments. Greater understanding of crisis transmission, the effectiveness of capital controls, and India’s de facto openness could be achieved by carefully investigating this episode and identifying explanations.

The main hypothesis of this paper is that many Indian firms (financial and non-financial) had been using the global money market before the crisis to avoid India’s capital controls. This was done by locating global money market operations in offshore subsidiaries. When the global money market collapsed upon the demise of Lehman, these firms were suddenly short of dollar liquidity. They then borrowed in the rupee money market, converting rupees to US dollars, to meet obligations abroad.

The result was strong pressure on the currency market, and the rupee depreciated sharply. The RBI attempted to limit rupee depreciation by selling dollars. It sold $18.6 billion in the foreign exchange market in October alone. Ordinarily, one might have expected depreciation of the exchange rate in both the spot and the forward markets. However, instead of the forward premium rising in response to the pressure on the rupee to depreciate, it crashed sharply. The authors’ hypothesis is that some Indian MNCs that were taking dollars out of India planned to return the funds within a few weeks. To lock in the price at which they would bring that money back, they sold dollars forward. Thus, the one-month forward premium fell sharply into negative territory.
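
For reference, the one-month forward premium being discussed is conventionally defined (this is a standard definition, not taken from the paper) as

$$ \text{premium}_{t} \approx \frac{F_{t,\,1\mathrm{m}} - S_{t}}{S_{t}} \times 12 $$

where $S_{t}$ is the spot rupee-dollar rate and $F_{t,\,1\mathrm{m}}$ is the one-month forward rate; heavy forward selling of dollars pushes $F$ below $S$ and turns the annualized premium negative, which is the pattern the authors document.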

Balance of payments data show that outbound FDI was the largest element of outflows in the “sudden stop” of capital flows to India in the last quarter of 2008. This supports the aforementioned hypothesis. During this time there was no significant mergers and acquisitions activity taking place owing to the banking and money market crisis around the world. The explanation for the large FDI outflow, at a time when money market conditions in India and the world were among the worst seen in decades, could lie in the offshore money market operations of Indian MNCs. Finally, the authors analyze stock market data, finding that Indian MNCs were more exposed to conditions in international money markets as compared to non-MNCs.

This paper’s main contribution lies in showing that Indian MNCs are now an important channel through which India is financially integrated into the world economy. This raises questions about the effectiveness of India’s capital controls, which inhibit short-dated borrowing by firms. This restriction appears to have been bypassed to a substantial extent by Indian MNCs. This phenomenon contributes to a larger understanding of the gap that exists between India’s highly restrictive de jure capital controls and its de facto openness.

De jure capital controls have not made India as closed to global financial markets as expected. The expectation that a global financial market crisis would not hit India owing to these controls was proved to be incorrect when the financial crisis was transmitted to India with unprecedented speed. This evidence of India’s integration with global capital markets will influence the future discussion of its de facto capital account convertibility.

Climate Change and India: Implications and Policy Options

Climate change and the mitigation of greenhouse gas (GHG) emissions have moved to the forefront of international discussion and negotiations. While global warming may have adverse effects on Indian society, there are also concerns that efforts to mitigate emissions within India could seriously impair future economic growth and poverty alleviation. These concerns are the focus of the paper, “Climate Change and India: Implications and Policy Options” by Arvind Panagariya.

The basic perspective is that India’s current per capita carbon emissions are very small, only one-fourth those of China and one-twentieth those of the United States; and given the strong association between income and emissions, the capping of emissions at current levels would make it impossible for India to sustain the growth required to match Chinese income levels, much less narrow the gap with the developed economies. Panagariya argues that India should resist making binding emission commitments for several decades, or until it has made greater progress in poverty alleviation.

The paper begins with a discussion of various uncertainties relating to the response of temperatures to GHG emissions, and in turn, the impact of any temperature changes on rainfall and various forms of extreme weather. There is further uncertainty about the effects of those weather changes on productivity and GDP growth. The author discusses the changes in temperatures and rain patterns specific to India during the last century, as well as their impact if any on sea levels, glacier melting, and natural disasters such as drought and cyclones.

The paper then explores the question of optimal mitigation and instruments to achieve it. A key conclusion is that, absent any uncertainties, either a uniform worldwide carbon tax or a fully international system of tradable pollution permits should be employed to reach the optimal solution. A more complicated issue relates to the distribution of the costs of mitigation. Efficiency dictates that countries in which the marginal loss of output per ton of carbon mitigated is the lowest should mitigate more. But absent any international transfers, this may lead to an inequitable distribution of the costs of mitigation. An additional question arises with respect to past emissions, for which the responsibility largely rests with developed countries. A case can be made that if countries are asked to pay a carbon tax for future emissions, they should also pay for past emissions. This is especially relevant since the big emitters of tomorrow are likely to be different from the big emitters of yesterday.

Panagariya argues that these distributional conflicts are the primary explanation of why countries have found it so difficult to arrive at a cooperative solution. Developing countries argue that since developed countries are responsible for the bulk of the past emissions and are also among the largest current emitters, they should undertake much of the mitigation. In turn, the United States has responded by raising the specter of trade sanctions against countries that do not participate in the mitigation efforts. The paper discusses whether such trade sanctions are compatible with the existing World Trade Organization (WTO) rules. It argues that the legality of the trade sanctions is far from guaranteed although the ultimate answer will only be known after the specific measures are tested in the WTO Dispute Settlement Body.

Turning to the specific situation of India, Panagariya argues that it should resist accepting specific mitigation obligations until 2030 or even 2040. The case for an exemption from mitigation for the next two or three decades is justified by the fact that India is a relatively small emitter in absolute as well as per capita terms. Based on 2006 data, it accounts for only 4.4 percent of global emissions, and in per capita terms it ranks 137th worldwide. This is in contrast to China, with which it is often paired. China currently emits the most carbon in the world in absolute terms, and as much as one-fourth of the United States in per capita terms. In addition, Panagariya argues that India needs to give priority to the reduction of poverty.

Given the situation of India and other poor countries, how can an international agreement to combat global warming be reached? Panagariya proposes first that significant progress can be made through agreements on the financing of investments devoted to the discovery of green sources of energy and new mitigation technologies. He believes that private firms will under-invest in such technologies due to the inherent uncertainties. Thus, he argues for establishing a substantial fund financed by contributions from the developed countries and using it to finance research by private firms with the proviso that the fruits of such research would be made available free of charge to all countries. Second, he argues that there is still considerable work to be done in completing an agenda of near-term actions. If developed countries are serious about the necessity of developing countries undertaking mitigation targets beginning some time in the near future, they need to lead by example and accept substantial mitigation obligations by 2020. Finally, he believes that mitigation targets for the developing countries should be stated in terms of emissions per capita or per unit of GDP.

The paper generated a lively exchange among participants on both the effects of climate change and on how India should participate in the international policy discussion. Some thought that Panagariya underestimated the costs to India of climate change, but most of the discussion centered on the development of an appropriate Indian policy response.

India Transformed? Insights from the Firm Level 1988-2005

Beginning with the major 1991 reform, India has systematically phased out investment and import licensing. Progressive movement toward pro-market policies accompanying this phasing out of controls was expected to bring about major shifts in India’s industrial structure. Partly because the opening up itself was uneven across sectors and partly because responses to liberalizing reforms were bound to differ across sectors and firms, it was expected that the changes would be highly variable.

“India Transformed? Insights from the Firm Level 1988–2005” by Laura Alfaro and Anusha Chari sets out to study the responses of firms and sectors accompanying the ongoing transformation of India’s microeconomic industrial structure. Relying on firm-level data collected by the Centre for Monitoring Indian Economy from company balance sheets and income statements, they study the changes in firm activity from 1988 to 2005. They highlight the differing responses to reforms across sectors, private versus public sector firms, and incumbent versus new firms.
 
The authors define liberalization as consisting of trade and entry liberalization, regulatory reform and privatization that lead to increased domestic and foreign competition. They present a series of stylized facts relating to the evolution of firms and sectors accompanying and following liberalization. The database covers both unlisted and publicly listed firms from a wide cross-section of manufacturing, services, utilities, and financial industries. Approximately one-third of the firms in the database are publicly listed and the remaining two-thirds are unlisted. The companies covered account for more than 70 percent of industrial output, 75 percent of corporate taxes, and more than 95 percent of excise taxes collected by the Government of India.

Detailed balance sheet and ownership information permits the authors to analyze a range of variables such as sales, profitability, and assets for approximately 15,500 firms classified across 109 three-digit industries encompassing agriculture, manufacturing, and services. Therefore, in contrast to most existing firm-level studies that focus on manufacturers, the authors are able to study firms in the services and agriculture sectors as well. The data also permit distinction according to ownership categories such as state-owned, business groups, private stand-alone firms, and foreign firms. The authors divide the years from 1988 to 2005 into five sub-periods: 1988–90, 1991–94, 1995–98, 1999–2002, and 2003–05. This division into sub-periods is intended to capture the effects of various reforms taking place over time.

The authors present detailed information on the average number of firms, firm size, as measured by assets and sales, and profitability as measured by operating profits and the return on assets. The information is presented by sector as well as by category of firm: state-owned enterprises, private firms incorporated before 1985 (old private firms), private firms incorporated after 1985 (new private firms), and foreign firms for the five sub-periods. Sales, entry, profitability, and overall firm activity are interpreted as disaggregated measures of economic growth and proxies for efficiency; and thus, they provide an understanding of the effectiveness of reforms. The authors also look at market dynamics with regard to promotion of competition in order to understand the efficiency of resource allocations. They also examine the evolution of industrial concentration over time.
 
Alfaro and Chari find some evidence of a dynamic response among foreign and private firms as reflected in the expansion of their numbers as well as growth in assets, sales, and profits. But overall, they find that the sectors and economy continue to be dominated by the incumbent state-owned firms and to a lesser extent traditional private firms that were incorporated before 1985. Sectors dominated by state-owned and traditional private firms prior to 1988–90, where dominance is defined by 50 percent or larger share in assets, sales, and profits, generally remain so in 2005. Interestingly, rates of return remain remarkably stable over time and show low dispersion across sectors and across ownership groups within sectors. Not only is concentration high, but there is persistence in terms of which firms account for the concentration.

The exception to this broad pattern is the growing importance of new and large private firms in the services industries in the last ten years. In particular, the assets and sales shares of new private firms in business and IT services, communications services and media, health, and other services have expanded at a rapid pace. These changes coincide with the reform measures that took place in the services sectors after the mid-1990s, and they are also consistent with the growth in services documented in the aggregate data.

According to Joseph Schumpeter (1942), creative destruction, defined as the replacement of old firms by new firms and of old capital by new capital, happens in waves. A system-wide reform or deregulation such as the one implemented in India may have been the shock that prompted the creative destruction wave. Creation in India seems to have been driven by new entrants in the private sector and foreign firms forcing the incumbent firms to shape up as well. Outside of the services sectors noted in the previous paragraph, and especially in many manufacturing sectors, transformation seems not to have gone through an industrial shakeout phase in which incumbent firms are replaced by new ones. In many of these sectors, state-owned enterprises and private business groups have continued to dominate despite many liberalization measures.

Different explanations may account for these findings. In part, continued dominance of public sector firms in certain sectors may reflect the high barriers to exit that not only impede destruction of marginal firms but also discourage new firms from entry. On the one hand, potential entrants know that exit of public sector firms is unlikely; on the other hand, they may fear paying high exit costs in case they fail to find a foothold. An additional explanation, perhaps not sufficiently stressed in the debate, is the possibility that entrenched public sector and business group firms subvert true liberalization in sectors in which they dominate. The authors find, for example, that both industry concentration and state ownership are inversely correlated with measures associated with liberalization.

Recent literature highlights the idea that economic growth may be impeded not simply by a lack of resources such as capital and skilled labor, but also by a misallocation of available resources. The high levels of state ownership and ownership by traditional private firms in India raise the question of whether significant gains could be made simply through the allocation of existing resources from less efficient to more efficient firms.

Land Reforms, Poverty Reduction, and Economic Growth: Evidence from India

In “Land Reforms, Poverty Reduction, and Economic Growth: Evidence from India,” Klaus Deininger and Hari K. Nagarajan consider the important but relatively neglected issues of land market policies and institutions. They focus attention on three issues: the role of rental markets in land, the contribution of land sales to the promotion of efficiency, and the potential benefits of better land ownership records and the award of land titles. The authors posit that well-functioning rental and sales markets lead to superior outcomes by raising productivity and providing improved access to land. On average, these markets shift land toward more efficient farmers, thus contributing to poverty alleviation. The paper also brings into question the long-held view that land sales markets are dominated by distress sales whereby poor farmers facing credit constraints are forced to sell their land for below-market prices to their creditors.

In evaluating the impact of rental markets, the authors test three hypotheses: 

  1. Whether a household becomes a lessor or a lessee should be a function of the household’s agricultural ability. Efficient but land-poor households would rent additional land to cultivate while inefficient and land-abundant households should rent out their land for cultivation by other more efficient households. In this manner, well-functioning rental markets in land enhance productivity and improve factor use in the economy.
  2. The presence of high transactions costs inhibits households from participation in rental markets. These costs may force households to withdraw from rental transactions altogether and undermine productivity.
  3. Participation in rental markets is crucially impacted by wage rates offered in the market. Increases in wage rates will prompt households with low ability to manage their land to rent their land to other households. The resulting increase in the supply of land to the rental markets leads to lower rental rates.

Using survey data, the authors test these various hypotheses. They show that rental markets improve the productivity of land use by transferring land to more efficient producers. The results suggest that the probability for the most productive household in the sample to rent additional land is more than double that of the average household. The paper also shows that higher land and lower labor endowments increase the propensity of households to supply land to the rental market. By transferring land to labor-rich but land-poor households, markets allow gainful employment of rural labor. Current policies, however, have severely curtailed rental markets and have therefore retarded the advancement of efficiency and equity in rural India.

The authors next turn to markets for land sales. They examine the impact of a well-functioning land sales market on land access. The long-held view has been that land sales are primarily motivated by adverse exogenous shocks. To the contrary, the authors find that such markets have helped more productive and more labor-abundant farmers to gain access to land. The authors also show that land sales markets exhibit greater activity in the presence of higher economic growth. This suggests that if other factor market imperfections are removed, the role of sales markets in promoting equity and efficiency will be expanded. Finally, identifying the source of shocks leading to distress sales and adopting policies that directly address these shocks can ameliorate the adverse effects of such sales in otherwise well-functioning land sales markets.

The last issue addressed in the paper concerns the importance of land administration for the promotion of efficient rental and sales markets. In India, there exist multiple institutions governing land records, registration, and transactions. This situation has led to a duplication of land records, leading to confusion and conflicts over ownership. It also creates a general sense of insecurity of tenure. The authors argue that the computerization of land records can help alleviate these problems. They cite Karnataka and Andhra Pradesh as examples of this experience. They note that the computerization of records can reduce petty corruption, ease access to land records, and possibly increase the probability of land becoming acceptable as collateral to obtain credit.

Publication: The Brookings Institution and National Council of Applied Economic Research

India Policy Forum 2010/11 - Volume 7: Editors' Summary

The seventh annual India Policy Forum conference convened in New Delhi from July 13-14, 2010. This seventh volume of the India Policy Forum, edited by Suman Bery, Barry Bosworth and Arvind Panagariya, covers economic growth, infrastructure, and politics in India. The editors' summary appears below, and you can download a PDF version of the volume, or access individual articles by clicking on the following links:

Download India Policy Forum 2010-2011 - Volume 7 »


EDITORS' SUMMARY

The India Policy Forum held its seventh conference on July 13 and 14, 2010 in New Delhi. This issue of the journal contains the papers and the discussions presented at the conference, which cover a wide range of issues. The first paper examines the services sector in India, evaluating its growth and future prospects. The second paper looks at India’s corporate sector, analyzing the profitability of firms in the wake of liberalization. The third paper explores the reasons for the large time and cost overruns that have been endemic to Indian infrastructure projects. The final two papers focus on more political issues, looking at the impact of political reservations used to increase women’s political voice, as well as the politics of intergovernmental resource transfers.

Among fast-growing developing countries, India is distinctive for the role of the service sector. Whereas many earlier rapidly growing economies emphasized the export of labor-intensive manufactures, India’s recent growth has relied to a greater extent on the expansion of services. Although there are other emerging markets where the share of services in Gross Domestic Product (GDP) exceeds the share of manufacturing, India stands out for the dynamism of its service sector. Barry Eichengreen and Poonam Gupta critically analyze this rapid service-sector growth in their paper “The Service Sector as India’s Road to Economic Growth?”

Skeptics have raised doubts about both the quality and sustainability of the increase in service-sector activity. They have observed that employment in services is concentrated in the informal sector, personal services, and public administration—activities with limited spillovers and relatively little scope for productivity improvement. They downplay information technology and communications-related employment on the grounds that these sectors are small and use little unskilled and semi-skilled labor, the implication being that a labor-abundant economy cannot rely on them to move people out of low-productivity agriculture. Some argue that the rapid growth of service sector employment simply reflects the outsourcing of activities previously conducted in-house by manufacturing firms—in other words, that it is little more than a relabeling of existing employment. They question whether shifting labor from agriculture directly to services confers the same benefits in terms of productivity growth and living standards as the more conventional pattern of shifting labor from agriculture to manufacturing in the early stages of economic development.

This paper evaluates these claims, providing an in-depth look at the services sector in India. Eichengreen and Gupta find that the growth of the sector has been unusually rapid, starting 15 years ago from a very low level. The acceleration of service-sector growth is widespread across activities, but modern services such as business services, communication, and banking are the fastest growing activities. Other rapidly growing service sectors are hotels, restaurants, education, health, trade, and transport. Some observers have dismissed the growth of modern services on the grounds that these activities constitute only a small share of output and therefore contribute only modestly to the growth of GDP. However, the results show that the contribution of the category communication, business services, and financial services has in fact risen to the point where this group contributes more to growth of GDP than manufacturing. A slightly broader grouping of communication, business services, financial services, education, health, and hotels accounted for roughly half of total growth of the service sector in 2000–08. These activities explain most of the post-1990 acceleration in service sector growth.

Modern services have been the fastest growing in India and their takeoff began at much lower incomes than in the Organisation for Economic Co-operation and Development (OECD) countries. This, clearly, is a unique aspect of the Indian growth experience. Furthermore, the expansion of the modern service sector is not simply disguised manufacturing activity. Only a relatively small fraction of the growth of demand for services reflects outsourcing from manufacturing. Most production that does not go towards exports, in fact, derives from final demand at home. Thus, the growth of service sector employment does more to add to total employment outside agriculture than outsourcing arguments would lead one to expect.

Looking at the proximate determinants of services growth, Eichengreen and Gupta show that tradable services have grown 4 percentage points a year faster than nontradable services, other things equal. Services that have been liberalized have also grown significantly faster than the average. Regulatory change has been an important part of the story: where essentially all services were heavily regulated in 1970, the majority have since been partially or wholly deregulated. The services segments that were both liberalized and tradable grew 7–8 percentage points faster than the control group (nontradable/nonliberalized services). All this implies that policy makers should continue to encourage exports of IT, communication, financial, and business services while also liberalizing activities like education, health care, and retail trade, where regulation has inhibited the ability of producers to meet domestic demand.

The fact that the share of services has now converged more or less to the international norm raises questions about whether it will continue growing so rapidly. In particular, it will depend on the continued expansion of modern services (business services, communication, and banking). But, in addition, an important share of the growth will result from the application of modern information technology to more traditional services (retail and wholesale trade, transport and storage, public administration and defense). This second aspect obviously has more positive implications for output than for employment.

Finally, the authors find that the mix of skilled and unskilled labor in manufacturing and services is increasingly similar. Thus it is no longer obvious that manufacturing will need to be the main destination for the vast majority of Indian labor moving out of agriculture, or that modern services are a viable destination only for the highly skilled few. To the extent that modern manufacturing and modern services are both constrained by the availability of skilled labor, growth in both areas underscores the importance for India of increasing investments in labor skills.

The paper concludes that sustaining economic growth and raising living standards will require shifting labor out of agriculture into both manufacturing and services, not just one or the other. The argument that India needs to build up labor-intensive manufacturing and the argument that it should exploit its comparative advantage in services are often posed in opposition to one another. Eichengreen and Gupta argue that these two routes to economic growth and higher incomes are in fact complements, not incompatible alternatives.

In their paper “Sources of Corporate Profits in India: Business Dynamism or Advantages of Entrenchment?” Ashoka Mody, Anusha Nath, and Michael Walton ask whether the liberalization of the last two decades has led to increased competition, characterized by innovation and growth, or to profiteering through entrenchment and the increased market power of large firms. While the authors consider various indicators of market structure, the main focus of their analysis is the evolution of the profit rate at the firm level in the wake of liberalization. The authors find that while liberalization induced considerable new entry in the 1990s, that pattern did not continue into the 2000s. On the whole, the major business houses and public sector firms were able to maintain their dominance in terms of market share.

The authors employ firm-level data from the Prowess database, which provides detailed information on large- and medium-sized companies in India. They focus on firms listed on the Bombay Stock Exchange. While they present some trends for the period spanning 1989–2009, their core econometric analysis covers the shorter period from 1993 to 2007, during which the sample size increased from 1,000 to about 2,300 firms. Several significant conclusions emerge from the authors’ discussion of corporate and macroeconomic trends and their econometric analysis.

First, despite some deviations in the early years, they find a consistent pattern that the corporate profit rate—measured as a return on assets—has gone up and down in line with overall economic growth. Profit rates were high in the early 1990s (with a median rate of 10–12 percent) when growth accelerated and fell subsequently as GDP growth decelerated until around 2001 (reaching about 4 percent). The rates rose again (to about 8 percent in 2007–08) as growth in the Indian economy accelerated again.

Second, unless the expansion of the tradable sectors lagged behind the growth in nontradable sectors—a possibility that cannot be ruled out—the trade liberalization of the late 1980s did not have a major influence on corporate profits. There is a striking similarity in the evolution of profitability in the tradable and nontradable sectors, both moving in unison with domestic growth. Tradable sectors enjoyed a somewhat higher profit rate than nontradable sectors.

In contrast to trade liberalization, industrial deregulation was associated with a more definite impact on profitability. Following deregulation around 1991, the number of firms increased in virtually all sectors. This increase was associated with reduced market shares. The authors’ econometric analysis suggests that smaller market shares, in turn, were associated with reduced profitability. Thus, in the second half of the 1990s, slower GDP growth and the scramble for market shares both contributed to driving down profit rates. 

The bulk of new entry, in terms of numbers, was of Indian stand-alone firms, but both government-owned firms and business houses remain dominant in terms of sales and asset shares. Indeed, the share of business houses in total sales rose slightly, from 41 percent in 1989 to 42 percent in 2008.

Firm profitability does show substantial year-to-year persistence, raising the possibility of some market power. But the persistence declines when profitability is averaged over longer periods (up to four years), implying that some “super-normal” profits are whittled away over time. Also, more efficient firms tend to have more persistent profits. Thus, some part of the persistence reflects greater efficiency, although because of the overlap between efficient and large firms, the possibility that market power may play a role in maintaining the profit rate over time cannot be completely ruled out.
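The persistence comparison described here can be illustrated with a simple lag regression; the file and column names below are assumptions, not the Prowess schema, and the specification is only meant to show why averaging over longer windows should lower the estimated persistence.

import pandas as pd
import statsmodels.formula.api as smf

# Illustrative firm-year panel with a return-on-assets column ("roa");
# file and column names are assumptions for this sketch.
df = pd.read_csv("firm_roa_panel.csv").sort_values(["firm", "year"])

# Year-to-year persistence: regress ROA on its one-year lag within firms.
df["roa_lag"] = df.groupby("firm")["roa"].shift(1)
yearly = smf.ols("roa ~ roa_lag", data=df.dropna()).fit()

# Persistence of multi-year averages: collapse to non-overlapping four-year
# windows and regress each window's mean ROA on the previous window's mean.
df["window"] = df["year"] // 4
avg = df.groupby(["firm", "window"], as_index=False)["roa"].mean()
avg["roa_prev"] = avg.groupby("firm")["roa"].shift(1)
fouryear = smf.ols("roa ~ roa_prev", data=avg.dropna()).fit()

# The text suggests the four-year coefficient should be noticeably smaller than
# the one-year coefficient, i.e., "super-normal" profits erode over time.
print(yearly.params["roa_lag"], fouryear.params["roa_prev"])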

There is no consistent evidence of a general influence of market concentration on profitability: if anything, firms in less concentrated sectors have slightly higher profit rates. The 2000s witnessed some reconcentration in some sectors, affecting about a third of all firms, but the profit behavior of firms in re-concentrating sectors appears to be similar to that in the overall sample. Firms with growing market shares do enjoy higher profitability, but the pattern of results is more consistent with causality flowing in the other direction, that is, with market share gains reflecting the success of dynamic firms. In particular, this association is at least as strong for small firms and for less concentrated industries.

This said, following significant new entry and competition for market shares in the first half of the 1990s, the pace of entry abruptly stalled in the late 1990s, market shares stabilized, and concentration rates started to rise again in some sectors. Thus, the findings are also consistent with the possibility that the phase of competitive dynamism may be diminishing, with incentives for the exercise of market power and investment in business–government relationships being on the rise.

Finally, the authors’ econometric results show that the faster a firm grew, the higher was its profitability. Supporting descriptive statistics add interesting nuances to this finding. The gap in firms’ growth rates opened up in the 2000–07 period, when the fast-growing firms also built the largest gap in profitability relative to the medium-growth firms. Slow-growing firms, typically much smaller in size, have had particularly low profit rates and have actually been shrinking in terms of real sales. This suggests that efficiency was rewarded: the dynamic medium-sized firms were able to grow fast and garner sizeable profits, reinforcing their ability to grow. The smallest firms fell increasingly behind. Thus, the shakeout resulted in a potentially more efficient structure.

A greatly expanded level of infrastructure investment is critical to sustaining Indian economic growth. During the last decade, an increasing volume of funds has been allocated to building infrastructure, and successive governments have accorded infrastructure a high priority. Nevertheless, delays and cost overruns remain large and frequent. Moreover, owing to a paucity of research on the subject, our understanding of the causes behind the cost and time overruns and their remedies remains poor. These issues assume additional importance in view of the recent changes in official procurement policy in infrastructure. The central government as well as state governments are increasingly looking to private funding for infrastructure projects, principally through public–private partnerships (PPPs). Though a shortage of funds within the government sector is largely responsible for this shift, there is equally a belief that private-sector participation can reduce delays and cost overruns. However, there is insufficient empirical work to either support or repudiate this confidence in the superiority of the private sector.

In his paper “Determinants of Cost Overruns in Public Procurement of Infrastructure: Roads and Railways,” Ram Singh provides a detailed analysis of time and cost overruns in infrastructure projects in India using two large datasets that contain information on the key dates for implementing and completing projects and the difference between planned and actual costs. The first dataset includes 934 infrastructure projects completed during April 1992–June 2009. The second dataset includes 195 road projects under the supervision of the National Highways Authority of India (NHAI). The analysis develops several hypotheses and subjects them to empirical testing. Among other issues, the paper compares delays and cost overruns in PPPs with traditionally funded projects.

A simple tabulation of the data shows large cost overruns, averaging 15 percent, and time delays of about 80 percent. However, the author also finds that delays and cost overruns have declined over time. It is also evident that time delays are the primary cause of cost overruns and that larger projects lead to larger percentage cost overruns. Projects in sectors such as roads, railways, urban development, civil aviation, shipping and ports, and power have experienced much longer delays and higher cost overruns than those in other sectors, but the author finds no evidence of any regional pattern of cost overruns or delays. He suggests that incompleteness in the initial planning and contracting is responsible for many of the cost overruns.
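A minimal sketch of how such a tabulation and a simple overrun regression might look is given below; the dataset, column names, and specification are assumptions for illustration only, not the paper's actual variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative project-level data with planned and actual costs and durations
# (in months); the file and column names are assumptions for this sketch.
df = pd.read_csv("infrastructure_projects.csv")

# Percentage cost overrun and time delay, as in the simple tabulations
# described in the text (overruns averaging about 15 percent, delays about 80).
df["cost_overrun_pct"] = 100 * (df["actual_cost"] - df["planned_cost"]) / df["planned_cost"]
df["delay_pct"] = 100 * (df["actual_months"] - df["planned_months"]) / df["planned_months"]
print(df[["cost_overrun_pct", "delay_pct"]].mean())

# A specification in the spirit of the hypotheses described: overruns rising
# with delays and with (log) project size, plus sector fixed effects.
model = smf.ols(
    "cost_overrun_pct ~ delay_pct + np.log(planned_cost) + C(sector)",
    data=df,
).fit()
print(model.summary())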

The study shows that the design of the contract has a significant bearing on the level of delays. Traditional item-rate contracts provide little or no incentive to avoid delays. In contrast, since a PPP allows contractors to reap returns as soon as the project is complete, it creates a strong incentive to complete the project at the earliest possible date. Moreover, by bundling responsibility for maintenance with construction, the PPP also motivates contractors to avoid compromising on quality. Somewhat surprisingly, PPP projects experience higher cost overruns even though they have significantly lower time delays. The author attributes the shorter time delays to the fact that project revenues do not begin until the project is complete. The larger cost overruns are more puzzling, but may reflect incentives to expand the scope of the project.

Finally, according to the author, a comparison of roads with railways sector projects suggests that organizational factors also contribute to delays and cost overruns. The author identifies three specific aspects. First, the railways sector is slower during the planning and contracting phases. Second, contract management by the railways sector is poorer than by the roads sector. While the NHAI awards most project works to a single contractor, the railways award different works to different contractors, which results in poor project coordination. Third, in the railways sector, projects are allocated funds only for the relevant fiscal year, and this is done in the second half of the year. The NHAI’s project delivery mechanism is not subject to this constraint.

Despite recent progress in India toward the social inclusion and empowerment of women, their presence in the country’s state and national lawmaking bodies remains low, raising concerns about how well women’s interests are represented. Previous empirical evidence has substantiated these concerns: women have different policy preferences than men, and elected leaders tend to implement policies in line with their own personal policy preferences, regardless of earlier campaign promises. These arguments provide an important motivation for gender-based affirmative-action policies.

In order to increase women’s political voice, the Indian government amended its constitution in 1993, devolving significant decision-making powers to village-level councils called Gram Panchayats (GPs) and requiring that one-third of all member and leader (Pradhan) positions in these councils, selected at random, be reserved for women. Most recently, in 2010, the upper house of the Indian parliament passed a bill applying similar reservation requirements to the state and national levels of government in the face of considerable resistance and skepticism. Despite the widespread adoption of such gender-reservation policies, several concerns about their effectiveness remain. First, little will change if the husbands of female leaders elected to reserved seats lead by proxy; second, reservation could leave fewer seats to be contested by other disadvantaged groups for which reservations were not established, such as India’s Muslims.

Using new data spanning 11 Indian states, the paper by Lori Beaman, Esther Duflo, Rohini Pande, and Petia Topalova, “Political Reservation and Substantive Representation: Evidence from Indian Village Councils,” assesses the impact of introducing political reservation in India’s GPs, with particular attention to the aforementioned concerns. In conducting their study, the authors collect GP meeting data across five economically and socially heterogeneous states, obtain data on public-good provision from a nationwide survey, and conduct their own survey of 165 GPs within the Birbhum district of West Bengal.

The study examines the effect of reservations in local village councils; the results are likely to be applicable to similar provisions within higher levels of government because the electoral process is the same, voter participation is high, and political parties invest significant resources in elections across all levels of government. Furthermore, by exploiting the random assignment of GP gender reservations, the authors are able to ensure that observed effects can be attributed to political reservations, rather than other factors, such as social attitudes toward women and local demand for public goods. The expansive data and novel study design allow the authors to shed light on three distinct elements of the debate on gender reservations in policymaking: politician selection, citizen participation in politics, and policymaking. 
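Because reservation status is randomly assigned, the core estimate reduces to a comparison of means across reserved and unreserved GPs; the sketch below illustrates that logic with assumed file and column names, not the authors' actual data or specification.

import pandas as pd
import statsmodels.formula.api as smf

# Illustrative GP-level data: a 0/1 indicator for whether the Pradhan seat is
# reserved for a woman and an outcome such as the number of drinking water
# facilities provided. File and column names are assumptions for this sketch.
df = pd.read_csv("gp_outcomes.csv")

# With random assignment, regressing the outcome on the reservation dummy
# (optionally with state fixed effects to absorb cross-state differences)
# recovers the average causal effect of reservation.
model = smf.ols("water_facilities ~ reserved + C(state)", data=df).fit()
print(model.params["reserved"])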

First, the authors assess the degree to which reservation affects politician selection. Encouragingly, they find no evidence that reservation for women has caused the crowding-out of other politically underrepresented social groups. Evidence does suggest, however, that women elected to reserved seats are less experienced and more likely to enlist their husband’s help in carrying out their duties as Pradhan. Nevertheless, two years into their tenure, female Pradhans from reserved GPs claim they are as comfortable and effective in their roles as their counterparts in nonreserved seats.

The study also reveals the causal mechanisms through which issues important to women might receive insufficient attention in local government. The authors hypothesize that underinvestment in what they determine are “female-friendly” issues occurs because male leaders either possess entirely different preferences or discriminate against the viewpoints of the opposite gender, regardless of whether or not their preferences diverge. The study reveals that neither is the case. Leaders in reserved GPs are neither more likely to react positively to a female-friendly issue, nor more likely to respond favorably to the inquiry of a female participant in Village Council (Gram Sabha or GS) meetings. On the contrary, women in both reserved and nonreserved GPs were found to receive more constructive responses in these meetings than men. This suggests that the problem lies not in unsympathetic leadership, but in a lack of female constituent participation in the political process that would voice women’s policy concerns. Accordingly, the study also examines the effect of gender reservation on female participation in politics. Reservation has a positive effect on whether women participate at all in GS meetings and on the degree to which they remain engaged throughout the meeting. Therefore, inasmuch as electing women to Pradhan seats continues to encourage the participation of women in GS meetings, the reservations will continue to prove effective.

Finally, the study takes advantage of new data to elucidate earlier claims regarding the effects of political reservations on allocations of public goods. A first dataset, much broader in geographic scope than that of previous studies, confirms earlier findings that female Pradhans elected to reserved seats deliver more drinking water infrastructure, sanitation, and roads than their nonreserved counterparts. However, in exploiting the richer cross-time variation of a second dataset, the study reveals that reservations have a much broader impact across sectors than previously thought. The data from the Birbhum region of West Bengal allow the authors to compare public goods allocation patterns between newly reserved GPs, GPs reserved twice in a row, and GPs that are currently unreserved but were reserved before. These new data indicate that, while continuing to push drinking water investments, women elected in the second term under a reserved seat also invest more in “male issues” such as school repair, health center repair, and irrigation facilities. These investment patterns are found to be enduring, as even male Pradhans elected to previously reserved seats continue to invest in female-friendly issues after female reservation for their GP has expired.

Taken together, the findings of the study provide important insights into how leaders in reserved seats are elected, how they affect policymaking, and how they shape actual policy outcomes. While women elected in reserved GPs do differ from their male counterparts in their experience as leaders, they are able to increase female participation in the political process and make different policy decisions.

The basic structure of India’s fiscal federalism was in place within five years of the country’s independence on August 15, 1947. The division of expenditure responsibilities and sources of revenue across units of the federation, as well as the institutions for allocating resources between levels of government, gave substantial discretion to the central government, thereby concentrating economic and political power at the federal level. The design was understandable in light of the perceived need to combat incipient forces of separatism and the economic logic of planned development. This framework for fiscal federalism has been remarkably stable, however, even as the fears of separatism faded, political power dispersed and new parties representing state interests gained representation at all levels of government, and markets replaced planners in directing investment.

In their paper “Inelastic Institutions: Political Change and Intergovernmental Transfer Oversight in Post-Independence India,” T.N. Srinivasan and Jessica Seddon Wallack examine the persistence, and in some cases strengthening, of centralizing features in India’s fiscal federalism, which is a surprising exception to the general trend toward decentralization that other analysts of India’s political economy have described.

The paper focuses in particular on the two institutions—the Finance Commission (FC) and the Planning Commission (PC)—that oversee the bulk of intergovernmental resource transfers. The FC, a constitutional body designed to be independent of the constitutionally defined jurisdictions of both the Center and the states, was created to ensure that states had predictable and stable resources and autonomy in their use. In practice, the FC has played a limited role relative to its constitutional potential. Many have argued that it has unique constitutional authority to oversee intergovernmental revenue transfers, but a substantial portion of these transfers is determined and allocated through the PC instead.

The PC, an entity created by a cabinet resolution and hence a part of the constitutional sphere of the Center, was to advise the Center on planning and plans for national development. In contrast to the FC, the PC has in fact played a much larger role in allocating transfers than advising would necessarily imply. As a transfer mechanism, it facilitates Central government oversight of states’ development policies and has ample scope for Central government discretion in transfers. The centralizing aspects of this arrangement have been highlighted in various high-profile public discussions questioning the division of responsibilities between the FC and the PC as well as the various mechanisms for transfers by the PC. Yet, little has changed in terms of the institutional oversight over resource flows.

The authors explore various explanations for the persistence of these centralizing features and conclude that the most likely explanation lies in the barriers that India’s federal institutions pose to collective action by states. State leaders have ample political reasons to seek greater control over their finances and in fact do appear to care about the centralizing implications of the fiscal federal framework. However, they are divided both by design—state boundaries were in many cases drawn on the basis of linguistic or cultural differences—as well as by the economic reality of diverging fortunes and varying dependence on transfers.

India’s institutions also offer no authoritative forum for states and Central government to discuss federal arrangements and propose alternatives. The available arenas for intergovernmental discussions are either toothless or have structures that create incentives for individualist behavior. The Union Parliament, for example, would be able to effect changes to the federal structure through instruments available to it under the constitution or through constitutional amendments if needed. However, the parliamentary system also gives those state parties that are part of the government a vested interest in preserving the status quo.

Srinivasan and Wallack’s analysis implies that there will be limited change in the intergovernmental transfer system, a conclusion that they find worrisome for India’s ability to adjust economically and politically to changing circumstances. Not only does conventional public finance theory favor decentralization of decision making with respect to the financing and provision of public goods and services, especially in heterogeneous societies, but “voices from below” are increasingly valuable as an information source about what is needed in a fast-changing world. They argue that India’s record of government performance also suggests a dearth of accountability, and that real decentralization of roles and responsibilities—not delegated expenditure duties—can be more effective in creating stronger performance incentives.

      
 
 





How Louisville, Ky. is leveraging limited resources to close its digital divide

Every region across the country experiences some level of digital disconnection. This can range from Brownsville, Texas, where just half of households have an in-home broadband subscription, to Portland, Ore., where all but a few pockets of homes are connected. Many more communities, such as Louisville, Ky., fall somewhere in the middle. In Louisville, most…

       





10 things we learned at Brookings in March

March 2020 was the month in which the World Health Organization declared coronavirus a global pandemic. Before and since, Brookings experts have examined different policy responses to the widening global crisis. For more, visit the COVID-19 page on our website. 1. What grocery workers need as they work the front lines of COVID-19 From left:…

       





10 things we learned at Brookings in April

April 2020 was another month in which Brookings experts produced a wealth of research and analysis about addressing the COVID-19 crisis, both in the U.S. and globally. But research on other topics continues. Below is a selection of new research across a range of topics. 1. The Federal Reserve's response to the COVID-19 crisis “The Federal…

       





Is the Iranian-Saudi “cold war” heating up? How to reduce the temperature

In Saudi Arabia and Iran, emotions are running high, and even an accidental spark could turn the cold war between the two regional powers hot. Their antagonism is a grave threat to the wider region, which isn’t exactly a bastion of stability these days—and it’s contrary to those states' long-term interests.

      
 
 





Principles for Transparency and Public Participation in Redistricting


Scholars from the Brookings Institution and the American Enterprise Institute are collaborating to promote transparency in redistricting. In January 2010, an advisory board of experts and representatives of good-government groups was convened to articulate principles for transparent redistricting and to identify the barriers facing members of the public and communities who wish to create redistricting plans. This document summarizes the principles for transparency in redistricting that were identified during that meeting.

Benefits of a Transparent, Participative Redistricting Process

The drawing of electoral districts is among the most easily manipulated and least transparent systems in democratic governance. All too often, redistricting authorities maintain their monopoly by imposing high barriers to transparency and public participation. Increasing transparency and public participation can be a powerful counterbalance, giving the public access to information that is typically available only to official decision makers and opening the way to different outcomes and better representation.

Increasing transparency can empower the public to shape representation for their communities, promote public commentary and discussion about redistricting, inform legislators and redistricting authorities about which district configurations their constituents and the public support, and educate the public about the electoral process.

Fostering public participation can enable the public to identify their neighborhoods and communities, promote the creation of alternative maps, and facilitate an exploration of a wide range of representational possibilities. The existence of publicly-drawn maps can provide a measuring stick against which an official plan can be compared, and promote the creation of a “market” for plans that support political fairness and community representational goals.

Transparency Principles

All redistricting plans should include sufficient information so the public can verify, reproduce, and evaluate a plan. Transparency thus requires that:

  • Redistricting plans must be available in non-proprietary formats.
  • Redistricting plans must be available in a format allowing them to be easily read and analyzed with commonly-used geographic information software.
  • The criteria used as a basis for creating plans and individual districts must be clearly documented.

Creating and evaluating redistricting plans and community boundaries requires access to demographic, geographic, community, and electoral data. Transparency thus requires that:

  • All data necessary to create legal redistricting plans and define community boundaries must be publicly available, under a license allowing reuse of these data for non-commercial purposes.
  • All data must be accompanied by clear documentation stating the original source, the chain of ownership (provenance), and all modifications made to it.

Software systems used to generate or analyze redistricting plans can be complex, impossible to reproduce, or impossible to correctly understand without documentation. Transparency thus requires that:

  • Software used to automatically create or improve redistricting plans must be either open-source or provide documentation sufficient for the public to replicate the results using independent software.
  • Software used to generate reports that analyze redistricting plans must be accompanied by documentation of data, methods, and procedures sufficient for the reports to be verified by the public.

Services offered to the public to create or evaluate redistricting plans and community boundaries are often opaque and subject to misinterpretation unless adequately documented. Transparency thus requires that:

  • Software necessary to replicate the creation or analysis of redistricting plans and community boundaries produced by the service must be publicly available.
  • The service must provide the public with the ability to make available all published redistricting plans and community boundaries in non-proprietary formats that are easily read and analyzed with commonly-used geographic information software.
  • Services must provide documentation of any organizations providing significant contributions to their operation.
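
As an illustration of the kind of verification these principles are meant to make possible, the sketch below reads a plan published as a census-block-to-district assignment list together with block populations and reports each district’s total population and its deviation from the ideal. The file layout and column names are assumptions for this sketch, not a prescribed format.

import csv
from collections import defaultdict

# Hypothetical inputs: "plan.csv" maps census block IDs to district numbers,
# and "block_populations.csv" gives each block's total population.
assignment = {}
with open("plan.csv", newline="") as f:
    for row in csv.DictReader(f):
        assignment[row["block_id"]] = row["district"]

district_pop = defaultdict(int)
with open("block_populations.csv", newline="") as f:
    for row in csv.DictReader(f):
        district = assignment.get(row["block_id"])
        if district is not None:
            district_pop[district] += int(row["population"])

# Report each district's population and its percentage deviation from the
# ideal (equal) district population -- the basic equal-population check a
# member of the public should be able to reproduce from published data.
total = sum(district_pop.values())
ideal = total / len(district_pop)
for district, pop in sorted(district_pop.items()):
    deviation = 100 * (pop - ideal) / ideal
    print(f"District {district}: {pop:,} ({deviation:+.2f}% from ideal)")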

Promoting Public Participation

New technologies provide opportunities to broaden public participation in the redistricting process. These technologies should aim to realize the potential benefits described and be consistent with the articulated transparency principles.

Redistricting is a legally and technically complex process. District creation and analysis software can encourage broad participation by:

  • being widely accessible and easy to use;
  • providing mapping and evaluation tools that help the public create legal redistricting plans, as well as maps identifying local communities;
  • being accompanied by training materials that assist the public in successfully creating and evaluating legal redistricting plans and defining community boundaries;
  • offering publication capabilities that allow the public to examine maps even without access to the software; and
  • supporting social networking that allows the public to compare, exchange, and comment on both official and community-produced maps.



Official Endorsement from Organizations – Americans for Redistricting Reform, Brennan Center for Justice at New York University, Campaign Legal Center, Center for Governmental Studies, Center for Voting and Democracy, Common Cause, Demos, and the League of Women Voters of the United States.

Attending board members – Nancy Bekavac, Director, Scientists and Engineers for America; Derek Cressman, Western Regional Director of State Operations, Common Cause; Anthony Fairfax, President, Census Channel; Representative Mike Fortner (R), Illinois General Assembly; Karin Mac Donald, Director, Statewide Database, Berkeley Law, University of California, Berkeley; Leah Rush, Executive Director, Midwest Democracy Network; Mary Wilson, President, League of Women Voters.

Editors Micah Altman, Harvard University and the Brookings Institution; Thomas E. Mann, Brookings Institution; Michael P. McDonald, George Mason University and the Brookings Institution; Norman J. Ornstein, American Enterprise Institute.

This project is funded by a grant from the Sloan Foundation to the Brookings Institution and the American Enterprise Institute.

Publication: The Brookings Institution and The American Enterprise Institute
Image Source: © Lucy Nicholson / Reuters
     
 
 





Pulling Back the Curtain on Redistricting


Every 10 years — unfortunately, sometimes more frequently — legislative district lines are redrawn to rebalance district populations in light of the demographic changes revealed by the census. What goes on is much more than a simple technical adjustment of boundaries, with ramifications that largely escape public notice.

Politicians often use redistricting as an opportunity to cut unfavorable constituents and potential challengers out of their districts. Barack Obama, for example, learned the rough and tumble of redistricting politics when Rep. Bobby Rush (D-Ill.) carved Obama's Chicago home out of Rush's congressional district after turning back a 2000 primary challenge from Obama, then a state senator.

Incumbents can also use redistricting to move favorable constituents into their districts. Obama himself used the state legislative redistricting to extend his predominantly African American district north into a wealthy area of Chicago known as the Gold Coast. This new constituency allowed Obama to hone an effective biracial campaigning style that served him well when he ran for the U.S. Senate and the presidency.

Critically, these decisions are made with little or no public input or accountability. While Arizona and California are among the few states that give the public a chance to see and participate in how the boundaries are set, by using open redistricting commissions, most states gerrymander legislative lines behind closed doors. Figures from both major parties tilt the electoral playing field so much that one party is essentially assured of winning a given district, controlling the state legislature or winning the most seats in the state's congressional delegation. In other words, the democratic process is subverted. In this system, politicians select voters rather than voters electing politicians.

A 2006 Pew survey found that 70 percent of registered voters had no opinion about congressional redistricting. Among the few that expressed an opinion, some mistook the question to be about school districts rather than congressional districts.

For many reasons it has been hard to fault the public. An immense amount of population data must be sifted and then assembled, much like a giant jigsaw puzzle, to ensure that districts satisfy complex federal requirements relating to equal population and the Voting Rights Act, and varying state requirements that may include compactness and respect for existing political boundaries or communities. And access to these data and to the software necessary to assemble and analyze them has long been out of public reach.

In the previous round of redistricting, according to a 2002 survey of authorities we conducted with our colleague Karin Mac Donald, most states did not provide any tools, facilities, dedicated assistance or software to support the public in developing redistricting plans. Many states failed to provide even minimal transparency by making data available, providing information about their plans online or accepting publicly submitted plans. Many redistricting authorities have not made firm plans to support transparency or public participation in the current round of redistricting.

In the coming year, however, technological advancements will enable anyone with a Web browser and an interest in how he or she is represented to draw district maps of his or her community and state that meet the same requirements as official submissions. Under the direction of scholars at the Brookings Institution and the American Enterprise Institute, and with consultation from an array of experts in redistricting issues, we have developed a set of principles for transparency and public participation. These principles have been endorsed by an array of stakeholders, including Common Cause and the League of Women Voters of the United States.

Americans will be able to participate directly in their democracy by offering plans to be compared with the politician-drawn maps. The public and even the courts will no longer have to accept whatever is devised by politicians in the backroom.

The Wizard of Oz appeared powerful because he hid behind a curtain -- until it was pulled back. The time has come to pull back the curtain on redistricting. A good place to start is by passing Rep. John Tanner's Redistricting Transparency Act, which has 38 co-sponsors from both parties. If Congress will not act, state governments can follow the lead of the few states that provide for meaningful transparency and public participation. Failure to provide for transparency and public participation should be recognized for what it is: an obviously self-serving act, placing the interests of politicians above the public interest.

Publication: The Washington Post
Image Source: © Joel Page / Reuters
     
 
 





Toward Public Participation in Redistricting


Event Information

January 20, 2011
9:00 AM - 12:00 PM EST

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


The drawing of legislative district boundaries is among the most self-interested and least transparent systems in American democratic governance. All too often, formal redistricting authorities maintain their control by imposing high barriers to transparency and to public participation in the process. Reform advocates believe that opening that process to the public could lead to different outcomes and better representation.

On January 20, Brookings hosted a briefing to review how redistricting in the 50 states will unfold in the months ahead and to present a number of state-based initiatives designed to increase transparency and public participation in redistricting. Brookings Nonresident Senior Fellows Micah Altman and Michael McDonald unveiled open-source mapping software that enables users to create and submit their own plans, based on current census and historical election data, to redistricting authorities and to disseminate them widely. Such alternative public maps could offer viable input to the formal redistricting process.

After each presentation, participants took audience questions.

Learn more about Michael McDonald's Public Mapping Project »


     
 
 





@ Brookings Podcast: The Politics and Process of Congressional Redistricting

Now that the 2010 Census is concluded, states will begin the process of redistricting—redrawing voting district lines to account for population shifts. Nonresident Senior Fellow Michael McDonald says redistricting has been fraught with controversy and corruption since the nation’s early days, when the first “gerrymandered” district was drawn. Two states—Arizona and California—have instituted redistricting commissions intended to insulate the process from political shenanigans, but politicians everywhere will continue to work the system to gain electoral advantage and the best chance of re-election for themselves and their parties.



     
 
 





The Rigged Redistricting Process


Voters are supposed to choose their representatives, but the flawed redistricting process in our nation too often allows representatives to choose their voters. This rigged game is in full flower in Virginia, which has an accelerated redistricting process this year because elections for its House of Delegates and Senate take place in November. State Senate Majority Leader Richard Saslaw (D) was stunningly candid in a recent radio interview in describing the process politicians would follow to redraw the lines:

“The House does theirs. The Senate does theirs. And I’m not gonna interfere with the lines the House draws for the House. And they’re not gonna interfere with the lines I draw for the Senate. And I would simply say, well, you know, our goal is to make the Democratic districts, particularly the marginal ones, a little bit better than they are now. I’m not greedy. I’m not trying to put all the Republicans out of business by any stretch. They didn’t do that to us 10 years ago. And we’re not gonna do that to them.”

Saslaw described a classic bipartisan incumbent gerrymander; the majority Democrats in the state Senate would let the majority Republicans in the state House stack the deck for its incumbents, and vice versa. The biggest losers? The voters of Virginia, denied competitive elections in which the outcomes reflect their collective preferences.

The situation is different but just as smelly for the redrawing of lines for Virginia’s 11 congressional seats. As Politico described last week, the 11 incumbents — three Democrats and eight Republicans — cut a deal to protect each other, solidifying the GOP’s 8-to-3 edge by making several competitive seats strongly Republican while allowing Democrats to make a sinecure out of the seat of Rep. Gerry Connolly, who barely won in 2010.

Around the country, comparable deals will be cut by pols intent on protecting each other or maximizing the number of seats a party controls (in a way that distorts the actual partisan balance in the state). Thanks to the Supreme Court, the only restraint, other than adhering to the requirements of the Voting Rights Act, is to make sure that all the districts are virtually equal in population. With the aid of sophisticated software, the one-person, one-vote rule allows ample scope for the self-interested manipulation of district boundaries.

Politicians could get away with this in the past because few others had access to the tools to create districts using official census data and past election returns.

No more.

Michael McDonald of George Mason University and Micah Altman of the Institute for Quantitative Social Science at Harvard, working in conjunction with our two think tanks, have created the Public Mapping Project, an open-source software package that enables anybody to create districts for any state that balance such desirable qualities as compactness and the protection of communities of interest with competitiveness and partisan fairness, all while satisfying one-person, one-vote and the Voting Rights Act.

The first important use of the software is coming in Virginia. To his credit, Gov. Bob McDonnell (R) has created an independent redistricting commission that can recommend more objective and public-interest-oriented plans than the ones produced by the pols. Unfortunately, McDonnell’s commission has no teeth beyond its public profile and ability to showcase plans that can point up, by their quality, the folly of rigged plans. The commission has agreed to give serious weight to the best plans produced by a competition created by George Mason and Christopher Newport universities. Teams of students from 13 colleges and universities in Virginia produced 57 plans for Congress, the House of Delegates and the state Senate. Unlike the plans politicians are crafting behind closed doors, all of the student plans are online (at districtbuilder.varedistrictingcompetition.org).

The two of us judged the plans (awards will be presented in Richmond on Tuesday) and were deeply impressed with what these students — most of them undergraduates but including a team from William and Mary Law School — accomplished. They weighed how to draw district maps that respected federal and state requirements without bending to the interests of incumbent officeholders or political parties. They created two sets of maps: one, through a politically blind process that prioritized contiguous and compact districts respecting Virginia’s communities of interest, including cities and counties, and sensitive to the representation of minorities; the other, by adding to these standards an explicit effort to create as many competitive districts as possible and to fairly reflect public support for the two parties. This was not easy, given the substantial changes in Virginia’s population since the last census, the need to create districts that are virtually equal in population and the trade-offs required when redistricting criteria conflict.

The best student plans show that it is possible to create more legitimate and responsive districts — and that with the right tools, citizens anywhere can create better plans to choose their representatives than the representatives do to protect their own careers. While politicians may fight to keep the process closed, the tools are available to enable us to do better. Virginia’s college students have demonstrated that. The challenge is to replicate their efforts across the country and to harness informed and empowered public participation to improve the quality of our democracy.


Publication: The Washington Post
Image Source: © Yuri Gripas / Reuters
      
 
 





Redistricting and the United States Constitution


Thomas Mann joins Sean O’Brien and Nate Persily on the Diane Rehm Show to examine what the U.S. Constitution says about drawing congressional and legislative districts and how court decisions have further shaped those guidelines.

DIANE REHM: Thanks for joining us, I'm Diane Rehm. The framers of the U.S. Constitution did not use the word district when they outlined how Congressional representatives would be chosen. Article 1, Section 2 of the document states only how to choose the number of lawmakers. Today, the redistricting process has become at times contentious and blatantly partisan. As part of our "Constitution Today" series, we look at what the document says about the process of redistricting and how court cases have further shaped those guidelines.

Joining me here in the studio are Sean O'Brien of the Center for the Constitution at James Madison's Montpelier, Thomas Mann of the Brookings Institution and joining us from Columbia Law School where he is The Beekman Professor of Law and Political Science, is Nate Persily. Throughout the hour, we'll welcome your calls, questions, 800-433-8850. Send us your e-mail to drshow@wamu.org. Good morning, gentlemen. Thank you for joining me.

SEAN O'BRIEN: Good morning.

THOMAS MANN: Good morning.

NATE PERSILY: Good morning.

REHM: Sean O'Brien, let me start with you. What does the constitution actually say about legislative districts and I'm glad that you have a copy of the constitution right in front of you, good. Nate Persily has his as well.

O'BRIEN: As you indicated in the opening it's very, very vague, as are many things in the constitution, and we have to figure out how to implement what this constitution says. Really what they did initially was set up the initial representation and came up with the number of representatives that each state would have before they knew how many people lived there and set up a minimum number of representatives that each state could have and the maximum size, which they could be.

And so they basically -- it just says here the actual enumeration shall be made within three years after the first meeting of the Congress of the United States and within every subsequent term of 10 years, in a manner as they shall by law direct. The number of representatives shall not exceed one for every 30,000, but each state shall have at least one representative. And until such enumeration shall be made and then they lay out which states get how many members of Congress in the first Congress.

And that gets into an interesting story that Tom and I were talking about out in the lobby, but again, it's pretty open and that's why we have a lot of opportunities to continue to talk about this issue right now.

REHM: All right. And turning to you, Nate Persily, when did the word district first come into play?

PERSILY: Well, for hundreds of years now, we've had districts, but as Sean said, there's no constitutional requirement that we have them. Since the Supreme Court decisions in the 1960s, we have abided by a rule of population equality for congressional and other districts as they are drawn, but Congress has also passed statutes, various apportionment statutes over time, that have required single-member districts, and the one that currently exists today is about 90 years old.

REHM: Ninety years old? Tom Mann.

MANN: It's important to remember the other provision of the constitution that is relevant here is Article 1, Section 4, the times, places and manner of holding elections for senators and representatives shall be prescribed in each state by the legislature thereof. So it was the states that were given the authority to decide how those representatives would be elected. They could have set up a proportional representation system, everyone running at large statewide in which case redistricting would never have arisen as a problem.

Listen to the interview or read the full transcript at thedianerehmshow.org »


Publication: The Diane Rehm Show
Image Source: © Jonathan Ernst / Reuters
      
 
 





Web Chat: The Politics of Congressional Redistricting


Following each decennial Census, states re-draw the boundaries of their voting districts, often to the benefit of one party over another. Some states that have lost population lose seats in the House of Representatives, while some growing states gain them. This highly charged political process is taking place against a backdrop of fierce partisanship at the national and local levels, at a time when sophisticated redistricting technology is widely available and when the decisions made by state governments will reverberate in the coming elections.

On April 20, Thomas Mann answered your questions on the status of the redistricting process, and efforts for reform around the nation, in a live web chat moderated by David Mark, senior editor at POLITICO.

The transcript of this chat follows:

12:31 David Mark: Welcome to the chat. I'll open the discussion by asking about Texas, which will get four new House seats through reapportionment. Will Republicans realistically be able to add four new seats or will gains be limited by Voting Rights Act regulations?

12:33 Tom Mann: This is a case in which complete partisan control of the redistricting process is no guarantee that the majority party will reap the benefits of additional seats in the state delegation. More than half of the population gains in Texas have come from Hispanics, and many of them are concentrated in urban areas. They will almost certainly garner at least two of the four new seats, and the odds are that Democrats will win those seats.

12:36 David Mark: California for the first time will draw districts based on recommendations by a non-partisan citizens panel. Will this put incumbents in danger and how else might it affect the redistricting process?

12:40 Tom Mann: California has specialized in eliminating competitive House districts through the redistricting process. No other state comes close to them. The new commission is almost certain to put some incumbents in both parties in more competitive districts. However, it is not clear that one party will gain. The current lineup of seats by party pretty much reflects their statewide strength.

12:40 [Comment From Dan: ] Who’s got the edge in the redistricting process across the country – Democrats or Republicans, and why?

12:45 Tom Mann: Republicans have a clear advantage because of their success in the 2010 midterm elections, in which they took control of many governorships and state legislatures. They control the process in 17 states with roughly 200 seats while the Democrats are in charge in only 7 states with 49 seats. But there are other factors limiting Republican gains, including the fact that they now have many seats in districts won by Obama in 2008 (60). Republicans will likely put a higher priority on shoring up some of their vulnerable incumbents than in drawing new Republican districts.

12:45 [Comment From Sally: ] Is it all 50 states that will see new congressional district boundaries? I have heard only about Texas and Ohio. Is that where the big fights are?

12:46 [Comment From Stephanie: ] We’ve limited the House to 435 members for many years now, but there was a time when the size of the House changed with the Census. What’s the history on that? Why did they decide to cap it, and should it stay capped?

12:47 Tom Mann: States with only a single House district have no congressional boundaries to redraw. All of the others have to redistrict to account for seat gains/losses and/or population shifts within states. Major battles are shaping up in Florida, Pennsylvania, Michigan, Illinois, Georgia and North Carolina as well.

12:48 [Comment From John: ] It looks like the south and the west will gain seats, while the industrial northeast and the farm heartland will lose. Who makes the ultimate decision on which states will win or lose a seat? Is that process complete?

12:50 Tom Mann: The apportionment process is complete. It is determined by a congressionally approved formula applied to new census data. Ten states, mostly in the industrial north/midwest, will lose 12 seats. Eight states will gain a total of 12 seats, including 4 for Texas and 2 for Florida.

12:50 [Comment From Rebecca: ] You’ve written about how political this process is, and some call redistricting the “incumbent protection” process. Is that good or bad?

12:56 Tom Mann: Redistricting in most states is done through the normal legislative process. (A few states use a bipartisan or independent redistricting commission.) Political self-interest -- protecting the interests of incumbents and/or the dominant party -- drives the process and is constrained only by requirements for equal population, protection of minority interests, and some other criteria specified by individual states. I believe this self-interest should not automatically prevail over the broader public interest in competitive elections, accountable elected officials, and communities of interest.

12:56 [Comment From Don: ] How can we best reform the redistricting process and remove the partisanship that seems to dominate it?

1:02 Tom Mann: There are a variety of approaches. One is to alter the basic electoral system by moving from single-member districts to some form of proportional representation. Another is to lodge redistricting authority with independent, nonpartisan or bipartisan commissions. Arizona and now California are two examples of this. Yet another is to build into state (or federal) law requirements for competitive elections and partisan fairness. Finally, a new effort underway this cycle is to rely on transparency and public participation to create alternative maps and use them to bring pressure to bear on those with formal redistricting authority. I've been involved in a collaborative effort to develop open-source mapping software to do just that. It is being picked up by individuals and groups around the country. You can get information at publicmapping.org.

1:02 [Comment From Joe: ] How can ordinary citizens get involved? The whole redistricting system seems rigged to me.

1:02 Tom Mann: My last answer is directly responsive to your question.

1:03 [Comment From Tom: ] I saw Rep. Dennis Kucinich on the Daily Show with Jon Stewart, and he said his district was going to disappear entirely. Does that really happen?

1:05 Tom Mann: Ohio will lose two seats. That means two current incumbents will be out of a job in Ohio, 12 nationally, just because of reapportionment. Kucinich may well survive this process, but it will be driven by Republicans, since they control the process.

1:05 David Mark: Thanks for joining us today.


Image Source: © Yuri Gripas / Reuters
      
 
 





@ Brookings Podcast: Redistricting for Political Gains

Every decade since 1790, a census of the entire U.S. population has been used to apportion seats in the U.S. House of Representatives among the states. But the redrawing of congressional districts that follows the census is an exercise in pure politics, says expert Thomas Mann. With the power to redistrict in the hands of incumbents in state legislatures, coupled with powerful mapping technologies, a state’s representation in Congress often bears little relation to the actual partisan makeup of its population, he says.


      
 
 





Democratizing Legislative Redistricting


Often considered among the most self-interested and least transparent systems in American democracy, the decennial process to redraw legislative district boundaries is now in full swing. On Monday, experts will review the results coming in from the states and discuss initiatives—from public mapping to independent commissions—to open up redistricting. Thomas Mann explains how this round may be a start toward transparency.


Image Source: © Allen Fredrickson / Reuters
      
 
 





A Status Report on Congressional Redistricting


Event Information

July 18, 2011
10:00 AM - 11:30 AM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


A full video archive of this event is also available via C-SPAN.

The drawing of legislative district boundaries is arguably among the most self-interested and least transparent systems in American democracy. Every ten years redistricting authorities, usually state legislatures, redraw congressional and legislative lines in accordance with Census reapportionment and population shifts within states. Most state redistricting authorities are in the midst of their redistricting process, while others have already finished redrawing their state and congressional boundaries. A number of initiatives—from public mapping competitions to independent shadow commissions—have been launched to open up the process to the public during this round of redrawing district lines.

On July 18, Brookings hosted a panel of experts to review the results coming in from the states and discuss how the rest of the process is likely to unfold. Panelists focused on evidence of partisan or bipartisan gerrymandering, the outcome of transparency and public mapping initiatives, and minority redistricting.

After the panel discussion, participants took audience questions.


      
 
 




ed

The Impact of Density and Diversity on Reapportionment and Redistricting in the Mountain West


Executive Summary

During the first decade of the 21st century the six states of the Mountain West — Arizona, Colorado, Idaho, Nevada, New Mexico, and Utah — experienced unprecedented political and demographic changes. Population growth in all six states exceeded the national average and the region is home to the four states that underwent the largest population gains between 2000 and 2010. As a consequence, the region is now home to some of the most demographically diverse and geographically concentrated states in the country — factors that helped to transform the Mountain West from a Republican stronghold into America’s new swing region. This paper examines the impact that increased diversity and density are exerting on reapportionment and redistricting in each Mountain West state and assesses the implications that redistricting outcomes will have both nationally and within each state in the coming decade. Nationally, the region’s clout will increase due to the addition of three seats in the House of Representatives (one each in Arizona, Nevada, and Utah) and electoral contexts in Arizona, Colorado, Nevada, and New Mexico that will result in competitive presidential and Senate elections throughout the decade. At the state level, the combination of term limits, demographic change, and the reapportionment of state legislative seats from rural to urban areas will alter the composition of these states’ legislatures and should facilitate the realignment of policy outcomes that traditionally benefited rural interests at the expense of urban needs.

Introduction

As reapportionment and redistricting plans across the 50 states are finalized and candidate recruitment begins in earnest, the contours of the 2012 election are coming into focus. One region of the country where reapportionment (redistributing seats to account for population shifts) and redistricting (drawing boundaries for state legislative and congressional districts) are likely to have significant consequences in 2012 and beyond is in the six states of the Mountain West: Arizona, Colorado, Idaho, Nevada, New Mexico, and Utah. Driven by explosive growth during the past decade, the Mountain West is now home to some of the most demographically diverse and geographically concentrated states in the country. As a consequence, the region has increasingly become more hospitable to Democrats, particularly Colorado, Nevada, and New Mexico and to a lesser extent Arizona. In this paper, I examine how these changes are affecting reapportionment and redistricting across the region. Specifically, after summarizing some of the key regional demographic and political changes, I offer a brief overview of the institutional contexts in which the maps are being drawn. This is followed by an assessment of outcomes in each state. I conclude with a discussion of the national and state level implications that reapportionment and redistricting are likely to engender across the Mountain West.

A Region in Transition

Between 2000 and 2010 population growth in all six Mountain West states outpaced the national average of 9.7 percent and the region contains the four states that experienced the largest percent population increase in the country (Nevada = 35.1 percent; Arizona = 24.6 percent; Utah = 23.8 percent; and Idaho = 21.1 percent).[i] As a consequence, Nevada and Utah each gained a fourth seat in the House of Representatives and Arizona was awarded its ninth. Beginning with the 2012 election, the Mountain West will have 29 U.S. House seats (Idaho has two House seats, New Mexico has three, and Colorado has seven) and 41 Electoral College votes.

Across the Mountain West, population growth was concentrated in the region’s largest metropolitan statistical areas (MSAs).[ii] Most notably, the Las Vegas metro area is now home to nearly three out of four Nevadans — the most highly concentrated population in the region. In Arizona, roughly two-thirds of the population now resides in the Phoenix MSA, which grew by nearly 30 percent. The Albuquerque MSA experienced the largest overall increase as a share of total population (nearly 25 percent) and now contains 44 percent of New Mexico’s population. And while Idaho remains the state in the region with the least dense population, growth in the Boise MSA significantly outpaced that state’s overall population gain and nearly 40 percent of all Idahoans reside in and around Boise. On the other end of the spectrum are the Salt Lake City and Denver MSAs, which decreased slightly since 2000 as shares of the Utah and Colorado populations, respectively. Still, better than half (50.57 percent) of all Coloradans live in Denver and its suburbs and around 41 percent of Utah’s population is concentrated in the Salt Lake City MSA.

In addition to further urbanizing the region, the prior decade’s growth continued to transform the region’s demographics, as all six Mountain West states are now more ethnically diverse than they were a decade ago.[iii] The largest changes occurred in Nevada, where the minority population increased by over 11 percent and better than 45 percent of Nevadans are now classified as non-white. While the bulk of this growth was among Hispanics, whose share of the population increased by 7 percent and who now make up 26.5 percent of all Nevadans, the Silver State also recorded large increases among Asians and Pacific Islanders. Arizona experienced similar increases as that state’s minority population mushroomed from 36.2 percent to 42.2 percent, with Hispanics now constituting 30 percent of the population. In Colorado, the minority population increased by 3.5 percent to 30 percent. Nearly all of this change was caused by an increase in Hispanics, who now constitute 20.7 percent of the state’s population. New Mexico continues to be the Mountain West’s most diverse state, as nearly three out of five New Mexicans are minorities and the state contains the region’s largest Hispanic population (46 percent). And while Idaho and Utah remain overwhelmingly white, both states’ non-white populations grew at levels similar to Colorado’s. Idaho is now 16 percent non-white (including a Hispanic population of 11.2 percent) and nearly one in five Utahns is a minority. Between 2000 and 2010, Hispanics increased by 4 percent to constitute 13 percent of Utah’s population.

Politically, these changes helped to create competitive electoral contexts across the region. Indeed, with the obvious exceptions of Idaho and Utah, the Mountain West is now more hospitable to the Democratic Party than it was in 2000. In particular, Democrats were able to make significant gains in Colorado, Nevada, and New Mexico and effectively flipped those states from Republican leaning in 2000 to Democratic leaning in 2010. In Arizona, the Democratic performance was highly variable and moved in near perfect tandem with the broader national political environment. At the same time, the downturn in Democratic support in 2010 indicates that the party has not yet consolidated its gains. Riding a favorable 2010 macro-environment, Mountain West Republicans gained one governorship (New Mexico), seats in ten of the region’s 12 state legislative chambers, and seven House seats (out of a total of 26 in the region).[iv] Thus, heading into the 2011 redistricting cycle, Republicans control the executive and legislative branches in Arizona, Idaho, and Utah, and there are no Mountain West states where the Democrats have unified control: the partisan composition of the Colorado legislature is divided, while Nevada and New Mexico have Republican governors and Democratic legislatures.

The Institutional Context

Because of variation in the institutional arrangements governing how each state approaches reapportionment and redistricting, the impact that the demographic and political changes outlined above are exerting on map drawing differs across the region. To be sure, there are a number of commonalities across the states, such as requirements that U.S. House districts be equally populated, that population variation among state legislative districts be kept to a minimum, and that boundary lines be compact, contiguous, and maintain communities of interest.

Beyond these constraints, mapmakers across the region are afforded different degrees of latitude in how they go about doing their work. For instance, in Nevada and New Mexico, the residency of incumbents can be considered, while Idaho forbids it. Idaho allows for twice as much inter-district population variation for state legislative districts as Colorado and New Mexico, and Idaho only allows state legislative districts to cross county lines if the counties are linked by a highway. Arizona and Idaho mandate that two lower chamber districts be nested within the boundaries of a state senate seat, while Colorado, Nevada, New Mexico, and Utah do not. Nevada also allows for multi-member state legislative districts. Lastly, Arizona’s redistricting plans must be pre-cleared by the U.S. Department of Justice. While Arizona is the only state in the region subject to preclearance, protection of minority voting rights also has been a point of contention in prior redistricting cycles in New Mexico.

The Mountain West states also vary in terms of who oversees the redistricting process. State legislators control the process in Nevada, New Mexico, and Utah, while Arizona and Idaho use commissions. In Colorado, the General Assembly draws the map for the state’s seven U.S. House seats, while a commission oversees the drawing of state legislative maps. For the three states that use commissions for either all or part of their processes, commission size and composition differs significantly and only the Arizona Independent Redistricting Commission (AIRC) is charged with drawing maps that are competitive.[v] 

However, the most significant constraint on reapportionment and redistricting in the Mountain West is the small size of the region’s state legislatures.[vi] The mix of small chambers, increased urbanization, and large geographic spaces means very large rural districts and, increasingly, fewer and fewer stand-alone rural districts. This dynamic also helps to explain the region’s history of malapportionment that often allocated seats by county regardless of population.[vii]

State Summaries

Based upon the overview presented above, expectations about the general contours of reapportionment and redistricting in the Mountain West are fairly straightforward: the clout of urban and minority interests will increase and to the degree that those factors benefit the Democrats, the Democrats should gain some partisan advantage. Realizing these outcomes, however, has proven to be less than amicable. With the exception of Utah, all other states in the region have had various aspects of their processes litigated, and map drawing for Colorado’s U.S. House seats and all of Nevada and New Mexico’s redistricting is being completed in state courts. Below, I summarize the status of reapportionment and redistricting in each state.

Arizona

Beginning its work amid criticism of its composition, calls for its abolishment, and an investigation by the Arizona attorney general, the voter-initiated Arizona Independent Redistricting Commission (AIRC) has struggled to balance the conflicting demands of drawing competitive districts with the protection of minority voting rights. The commission’s work has been further hindered by Republican Governor Jan Brewer’s unsuccessful attempt to impeach the commission’s nonpartisan chair. In addition, Arizona has filed a lawsuit in federal court challenging the state’s preclearance requirement.

Republican attempts to undermine the AIRC stem from the fact that, given unified Republican control of the Arizona governorship and legislature, Republicans would otherwise be in a position to implement a partisan gerrymander. At the same time, the GOP’s present dominance is partially an artifact of the 2001 redistricting. To gain preclearance in 2001, the AIRC’s maps created a large number of majority-minority state legislative districts and minority-friendly U.S. House seats by packing Democratic voters into these districts. In so doing, the maps weakened Democratic support in the surrounding districts, allowing Republicans to translate their votes into seats more efficiently.[viii] Thus, despite a slight partisan voter registration advantage (4.35 percent as of July 2011), Republicans presently hold more than two-thirds of the state legislative seats and five of eight U.S. House seats.

Given Arizona’s growth patterns between 2000 and 2010, coupled with the AIRC’s charge of creating competitive districts, drawing a map as favorable to the GOP in 2011 is virtually impossible unless the size of the Arizona legislature is increased. Still, in order to protect minority voting rights, Arizona’s final maps are likely to tilt in favor of the GOP — just not to the degree that they have in the past. In particular, the elimination and consolidation of rural state legislative districts and a more urban orientation for Arizona’s nine U.S. House districts should provide the Democrats with electoral opportunities that will only increase as Arizona’s population continues to diversify and urbanize.

Colorado

As noted above, Colorado uses a commission (the Colorado Redistricting Commission, or CRC) for redistricting state legislative seats and the Colorado General Assembly draws the maps for the state’s seven U.S. House seats. Neither process has gone smoothly. For the state’s seven U.S. House seats, the Democratic-dominated state senate and the Republican-controlled lower chamber failed to find common ground after exchanging two rounds of maps. Because Democratic governor John Hickenlooper refused to call a special session, redistricting of Colorado U.S. House seats was completed in state court. After a good deal of legal wrangling, the Colorado Supreme Court upheld a map favored by Colorado Democrats that creates two safe Republican districts, one safe Democratic district, and four districts where neither party’s registration advantage exceeds 4 percent. As a consequence, Colorado will feature a number of competitive U.S. House elections throughout the coming decade.

Map drawing for state legislative seats by the CRC has also been hindered by partisanship. Hoping to break a partisan stalemate, in late summer the nonpartisan chair of the CRC offered maps that combined parts of prior Democratic and Republican proposals to create thirty-three competitive seats (out of a total of 100) and twenty-four seats with Hispanic populations of 30 percent or more. After being approved by the CRC with some Republican dissents, the plan was rejected by the Colorado Supreme Court, which must sign off on the CRC’s plans before they can be implemented. By attempting to draw more competitive maps, a criterion that the CRC is not obligated to consider, the commission undermined its charge of producing districts that keep communities of interest intact. The CRC’s second set of maps, which were widely viewed as favoring the Democrats, were upheld by the Colorado Supreme Court.

Idaho

While partisan considerations have loomed large in the reapportionment and redistricting processes in Arizona and Colorado, in Republican-dominated Idaho the main points of contention have been spatial. Indeed, because of the difficulty of satisfying a constitutional requirement limiting county splits and a state law constraining how geographic areas can be combined, the Idaho Citizen Commission for Reapportionment (ICCR) failed to reach an agreement before its constitutionally imposed deadline. After sorting through a number of legal and constitutional questions, a second set of commissioners was impaneled and completed its work in less than three weeks. Given Idaho’s partisan composition, the final maps are a regional anomaly as they benefit the GOP while being somewhat more urban oriented. This was accomplished by moving rural Republican voters into urban Democratic state legislative districts and adjusting the lines of Idaho’s 1st House district to shed roughly 50,000 citizens. At the same time, because of Idaho’s strict constraints on how cities and counties can be divided, the map for the state legislature paired a number of incumbents in the same district and one district contains the residences of five incumbents, setting up a number of competitive primary elections.

Nevada

While growth patterns and demographic and partisan change in Nevada between 2000 and 2010 ensured a redistricting process that would favor Democrats, Nevada Republicans sought to delay this inevitability as long as possible. The state’s Republican governor, Brian Sandoval, vetoed two sets of maps passed by the Democratic-controlled legislature and refused to call a special session to complete redistricting. Instead, he and his party hoped for a better outcome in state court. Despite drawing a supervising judge who was the son of a former Republican governor, Nevada Republicans fared no better there. Ultimately, the process was turned over to three special masters who rejected Nevada Republicans’ claim that Section 2 of the Voting Rights Act required a majority-Hispanic U.S. House district.[ix] As a consequence, two of Nevada’s U.S. House seats favor Democrats, one is safely Republican, and the fourth is a swing district. In the Nevada legislature the representation of urban interests will increase, as parts or all of forty-seven of the sixty-three seats in the Nevada legislature are now located in the Democratic stronghold of Clark County.

New Mexico

The 2011 process in New Mexico has essentially been a rerun of the gridlock that engulfed the state’s 2001 redistricting debate. Once again, the Democrats sought to use their control over both chambers of the New Mexico legislature to preserve their majorities and to draw the boundaries for the state’s three U.S. House seats in a manner favorable to the party. However, because of bickering among Democrats, the legislature failed to approve its map for the state’s three U.S. House seats prior to the end of the special session, and the plans for the state legislature that were passed on party-line votes were vetoed by Republican governor Susana Martinez. Thus, once again, New Mexico’s divided state government, coupled with the state’s history of litigating redistricting plans (in 2001 map drawing and court battles cost the state roughly $3.5 million), means that redistricting will be completed in state court. While the Republicans may be able to gain some concessions through the courts, New Mexico is the most Democratic state in the Mountain West and, as noted above, the state’s growth during the prior decade was concentrated in heavily Democratic Albuquerque and its suburbs. Thus, as in 2001, the likely outcome in New Mexico is a redistricting plan that will be favorable to the Democrats and weaken the influence of rural interests.

Utah

Utah is the only state in the region where conditions exist (e.g., unified partisan control in a non-commission state) for the implementation of a partisan gerrymander. However, accomplishing this end required the slicing and dicing of communities and municipalities, particularly those in and around the state’s urban center. Most notably, in drawing the state’s four U.S. House seats, Republicans divided Utah’s population center (Salt Lake County) into four districts by combining parts of the urban core with rural counties, a plan that, not coincidentally, cracks the only part of the state where Democrats are able to compete. Similarly, maps for state legislative districts increase the number of seats that favor the GOP and, in many instances, protect incumbents from potential primary challengers by dividing communities into multiple districts. Democrats in Utah are so depleted that they were unable to get the Republicans to even agree to include recognition and protection of minority communities of interest in Utah’s redistricting guidelines. Thus, despite constituting nearly 20 percent of the state’s population, minorities received no consideration in Utah’s 2011 redistricting.

Implications and Conclusions

Reapportionment and redistricting are often regarded as the most political activities in the United States, an expectation that is certainly being realized across the Mountain West. In the swing states where legislators draw the maps (Colorado, Nevada, and New Mexico) but where state government is divided, partisan considerations loomed large, causing all of these states to conclude all or parts of their redistricting processes in the courts. The conflicts between Arizona’s preclearance requirement and the AIRC’s commitment to drawing competitive districts have partisan consequences as well. In one-party Idaho and Utah, the politics of space were at issue. Geographic constraints on district boundaries imposed through statute and the Idaho constitution ensured that more rural seats were preserved and that the growing influence of urban interests would be checked. In Utah, Republicans moved in the opposite direction by carving up the very communities from which they are elected in order to implement a partisan gerrymander.

Another school of thought, however, argues that the most typical redistricting outcome is not partisan gain or loss, but an uncertainty that shakes up the state political environment and facilitates political renewal. In the case of the Mountain West, there is evidence to support that claim as well. The biggest source of uncertainty will continue to be growth. While the economic downturn has slowed migration to the region, the Mountain West states remain poised to keep expanding in a manner that will further concentrate and diversify their populations. A second source of uncertainty is the region’s large number of nonpartisans. While redistricting is often framed as a zero-sum game played between Democrats and Republicans, the electoral hopes of either party hinge on its ability to attract the support of the region’s expanding nonpartisan demographic.[x]

At the state level, with the exception of Idaho, the most significant consequence will be a reduction in rural influence. The combination of term limits in Arizona, Nevada, and Colorado, small legislative chambers, and fast growing urban populations will continue to decrease the number of entrenched rural legislators and the number of stand-alone rural districts. Consequently, urban interests should be positioned to align state policy with demographic reality. The void created by the demise of rural legislators will be filled by minorities, particularly Hispanics. To date, the increased political activism of Hispanic communities across the region has primarily benefited Democrats, helped in no small part by the hard-line rhetoric and policies championed by some Mountain West Republicans.[xi] More generally, depending on growth patterns, by 2020 Nevada and perhaps Arizona may join New Mexico as states with majority-minority populations. Thus, with or without Section 2 of the Voting Rights Act, minority legislators, primarily Hispanics, will increase their ranks significantly. The only question is whether all of these politicians will be taking office with a “D” next to their names or whether some will be elected as Republicans.

Nationally, the impact of reapportionment and redistricting is mixed. Certainly, the addition of three U.S. House seats after the 2010 census will give more voice to regional issues in Washington D.C. At the same time, because the Mountain West’s House delegation will continue to be split along partisan lines and many of the region’s competitive House seats will rotate between the parties throughout the decade, it may be difficult for any but the safest Mountain West representatives to accrue the requisite seniority to become players in the House. Also, because of pending retirements in Arizona and New Mexico, a successful 2010 primary challenge in Utah, and a resignation in Nevada, the region’s influence in the U.S. Senate is likely to decline in the near term. Indeed, after the 2012 election the only senators from the region who will have served more than one term will be Nevada’s Harry Reid, Arizona’s John McCain, Idaho’s Mike Crapo, and Utah’s Orrin Hatch (presuming a successful 2012 reelection).

Thus, the arena where the region is likely to garner the most attention is in the coming decade’s three presidential elections. Colorado, Nevada, and New Mexico were all battleground states in 2004 and 2008, with Republican George W. Bush narrowly winning all three in 2004 and Democrat Barack Obama flipping them blue in 2008 by wider margins. Obviously, Idaho and Utah will remain out of reach for the Democrats in statewide contests for some time.  However, Arizona is likely to become the region’s fourth swing state in the near future. Thus, continued investment in Arizona and throughout the region will allow the Democrats to further expand the number of Mountain West states in play while forcing the GOP to spend resources to defend turf that it once could safely call its own.

Endnotes
[i] U.S. Census Bureau, “State and County Quick Facts,” August 2011 (http://quickfacts.census.gov/qfd/index.html ).

[ii] U.S. Census, “American Fact Finder,” August 2011 (http://factfinder2.census.gov/faces/nav/jsf/pages/index.xhtml ).

[iii] U.S. Census Bureau, “State and County Quick Facts,” August 2011 (http://quickfacts.census.gov/qfd/index.html ).

[iv] Despite close elections in Colorado and Nevada, none of the region’s U.S. Senate seats changed parties in 2010.

[v] The Arizona Independent Redistricting Commission (AIRC) consists of five appointed members: four partisans chosen by the party leaders of each legislative chamber and a nonpartisan who is chosen by the other four members and serves as chair. The Colorado Redistricting Commission (CRC), which oversees redistricting for state legislative districts, consists of 11 members: four of whom are picked by the party leaders of the General Assembly; three who are selected by the governor; and four who are chosen by the Chief Justice of the Colorado Supreme Court. The Idaho Citizen Commission for Reapportionment (ICCR) consists of six members, four of whom are chosen by party leaders of the Idaho Legislature and one member chosen by each of the state chairs for the Democratic and Republican parties.  

[vi] Excluding Nebraska (because of its unicameral structure), the average sizes of the lower and upper houses of the other 49 state legislatures are 110 and 39.22, respectively. Only the 42-member New Mexico Senate exceeds the national average chamber size. The largest lower house in the region, Utah’s 75-seat House of Representatives, is 35 seats below the national average.

[vii] Legislative size, however, is not immutable. To increase the size of the legislatures in Colorado, Idaho, and New Mexico would require amending those states’ constitutions. The lower chamber of the Utah legislature could be expanded as it is presently below its constitutional cap. Arizona and Nevada set the sizes of their legislatures by statute.

[viii] In this regard, redistricting outcomes in Arizona are similar to those in another Section 2 region, the South. In both instances, the provisions of the Voting Rights Act have the perverse effect of increasing symbolic representation for minority groups while decreasing the number of legislators who may be receptive to minority interests. See, Kevin A. Hill, “Congressional Redistricting: Does the Creation of Majority Black Districts Aid Republicans?” Journal of Politics (May 1995): 384–401, and David Lublin, The Paradox of Representation: Racial Gerrymandering and Minority Interests in Congress (Princeton University Press, 1999).

[ix] Governor Sandoval and Republicans in the legislature claimed that Section 2 of the Voting Rights Act requires the use of race as the basis for drawing a Hispanic U.S. House seat — a position clearly at odds with the holding in Shaw v. Reno (509 U.S. 630, 1993), which allows race to be taken into consideration but does not allow it to be the predominant factor. Democrats and many Hispanic activists countered that packing Hispanics into a single House district would marginalize their influence in Nevada’s other three U.S. House districts and that race-based redistricting in Nevada is unnecessary because white voters in the state do not vote as a bloc, as evidenced by the fact that Hispanic candidates won eight state legislative seats, the attorney generalship, and the governorship in 2010 without such accommodations.

[x] At the time of the 2010 election, nonpartisan registrants constituted over 30 percent of Arizona voters, 26 percent of the Colorado electorate, and around 15 percent of voters in Nevada and New Mexico (Idaho and Utah do not report partisan registration figures).

[xi] For example, Arizona’s 2010 Support Our Law Enforcement and Safe Neighborhoods Act (SB 1070) and Utah’s 2011 Illegal Immigration Enforcement Act (HB 497).


Authors

Image Source: © Adam Hunger / Reuters
      
 
 




ed

Arizona State Legislature v. Arizona Independent Redistricting Commission, et al.


Editor's Note: For full disclosure, Tom Mann (joined by Norm Ornstein) filed an amicus curiae brief in Arizona State Legislature v. Arizona Independent Redistricting Commission.

James Madison would be pleased. The 5-4 decision announced today by the Supreme Court upholding Arizona’s use of the initiative to establish an independent redistricting commission is a model of constitutional reasoning and statutory interpretation. It underscores the essential connection between republican government and popular sovereignty, in which the people have the ultimate authority over who shall represent them in public office. The majority opinion quotes Madison to powerful effect: “The genius of republican liberty seems to demand . . . not only that all power should be derived from the people, but those entrusted with it should be kept in dependence on the people.”

Madison worried about the dangers of the manipulation of electoral rules to serve the immediate interests of political actors. He was himself the target of a gerrymander designed (unsuccessfully) to deny him a seat in the first Congress. The Elections Clause of the Constitution, by granting Congress the power to override state actions setting the time, place and manner of elections, was designed partly as a safety valve to contain the abuse of power by those in a position to determine which voters will hold them accountable.

Today’s intensely polarized politics drive major partisan campaigns to seize control of the redistricting authority in the states and to wield that power to boost prospects for majority standing in the House. Partisan gerrymandering is not the major source of our dysfunctional politics but it surely reinforces and exacerbates the tribal wars between the parties. A number of states have used the initiative device provided in their constitutions to establish independent commissions to replace or supplement the regular state legislative process in redrawing congressional and/or state legislative district boundaries. Such commissions are no panacea for partisan gerrymandering. Their composition and rules vary in ways that can shape the outcome. But the evidence suggests they can mitigate the conflicts of interest that are a part of the regular process and produce more timely plans less subject to judicial preemption.

The Court has upheld the right of those states to legislate electoral rules through a popular vote. Had the minority position prevailed, state laws governing many aspects of the electoral process would have been subject to constitutional challenge. And an important safety valve available to the people of the states for responding to abuses of power by those in public office has been preserved.

This should not be read more broadly as a triumph of direct democracy over representative government. Many scholars who provided expert opinion supporting the majority opinion retain serious concerns about the overuse and misuse of initiatives and referendums. Instead, the decision strengthens the legitimacy of representative democracy by reinforcing the essential link between republican government and popular sovereignty.

Authors

Image Source: © Jonathan Ernst / Reuters
      
 
 




ed

Long-range stand-off does not make sense, nor do its proposed numbers


The U.S. military will carry out a major modernization of its strategic nuclear forces in the 2020s.

It will cover all three legs of the strategic triad.

Much of the planned program makes sense. The long-range standoff (LRSO) — a new nuclear-armed cruise missile to outfit strategic bombers — does not.

The primary reason for the modernization program is that many US strategic weapons systems are aging out, and American policy is that, as long as there are nuclear weapons, the United States will maintain a safe, secure and robust nuclear deterrent.

The Ohio-class ballistic missile submarines will begin to hit the end of their service life in the late 2020s, and the Navy will need new submarines. Submarines and submarine-launched ballistic missiles (SLBMs) make up the most survivable leg of the triad, and they carry the bulk of deployed US strategic warheads.

The service life of the Minuteman III intercontinental ballistic missile (ICBM) runs out in 2030. The Air Force seeks a replacement ICBM. At a minimum, keeping an ICBM leg of the triad would require another life extension program for existing Minuteman III missiles.

As for the air-breathing leg of the triad, the Air Force wants to procure 80 to 100 B-21 bombers. Details of the aircraft are shrouded in secrecy, but it reportedly will incorporate stealth features and advanced electronic warfare capabilities to allow it to penetrate contested air space. The Air Force is also modernizing the B61 nuclear gravity bomb for use on strategic bombers.


One can and should question the Pentagon’s desired numbers for these programs. That is especially the case given the projected costs of strategic modernization, which Pentagon officials openly admit they do not know how to fund.

It is not clear why the United States will need to replace 400 deployed ICBMs on a one-for-one basis, particularly as the Air Force several years ago was prepared to go down to 300. A force of 200-300 ICBMs would suffice and result in significant cost savings. Likewise, one can challenge the requirement for 12 new ballistic missile submarines, as opposed to nine or 10.

The biggest question, however, arises over the LRSO, with a projected cost of $20 billion to $30 billion. The Air Force originally developed nuclear-armed air-launched cruise missiles (ALCMs) in the 1970s because the B-52 — then the mainstay of the strategic bomber fleet — presented a big target for adversary radars. That would make it hard for the aircraft to penetrate air defenses. A B-52 armed with ALCMs could remain outside of radar range and release its cruise missiles.

The B-2, with its stealth features, was designed to restore a penetrating capability. The Air Force plans to use stealth and electronic warfare capabilities to give the B-21 a penetrating capability as well. If these bombers can defeat and penetrate air defenses, that makes the LRSO redundant. (Moreover, unlike in the 1970s, the Air Force today has very capable long-range conventionally armed cruise missiles that provide a standoff capability for bombers.)

If, on the other hand, the stealth of the B-21 will be compromised in the not-too-distant future, then one has to question the wisdom of spending $60 billion to $80 billion — and perhaps more — to procure the B-21. If we believe the B-21 would soon encounter problems penetrating air defenses, scrap that program. Buy instead modified Boeing 767s, a variant of which will serve as the Air Force’s new aerial tanker, and arm them with the LRSO.

The Air Force’s evident attachment to the B-21 suggests, however, that it believes that the aircraft will be able to defeat adversary air defenses for some time to come. That means that the LRSO would add little capability to the US strategic force mix.

If one were to argue for the redundant capability provided by the LRSO, the number of new ALCMs that the Pentagon proposes to purchase — 1,000 to 1,100 — is difficult to understand. Even allowing for extra cruise missiles for test purposes, the number seems excessively high.

In its 2010 annual report to Congress on implementation of the Strategic Offensive Reductions Treaty (SORT), the State Department advised that, as of Dec. 31, 2009, the United States had 1,968 operationally deployed strategic nuclear warheads. That figure captured the actual number of nuclear warheads atop SLBMs and ICBMs plus the number of nuclear bombs and ALCMs at air bases for use by bombers.

On June 1, 2011, a State Department fact sheet showed the number of deployed US strategic warheads as 1,800 as of Feb. 5, 2011, when the New Strategic Arms Reduction Treaty (New START) went into force. A Dec. 1, 2011, fact sheet provided a more detailed breakdown of US strategic forces. It stated that, as of Sept. 1, 2011, the United States had 1,790 deployed strategic warheads and 125 deployed strategic bombers. Like SORT, New START counts each warhead on a deployed ballistic missile as a deployed warhead. But New START counts bomber weapons differently from SORT. New START attributes each deployed bomber as one warhead, regardless of the number that it can carry or the number of weapons that may be at bomber bases.

The 125 deployed bombers on Sept. 1, 2011, would have counted as 125 under New START’s deployed strategic warhead total. Reducing 1,790 by 125 yields 1,665 — the number of deployed warheads then on US SLBMs and ICBMs.

Comparing the SORT and New START numbers is a bit of an apples-and-oranges comparison, but it gives some idea of the number of bomber weapons at US strategic bomber bases. Unless there was a dramatic increase in the number of warheads on ICBMs and SLBMs between the end of 2009 and September 2011 — and there is no reason to think that there was — comparing SORT’s 1,968 figure (end of 2009) to the 1,665 deployed warheads on ICBMs and SLBMs (under New START counting rules in September 2011) suggests some 300 nuclear bombs and ALCMs were at bomber bases. The B-2s would have been armed with bombs, which indicates a maximum of 200-250 ALCMs. The Federation of American Scientists (FAS) also estimates that there are about 300 nuclear weapons at strategic bomber bases, of which 200 are nuclear-armed ALCMs. FAS believes an additional 375 ALCM airframes are held in reserve.

This comparison raises the question: Why would 1,000-1,100 ALCM airframes be needed to support a couple of hundred deployed ALCMs?
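
For readers who want to retrace that back-of-the-envelope estimate, the arithmetic can be laid out as a minimal Python sketch. It uses only the figures cited above from the State Department releases; the bomber-weapons number it produces is an inference from the differing counting rules, not an official count.

```python
# Sketch of the counting-rule arithmetic described above.
# Figures are those cited in the text; the bomber-weapons estimate is an
# inference from the differing SORT and New START counting rules.

SORT_DEPLOYED_END_2009 = 1968      # operationally deployed strategic warheads, Dec. 31, 2009
NEW_START_DEPLOYED_2011 = 1790     # deployed strategic warheads, Sept. 1, 2011
DEPLOYED_BOMBERS_2011 = 125        # each bomber counts as one "warhead" under New START

# Warheads actually on ICBMs and SLBMs under New START counting rules.
missile_warheads = NEW_START_DEPLOYED_2011 - DEPLOYED_BOMBERS_2011  # 1,665

# SORT counted bombs and ALCMs at bomber bases; New START does not. Assuming
# missile warhead loadings changed little between the two dates, the difference
# approximates the weapons stored at bomber bases.
estimated_bomber_weapons = SORT_DEPLOYED_END_2009 - missile_warheads  # roughly 300

print(f"Warheads on ICBMs and SLBMs (New START rules): {missile_warheads}")
print(f"Estimated bombs and ALCMs at bomber bases: {estimated_bomber_weapons}")
```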

The United States should sensibly modernize its strategic deterrent, particularly in a time of tight defense budgets. The case for the LRSO is demonstrably weak, especially for the planned size of the program. The LRSO should be shelved.

This piece was originally published in Defense News.

Authors

Publication: Defense News
Image Source: © Kim Hong-Ji / Reuters
     
 
 




ed

The Iran deal and regional nuclear proliferation risks, explained


Was the Iran nuclear deal, signed last summer, a prelude to proliferation across the Middle East? This is a question that Brookings Senior Fellow Robert Einhorn and Non-resident Senior Fellow Richard Nephew explore in a new report. At an event to discuss their findings—moderated by Brookings Deputy Director of Foreign Policy and Senior Fellow Suzanne Maloney and with panelists Derek Chollet and H.E. Yousef Al Otaiba—Einhorn and Nephew argued that none of the Middle East’s “likely suspects” appears both inclined and able to acquire indigenous nuclear weapons capability in the foreseeable future. They also outlined policy options for the United States and other members of the P5+1.

Einhorn described the incentives and capabilities of Egypt, Saudi Arabia, Turkey, and the United Arab Emirates for acquiring nuclear weapons. He argued that, while both Saudi Arabia and the UAE a) consider Iran a direct military threat, b) have concerns about the U.S. commitment to the security of the region, and c) have sufficient financial resources, they recognize that they have no choice but to rely on the United States for their security and are unwilling to jeopardize that relationship by seeking nuclear weapons. Einhorn also said that neither Egypt nor Turkey views Iran as a direct military threat; both are more preoccupied with instability on their borders and internal security, concerns that cannot be addressed by possession of a nuclear weapons capability.

Nephew outlined policy recommendations, including measures to ensure strict implementation of the JCPOA, greater intelligence sharing and security cooperation with Middle East allies, and means of fostering IAEA-supervised regional arrangements that would encourage peaceful nuclear energy development and limit potentially destabilizing nuclear activities. Nephew also asserted that some elements of the JCPOA, such as online monitoring of nuclear facilities, could be applied to other nuclear energy programs in the region to enhance transparency. 

Derek Chollet of the German Marshall Fund argued the United States must deter Iran and reassure U.S. allies by maintaining a robust military presence in the region, planning a range of U.S. responses to destabilizing Iranian activities, and ensuring that U.S. forces have the weapons systems and personnel required for scenarios involving Iran. He suggested that the United States and its Middle East allies continue regular summit meetings on security and broader partnership issues, and possibly formalize security cooperation by establishing a dedicated regional security framework. 

Emirati Ambassador to the United States Yousef Al Otaiba emphasized that, to many of the countries in the region, Iran poses a threat wider than just its nuclear activities. He suggested that the JCPOA will be judged on the degree to which the United States and its allies address Iran’s destabilizing behavior outside of the nuclear file, such as Tehran’s support for Hezbollah and the Houthis, as well as its ballistic missile activities. Al Otaiba said that, though he has seen some efforts by the Obama administration to push Iran on its regional behavior, it has sent a mixed message overall, with senior U.S. officials also encouraging European banks to invest in Iran. The ambassador asserted that rigorous enforcement of the JCPOA will be critical to convincing Iran not to eventually proceed to build nuclear weapons.

On Saudi Arabia, Einhorn noted that although the Obama administration supported the Saudi military campaign in Yemen, there was a risk that the Kingdom would overreact to its regional security challenges. He suggested that the United States pursue a dual-track approach: counter provocative Iranian behavior and defend the security interests of its regional partners, while at the same time seeking a resolution of regional disputes and encouraging Saudi Arabia and Iran to find ways of reducing tensions between them.

On the possibility that Iran would rapidly scale up its enrichment program, Einhorn acknowledged that while Tehran can legally do so under the JCPOA in 10 to 15 years, it will not have a strong civil nuclear rationale since it will be able to acquire nuclear fuel from Russia and other suppliers. Furthermore, Iran’s progress in centrifuge research and development may not be as rapid as Iran currently anticipates. Moreover, even if Iran elects to ramp up its enrichment program down the line, the JCPOA and Nuclear Non-Proliferation Treaty (NPT) will bar it from pursuing nuclear weapons, and monitoring arrangements still in place will provide warning and enable the United States to intervene and prevent Iran from building nuclear weapons.

On reaching a regional accommodation that includes Iran, Al Otaiba indicated that the UAE would have much to gain, especially economically, from a better relationship with Tehran. He said the UAE and others in the region would like to try to engage with Iran to reduce tensions—but Iran, for its part, seems unwilling.

On prospects for a U.S.-Saudi civil nuclear cooperation agreement, Einhorn said that progress on such an agreement has stalled due to Saudi reluctance to formally renounce enrichment, something the United States has so far insisted on. He suggested that Washington should be prepared to relax the so-called “gold standard” (i.e., a formal renunciation of enrichment and reprocessing) and instead accept an approach that would still discourage Saudi fuel cycle programs, such as giving Riyadh the right to pursue enrichment but allowing the United States to cease its nuclear cooperation if the Kingdom exercised that right. On the UAE’s civil nuclear program, Al Otaiba affirmed that the Emiratis continue to value the “gold standard” barring enrichment, which is enshrined in the U.S.-UAE civil nuclear agreement, and have no plans to change their position on enrichment.

Authors

  • James Tyson
  • Leore Ben Chorin
      
 
 




ed

Fostering competition in consolidated markets

On March 16, Paul B. Ginsburg testified before the California Senate Committee on Health on fostering competition in consolidated markets.

      
 
 




ed

Affordable Care Encourages Healthy Living: Theory and Evidence from China’s New Cooperative Medical Scheme

On May 25, 2016, the Brookings-Tsinghua Center and the China Institute for Rural Studies hosted a public lecture on the topic “Affordable Care Encourages Healthy Living: Theory and Evidence from China’s New Cooperative Medical Scheme,” featuring Dr. Yu Ning, assistant professor of economics at Emory University.

      
 
 




ed

Turkish democracy: Battered but not yet sunk


The videos showing an unruly scene in and around Brookings last Thursday during the visit of Turkish President Recep Tayyip Erdoğan make for distressing viewing. The incongruity of what occurred—think Trump security meets Turkish nationalism—invites introspection about how scholarly institutions manage appearances by controversial leaders.

There are legitimate criticisms to be made of the Turkish government in general and of Thursday’s security detail in particular. But lost in the melee—and in the past year of terrorism, arrests, and media closures—is the message that Erdoğan most needed to convey.

Dramatic changes in the geopolitical neighborhood now present the most serious challenge to Turkish territorial integrity since the founding of the Republic. With the aid of Western intervention, the wars in Iraq and Syria accomplished more for the Kurdish cause than decades of terrorism and negotiation. Since the addition of a second stronghold in Syria to the de facto Kurdish territory in northern Iraq, Turkey is paying a price for conflicts not wholly of its own making.

It is not quite a century since European armies last seized Ottoman territories or supported national Kurdish independence from Istanbul. Whether or not now is the moment an autonomous Kurdish state takes legal form, the model is being proven nearby under Western protection. It does not make things easier that this time it is not Western countries’ intention to hurt Turkey’s national interests. Adding insult to Erdoğan’s injury, in 2015 the Kurdish cause met unprecedented support among urban elites around Turkey—and the United States and Europe—for a political party (HDP) that spoiled Erdoğan’s institutional ambitions by denying him a supermajority in parliament.

The Turkish president is criticized for allowing feelings of personal betrayal to color his strategic relationships—for example with Israel and Syria—yet many in the U.S. foreign policy community also now react to him emotionally. Because their high hopes were dashed after Gezi Park and the Gülen scandals, he can do no good again. This fuels Erdoğan’s outrage: Turkey gets no respect for its current role absorbing waves of refugees or for “taking the fight to terrorists.”

Erdoğan alienated Western allies with a take-no-prisoners approach in domestic politics and bears some responsibility for the government’s disastrous relationship with the country’s two major dissident groups—one ethnic (Kurdish) and one spiritual (Gülenist). But that should not relax similarly robust democratic expectations of these groups’ own political behavior, and the impression of such a double standard is at the root of the Turkish president’s annoyance. The suicide bombs and illegal wiretaps his country endured have failed to capture the American imagination. Instead, he perceives friends who would tie his hands as he defends the rule of law against terror and treason.

The Turkish government should be discouraged from abusing executive power, squelching dissent, and other acts of overzealous majoritarianism, and it should be urged to break the cycle of retaliation against political opponents. But it is bizarre to equate the Turkish president with former Venezuelan leader Hugo Chavez or Russian President Vladimir Putin: Turkish democracy is battered but not yet sunk, and its government is still not a strategic opponent—it remains a NATO member in accession talks with the EU.

The Brookings incident is said to have exposed the regime’s true colors and thin skin, and to emblematize how polarized and undemocratic Turkey has become in the last decade. But a lopsided and illiberal democracy also preceded AKP rule: a quarter century of single-party rule followed by four military coups in as many decades, with strict limits on free speech and religious exercise. American enthusiasm for democratization in the region must include a commitment to remain constructively engaged, through exactly this kind of visit, when the spring recoils and conservative parties win power, including parties that appear to abuse that power.

Because last week’s scene unfolded in the same auditorium where a younger Erdoğan appeared as a promising democratic leader years ago, it is fair to ask which was the real one. He who walked down the aisle with files on 57 imprisoned journalists in March 2016? Or the Erdoğan who arrived in government with dossiers on negotiations with PKK leader Abdullah Öcalan, opened talks with the Alevi religious minority, and established a ministry for European Union affairs?

Critics now say that was all just a show and diversionary tactic that have finally given way to his true attitudes towards the proverbial “tram of democracy.” But with friends who are deaf to some of Turkey’s legitimate concerns it is fair to ask what may now be an academic question: Is President Erdoğan an ex-liberal who simply got off the tram, or was he mugged by reality while on board?

Image Source: © Joshua Roberts / Reuters
     
 
 




ed

Trump's proposed ban on Muslims


Editors’ Note: Presumptive Republican presidential nominee Donald Trump has proposed, in various forms and iterations, banning Muslims from entering the United States “until we figure out what’s going on,” in his words. Shadi Hamid responds to this proposal below, in an excerpt from a longer piece in The Atlantic in which Uri Friedman surveys various experts on the issue.

If Donald Trump is really interested in understanding the roots of anti-Americanism, there’s a solution: to read the hundreds of books and articles written on why, exactly, “Muslims” might not be particularly enthused about American policy in the Middle East (there’s little evidence to suggest that large numbers of Muslims have any particular antipathy toward Americans as people).

But it’s possible that Trump is just being imprecise. Perhaps what he really wants to say is not that Muslims “hate” Americans, but rather that they may be ambivalent about or even opposed to certain liberal values that are associated with being American. Obviously, it is impossible to generalize about an entire religious group, but polling does suggest that majorities in Arab countries like Egypt and Jordan, as well as non-Arab countries like Indonesia and Malaysia, aren’t quite classical liberals when it comes to issues like apostasy, religiously derived criminal punishments, gender equality, or the relevance of religious law in public life more generally.

If this happens to be Trump’s argument, it would be ironic, since Trump himself cannot be considered a liberal in the classical sense. In fact, he fits the definition of an “illiberal democrat” quite well, as I argued in a recent essay here in The Atlantic. That said, I have to admit that I’m concerned about anti-Muslim bigots misconstruing my own arguments around “Islamic exceptionalism”—that Islam has been and will continue to be resistant to secularization—after the attacks in Orlando. It’s undoubtedly true that large numbers of Muslims in both the West and the Middle East consider homosexual activity to be religiously unlawful, or haram, but let us be careful in drawing a link between such illiberalism (which many Christian evangelicals and Republican politicians share) and the desire to kill. That’s not the way radicalization works. We would never argue, for instance, that Senators Ted Cruz or Marco Rubio are “at risk” individuals who may, if we don’t keep a close eye on them, commit mass murder against gay Americans.

In any case, conservative Muslims, orthodox Jews, Christian evangelicals (or for that matter Trump supporters residing in Poland who want to emigrate to the U.S. if Trump wins) have the right to be “illiberal” as long as they express their illiberalism through legal, democratic means. These are rights that are protected by the American constitution, enshrined in the Bill of Rights.

Perhaps Trump is thinking specifically about violence perpetrated by Muslims, as he suggested in comments after the Orlando attacks. The interesting thing though—and something that is rarely acknowledged by U.S. politicians—is that the preponderance of Middle Eastern violence in recent decades has been perpetrated not by Islamists but by secular autocrats against Islamists, in the name of national security. These, as it happens, are the very strongmen that Trump seems to have such a soft spot for.

Ultimately, Trump cannot, through the force of arms or his genuinely frightening anti-Muslim rhetoric, compel the many conservative Muslims in the Middle East to be something they’re not, or would rather not be. To suggest that Muslims need to be secular or irreligious (by Trump’s own arbitrary standards) is dangerous. The message there is one that ISIS would find appealing for its own divisive purposes: that an increasingly populist and bigoted West has no interest in respecting or accommodating Islam’s role in public life, even when expressed legally and peacefully. The sad fact of the matter, though, is simple enough: Trump has less respect for the American constitution than the vast majority of American Muslims, many of whom, like me, are the children of immigrants. In Trump’s America, it so happens, my parents would have been banned from ever entering in the first place.

Authors

      
 
 




ed

Turkey’s failed coup could have disastrous consequences for Europe’s migrant crisis


Editors’ Note: Turkey’s failed coup may lead to the worsening of Europe’s migration crisis, writes Jessica Brandt. That’s because it could lead to the dissolution of a recent pact between Brussels and Ankara over the plight of refugees arriving on the European Union’s shores. This post originally appeared on Vox.

Turkey’s recent failed coup may lead to the worsening of Europe’s migration crisis. That’s because it could lead to the dissolution of a recent pact between Brussels and Ankara over the plight of refugees arriving on the European Union’s shores. Even before the events of last weekend, the fate of the agreement was uncertain amid quarrels between the parties. Now its future is even more in doubt.

Last year, more than a million migrants and refugees crossed into Europe, roiling politics across the continent. It’s a crisis EU chief Donald Tusk has described as an “existential challenge.”

Under the terms of the deal, Turkey agreed to accept the “rapid return of all migrants not in need of international protection crossing from Turkey into Greece and to take back all irregular migrants intercepted in Turkish waters.” In other words, almost all refugees who cross into Greece are slated to be returned to Turkish soil.

In return, the EU pledged to speed up the allocation of €3 billion in aid to Turkey to help it house and care for refugees, “reenergize” Turkey's bid for membership in the EU, and lift visa restrictions on Turkish tourists and businessmen.

But the European Commission has conditioned changes to the visa restrictions on better governance in Turkey. In particular, it requires a change in President Recep Tayyip Erdoğan’s controversial anti-terror law, which he has used to crack down on journalists and critics. Erdoğan was already adamantly against narrowing the law to protect free speech. Having now overcome a determined coup attempt, he is even less likely to do so.

Instead, it appears probable that he will further clamp down on civil liberties, acting on his authoritarian instincts and retaliating against his detractors. On Sunday, he suggested that he might reintroduce the death penalty, a practice Turkey abolished in 2004 as part of its bid for EU membership. Doing so would widen the gap in political culture between Turkey and Europe and, as German Foreign Minister Frank-Walter Steinmeier asserted forcefully on Monday in Brussels, derail the already limited possibility of reigniting accession talks.

The pact has already been strongly opposed by the European left, and particularly by humanitarian and human rights groups. Rising authoritarianism in Turkey would only increase resistance to the deal, making implementation even harder, especially if those groups were to scale back their activities on the ground.

That would not be without precedent. The United Nations High Commissioner for Refugees, Doctors Without Borders, and the International Rescue Committee, among others, have suspended some of their activities in refugee centers because they do not want to be involved in implementing a deal that they describe as constituting the blanket expulsion of refugees from Greece back to Turkey.

Crucially, a crackdown could also undermine the legal basis of the agreement. One of the agreement’s key provisions is that individuals who cross from Turkey into Greece will be sent back across the Aegean to Turkey. That hinges on the notion that Turkey is a “safe third country” for migrants. A crackdown could prompt refugees to argue that it isn’t.

If that were the case, deporting them to Turkey could be seen as constituting “refoulement”—the forcible return of asylum seekers to a country where they are liable to face persecution—which is forbidden under both international and EU law.

That’s a problem, since some analysts believe worsening conditions in Turkey could lead even more people seeking refuge to journey onward to Europe. In the past, Erdoğan has threatened to “open the gates” and send refugees streaming into Europe when displeased with the level of financial assistance from Brussels earmarked for managing the crisis. Preoccupied by troubles at home, he may see stability as in his interest and resist taking aggressive steps that would cause an open breach.

For both parties, finding a stable, though imperfect, accommodation—as they were poised to do prior to the events of last weekend—is still the most promising path forward. Let’s hope the parties take it. Managing Europe’s migration crisis depends on it.

Publication: Vox

Targeting an Achievement Gap in One of the Country's Most Educated Metropolitan Areas

Over the past two decades, the Puget Sound area’s innovation-driven economy has become a magnet for highly educated people from across the country and around the world. Drawn by some of the nation’s most innovative companies and research institutions—Microsoft, Boeing, Nintendo, Amazon, Genentech, and the Fred Hutchinson Cancer Research Center, to name a few—these newcomers have helped the region rank well on measures of educational attainment. Of the nation’s 100 largest metro areas, the Seattle-Tacoma-Bellevue area is 11th in bachelor’s degree attainment and 17th in graduate degree attainment.

But for all its brainpower, the region has fallen behind in cultivating homegrown talent, particularly in the less affluent school districts of South Seattle and South King County. Starting from an early age, low-income students and children of color in these communities tend to lag behind on important indicators of educational success. The effects of this achievement gap compound over time, putting these students at a serious disadvantage that often limits their job prospects and earning potential.

In an effort to address this achievement gap, the Community Center for Education Results has teamed up with the city of Seattle, the University of Washington, the Seattle Community Colleges District, the Puget Sound Educational Service District, the Bill & Melinda Gates Foundation and others to form the Road Map Project, a coalition working to double the number of South Seattle and South King County students pursuing a college diploma or career credential by 2020.

What’s innovative about the Road Map Project is its focus on collective action and community engagement. By bringing together key stakeholders to collaborate on shared goals, the project is creating a new model for efforts to reduce inequality in educational attainment. Its cradle-to-college-and-career approach aims to improve student outcomes beginning with access to prenatal care and kindergarten readiness all the way through to elementary and secondary schooling and beyond. Through a combination of community outreach and partnership building, data-driven goal-setting and performance management, the project supports area organizations working to boost student success and close the achievement gap in South Seattle and South King County.

In December, the project released its baseline report, which provides a detailed snapshot of student achievement in the Road Map region during the 2009-2010 school year. With this initial data in hand, the project will be able to work with area organizations to encourage and track progress on a wide range of indicators: birth weight, full-day kindergarten enrollment, proficiency in reading, math, and science, parent engagement, graduation rates, and postsecondary enrollment. “Demographics should not determine the destiny of children in this region,” says Mary Jean Ryan, executive director of the Community Center for Education Results. “The children who grow up here deserve as good of an education as the people who show up here.”

Publication: The Atlantic Cities

Seattle Uniquely Placed to Compete on Global Stage, but Success is Not Inevitable

In an increasingly international and interconnected economy, Seattle was global before global was cool.

The region’s competitive global assets include internationally competitive firms, strategically important ports and one of the nation’s largest foreign-born populations.

Still, today’s unique economic moment demands an extra measure of purposeful global engagement.

As cities and metropolitan areas begin to emerge from the Great Recession, leaders are realizing the need to restructure the economy — to move from one based on debt and consumption to one powered by production and innovation.

At the same time, most economic growth over the next decade will occur outside of America’s borders. As of 2009, the combined economies of Brazil, India and China eclipsed that of the United States and now account for more than one-fifth of the global economy. By 2018, their share is expected to surpass one-quarter.

The developing world, with a rapidly rising middle class, represents a huge market opportunity for American firms. China and India alone are expected to increase their urban populations by more than 500 million over the next 20 years, which naturally leads to a rise in their consumer classes. By 2050, Chinese and Indian consumers will account for more than half of all middle-class consumption worldwide, up from just 2 percent in 2000.

These growing metropolises will also require massive investments in infrastructure and face huge challenges as they expand, challenges that U.S. firms have the expertise to solve — in transportation and mobility, in sustainability and clean energy, in information technology and software.

America’s metropolitan areas are uniquely positioned to take advantage of this dual challenge through increased trade and investment. The top 100 metro areas not only produce three-quarters of our gross domestic product, they also concentrate our most innovative firms, our research institutions and universities, and the majority of our skilled workers.

So how does the central Puget Sound region stack up? Recently, I came to Seattle as part of the Global Cities Initiative, a joint project of the Brookings Institution and JPMorgan Chase. This initiative aims to catalyze a shift in economic development priorities and practices that would result in more globally connected metropolitan areas and more sustainable economic growth.

The metro area has a strong platform for trade: firms such as Boeing, Microsoft, and Amazon; world-class research assets including the University of Washington and the Fred Hutchinson Cancer Research Center; and a strong legacy of globally oriented leadership, with a wide coalition, including public, private and civic leaders, actively promoting a regional strategy for global engagement.

The data bear this out: While Seattle is the 15th largest metro area in the United States, it has the sixth highest export total, sending more than $47 billion in goods and services abroad in 2012. These exports are overwhelmingly driven by globally competitive clusters in aerospace and information technology.

Partly due to this industry specialty, Seattle’s economy is also highly innovative and uniquely oriented toward science, technology, engineering and math: More than one-quarter of jobs in the metro are in STEM occupations, the fourth highest share of any metropolitan area in the country.

Still, in such a competitive and dynamic global economy, no metro area can afford complacency. In order to maintain its position in the global economy, Seattle needs to get serious about global engagement.

First, focus on global trade and investment. Continue the collaborative efforts of your public, private and civic leaders to focus economic development strategies on growth abroad. In Seattle earlier this month, regional leaders committed to expanding these efforts, joining the Global Cities Initiative’s Exchange, through which the metro area will develop a strategy to increase foreign direct investment in key industries.

Second, invest in what matters. To compete globally, metro areas must be strong at home. In Seattle, this means shoring up your workforce-development pipeline so that local residents have a path to good jobs in advanced industries. It also calls for a regional approach to financing and delivering transportation solutions that not only reduce congestion at home, but also improve your connections abroad.

Finally, metropolitan leaders must look beyond their own borders, identify their trading partners, and build relationships to increase both trade and investment. For example, as part of the Global Cities Initiative, Chicago and Mexico City entered into a first-of-its-kind economic partnership that builds on the extensive economic, social, and cultural linkages between the two metros to make both more prosperous.

There are promising efforts under way in the region, as the King County Aerospace Alliance has started collaborating with Aéro Montréal so that the two aerospace clusters can be more competitive.

Simply put, in today’s economic landscape, every city is a global city. The success of regional economies hinges on their engagement throughout the global economy. Seattle has an enviable hand to play, but success is not inevitable.

This opinion piece originally appeared in the Seattle Times.


Confronting Concentrated Poverty in Tough Economic Times

I want to begin by saying how grateful we were at Brookings to partner with the Federal Reserve System on this concentrated poverty project. We like to think that at Brookings we know a lot about this subject, but it was only through this partnership with the Fed that we were able to ground this understanding in the experiences of the 16 communities across the United States that were the focus of the report’s case studies.

The report demonstrates that in addition to managing the macroeconomy, the Fed also possesses a unique and powerful understanding of the U.S. economy from the ground up, which is absolutely necessary for designing smart policy in turbulent times like these.

I want to also give special thanks to my colleagues David Erickson and Carolina Reid at the San Francisco Fed. They played several roles in this project for me: intellectual partners, co-conspirators, mood lighteners, and Fed sherpas. It can be tough for foreigners like myself to navigate this system, and they lightened my load throughout the project. I also want to thank my Brookings colleague Elizabeth Kneebone, who performed a lot of the data analysis for this project.

I want to argue three points, largely policy points, in my remarks this morning.

First, the current economic climate makes the issue of concentrated poverty, and our response, more relevant, not less.

Second, major near-term investments our country makes to resolve the economic crisis can and should provide meaningful opportunities for the most disadvantaged families and communities.

And third, our longer-run efforts to assist high-poverty areas and their residents must take account of the economic challenges and opportunities that manifest at the regional, metropolitan level.


To begin, let’s review where we were when the Fed and Brookings joined forces on this effort in May 2006.

  • The unemployment rate was 4.7 percent, a five-year low.
  • Payrolls were expanding every month for the third consecutive year.
  • The poverty rate, while still above its low in 2000, was dropping.
  • The federal deficit was a relatively manageable 2 percent of GDP.
  • The Dow was above 11,000, and on its way up.
  • And the 2008 general election promised a storied matchup between party favorites Hillary Clinton and Rudy Giuliani.
A lot can happen in 30 months!

In the wake of record house-price declines and financial market fallout, the economic outlook today is grim. The unemployment rate is 6.5 percent and rising. One projection suggests that the downturn could eventually increase the ranks of the nation’s poor by anywhere from 7 to 10 million. And amid declining revenues and increased expenditure needs, the U.S. budget deficit is expected to top $1 trillion this year.

In short, the situation for the lowest-income communities and their residents is not encouraging.

And neither is our starting point.

As Paul Jargowsky’s research has shown, the incidence of concentrated poverty in America dropped markedly during the 1990s, after two decades of increase. Some combination of a tight labor market and policy changes to promote work and break up the deepest concentrations of poverty seemed responsible for that decline.

But as Elizabeth and I found in a recent Brookings report, we may have given back much of that progress during the first half of this decade. The population in what we termed “high working poverty” communities rose by 40 percent between 1999 and 2005. This suggests that America’s high-poverty areas may have never really recovered from the modest downturn we experienced at the beginning of the decade.

Now, with all the turmoil in our economy, it would be easy to lose sight of these places and their residents, who even seem to have missed out on the benefits of recent growth.

But if we are to meet the enormous challenges facing our country—economic, social, and environmental—we simply can’t afford to turn a blind eye to the continuing problem of concentrated poverty.

As decades of research and this report have shown, concentrated poverty magnifies the problems faced by the poor, and exacts a significant toll on the lives of families in its midst.

This report greatly enhances our understanding of how high-poverty communities of all stripes bear these costs. Moreover, it suggests that the contemporary circumstances of these communities owe not just to long-term market dynamics, but also to policy choices made over several decades’ time—some deliberate in their intent, and some producing unfortunate unintended consequences.

Today we’re at an important inflection point for policy. With the economy souring, we don’t have the luxury of using an “auto-pilot” strategy of macroeconomic growth to reach the most disadvantaged places and their residents. Quite the opposite—just as these communities are often “last in” for economic opportunity during boom times, they seem to be “first out” when things shift into reverse.

But the specific nature of the current crisis also poses added challenges for high-poverty communities.

That is because many of these areas were ground zero for risky subprime lending over the last several years. In many of the case-study communities in the report, half or more of recent home mortgages were high-cost subprime loans.

Now, they are on the front lines of the fallout. Our calculations of HUD data show that census tracts where the poverty rate was at least 40 percent in 2000—the conventional definition behind concentrated poverty—have an estimated foreclosure rate over 9 percent, roughly double the nationwide average.

This poses both an immediate and a long-term threat to what little stability these communities possess.

Over the short term, these areas face problems associated with heightened property neglect, vacancy, and abandonment. Not only can those conditions breed crime and disorder, but also they can accelerate a process of further disinvestment from high-poverty neighborhoods, which are all too familiar with that cycle of decline.

Over the long run, the public sector will work to return foreclosed properties in these neighborhoods to productive use. But there is a danger that we may once again re-concentrate poverty in these neighborhoods if these assets are not managed and deployed strategically.

In sum, recent trends and a perilous road ahead merit a meaningful policy response to the challenges facing areas of concentrated poverty and their residents.

This brings me to my second point, which is that near-term policy choices can ameliorate the impacts of the current crisis on areas of concentrated poverty.

In less than 50 days, a new administration will take office in Washington, facing economic challenges of a scale not seen in decades.

The president-elect and his advisors have signaled that they are ready to “do what it takes” to stimulate the economy, create and protect jobs, and catalyze investment in new sectors to spur longer-term growth.

I believe that policies advanced by the new administration and Congress in the first few weeks of the new year, if designed and executed well, could matter greatly for the fortunes of the nation’s high-poverty communities.

First, a comprehensive strategy to deal with the foreclosure crisis is sorely needed. This would feature, first and foremost, a broad plan to forestall the rising tide of mortgages, including many in high-poverty communities, headed for default due to falling home prices, economic dislocation, and poor underwriting.

However, even a sweeping, generous approach will not prevent the inevitable. Especially in high-poverty areas, more loans will fall into foreclosure, more people will lose their homes, and fiscally-strapped local governments will be left to manage the consequences of increasing vacancy and abandonment.

The Neighborhood Stabilization Program enacted by Congress and the Bush administration during the summer of 2008 represents an initial effort to arm state and local leaders with the resources to tackle the neighborhood impacts of rising foreclosures.

But significant deterioration of the economy in the intervening months suggests that the problem may now be of a much larger scale than was originally anticipated. What’s more, many local governments lack the capacity, expertise, and legal authorities to use existing or additional resources strategically.

So the new administration, and HUD in particular, will need to consider a further round of response—using some mix of fiscal, regulatory, capacity-building, and bully pulpit powers—to help cash-strapped local governments mitigate the impacts of foreclosure on their most vulnerable communities.

Second, there seems to be wide agreement that the economic recovery package should include a series of measures that inject money into the economy right away.

So the package will provide immediate assistance to families, communities, and governments hit hard by the downturn, likely in the form of extended unemployment benefits, increased food stamp benefits, additional state and local aid, and low- to middle-income tax cuts: spending designed to make a real economic impact in the next several months.

A couple of details here are of real consequence to communities of concentrated poverty.
  • Income tax cuts included in the package should be refundable, like the Earned Income Tax Credit, or EITC. Boosting the EITC, for instance, would provide additional help to workers most likely to be hit hard by the downturn, and target resources to families most likely to spend the additional cash immediately. As the report shows, at least 30 percent, and as many as 60 percent, of families in the case-study communities today benefit from the EITC.
  • Unemployment insurance benefits should be extended, but also modernized. As the case studies showed, work among residents of high-poverty communities is often seasonal or part-time, even in a good economy. As a result, many laid-off workers from poor areas in several states may not qualify for benefits due to outmoded eligibility rules. Therefore, in addition to extending weeks of eligibility for UI, Congress and the new administration might also consider providing incentives to states to expand the pool of workers who could benefit from the program during the downturn.
Third, infrastructure will clearly figure prominently among the spending priorities in the recovery package.

Yet there is a significant risk that focusing dollars primarily on projects that states deem “shovel-ready,” as has been discussed, will repeat mistakes of the past. It would primarily subsidize road-building at the metropolitan fringe, and do little to enhance long-run economic growth, or provide better opportunities for low-income people and the places they live.

Infrastructure investments of the magnitude under consideration must not only create jobs, but also promote inclusive and sustainable growth. That means setting strict criteria for federal investment, including a real assessment of costs and benefits that considers economic, environmental, and social impacts. As the report shows, poor infrastructure often acts as a barrier to the economic integration of high-poverty communities into their larger municipal and regional areas.

To that end, we should also consider providing direct support for large, cash-strapped municipal governments that they could use to modernize and preserve roads, bridges, transit, water, sewer, and perhaps even broadband infrastructure. At the same time, we should hold them and grantees at all other levels of government accountable for connecting younger, disadvantaged workers and communities to the jobs that result.

In short, what happens in the first several weeks of the new year here in Washington could, if structured properly, provide meaningful support and opportunity for low-income areas and their residents. At a minimum, this might avert the sort of backsliding these communities suffered during the much milder recession we experienced earlier this decade.

So that brings me to my third and final point, which is that, over the longer term, we must advance policies that actively link the fortunes of poor communities to those of their regional neighbors. As you probably heard or read, our division at Brookings is named the “Metropolitan Policy Program.”

Our mission is to provide decision makers with cutting-edge research and policy ideas for improving the health and prosperity of cities and metropolitan areas.

You might ask, why metropolitan? After all, this is not a term that most Americans use, think about, or even recognize, even though 85 percent of us live in metropolitan areas. A friend of the program once told us that it sounded like a combination of “metrosexual” and “cosmopolitan.” Not exactly what we were going for.

More specifically, what relevance does “metropolitan” have for addressing the challenges of concentrated poverty?

Well, the report points to skills and employability problems that hold back residents of high-poverty communities. If the route to improving the lives of families affected by concentrated poverty runs in part through the labor market, then we must devise strategies and solutions that respect and respond to the geography of that market—which is metropolitan.

The report also points to housing problems, of various stripes, that segregate the poor in these communities and make their daily lives more difficult. Housing markets, too, are metropolitan—and housing dynamics in the wealthiest parts of each metro are inextricably linked to those in the poorest parts.

The fact is, our national economy—and that of most industrialized nations—is largely the aggregate of its individual metropolitan economies. In the United States, the 100 largest metro areas account for 12 percent of our land mass, hold 65 percent of our residents, and generate three-quarters of our Gross Domestic Product. They possess even greater shares of our innovative businesses, our most knowledgeable workers, the critical infrastructure that connects us to the global economy, and the quality places that attract, retain, and enhance the productivity of workers and firms.

And as the report shows, regions—both metropolitan and non-metropolitan—each retain distinctive clusters that shape their individual contributions to the national economic pie. Photonics in Rochester. Hospitality and tourism in Atlantic City and Miami. Manufacturing in Albany, Georgia. Agriculture and business services in Central California. These clusters do not possess equal strength or equal potential, but they define the starting point for thinking about the regional economic future of these areas, and economic opportunities for their residents.

Not only are the assets of our economy fundamentally metropolitan… increasingly, our challenges are, too. In 2006, we found that for the first time, more than half of the poor in metropolitan America lived in suburbs, not cities. While poor suburban families don’t yet concentrate at the levels seen in the communities in this report, they are trending in this direction. Between 1999 and 2005, the number of suburban tax filers living in “moderate” working poverty communities rose by nearly 50 percent.

So what does recognition of our metropolitan reality imply for longer-run policies to help the poorest communities and their residents?

Bruce has argued elsewhere that our nation must embrace a new, unified framework for addressing the needs of poor neighborhoods and their residents. He has termed this “Creating Neighborhoods of Choice and Connection.” Neighborhoods of choice are communities in which lower-income people can find both a place to start and, as their incomes rise, a place to stay. They are also communities to which people of higher incomes can move, for their distinctiveness, amenities, or location. This requires an acceptance of economic integration as a goal of housing and neighborhood policy.

Neighborhoods of connection are communities that link families to opportunity, wherever in the metropolis that opportunity might be located. This requires a much more profound commitment to the “educational offer” in these communities and the larger areas of which they are a part. It also requires a pragmatic vision of the “geography of opportunity” with regard to jobs, housing, and other choices.

If we take this vision seriously, then our interventions must operate within, and relate to, the metro geography of our economy. This means viewing the conditions and prospects of poor areas through the lens of the broader economic regions of which they are a part, and explicitly gearing policy in that direction.

A simple example relates to the geography of work. In the Springfield, Massachusetts metro area, roughly 30 percent of the region’s jobs still cluster in the neighborhoods close to downtown, including Old Hill and Six Corners. In the Miami metro area, by contrast, only 9 percent of the region’s jobs lie close to its downtown, implying transportation needs of a quite different scale for Little Haiti’s residents. In response, we should empower metropolitan transportation planners to address the unique nature of these spatial divides, and measure their performance on creating inclusive systems that overcome them.

This metro lens applies to workforce development as well. Labor market intermediaries are some of the most promising mechanisms for bridging the information and skills divide between poor communities and regional economic opportunity. One of the highest performers, the Wisconsin Regional Training Partnership, works in the home region of one of our case-study communities, Milwaukee. If workforce policies and funding at all levels of government were to emphasize employer partnerships, provide greater flexibility, and reward performance, we could grow more capable institutions like these that serve the needs of low-income communities and regional firms alike.

A metro perspective can apply to school reform as well. We have called for a new focus at the Department of Education on supporting proven, successful educational entrepreneurs—charter management organizations like KIPP, human capital providers like Teach for America, student support organizations like College Summit. The demand for these entrepreneurial solutions extends well beyond the highest-poverty neighborhoods. Federal education policy should consider investing in these entrepreneurs at the metropolitan scale, to aggregate a critical mass of those organizations, serve a significant percentage of the area’s children, and drive positive changes in the entire public education environment.

Finally, our housing policies must embrace metro-wide economic diversity, which is a hallmark of neighborhoods of choice and connection.

This means expanding housing opportunities for middle-income families in deprived neighborhoods. We simply cannot continue to cluster low-income housing in already low-income areas, perpetuating the sort of economic segregation evident in so many of the case-study communities, and thereby consign another generation to a childhood amid concentrated poverty. Likewise, we must guard against the possibility that the current foreclosure crisis leads to a re-concentration of poor households in neighborhoods that were just beginning to achieve greater economic diversity.

But this is a two-way street. It also means creating more high-quality housing opportunities for low-income families in growing suburban job centers. Requiring or providing incentives to metropolitan areas to engage in regional housing planning, alongside regional transportation planning, may be a necessary first step. Those plans could also apply a more rational screen to the development choices that have fueled sprawl, and thereby added to the social and economic isolation of the lowest-income communities.

Let me end where I began.

This is both an auspicious and a challenging moment at which to wrestle with the problem of concentrated poverty in America.

Auspicious in that we are approaching the dawn of a new government in Washington that has signaled concern for our nation’s low-income residents and communities, recognition that metropolitan economies are the engines of our prosperity, and a pragmatic commitment to doing what works.

Challenging in that making progress against concentrated poverty, and improving opportunity for those in its midst, is a tall order when the macroeconomy isn’t cooperating.

But the current economic climate is not an excuse to avoid this problem; rather, it’s an imperative to act, strategically and purposefully.

That means doing the big near-term things the right way, so that low-income communities and their residents do not bear a disproportionate share of the downturn’s burden, and so that they participate meaningfully in our eventual economic recovery.

And it means getting the long-term vision right, so that policy advances sustainable, metro-led solutions that connect poor neighborhoods and poor families to opportunity in the wider economy around them.

The Federal Reserve System has tremendous, well-earned credibility for understanding and advancing dialogue around the future of our nation’s economic regions. I look forward to continuing to work with the Fed to increase public understanding of concentrated poverty, and to make tackling it a crucial element of strategies to promote regional and national prosperity.

Publication: Federal Reserve Board of Governors

How to Reverse the Trend of Concentrated Poverty

One of Cleveland's neighborhoods made the Washington scene earlier this month.

Alas, it wasn't up for a multibillion-dollar bailout.

Instead, the Central neighborhood and 15 other communities across the United States were the centerpiece of a new report published by the Federal Reserve System and the Brookings Institution.

These communities share a simple, disappointing characteristic. In 2000 - the peak of the last economic boom - at least 40 percent of their residents lived below the federal poverty line. That was about three times the national average.

No American needs to look very far to find places like these. Concentrated poverty affects manufacturing cities like Cleveland, and Albany, Ga.; immigrant gateways like Miami, Fla., and Fresno, Calif.; and rural areas like eastern Kentucky and northern Montana. About 4 million poor Americans live in these areas of extremely high poverty.

How did this happen? Policy decisions made decades ago - like clustering thousands of the Cleveland region's public housing units in the Central neighborhood - helped shape their trajectory. So too did economic changes, like the long-run loss of decent-paying manufacturing jobs, or - in rural areas - mining and agricultural jobs.

By allowing poverty to concentrate in these places, we've magnified the problems their poor residents face. For instance, many low-income children in these communities start school not yet "ready to learn." On top of that, though, they attend schools burdened with lots of other poor kids who face similar challenges, and deal with higher levels of neighborhood crime that affect their mental health and educational performance.

The challenges of concentrated poverty extend to many other areas: low adult work-force skills and employment, poor-quality housing and a lack of investment by mainstream businesses.

And that's in a good economy. Today, Central - and thousands of other high-poverty communities like it across the nation - faces even more significant challenges as the United States enters what may be its worst recession in decades.

So what should Washington do for these places and their residents in the face of such difficult circumstances?

First, we must not lose sight of them in the economic turmoil. That's especially true because the roots of this crisis, in the subprime mortgage market, grew in many very poor neighborhoods like Central. As a result, home foreclosure rates in high-poverty communities are more than double the national average.

To stabilize these hard-hit communities, Washington must adopt new measures to prevent foreclosure and provide additional resources and guidance for state and local governments to help them cope with the rising numbers of vacant properties.

Second, a forthcoming economic stimulus package from Washington that could amount to half a trillion dollars or more should not bypass these neighborhoods and their residents.

That implies the need for immediate federal aid to sustain basic public services in states like Ohio, where the deficit for this year already tops $1 billion. It also suggests providing direct assistance to struggling workers and their families, through enhanced unemployment benefits and tax credits.

At the same time, the infrastructure dollars in the package - which could amount to more than $100 billion - must be spent strategically. States should not be permitted to go on expanding highway capacity at the metropolitan fringe, to the detriment of poor communities near the urban core. Cities like Cleveland, and metropolitan organizations like the Northeast Ohio Areawide Coordinating Agency, should get their fair share of new transportation funds. And funds should be set aside for training programs that provide low-income residents with a pathway to decent jobs.

Third, we have to rethink neighborhood policy over the longer term.

For too long, government has funded housing, schools and economic development in these communities as though they were islands unto themselves.

That's not how the real economy works. These neighborhoods are part of larger regional labor and housing markets. Decisions made across the Cleveland region, such as where firms locate new jobs, or where families buy homes and send their kids to school, ultimately dictate whether neighborhoods like Central can become real neighborhoods of choice and better connected to economic opportunity.

Public policy must leverage that real economy for the benefit of lower-income residents, by building on smart regional strategies like the Fund for Our Economic Future and WIRE-Net in Northeast Ohio. It should diversify housing in poor communities, but also encourage affordable housing development in wealthier parts of metropolitan areas.

Cleveland's Central neighborhood, like other high-poverty communities across the United States, faces a tough road ahead. Short-term opportunities, and long-term strategies, are needed to help its next generation of residents overcome the challenges of concentrated poverty.

Publication: Cleveland Plain Dealer

Challenges Associated with the Suburbanization of Poverty: Prince George's County, Maryland

Martha Ross spoke to the Advisory Board of the Community Foundation for Prince George’s County, describing research on the suburbanization of poverty both nationally and in the Washington region.

Despite perceptions that economic distress is primarily a central city phenomenon, suburbs are home to increasing numbers of low-income families. She highlighted the need to strengthen the social service infrastructure in suburban areas.

Full Presentation on Poverty in the Washington-Area Suburbs (PDF)


The Re-Emergence of Concentrated Poverty: Metropolitan Trends in the 2000s


As the first decade of the 2000s drew to a close, the two downturns that bookended the period, combined with slow job growth in between, clearly took their toll on the nation’s less fortunate residents.

Over a ten-year span, the country saw the poor population grow by 12.3 million, driving the total number of Americans in poverty to a historic high of 46.2 million. By the end of the decade, over 15 percent of the nation’s population lived below the federal poverty line—$22,314 for a family of four in 2010—though these increases did not occur evenly throughout the country.

An analysis of data on neighborhood poverty from the 2005–09 American Community Surveys and Census 2000 reveals that:

After declining in the 1990s, the population in extreme-poverty neighborhoods—where at least 40 percent of individuals live below the poverty line—rose by one-third from 2000 to 2005–09. By the end of the period, 10.5 percent of poor people nationwide lived in such neighborhoods, up from 9.1 percent in 2000, but still well below the 14.1 percent rate in 1990.
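
As a rough illustration of the arithmetic behind these figures, the short sketch below computes a concentrated poverty rate, that is, the share of poor residents living in tracts at or above the 40 percent threshold, from a hypothetical tract-level table. The pandas usage, the column names population and poor, and the toy numbers are all assumptions made for illustration; this is not the report's actual data pipeline, which draws on the 2005–09 American Community Surveys and Census 2000.

    # Illustrative sketch only; column names and numbers are hypothetical.
    import pandas as pd

    def concentrated_poverty_rate(tracts: pd.DataFrame, threshold: float = 0.40) -> float:
        """Share of all poor residents who live in extreme-poverty tracts,
        i.e., tracts where at least `threshold` of residents are poor."""
        tract_poverty_rate = tracts["poor"] / tracts["population"]
        extreme = tract_poverty_rate >= threshold
        return tracts.loc[extreme, "poor"].sum() / tracts["poor"].sum()

    # Toy data: tract A (45 percent poor) sits above the 40 percent threshold.
    example = pd.DataFrame({
        "tract": ["A", "B", "C"],
        "population": [4000, 3500, 5000],
        "poor": [1800, 500, 600],
    })
    print(f"{concentrated_poverty_rate(example):.1%}")  # 62.1% of the poor live in extreme-poverty tracts

Applied to real tract data, the same calculation underlies the 9.1 percent (2000) and 10.5 percent (2005–09) figures cited above.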


Map: People living in extreme-poverty tracts, 2005–09

Concentrated poverty nearly doubled in Midwestern metro areas from 2000 to 2005–09, and rose by one-third in Southern metro areas. The Great Lakes metro areas of Toledo, Youngstown, Detroit, and Dayton ranked among those experiencing the largest increases in concentrated poverty rates, while the South was home to metro areas posting both some of the largest increases (El Paso, Baton Rouge, and Jackson) and decreases (McAllen, Virginia Beach, and Charleston). At the same time, concentrated poverty declined in Western metro areas, a trend which may have reversed in the wake of the late 2000s housing crisis.


Map: Concentrated poverty in the nation's top 100 metro areas

The population in extreme-poverty neighborhoods rose more than twice as fast in suburbs as in cities from 2000 to 2005–09. The same holds for the poor population in extreme-poverty tracts, which grew by 41 percent in suburbs, compared with 17 percent in cities. However, poor people in cities remain more than four times as likely to live in concentrated poverty as their suburban counterparts.

The shift of concentrated poverty to the Midwest and South in the 2000s altered the average demographic profile of extreme-poverty neighborhoods. Compared to 2000, residents of extreme-poverty neighborhoods in 2005–09 were more likely to be white, native-born, high school or college graduates, homeowners, and not receiving public assistance. However, black residents continued to comprise the largest share of the population in these neighborhoods (45 percent), and over two-thirds of residents had a high school diploma or less.

The recession-induced rise in poverty in the late 2000s likely further increased the concentration of poor individuals into neighborhoods of extreme poverty. While the concentrated poverty rate in large metro areas grew by half a percentage point between 2000 and 2005–09, estimates suggest the concentrated poverty rate rose by 3.5 percentage points in 2010 alone, to reach 15.1 percent. Some of the steepest estimated increases compared to 2005–09 occurred in Sun Belt metro areas like Cape Coral, Fresno, Modesto, and Palm Bay, and in Midwestern places like Indianapolis, Grand Rapids, and Akron.

These trends suggest the strong economy of the late 1990s did not permanently resolve the challenge of concentrated poverty. The slower economic growth of the 2000s, followed by the worst downturn in decades, led to increases in neighborhoods of extreme poverty once again throughout the nation, particularly in suburban and small metropolitan communities and in the Midwest. Policies that foster balanced and sustainable economic growth at the regional level, and that forge connections between growing clusters of low-income neighborhoods and regional economic opportunity, will be key to longer-term progress against concentrated disadvantage.


The Growth and Spread of Concentrated Poverty, 2000 to 2008-2012

