
State of biomedical innovation conference


Event Information

March 13, 2015
9:00 AM - 11:30 AM EDT

Falk Auditorium
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036


As policy agendas for 2015 come into sharper focus, much of the national conversation is aimed at tackling challenges in biomedical innovation. The first two months of the year alone have seen landmark proposals from Congress and the Obama Administration, including the House’s 21st Century Cures initiative, a bipartisan Senate working group focused on medical progress, President Obama’s Precision Medicine Initiative and a number of additional priorities being advanced by federal agencies and other stakeholders.

On March 13, the Engelberg Center for Health Care Reform hosted the State of Biomedical Innovation Conference to provide an overview of emerging policy efforts and priorities related to improving the biomedical innovation process. Senior leaders from government, academia, industry, and patient advocacy shared their thoughts on the challenges facing medical product development and promising approaches to overcome them. The discussion also examined the data and analyses that provide the basis for new policies and track their ultimate success.







Cost, value and patient outcomes: The growing need for payer engagement


Editor's note: This article appears in the April 2015 issue of Global Forum.

Since the passage of the Affordable Care Act in 2010, there has been a groundswell of physician payment and delivery reforms designed to achieve higher-value health care by incentivizing higher-quality care and lower overall costs. Accountable care models, for example, are making marked progress by realigning provider incentives toward greater risk sharing, tying increased payments and shared savings to measured improvements in quality and cost containment. Medical homes are introducing greater care coordination and team-based care management, while the use of episode-based or bundled payments is removing perverse incentives that reward volume and intensity.

These reforms are coming just as the number of highly targeted, highly priced treatments continues to expand. The U.S. Food and Drug Administration (FDA) approved a decade-high 41 novel new drugs in 2014, many of them targeted therapies made possible by increasingly sophisticated advances in genomics and the understanding of disease progression. In areas like oncology, such targeted treatments grew from 11% of the global oncology market in 2003 to 46% in 2013. New brand specialty drug spending in the U.S. is estimated to have been $7.5 billion in 2013, or 69% of total new drug spending. The growing prevalence of these drugs and their cost to the health system are setting the stage for significant flashpoints between industry, payers, and providers, seen most clearly in the debate over hepatitis C treatment costs that roiled stakeholder interactions for most of the past year.

More of these targeted treatments are in the development pipeline, and a growing number of public policy efforts taking shape in 2015 are focused on accelerating their availability. The House of Representatives' 21st Century Cures Initiative, for example, has released a slew of legislative proposals aimed at promoting breakthrough innovation by increasing the efficiency of drug development and regulatory review. These efforts have significant downstream implications for the pace at which targeted and specialty therapies will become available, their associated costs, and the growing importance of demonstrating value in the postmarket setting.

As payers and providers continue their push toward increased value-based care, more innovative models for connecting such reforms to drug development are needed. Earlier collaboration with industry could enable more efficient identification of unmet need, opportunities to add value through drug development, and clearer input on the value proposition and evidentiary thresholds needed for coverage. Equally important will be unique public-private collaborations that invest in developing a better postmarket data infrastructure that can more effectively identify high value uses of new treatments and support achieving value through new payment reforms.

Stronger collaboration could also improve evidence development and the coverage determination process after a targeted  treatment has gained regulatory approval. Facilitated drug access programs like those proposed by the Medicare Administrative Contractor Palmetto GBA create access points for patients to receive targeted anti-cancer agents off-label while payers and industry gather important additional outcomes data in patient registries. More systematic and efficient use of policies like Medicare's Coverage with Evidence Development (CED), which allows for provisional coverage for promising technologies or treatments while evidence continues to be collected, could enable industry and payers to work together to learn about a medical product's performance in patient populations not typically represented in clinical studies. A CED-type model could be especially useful for certain specialty drugs: data collected as a condition of payment could help payers and providers develop evidence from actual practice to improve treatment algorithms, increase adherence, and improve outcomes. 

Finally, collaborations that support stronger postmarket data collection can also support novel drug payment models that further reward value. Bundled payments that include physician-administered drugs, for example, could encourage providers to increase quality while also incentivizing manufacturers to help promote evidence-based drug use and lower costs for uses that generate low value. Outcomes-based purchasing contracts that tie the price paid to a medical product's performance could be another promising approach for high-expense treatments with clearly defined and feasibly measured outcomes.

Many of these ideas are not new, but as manufacturers, payers, providers, and patients move into an increasingly value-focused era of health care, it is clear that they must work together to find new ways to both promote the development of promising new treatments and make good on the promise of value-based health care reforms.






Breakthrough therapy designation: A primer


Breakthrough therapy designation (BTD) is the newest of four expedited programs developed by the U.S. Food and Drug Administration (FDA) to accelerate the development and review of novel therapies that target serious conditions. The public response to the program has been largely positive, and dozens of drugs have successfully received the designation. However, the FDA denies many more requests than it grants. In fact, as of March 2015, fewer than one in three of the BTD requests submitted have been granted. By contrast, roughly 75 percent of the requests for fast track designation (another of the Agency’s expedited programs) were granted between 1998 and 2007. This discrepancy suggests ongoing uncertainty over what exactly constitutes a “breakthrough” according to the FDA’s criteria.

On April 24, the Center for Health Policy at Brookings will host an event, Breakthrough Therapy Designation: Exploring the Qualifying Criteria, that will discuss qualifying criteria for the BTD program using real and hypothetical case studies to explore how FDA weighs the evidence submitted. Below is a primer that describes the definition, value, and impact of BTD.

What is BTD?

BTD was established in 2012 under the Food and Drug Administration Safety and Innovation Act, and is intended to expedite the development and review of drugs that show signs of extraordinary benefit at early stages of the clinical development process. However, BTD is not an automatic approval. The drug still has to undergo clinical testing and review by the FDA. Rather, BTD is designed to facilitate and shorten the clinical development process, which can otherwise take many years to complete.

What criteria does FDA use to evaluate potential breakthroughs?

In order to qualify for the designation, a therapy must be intended to treat a serious or life-threatening illness, and there must be preliminary clinical evidence that it represents a substantial improvement over existing therapies on at least one clinically significant outcome (such as death or permanent impairment).

In considering a request for BTD, FDA relies on three primary considerations:

1) the quantity and quality of the clinical evidence being submitted;

2) the available therapies that the drug is being compared to; and

3) the magnitude of treatment effect shown on the outcome being studied.


In practice, however, it can be difficult to define a single threshold that a therapy must meet. The decision depends on the specific context for that drug. In some cases, for example, the targeted disease has few or no treatments available, while in others there may be several effective alternative treatments to which the new therapy can be compared. The request may also be made at different stages of the clinical development process, which means that the amount and type of data available to FDA can vary. In some cases, early evidence of benefit may disappear when the drug is tested in larger populations, which is why FDA reserves the right to rescind the designation if subsequent data shows that the therapy no longer meets the criteria.

How many therapies have received the designation?

As of March 2015, FDA had received a total of 293 requests for BTD. Of these, 82 received the designation, and 23 have since been approved for marketing. Ten of these approvals were new indications for already approved drugs, rather than novel therapies that had never before received FDA approval.
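For readers who want to check the arithmetic, the minimal sketch below uses only the counts quoted in this primer (293 requests, 82 designations, 23 approvals, 10 of them new indications); the percentages it prints match the roughly 28 percent grant rate cited elsewhere on this page. The variable names are ours, chosen for illustration.

```python
# Back-of-the-envelope check of the BTD figures cited above (as of March 2015).
requests = 293          # total BTD requests received by FDA
designations = 82       # requests granted breakthrough designation
approvals = 23          # designated products approved for marketing
new_indications = 10    # approvals that were new indications for already approved drugs

print(f"Grant rate: {designations / requests:.0%}")              # ~28%, i.e., fewer than one in three
print(f"Share of designated products approved: {approvals / designations:.0%}")
print(f"Approvals that were novel therapies: {approvals - new_indications}")
```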

What are the benefits of BTD?

For drug manufacturers, the main benefit is the intensity and frequency of their interactions with FDA. Once the designation is granted, the FDA takes an “all hands on deck” approach to providing the manufacturer with ongoing guidance and feedback throughout the clinical development process. Sponsors of products that receive BTD are also able to submit portions of their marketing application on a rolling basis (rather than all at once at the end of clinical trials), and BTD can be used in combination with other expedited programs to further reduce the product’s time to market.

For patients, the potential benefits are straightforward: earlier access to therapies that may significantly improve or extend their lives.

How does BTD relate to the other three expedited programs?

The other three expedited review and development programs—fast track designation, priority review, and accelerated approval—are also geared toward facilitating the development and approval of drugs for serious conditions. These other programs have been in place for over 15 years, and have played a significant role in accelerating patient access to new therapeutics (Table 1). In 2014 alone, 66 percent of the 41 drugs approved by FDA's Center for Drug Evaluation and Research used at least one of these four pathways, and 46 percent received at least two of the designations in combination.

Table 1: Overview of FDA’s Expedited Review Programs


 Adapted from FDA's Guidance for Industry: Expedited Programs for Serious Conditions - Drugs and Biologics






The medical device tax: A primer


On the heels of the midterm elections, Senate Majority Leader Mitch McConnell (R-KY) indicated that the medical device tax was a key target for repeal in the 114th Congress. Today, the Senate Finance Health Care Subcommittee will hold a hearing about the effects of the 2.3 percent tax that was included in the Affordable Care Act. Many believe that a repeal is, in fact, possible. Below is a basic primer about the tax and its contentious history.

1. What is the medical device tax?

Included in the Affordable Care Act (ACA) and launched in 2013, the medical device tax imposes a 2.3 percent excise tax on the sale of medical devices. The tax applies broadly to a range of products, including pacemakers, artificial joints, surgical gloves, and dental instruments. It does not apply to eyeglasses, contact lenses, hearing aids, wheelchairs, or any other device that the public generally buys for individual use. Further, the tax is applied equally to imported and domestically produced devices, and devices produced in the U.S. for export are tax-exempt.
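As a rough illustration of how the tax works, the sketch below applies the 2.3 percent rate to a few hypothetical sale prices and mirrors the exemptions summarized above (devices generally bought at retail for individual use, and exports). The prices and the helper function are illustrative assumptions, not a statement of how the tax is administered in practice.

```python
# Minimal sketch of the 2.3 percent excise tax on a device sale.
# Prices are hypothetical; the exemption logic mirrors the summary above.
DEVICE_TAX_RATE = 0.023

def device_tax(sale_price: float, retail_exempt: bool = False, exported: bool = False) -> float:
    """Return the excise tax owed on a single device sale (illustrative only)."""
    if retail_exempt or exported:
        return 0.0
    return round(sale_price * DEVICE_TAX_RATE, 2)

print(device_tax(5_000.00))                    # device sold domestically -> 115.0
print(device_tax(120.00, retail_exempt=True))  # e.g., eyeglasses -> 0.0
print(device_tax(5_000.00, exported=True))     # produced in the U.S. for export -> 0.0
```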

2. Why was it included in the Affordable Care Act?

According to the Joint Committee on Taxation, the tax is estimated to bring in $29 billion over the next decade. The tax was one of many revenue-raising provisions designed to offset the cost of providing coverage to more than 25 million Americans through the ACA; these newly insured individuals were also expected to increase demand for medical device manufacturers' products and services. Other industries were subject to levies as well, including health plans (an estimated $101 billion) and employers (an estimated $130 billion). It has been noted that then-Senator John Kerry of Massachusetts helped negotiate the tax down from 4.6 percent to 2.3 percent.

3. How has the medical device industry responded?

The U.S. is home to more than 7,000 medical device companies with estimated annual sales of $106 to $116 billion. The largest concentrations of companies are located in California, Massachusetts, New York, and Minnesota. Since 2010, the medical device industry has mounted a full-court press to repeal the tax. Companies and trade groups argue that the tax would cost over 40,000 U.S. jobs and undermine innovation by pushing manufacturing offshore, conclusions that are heavily contested by the tax's supporters.

By some accounts, this tax comes at a particularly challenging time for medical device innovation. A recent analysis by Ernst & Young reported that venture capital investment in medical devices fell 17% in 2013 from the previous year, continuing a downward trend observed over the past seven years. In addition, investment funding is shifting toward less risky, later-stage medical device companies and away from smaller, earlier-stage ventures. These trends are worrisome because early-stage companies often introduce the innovative and disruptive medical device technologies that deliver new therapeutic benefits or quantum improvements in patient care.

It is unclear what impact the medical device tax will have on investment in early-stage innovation. Key factors that have reduced the availability of venture capital for early-stage medical device companies pursuing pre-market approval include U.S. regulatory unpredictability, delays in approval, and an uncertain reimbursement environment. Additionally, efforts outside the U.S. to attract medical device investment, such as tax havens and other incentives offered to device developers in Ireland and the Netherlands, make it more attractive for device companies to move out of the U.S. Moving to a country with lower tax rates and less stringent corporate governance requirements may save large device companies billions of dollars.

Recognizing that the "country of first choice by medical device developers is a key contributor to early patient access to high-quality, safe and effective devices," the Center for Devices and Radiological Health (CDRH) at the U.S. Food and Drug Administration issued its 2014-2015 Strategic Priorities, which describe its efforts to improve regulatory predictability and device development efficiency in order to "help medical device developers choose the U.S. as the country of first choice for their technologies." While the FDA's efforts focus on encouraging medical device innovation in the U.S., the medical device tax seems to run counter to this goal.

Some also argue that while expanding insurance coverage will help drug companies sell more products and bring in new patients for providers and hospitals, it will not help sell more devices because the majority of potential beneficiaries are much older and already covered by Medicare.

Hundreds of companies and trade groups have signed on to letters from industry associations, such as the Medical Device Manufacturers Association (MDMA) and AdvaMed, opposing the tax. Others have launched significant lobbying efforts to support the tax's repeal; the industry has accounted for roughly $30 million in lobbying expenditures annually since the ACA was passed in 2010. The Center for Responsive Politics has also identified $5.7 million in political contributions on behalf of medical device companies to specific candidates during the 2013-2014 campaign cycle.

4. How are lawmakers responding?

The tax's repeal has been supported by Democrats and Republicans alike. Many opponents cite the Senate's fiscal 2014 budget resolution as an indicator of support, which drummed up 79 supporters for repeal, including 33 Democrats. However, the resolution "was non-binding and viewed as a free vote to show displeasure with an unpopular aspect of the health law." The tax's repeal has garnered outspoken support from Orrin Hatch (R-UT) and Mitch McConnell (R-KY), as well as Elizabeth Warren (D-MA), Al Franken (D-MN), and Amy Klobuchar (D-MN), Democrats with notably high concentrations of medical device companies in their states. The House has approved the repeal of the device tax three separate times in the past two years, including as recently as September 2014. The White House has historically opposed these efforts, but President Obama recently indicated he would entertain the idea.

A report from the nonpartisan Congressional Research Service, released last week, concluded that the tax is unlikely to hurt the profits of device companies, estimating that it will reduce industry output and employment by no more than 0.2 percent. CRS states, "The effect on the price of health care, however, will most likely be negligible because of the small size of the tax and small share of health care spending attributable to medical devices." A separate report from Ernst & Young last month finds that domestic revenues for medical technology firms grew 4 percent to $336 billion in 2013, the first year the tax was in effect, about the same growth rate as in 2012, indicating that the industry seems financially stable for now.

Editor’s note: This post was originally featured in RealClear Markets on November 12, 2014.

      





Breakthrough therapy designation: Exploring the qualifying criteria


Event Information

April 24, 2015
8:45 AM - 4:45 PM EDT

Ballroom
The Park Hyatt Hotel
24th and M Streets, NW
Washington, DC


Established by the Food and Drug Administration Safety and Innovation Act of 2012, breakthrough therapy designation (BTD) is one of several programs developed by the U.S. Food and Drug Administration (FDA) to speed up the development and review of drugs and biologics that address unmet medical needs. In order to qualify for this designation, the treatment must address a serious or life-threatening illness. In addition, the manufacturer (i.e., sponsor) must provide early clinical evidence that the treatment is a substantial improvement over currently available therapies. The FDA is working to further clarify how it applies the qualifying criteria to breakthrough designation applications.

On April 24, under a cooperative agreement with FDA, the Center for Health Policy convened a public meeting to discuss the qualifying criteria for this special designation. Using examples from oncology, neurology, psychiatry, and hematology, the workshop highlighted considerations for the BTD application process, the evaluation process, and factors for acceptance or rejection. The discussion also focused on key strategies for ensuring that the qualifying criteria are understood across a broad range of stakeholder groups.







Faster, more efficient innovation through better evidence on real-world safety and effectiveness


Many proposals to accelerate and improve medical product innovation and regulation focus on reforming the product development and regulatory review processes that occur before drugs and devices get to market. While important, such proposals alone do not fully recognize the broader opportunities that exist to learn more about the safety and effectiveness of drugs and devices after approval. As drugs and devices begin to be used in larger and more diverse populations and in more personalized clinical combinations, evidence from real-world use during routine patient care is increasingly important for accelerating innovation and improving regulation.

First, further evidence development from medical product use in large populations can allow providers to better target and treat individuals, precisely matching the right drug or device to the right patients. As genomic sequencing and other diagnostic technologies continue to improve, postmarket evidence development is critical to assessing the full range of genomic subtypes, comorbidities, patient characteristics and preferences, and other factors that may significantly affect the safety and effectiveness of drugs and devices. This information is often not available or population sizes are inadequate to characterize such subgroup differences in premarket randomized controlled trials.

Second, improved processes for generating postmarket data on medical products are necessary for fully realizing the intended effect of premarket reforms that expedite regulatory approval. The absence of a reliable postmarket system to follow up on potential safety or effectiveness issues means that potential signals or concerns must instead be addressed through additional premarket studies or through one-off postmarket evaluations that are more costly, slower, and likely to be less definitive than would be possible through a better-established infrastructure. As a result, the absence of better systems for generating postmarket evidence creates a barrier to more extensive use of premarket reforms to promote innovation.

These issues can be addressed through initiatives that combine targeted premarket reforms with postmarket steps to enhance innovation and improve evidence on safety and effectiveness throughout the life cycle of a drug or device. The ability to routinely capture clinically relevant electronic health data within our health care ecosystem is improving, increasingly allowing electronic health records, payer claims data, patient-reported data, and other relevant data to be leveraged for further research and innovation in care. Recent legislative proposals released by the House of Representatives’ 21st Century Cures effort acknowledge and seek to build on this progress in order to improve medical product research, development, and use. The initial Cures discussion draft included provisions for better, more systematic reporting of and access to clinical trials data; for increased access to Medicare claims data for research; and for FDA to promulgate guidance on the sources, analysis, and potential use of so-called Real World Evidence. These are potentially useful proposals that could contribute valuable data and methods to advancing the development of better treatments.

What remains a gap in the Cures proposals, however, is a more systematic approach to improving the availability of postmarket evidence. Such a systematic approach is possible now. Biomedical researchers and health care plans and providers are doing more to collect and analyze clinical and outcomes data. Multiple independent efforts – including the U.S. Food and Drug Administration’s Sentinel Initiative for active postmarket drug safety surveillance, the Patient-Centered Outcomes Research Institute’s PCORnet for clinical effectiveness studies, the Medical Device Epidemiology Network (MDEpiNet) for developing better methods and registries for medical device surveillance, and a number of dedicated, product-specific outcomes registries – have demonstrated the potential for large-scale, systematic postmarket data collection. Building on these efforts could provide unprecedented evidence on how medical products perform in the real world and on the course of the underlying diseases that they are designed to treat, while still protecting patient privacy and confidentiality.

These and other postmarket data systems now hold the potential to contribute to public-private collaboration for improved population-based evidence on medical products on a wider scale. Action in the Cures initiative to unlock this potential will enable the legislation to achieve its intended effect of promoting quicker, more efficient development of effective, personalized treatments and cures.

What follows is a set of both short- and long-term proposals that would bolster the current systems for postmarket evidence development, create new mechanisms for generating postmarket data, and enable individual initiatives on evidence development to work together as part of a broad push toward a truly learning health care system.






Why legislative proposals to improve drug and device development must look beyond FDA approvals


Legislative proposals to accelerate and improve the development of innovative drugs and medical devices generally focus on reforming the clinical development and regulatory review processes that occur before a product gets to market. Many of these proposals – such as boosting federal funding for basic science, streamlining the clinical trials process, improving incentives for development in areas of unmet medical need, or creating expedited FDA review pathways for promising treatments – are worthy pursuits and justifiably part of ongoing efforts to strengthen biomedical innovation in the United States, such as the 21st Century Cures initiative in the House and a parallel effort taking shape in the Senate.

What has largely been missing from these recent policy discussions, however, is an equal and concerted focus on the role that postmarket evidence can play in creating a more robust and efficient innovation process. Data on medical product safety, efficacy, and associated patient outcomes accrued through routine medical practice and through practical research involving a broad range of medical practices could not only bolster our understanding of how well novel treatments are achieving their intended effects, but reinforce many of the premarket reforms currently under consideration. Below and in a new paper, we highlight the importance of postmarket evidence development and present a number of immediately achievable proposals that could help lay the foundation for future cures.

Why is postmarket evidence development important?

There are a number of reasons why evidence developed after a medical product’s approval should be considered an integral part of legislative efforts to improve biomedical innovation. First and foremost, learning from clinical experiences with medical products in large patient populations can allow providers to better target and treat individuals, matching the right drug or device to the right patient based on real-world evidence. Such knowledge can in turn support changes in care that lead to better outcomes and thus higher value realized by any given medical product.

Similarly, data developed on outcomes, disease progression, and associated genetic and other characteristics that suggest differences in disease course or response to treatment can form the foundation of future breakthrough medical products. As we continue to move toward an era of increasingly targeted treatments, the importance of this type of real-world data cannot be discounted.

Finally, organized efforts to improve postmarket evidence development can further establish infrastructure and robust data sources for ensuring the safety and effectiveness of FDA-approved products, protecting patient lives. This is especially important as Congress, the Administration, and others continue to seek novel policies for further expediting the pre-market regulatory review process for high-priority treatments. Without a reliable postmarket evidence development infrastructure in place, attempts to further shorten the time it takes to move a product from clinical development to FDA approval may run up against the barrier of limited capabilities to gather the postmarket data needed to refine a product’s safety and effectiveness profile. While this is particularly important for medical devices – the “life cycle” of a medical device often involves many important revisions in the device itself and in how and by whom it is used after approval – it is also important for breakthrough drugs, which may increasingly be approved based on biomarkers that predict clinical response and in particular subpopulations of patients.

What can be done now?

The last decade has seen progress in the availability of postmarket data and the production of postmarket evidence. Biomedical researchers, product developers, health care plans, and providers are doing more to collect and analyze clinical and outcomes data. Multiple independent efforts – including the U.S. Food and Drug Administration’s Sentinel Initiative for active postmarket drug safety surveillance, the Patient-Centered Outcomes Research Institute’s PCORnet for clinical effectiveness studies, the Medical Device Epidemiology Network (MDEpiNet) for developing better methods and registries for medical device surveillance, and a number of dedicated, product-specific outcomes registries – have demonstrated the powerful effects that rigorous, systematic postmarket data collection can have on our understanding of how medical products perform in the real world and of the course of the underlying diseases that they are designed to treat.

These and other postmarket data systems now hold the potential to contribute to data analysis and improved population-based evidence development on a wider scale. Federal support for strengthening the processes and tools through which data on important health outcomes can be leveraged to improve evidence on the safety, effectiveness, and value of care; for creating transparent and timely access to such data; and for building on current evidence development activities will help to make the use of postmarket data more robust, routine, and reliable.

Toward that end, we put forward a number of targeted proposals that current legislative efforts should consider as the 2015 policy agenda continues to take shape:

Evaluate the potential use of postmarket evidence in regulatory decision-making. The initial Cures discussion draft mandated that FDA establish a process by which pharmaceutical manufacturers could submit real-world evidence to support Agency regulatory decisions. While this is an important part of further establishing methods and mechanisms for harnessing data developed in the postmarket space, the proposed timelines (roughly 12 months to first Guidance for Industry) and wide scope of the program do not allow for a thoughtful, collaboratively developed approach to utilizing real-world evidence. Future proposals should allow FDA to take a longer, multi-stakeholder approach to identify the current sources of real-world data, gaps in such collection activities, standards and methodologies for collection, and priority areas where more work is needed to understand how real-world data could be used.

Expand the Sentinel System’s data collection activities to include data on effectiveness. Established by Congress in 2007, Sentinel is a robust surveillance system geared toward monitoring the safety of drugs and biologics. In parallel to the program for evaluating the use of RWE outlined above, FDA could work with stakeholders to identify and pursue targeted extensions of the Sentinel system that begin to pilot collection of such data. Demonstration projects could enable faster and more effective RWE development to characterize treatment utilization patterns, further refine a product’s efficacy profile, or address pressing public health concerns – all by testing strategic linkages to data elements outside of Sentinel’s safety focus.

Establish an active postmarket safety surveillance system for medical devices. Congress has already acted once to establish device surveillance, mandating in 2012 that Sentinel be expanded to include safety data on medical devices. To date, however, there has been no additional support for such surveillance or even the capability to individually track medical devices in use. With the recently finalized Unique Device Identifier rule going into effect and the ability to perform such tracking on the horizon, the time is now to adopt recent proposals from FDA’s National Medical Device Postmarket Surveillance System Planning Board. With Congressional authorization for FDA to establish an implementation plan and adequate appropriations, the true foundation for such a system could finally be put into place.

These next steps are practical, immediately achievable, and key to fully realizing the intended effect of other policy efforts aimed at both improving the biomedical innovation process and strengthening the move to value-based health care.






Event recap: Lessons learned from two years of breakthrough therapy designation


The breakthrough therapy designation (BTD) program was initiated by the U.S. Food and Drug Administration (FDA) in 2012 to expedite the development of treatments for serious or life-threatening illnesses that demonstrate “substantial improvement” over existing therapies. The program has since become a widely supported mechanism for accelerating patient access to new drugs. As of March 2015, FDA has received a total of 293 requests for BTD. However, it has granted just 82 (28%), which indicates an ongoing lack of clarity over what exactly meets the criteria for the designation.

On April 24, the Center for Health Policy at Brookings convened a public meeting to explore the designation’s qualifying criteria and how FDA applies those criteria across therapeutic areas. Panelists used real-world and hypothetical case studies to frame the discussion, and highlighted major considerations for the application process, the FDA’s evaluation of the evidence, and the key factors for acceptance or rejection. The discussion also identified strategies to ensure that qualifying criteria are well understood. Here are the five big takeaways:

1.  The BTD program is viewed positively by drug companies, researchers, advocates, and others 

Across the board, participants expressed enthusiasm for the BTD program. Industry representatives noted that their experience had been extremely positive, and that the increased cooperation with and guidance from FDA were very helpful in streamlining their development programs. Receiving the designation can also raise a drug company’s profile, which can facilitate additional investment as well as clinical trial patient recruitment; this is particularly important for smaller companies with limited resources.

Patient and disease advocates were likewise supportive, and expressed hope that the early lessons learned from successful breakthrough therapy approvals (which have been mostly concentrated in the oncology and antiviral fields) could be translated to disease areas that have so far seen less success. However, while BTD is an important tool in expediting the development of new drugs, it is just one piece of a broader scientific and regulatory policy landscape. Accelerating the pace of discovery and development of truly innovative new drugs will depend on a range of other factors, such as developing and validating new biomarkers that can be used to measure treatment effects at an earlier stage, as well as establishing networks that can streamline the clinical trial process. It will also be important to develop effective new approaches to collecting, analyzing, and communicating information about these treatments once they are on the market, as this information can potentially be used by FDA, providers, and patients to further improve prescription drug policy and medical decision-making.

2.  BTD requests far outnumber those that actually meet the qualifying criteria

Since the program began, fewer than 30 percent of requests have received the designation. A substantial majority were denied at least in part because of a lack of data, problems with the quality of the data, or some combination of the two. For example, some sponsors requested the designation before they had any clinical data, or submitted the request using clinical data that was incomplete or based on flawed study designs. Many requests also failed to meet the Agency’s bar for “substantial improvement” over existing therapies.

One reason for the high denial rate may be a lack of a clear regulatory or statutory bar that could be used as a definitive guide for sponsors to know what is needed to qualify for the designation. BTD denials are also confidential, which means that sponsors effectively have nothing to lose by submitting a request. Going forward, manufacturers may need to exercise more discretion in deciding to request the designation, as the process can be resource- and time-intensive for both sides.

3.  There is no single threshold for determining what defines a breakthrough therapy

About 53 percent of the 109 total BTD denials were due at least in part to the fact that the drug did not represent a substantial improvement over existing therapies. During the day’s discussion, FDA and sponsors both noted that this is likely because the criteria for BTD are inherently subjective. In practice, this means there is no clear threshold for determining when a new therapy represents a “substantial improvement” over existing therapies. Designation decisions are complex and highly dependent on the context, including the disease or condition being targeted, the availability of other treatments, the patient population, the outcomes being studied, and the overall reliability of the data submitted. Given the multiple factors at play, it can be difficult in some cases to determine when a new product is potentially “transformational” as opposed to “better,” especially for conditions that are poorly understood or have few or no existing treatments. In making its determinations, FDA considers the totality of the evidence submitted, rather than focusing on specific evidentiary requirements.

4.  Early communication with FDA is strongly recommended for BTD applicants

Roughly 72 percent of the BTD denials related at least in part to trial design or analysis problems, which led several people to suggest that sponsors engage with FDA prior to submitting their request. Though there are several formal mechanisms for interacting with the agency, informal consultations with the relevant review division could help sponsors to get a better  and much earlier sense of what kind of data FDA might need. This early communication could both strengthen viable BTD requests and reduce the number of frivolous requests.

5.  FDA may need more resources for implementing the BTD program

Drugs that receive breakthrough designation are subject to much more intensive FDA guidance and review. However, when the program was established in 2012, Congress did not allocate funding to cover its costs. There have been ongoing concerns that the program is exacting a significant toll on FDA’s already limited resources, and potentially affecting the timeline for other drug application reviews. These concerns were reiterated during the day’s discussion, and some suggested that Congress consider attaching a user fee to the BTD program when the Prescription Drug User Fee Act comes up for reauthorization in 2017.






Incorporating continuing education into single-drug REMS: Exploring the challenges and opportunities

Event Information

May 18, 2015
9:00 AM - 4:15 PM EDT

The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

The Risk Evaluation and Mitigation Strategies (REMS) program has become an important tool of the U.S. Food and Drug Administration (FDA) in ensuring that the benefits of a given medical product outweigh the associated risks, and has enabled FDA to approve a number of products that might not otherwise have been made available for patient use. Since the implementation of the REMS program, however, concerns have been raised regarding its impact on patient access to products and the associated burden on providers and health care systems. In an effort to address these concerns—and as part of its commitments under the Prescription Drug User Fee Act reauthorization of 2012—FDA has undertaken efforts to standardize and improve the effectiveness of REMS, and to better integrate REMS programs into the health system. As part of this broader initiative, the Agency is currently assessing the feasibility of integrating accredited continuing education (CE) programs and activities into REMS programs that have been developed for a single drug.

Under a cooperative agreement with the FDA, the Center for Health Policy held an expert workshop on May 18, titled “Incorporating Continuing Education into Single-Drug REMS: Exploring the Challenges and Opportunities”. This workshop provided an opportunity for pharmaceutical manufacturers, regulators, CE providers, accreditors, and other stakeholders to explore the ways that CE can be a valuable addition to the REMS toolkit, discuss potential barriers to the development and implementation of REMS-related CE for single products, and identify strategies for addressing those barriers.






Health policy 101: How the Trans-Pacific Partnership will impact prescription drugs


For the last several years, the US government has been negotiating a free-trade agreement known as the Trans-Pacific Partnership (TPP) with 11 other countries across the Asia-Pacific and Latin American regions, an agreement that could have a major impact on the pharmaceutical market. When finalized, it will be the largest free-trade agreement in history, affecting up to one-third of world trade and roughly 40 percent of the global gross domestic product. The deal has attracted its fair share of criticism from a wide range of groups, including concerns over proposed regulations for biologic drugs in participating countries. Specifically, critics are concerned about the length of data exclusivity granted to the companies that hold the patents on these drugs. Below is a primer on biologics and how they are being addressed in the TPP.


What are biologics and biosimilars?

Biologic drugs include any therapy derived from a biological source, a group that includes vaccines, anti-toxins, proteins, and monoclonal antibodies. Because they are typically much larger and more structurally complex than traditional ‘small-molecule’ drugs, they are also more difficult, and much more costly, to develop and manufacture. Biologics are also among the most expensive drugs on the market, costing an average of 22 times more than nonbiologic drugs. Avastin, a cancer drug, can cost more than $50,000 a year, while the rheumatoid arthritis drug Remicade can cost up to $2,500 per injection.

Given these high costs, there is substantial interest in encouraging the development of biosimilars, a term used to describe follow-on versions of an original biologic. Estimates of the potential cost savings vary substantially, but some have predicted that competition from biosimilars could reduce US spending on biologics by $44 to $66 billion over the next ten years.  In the European Union, biosimilars have been on the market since 2006, and a 2013 analysis found that, for the 14 biosimilars on the market, the average price discount was about 25 percent. By 2020, the overall cost savings are projected to total $16-$43 billion.

After the Affordable Care Act (ACA) was passed in 2010, the US Food and Drug Administration (FDA) developed an accelerated approval pathway for biosimilars, modeled after the pathway used for the approval of small-molecule generics. In order to meet the criteria for biosimilarity, the drug must share the same mechanism of action for the approved condition of use, and there must be no clinically significant differences between the two drugs in terms of purity, safety, or potency. FDA recently approved its first biosimilar, Zarxio, which is a copy of the oncology drug Neupogen.

What issues are being raised over data exclusivity in the US?

Under current FDA regulations, biologic drugs are granted 12 years of data exclusivity following approval. During this period of exclusivity, the FDA may not approve a biosimilar application that relies on the data submitted as part of the original biologic application. This form of temporary monopoly is distinct from patent protection, which is granted well before approval and is not related to clinical data.  Data exclusivity does not prevent another company from generating the data independently, but drug companies are unlikely to go to the considerable (and costly) effort of replicating a full course of clinical trials for a drug that is already on the market. (Though biosimilars may need to undergo some additional clinical testing under current FDA regulations, the amount of data required to support approval would certainly be less than what is required for an original biologic approval.)

The 12-year exclusivity period for biologics was established in the ACA following intense debate, and has continued to attract criticism. (By contrast, the period of data exclusivity is just five years for small-molecule drugs.) Supporters argue that, given the greater cost and difficulty of bringing a biologic to market, a longer period of exclusivity is necessary to incentivize innovation. Others argue that the resulting restrictions on competition keep drug prices unnecessarily high, inevitably putting a strain on the health system and keeping potentially life-saving drugs out of reach for many patients.

How would the TPP affect data exclusivity?

For the 11 countries besides the U.S. that are involved in the TPP, current data exclusivity protections range from zero (Brunei) to eight years (Japan). Under the Obama Administration’s current proposal, participating countries would increase those periods to match the US standard of 12 years. Curiously, this proposal directly contradicts the administration’s ongoing domestic efforts to lower the period of data exclusivity. Since the ACA passed, the Obama administration has repeatedly proposed reducing it to seven years, arguing that this would save Medicare $4.4 billion over the next decade. Some have noted that, once the 12-year period is enshrined in the TPP, it will become significantly more difficult to change it through the US legislative process. Furthermore, imposing US standards on the 11 member countries would inevitably restrict competition at the global level, and many patient advocacy and international humanitarian organizations have argued that doing so would undermine the efforts of US global health initiatives like the Vaccine Alliance and the Global Fund to Fight AIDS, Tuberculosis and Malaria, which rely on price competition to manage program costs.
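To make the comparison of exclusivity periods concrete, here is a minimal sketch that computes, for a hypothetical biologic approved in a given year, the first year in which a biosimilar relying on the originator's data could be approved under each policy discussed above. The 2015 approval year and the helper function are illustrative assumptions, not actual product dates.

```python
# Illustrative comparison of the data exclusivity periods discussed above.
def first_reliance_year(approval_year: int, exclusivity_years: int) -> int:
    """First year FDA could approve a biosimilar that relies on the originator's data."""
    return approval_year + exclusivity_years

for label, years in [("US standard (current)", 12),
                     ("US administration budget proposal", 7),
                     ("Japan (longest among other TPP parties)", 8)]:
    print(f"{label}: biologic approved in 2015 -> reliance possible from {first_reliance_year(2015, years)}")
```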

It is unclear whether the US will be successful in its efforts. There have been reports that the issue of data exclusivity has become a significant point of contention, and the US delegation may seek to compromise on its demands. It may, for example, negotiate exceptions for the poorer countries involved in the negotiation, as the Washington Post notes. However, the details of the negotiations are largely confidential, which makes it challenging to assess the possibilities, their relative advantages, or how the US Trade Representative (which is leading the US negotiations) is balancing the need to ensure adequate incentives for innovation with the need to control drug costs and facilitate patient access to potentially life-saving therapies.

Editor's note: Elizabeth Richardson, a research associate in the Center for Health Policy, contributed to the research and writing of this post. 

       










Engaging patients: Building trust and support for safety surveillance


Event Information

June 23, 2015
9:00 AM - 3:00 PM EDT

Washington Plaza Hotel
10 Thomas Circle, NW
Washington, DC 20005

The Sentinel System is a state-of-the-art active surveillance system that relies on a distributed data network to rapidly scale analysis of health care data collected from over 178 million patients nationwide. Sentinel is an important safety surveillance tool used by the U.S. Food and Drug Administration (FDA), and its underlying distributed data infrastructure is increasingly recognized as having the potential to support the needs of diverse stakeholders, including other public health agencies, health systems, regulated industry, and the clinical research enterprise. Despite Sentinel’s importance in safety surveillance, patients are largely unaware of its public health mission and commitment to protecting patient privacy. It is therefore both timely and critical to identify opportunities to raise awareness of and build trust in Sentinel safety surveillance among patients, consumers, and the general public.

On June 23, the Center for Health Policy at Brookings, in collaboration with the FDA, hosted an expert workshop to discuss opportunities to raise awareness of the Sentinel System through improved communication with patients and consumers. Participants, including Sentinel Data Partners, patient-focused organizations (e.g., consumer advocacy groups), experts in patient privacy, ethics, and health literacy, and representatives from the FDA, explored where each stakeholder might be uniquely positioned to engage with patients and how these communications could be designed and delivered effectively. Discussions from this workshop resulted in recommendations, including a set of guiding principles, potential tools, and strategies to improve awareness of the Sentinel System and, more broadly, of the safety surveillance activities led by the FDA.






Risk evaluation and mitigation strategies (REMS): Building a framework for effective patient counseling on medication risks and benefits

Event Information

July 24, 2015
8:45 AM - 4:15 PM EDT

The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

Under the Food and Drug Administration Amendments Act (FDAAA) of 2007, the FDA has the authority to require pharmaceutical manufacturers to develop Risk Evaluation and Mitigation Strategies (REMS) for drugs or biologics that carry serious potential or known risks. Since that time, the REMS program has become an important tool in ensuring that riskier drugs are used safely, and it has allowed FDA to facilitate access to a host of drugs that may not otherwise have been approved. However, concerns have arisen regarding the effects of REMS programs on patient access to products, as well as the undue burden that the requirements place on the health care system. In response to these concerns, FDA has initiated reform efforts aimed at improving the standardization, assessment, and integration of REMS within the health care system. As part of this broader initiative, the agency is pursuing four priority projects, one of which focuses on improving provider-patient benefit-risk counseling for drugs that have a REMS attached.

Under a cooperative agreement with FDA, the Center for Health Policy at Brookings held an expert workshop on July 24, titled “Risk Evaluation and Mitigation Strategies (REMS): Building a Framework for Effective Patient Counseling on Medication Risks and Benefits”. This workshop was the first in a series of convening activities that will seek input from stakeholders across academia, industry, health systems, and patient advocacy groups, among others. Through these activities, Brookings and FDA will further develop and refine an evidence-based framework of best practices and principles that can be used to inform the development and effective use of REMS tools and processes.






Improving productivity in pharmaceutical research and development


Event Information

July 28, 2015
8:30 AM - 5:00 PM EDT

Ambassador Ball Room
Embassy Row Hotel
2015 Massachusetts Avenue
Washington, DC 20036


The role of clinical pharmacology and experimental medicine



The high failure rate of investigational compounds during drug development, especially in late stages of the clinical development process, is widely seen as a key contributor to the outsize amount of time and resources necessary to develop new drugs. Advances in clinical pharmacology and experimental medicine have the potential to rebalance these trends by providing researchers with the tools to more efficiently and systematically identify promising targets and compounds, appropriate patient populations, and adequate doses for study much earlier in development. 

On July 28, the Center for Health Policy at Brookings, in collaboration with the International Consortium for Innovation & Quality in Pharmaceutical Development and the U.S. Food and Drug Administration (FDA), hosted a public meeting to tackle these issues. Through presentations and case studies, leading experts from industry, academia, and government agencies explored the evolving role of clinical pharmacology tools in pre-clinical and clinical development, existing gaps in the application of those tools, and how emerging science could be better leveraged to improve the efficiency of drug development programs and better optimize treatments. Discussion at this event will potentially be harnessed to inform downstream guidance documents, to establish best practices for the application of emerging clinical pharmacology tools, or to support academic publications. Speakers will convene privately to discuss such downstream deliverables and key takeaways from the conference.







Bioscience will accelerate East-West convergence in the century ahead


If current economic growth trends persist, the “great divergence” between Western Europe and East and South Asia in per capita income that commenced 200 years ago will close sometime this century. Key to the closing will be greater access to technology and higher education in East and South Asia, as well as the relentless diffusion of knowledge, including in the biosciences. Advances in the biosciences are poised to contribute in a major way to English economist Thomas Malthus’s four necessities of human life–food, fiber, fuel, and building materials–as well as to human and animal health, biodiversity conservation, and environmental remediation and sustainability.

As my coauthor Leo Furcht and I recently wrote in “Divergence, Convergence, and Innovation: East-West Bioscience in an Anxious Age”, 21st century history will describe the great economic and technological convergence between East and West. It will also further entwine the economic and ecological storylines of the human experience as the vast populations of China and India strive to enter the middle class. Environmentally sustainable economic growth will require putting knowledge of life code, cellular processes, biosynthesis, and biological regeneration to practical use. That prospect is at hand because the biosciences are in the midst of their own convergence–with information technology, nanotechnology, microelectronics, materials, artificial intelligence, robotics, architecture, and design.

From William Hoffman and Leo Furcht, "The Biologist’s Imagination: Innovation in the Biosciences" (Oxford University Press, 2014)

Biomolecules, brainpower, and Malthusian limits

Products arising from molecular biology constitute a growing share of the global economy with each passing year as technologies evolve, production processes improve, and markets expand. In recent years industrial biotechnology has grown faster than the biologic drugs and agricultural biotech sectors in the U.S.

 

U.S. biotech revenue in billions of U.S. dollars.
Source: Robert Carlson, "Nature Biotechnology", In press

Industrial biotechnology employs greener and cleaner technologies to make chemicals, solvents, fuels, and materials such as biocomposites and bioplastics. Growth in this sector can weaken the link between economic growth on the one hand and environmental pollution and greenhouse gas emissions on the other. Genomics, synthetic biology and metabolic engineering are poised to accelerate growth in the design and manufacture of industrial enzymes and renewable bio-based products. East and South Asian production and consumption of industrial enzymes are on the rise as the Asian middle class expands.

Bioscience is enabling major cereal crops such as wheat, rice, and corn to adapt to a changing climate. Cereal crop yields need to grow by an estimated 70 percent by mid-century to feed the nine billion people projected to inhabit the planet by then. The challenge of feeding nine billion people without further deforestation and environmental degradation has resurrected the specter of Malthusian limits to our planet’s ecological carrying capacity. These limits are expressed in food and water shortages, forced migrations, political instability, armed conflict, abatement and cleanup activities, and health care related to pollution and climate change. Even with the powerful tools of food crop bioscience–marker-assisted selection, targeted mutation-selection, genetic modification, and others–it is highly questionable whether crop production levels can be maintained at the expected higher temperatures and with less water.

Precise genomic editing of cereal grains could equip rice, wheat, and corn with nitrogen fixation capabilities, thus reducing the need for synthetic fertilizers with their environmental and atmospheric costs. East and South Asia, facing major food production challenges, ecological limits, pollution from fertilizer use, and drought from climate change, may take the lead over the West in adopting innovative food crop technologies.

Meanwhile, hundreds of thousands of human beings of many ethnicities have had their genomes decoded over the past decade, and the number is expected to increase exponentially as sequencing technologies grow in productivity and decline in price. Genomic information, coupled with precise genomic editing and bioregenerative tools, gives us unprecedented power to shape the course of evolution, including our own.

             

Cost trend of sequencing a human-sized genome and Moore’s Law 2001 – 2015.
Source: Kris A. Wetterstrand, DNA Sequencing Costs: Data from the NHGRI Genome Sequencing Program.
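For intuition, the kind of comparison shown in the chart above can be approximated with simple exponential arithmetic. The short sketch below is purely illustrative: the starting cost, halving times, and time span are hypothetical placeholders, not values taken from the NHGRI data.

# Illustrative only: compare two exponential cost-decline curves,
# one Moore's-Law-like (halving roughly every two years) and one faster,
# sequencing-like decline. All parameters are hypothetical.

def cost_after(years: float, start_cost: float, halving_time: float) -> float:
    """Cost after `years`, assuming it halves every `halving_time` years."""
    return start_cost * 0.5 ** (years / halving_time)

START_COST = 100_000_000.0  # hypothetical starting cost, in dollars
YEARS = 14                  # hypothetical span (e.g., 2001 to 2015)

moore_like = cost_after(YEARS, START_COST, halving_time=2.0)
faster = cost_after(YEARS, START_COST, halving_time=1.0)

print(f"Moore's-Law-style cost after {YEARS} years: ${moore_like:,.0f}")
print(f"Faster, sequencing-like cost after {YEARS} years: ${faster:,.0f}")

Even a modestly shorter halving time compounds into orders-of-magnitude lower costs over a decade, which is the basic dynamic the figure illustrates.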

The practice of technological innovation in the industrial era–the systematic application of ideas, inventions and technology to markets, trade, and social systems–is now being joined with the code of life, DNA, and the basic unit of life, the cell. Even as the economic gap between East and West narrows, no other convergence has such profound implications for our future and the future health of living systems and ecosystems. That makes the task for policymakers a daunting one.

Authors

  • William Hoffman
Image Source: © Rebecca Cook / Reuters
       




academic and careers

Defining and measuring innovation in a changing biomedical landscape

Event Information

October 14, 2015
9:00 AM - 2:30 PM EDT

Washington Plaza Hotel
10 Thomas Circle, NW
Washington, DC 20005

The biomedical innovation ecosystem continues to evolve and enhance the processes by which treatments are developed and delivered to patients. Given this changing landscape, it is imperative that all stakeholders work to ensure that development programs, regulatory practices, and the policies that enable them are aligned on a common set of goals and are achieving them. This will require a thorough reexamination of our understanding of biomedical innovation – and the ways in which we seek to incentivize it – in order to more effectively bridge research and analysis of the process itself with the science and policy underpinning it.

Traditional research into the efficiency and effectiveness of drug development programs has tended to focus on the ‘inputs’ and process trends in product development, quantifying innovation in discrete units. At the opposite end of the research spectrum are potential measures that could be categorized as “value” or “outcomes” metrics. Identifying the appropriate measures across this spectrum – from inputs and technological progress through outcomes and value – and determining how such metrics can complement one another to improve the innovation process was the focus of this expert workshop. On October 14, the Center for Health Policy at Brookings, under a cooperative agreement with the U.S. Food and Drug Administration, convened a roundtable discussion that engaged key stakeholders from throughout the innovation ecosystem to explore the factors and characteristics that could improve our understanding of what constitutes modern “innovation” and how best to track its progress.

Event Materials

       




academic and careers

Promoting continuous manufacturing in the pharmaceutical sector


Event Information

October 19, 2015
9:00 AM - 4:00 PM EDT

The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

Over the past decade, drug shortages and product recalls in the U.S. have occurred at unprecedented rates, limiting patient access to critical medicines and undermining health care. A majority of these shortages and recalls have been due to manufacturing quality issues. In response to these problems, and as part of its ongoing efforts to ensure a continuous supply of high-quality pharmaceuticals in the U.S., the U.S. Food and Drug Administration (FDA) is pursuing a range of strategies designed to improve the flexibility, reliability, and quality of pharmaceutical manufacturing. Among these strategies is the promotion of new manufacturing technologies, including continuous manufacturing. Continuous manufacturing offers several important advantages over current approaches to manufacturing and has the potential to significantly mitigate the risks of quality failures. At present, however, these technologies and processes are not widely used by the pharmaceutical industry, and there remain a number of barriers to their broader adoption. In collaboration with a range of stakeholders, FDA is currently exploring ways in which it can help to address these barriers and facilitate the uptake of new manufacturing technologies.

Under a cooperative agreement with FDA, the Center for Health Policy at Brookings held a workshop on October 19 entitled “Promoting Continuous Manufacturing in the Pharmaceutical Sector.” This workshop provided an opportunity for industry, academia, and government partners to identify the major barriers to the adoption of continuous manufacturing, discuss regulatory policies and strategies that could help to address those barriers, and explore approaches to improving public and private sector alignment and collaboration to promote the adoption of continuous manufacturing.

Event Materials

       




academic and careers

Facilitating biomarker development and qualification: Strategies for prioritization, data-sharing, and stakeholder collaboration


Event Information

October 27, 2015
9:00 AM - 5:00 PM EDT

Embassy Suites Convention Center
900 10th St NW
Washington, DC 20001

Strategies for facilitating biomarker development

The emerging field of precision medicine continues to offer hope for improving patient outcomes and accelerating the development of innovative and effective therapies that are tailored to the unique characteristics of each patient. To date, however, progress in the development of precision medicines has been limited due to a lack of reliable biomarkers for many diseases. Biomarkers include any defined characteristic—ranging from blood pressure to gene mutations—that can be used to measure normal biological processes, disease processes, or responses to an exposure or intervention. They can be extremely powerful tools for guiding decision-making in both drug development and clinical practice, but developing enough scientific evidence to support their use requires substantial time and resources, and there are many scientific, regulatory, and logistical challenges that impede progress in this area.

On October 27, 2015, the Center for Health Policy at The Brookings Institution convened an expert workshop that included leaders from government, industry, academia, and patient advocacy groups to identify and discuss strategies for addressing these challenges. Discussion focused on several key areas: the development of a universal language for biomarker development, strategies for increasing clarity on the various pathways for biomarker development and regulatory acceptance, and approaches to improving collaboration and alignment among the various groups involved in biomarker development, including strategies for increasing data standardization and sharing. The workshop generated numerous policy recommendations for a more cohesive national plan of action to advance precision medicine.


Event Materials

       




academic and careers

Gene editing: New challenges, old lessons


It has been hailed as the most significant discovery in biology since polymerase chain reaction allowed for the mass replication of DNA samples. CRISPR-Cas9 is an inexpensive and easy-to-use gene-editing method that promises applications ranging from medicine to industrial agriculture to biofuels. Currently, applications to treat leukemia, HIV, and cancer are under experimental development.1 However, new technical solutions tend to be fraught with old problems, and in this case, ethical and legal questions loom large over the future.

Disagreements on ethics

The uptake of this method has been so fast that many scientists have started to worry about inadequate regulation of research and its unanticipated consequences.2 Consider, for instance, the disagreement over research on human germ cells (eggs, sperm, or embryos), where an edited gene is passed on to offspring. Since the emergence of bioengineering applications in the 1970s, the scientific community has eschewed experiments to alter the human germline, and some governments have even banned them.3 Regulatory regimes are, unsurprisingly, not uniform: China, for instance, bans the implantation of genetically modified embryos in women but not research with embryos.

Last year, a group of Chinese researchers conducted gene-editing experiments on non-viable human zygotes (fertilized eggs) using CRISPR.4 News that these experiments were underway prompted a group of leading U.S. geneticists to meet in March 2015 in Napa, California, to begin a serious consideration of the ethical and legal dimensions of CRISPR; the group called for a moratorium on research that edits genes in the human germline.5 Disregarding that call, the Chinese researchers published their results later in the year, largely reporting a failure to precisely edit targeted genes without accidentally editing non-targets. CRISPR is not yet sufficiently precise.

CRISPR reignited an old debate on human germline research, a debate that was one of the central motivations (though surely not the only one) for an international summit on gene editing hosted by the U.S. National Academies of Sciences, the Chinese Academy of Sciences, and the U.K.'s Royal Society in December 2015. About 500 scientists, as well as experts in the legal and ethical aspects of bioengineering, attended.6 Rather than producing consensus, the meeting highlighted significant contrasts among participants about the ethics of inquiry and, more generally, about the governance of science. Illustrative of these contrasts are the views of prominent geneticists Francis Collins, Director of the National Institutes of Health, and George Church, professor of genetics at Harvard. Collins argues that the “balance of the debate leans overwhelmingly against human germline engineering.” Church, while a signatory of the moratorium called for by the Napa group, has nevertheless suggested reasons why CRISPR is shifting the balance in favor of lifting the ban on human germline experiments.7

The desire to speed up discovery of cures for heritable diseases is laudable. But tinkering with the human germline is truly a human concern and cannot be presumed to be the exclusive jurisdiction of scientists, clinicians, or patients. All members of society have a stake in the evolution of CRISPR and must be part of the conversation about what kind of research should be permitted, what should be discouraged, and what should be disallowed. To relegate lay citizens to reacting to CRISPR applications—i.e. to voting with their wallets once applications hit the market—is to reduce their citizenship to consumer rights, and public participation to purchasing power.8 Yet neither the NAS summit nor the earlier Napa meeting sought to solicit the perspectives of citizens, groups, and associations other than those already tuned in to the CRISPR debates.9

The scientific community has a bond to the larger society in which it operates. In its most basic form, this is the bond of the scientist to her national community: the notion that the scientist is a citizen of society before she is a denizen of science. This bond entails liberties and responsibilities that transcend the ethos and telos of science and, consequently, subordinates science to the social compact. It is worth recalling this old lesson from the history of science as we continue the public debate on gene editing. Scientists are free to hold specific moral views and prescriptions about the proper conduct of research and the ethical limits of that conduct, but they are not free to exclude the rest of society from weighing in on the debate with their own values and moral imaginations about what should be permitted and what should be banned in research. The governance of CRISPR is a question of collective choice that must be answered by means of democratic deliberation and, when irreconcilable differences arise, by the due process of democratic institutions.

Patent disputes

More heated than the ethical debate is the legal battle over key CRISPR patents, which has embroiled prominent scientists involved in perfecting the method. The U.S. Patent and Trademark Office initiated a formal contestation process, called an interference, in March 2016 to adjudicate the dispute. The process is likely to take years, and appeals are expected to extend it further. Challenges are also expected to patents filed internationally, including those filed with the European Patent Office.

To put this dispute in perspective, it is instructive to consider the history of CRISPR authored by one of the celebrities of gene science, Eric Lander.10 The article ignited a controversy because it understated the role of one party to the patent dispute (Jennifer Doudna and Emmanuelle Charpentier) while casting the other party (Feng Zhang, who is affiliated with Lander’s Broad Institute) as the one who truly brought the technology to culmination. Some gene scientists accused Lander of tendentious inaccuracies and of trying to spin the story in a manner that favors the legal argument (and economic interest) of Zhang.

Ironically, the contentious article could be read as an argument against any particular claim to the CRISPR patents, as it implicitly questions the fairness of granting exclusive rights to an invention. Lander tells a genesis story of CRISPR that extends over two decades and several countries, in which the protagonists are the many researchers who contributed to the cumulative knowledge behind the ongoing development of the method. The very title of Lander’s piece, “The Heroes of CRISPR,” highlights that the technology has not one author but many.

A patent is a legal instrument that recognizes certain rights of the patent holder (individual, group, or organization) and at the same time denies those rights to everyone else, including the other contributors to the invention. Patent rights are thus arbitrary in the light of history. I am not suggesting that the bureaucratic rules for granting a patent or determining its validity are arbitrary; they have logical rationales anchored in practice and precedent. I am suggesting that, in principle, any exclusive assignment of rights that does not include the entire community responsible for the invention is arbitrary and thus unfair. The history of CRISPR highlights this old lesson from the history of technology: an invention does not belong to its patent holder, except in a court of law.

Some scientists may be willing to accept with resignation the unfair distribution of recognition granted by patents (or prizes like the Nobel) and find consolation in the fact that their contribution to science has real effects on people’s lives as it materializes in things like new therapies and drugs. Yet patents are also instrumental in distributing those real effects quite unevenly. Patents create monopolies that, by selling their innovations at high prices, benefit only those who can afford them. The usual refrain to this charge is that without the promise of high profits, there would be no investment in innovation and no advances in life-saving medicine. What’s more, the biotech industry reminds us that start-ups will secure capital only if they hold exclusive rights to the technologies they are developing. Yet Editas Medicine, a biotech start-up that seeks to exploit commercial applications of CRISPR (Zhang is a stakeholder), was able to raise $94 million in its February 2016 initial public offering. That some of Editas’ key patents are disputed and were entering interference at the USPTO was patently not a deterrent for those investors.

Towards a CRISPR democratic debate

Neither the governance of gene-editing research nor the management of CRISPR patents should be the exclusive responsibility of scientists. Yet, they do enjoy an advantage in public deliberations on gene editing that is derived from their technical competence and from the authority ascribed to them by society. They can use this advantage to close the public debate and monopolize its terms, or they could turn it into stewardship of a truly democratic debate about CRISPR.

The latter choice can benefit from three steps. A first step would be openness: a public willingness to consider and internalize public values that are not easily reconciled with research values. A second step would be self-restraint: publicly affirming a self-imposed ban on research with the human germline and discouraging research practices that are contrary to received norms of prudence. A third step would be a public service orientation in the use of patents: scientists should press their universities, which hold title to their inventions, to preserve some degree of influence over research commercialization so that the dissemination of and access to innovations are consonant with the noble aspirations of science and the public service mission of the university. Openness, self-restraint, and an orientation to service will go a long way toward making CRISPR a true servant of society and an instrument of democracy.


Other reading: See media coverage compiled by the National Academies of Sciences.

1. Nature: an authoritative and accessible primer. A more technical description of applications in Hsu, P. D. et al. 2014. Cell, 157(6): 1262–1278.

2. For instance, see this reflection in Science, and this in Nature.

3. More about ethical concerns on gene editing here: http://www.geneticsandsociety.org/article.php?id=8711

4. Liang, P. et al. 2015. Protein & Cell, 6, 363–372.

5. Science: A prudent path forward for genomic engineering and germline gene modification.

6. Nature: NAS Gene Editing Summit.

7. While Collins and Church participated in the summit, their views quoted here are from StatNews.com: A debate: Should we edit the human germline. See also Sciencenews.org: Editing human germline cells sparks ethics debate.

8. Hurlbut, J. B. 2015. Limits of Responsibility, Hastings Center Report, 45(5): 11–14.

9. This point is forcefully made by Sheila Jasanoff and colleagues: CRISPR Democracy, 2015. Issues in S&T, 22(1).

10. Lander, E. 2016. The Heroes of CRISPR. Cell, 164(1–2): 18–28.

Image Source: © Robert Pratta / Reuters
       




academic and careers

Punching Below Its Weight: The U.S. Government Approach to Education in the Developing World

Summary

Global education plays an important role in advancing U.S. foreign policy objectives. In a recent speech, Secretary of State Hillary Clinton highlighted education, along with health, agriculture, security, and local governance, as the core areas for U.S. international development investment. She emphasized the importance of education, particularly of girls and youth, in improving global stability, speeding economic growth, and improving global health, all of which advance U.S. interests in the world.

But how effective has the U.S. government been in supporting global education? Unfortunately, its many good education activities and programs are not leveraged for maximum impact on the ground, especially in situations of armed conflict and state fragility. Broad challenges of U.S. foreign assistance—for example, fragmentation across multiple agencies, lack of policy coherence, and diminished multilateral engagement—affect its work in education. Fortunately, some of the core strengths of U.S. assistance have an impact as well, specifically the large amount of resources (in absolute, if not relative, terms) devoted to education and the vast breadth and depth of American academic, philanthropic and NGO partners engaged in pioneering work on education in the developing world.

This report analyzes the effectiveness of U.S. government education work specifically in relation to conflict-affected and fragile states. Findings across five domains—global reach, resources, technical expertise, policy, and multilateral partnerships—show that U.S. education aid falls critically short of what it is capable of achieving. The U.S. government has substantial strengths in this area, especially in global reach, resources, and technical expertise, demonstrating a real comparative advantage in the field of education in situations of conflict and fragility. However, its fragmented policy across agencies and its limited multilateral engagement prevent it from maximizing its strengths, leaving it punching below its weight on this important issue. In this sense, the U.S. government is a classic underachiever, failing to efficiently deploy its many capabilities and potential for maximum impact.

There has never been a better time for looking at the aid-effectiveness of U.S. government education work. The Obama administration is bringing increased focus on the Paris Principles for Aid Effectiveness to its development initiatives. The U.S. Congress is actively engaged with pending legislative action to modernize foreign assistance and improve U.S. support for universal education. Two major reviews of foreign assistance are underway: the Quadrennial Diplomacy and Development Review led by the Department of State and USAID, and the Presidential Study Directive on U.S. Global Development Policy led by the White House.

Questions about foreign assistance reform asked in these two reviews can be applied to the education sector. For example, how can the U.S. government improve its education assistance by using a “whole-of-government” approach, by focusing on comparative advantages and strengths, by improving coordination, and by increasing multilateral engagement?

Careful analysis and answers to these questions can help propel the U.S. from its current position as an underachiever to being a leader in global education, specifically in contexts of conflict and state fragility.

This report makes nine specific recommendations, many of which could be achieved without any substantial increase in funding, that would enable the U.S. government to greatly increase the effectiveness of its education aid to populations living in contexts of conflict and state fragility.

Downloads

      
 
 




academic and careers

Beyond Madrasas: Assessing the Links Between Education and Militancy in Pakistan

Overview

Increasing educational attainment is likely to reduce conflict risk, especially in countries like Pakistan that have very low levels of primary and secondary school enrollment. Education quality, relevance and content also have a role to play in mitigating violence. Education reform must therefore be a higher priority for all stakeholders interested in a more peaceful and stable Pakistan. Debate within the country about education reform should not be left only to education policymakers and experts, but ought to figure front and center in national dialogues about how to foster security. The price of ignoring Pakistan’s education challenges is simply too great in a country where half the population is under the age of 17.

There has been much debate concerning the roots of militancy in Pakistan, and multiple factors clearly come into play. One risk factor that has attracted much attention both inside Pakistan and abroad is the dismal state of the national education sector. Despite recent progress, current school attainment and literacy levels remain strikingly low, as does education spending. The Pakistani education sector, like much of the country’s public infrastructure, has been in decline over recent decades. The question of how limited access to quality education may contribute to militancy in Pakistan is more salient now than ever, given the rising national and international security implications of continued violence.

The second half of 2009 witnessed not only the Pakistani government stepping up action against insurgents but also the release of a new Pakistan National Education Policy that aspires to far-reaching and important reforms, including a commitment to increase investment in education—from 2 to 7 percent of gross domestic product. Hundreds of millions of dollars in international education aid have been newly pledged by donor countries. This renewed emphasis on education represents a substantial opportunity to seek to improve security in Pakistan and potentially also globally over the medium to long term. Policymakers both inside and outside Pakistan should give careful consideration to whether and how education investments can promote peace and stability, taking into account what we now know about the state of the education sector and the roots of militancy.

This report takes a fresh look at the connection between schools, including but not limited to Pakistan’s religious seminaries, known as “madrasas,” and the rising militancy across the country. Poor school performance across Pakistan would seem an obvious area of inquiry as a risk factor for conflict. Yet to date, the focus has been almost exclusively on madrasas and their role in the mounting violence. Outside Pakistan, relatively little attention has been given to whether and how the education sector as a whole may be fueling violence, over and above the role of the minority of militant madrasas.

Downloads

      
 
 




academic and careers

Beyond Consultation: Civil Society and the Governance of International Institutions


EXECUTIVE SUMMARY

In the face of unprecedented global challenges, effective global cooperation increasingly requires a partnership between state and non-state actors. Many international institutions now involve non-state actors in arenas that were once the exclusive province of states. The paper analyzes the evolution of civil society participation in the governance of international institutions and highlights the shift from a model based on consultation toward a model of multistakeholder governance. The paper argues that consultation is a less effective approach to involving civil society in achieving the mission of these institutions and suggests that more robust forms of multi-stakeholder participation by civil society can foster greater accountability and better deliberation. It analyzes competing claims about the desirability of including civil society in the governance of international institutions and suggests that an emerging constituency model can promote more effective multi-stakeholder governance. Constituency structures are already central features of several global health institutions and are now being contemplated by institutions in other sectors, including by the Education for All—Fast Track Initiative.

Multi-stakeholder approaches to governance are likely to become more widespread in the years to come in order to harness the contributions of a plethora of private actors engaged in responding to a wide range of global challenges. Even with enhanced cooperation between states, it is increasingly clear that non-state actors are essential to responding to key challenges across a wide range of sectors. Although it is possible to imagine expanded cooperation between state and non-state actors without opening up the governance structures of international institutions, it is less likely that these institutions will be successful in the long run without a shift toward greater multi-stakeholder involvement in the institutions themselves.

Downloads

Authors

Image Source: © Reuters Photographer / Reuters
      
 
 




academic and careers

A Global Education Challenge: Harnessing Corporate Philanthropy to Educate the World's Poor


Despite the undeniable benefits of education to society, the educational needs, particularly in the world’s poorest countries, remain strikingly great. There are more than 67 million children not enrolled in primary school around the world, millions of children who are enrolled in school but not really learning, and too few young people are advancing to secondary school (van der Gaag and Adams 2010). Consider, for instance, the number of children unable to read a single word of connected text at the end of grade two: more than 90 percent in Mali, more than 50 percent in Uganda, and nearly 33 percent in Honduras (USAID n.d.).

With more young people aged 12 to 24 passing through the global education system and looking for opportunities for economic and civic participation than ever before, the education community is at a crossroads. Of the 1.5 billion young people in this age group, 1.3 billion live in developing countries (World Bank 2007). The global community set the goal of achieving universal primary education by 2015 but has failed to mobilize the necessary resources; UNESCO estimates that $16.2 billion in external resources will be needed to reach this goal.

Read the full report »

Read the executive summary »

Results from this report were presented at an April 6 Center on Universal Education event at the Brookings Institution.

Learn more about the launch event »

Downloads

Image Source: © Oswaldo Rivas / Reuters
      
 
 




academic and careers

The Power of Circumstance: A New Approach to Measuring Education Inequality


INTRODUCTION

In recent years, there has been a resurgence of interest in the issue of inequality. Part of this resurgence can be traced to new evidence of persistent and widening wealth gaps. Average incomes may be converging globally as a result of high growth in emerging markets, stronger growth in many poor countries, and slow growth in rich countries. However, the evidence also shows that within countries a parallel process of income divergence, marginalization and rising inequality is also taking place. Put differently, the rising tide of global prosperity is not lifting all boats.

Much of the international debate on inequality focuses on the distribution of income across and within countries. Other dimensions of inequality have received less attention. This is unfortunate. Amartya Sen has described development as “a process of expanding the real freedoms that people enjoy” by building human capabilities or their capacity to lead the kind of life they value. Income is a means to that end but it is a limited indicator of well-being. Moreover, a person’s income reflects not just personal choice but also their opportunities for improving health, literacy, political participation and other areas. Education is one of the most basic building blocks for the “real freedoms” that Sen describes. People denied the chance to develop their potential through education face diminished prospects and more limited opportunities in areas ranging from health and nutrition, to employment, and participation in political processes. In other words, disparities in education are powerfully connected to wider disparities, including international and intra-country income inequalities. This is why education has been identified as one of the most critical factors in breaking down the disadvantages and social inequalities that are limiting progress toward the United Nations’ Millennium Development Goals (MDGs)—development targets adopted by the international community for 2015.

Understanding patterns of educational inequality is critical at many levels. Ethical considerations are of paramount importance. Most people would accept that children’s educational achievements should not be dictated by the wealth of their parents, their gender, their race or their ethnicity. Disparities in educational opportunities are not just inequalities in a technical sense; they are also fundamental inequities—they are unjust and unfair. In an influential paper, John Roemer differentiated between inequalities that reflect factors such as luck, effort and reasonable reward, and those attributable to circumstances that limit opportunity (Roemer 1988).1 While the dividing line may often be blurred, that distinction has an intuitive appeal. Most people have a high level of aversion to restrictions on what people—especially children—are able to achieve as a result of disparities and inherited disadvantages that limit access to education, nutrition or health care (Wagstaff, 2002). There is a wide body of opinion across political science, philosophy and economics that equal opportunity—as distinct from equality of outcomes—is a benchmark of egalitarian social justice. The theories of distributive justice associated with thinkers such as Amartya Sen, John Rawls, Ronald Dworkin and John Roemer argue, admittedly from very different perspectives, that public policy should aim at equalizing opportunity to counteract disadvantages associated with exogenous circumstances over which individuals or social groups have no control. Given the role of education as a potential leveler of opportunity, it is a natural focal point for redistributive social justice.

Considerations of economic efficiency reinforce the ethical case for equalizing educational opportunities. Education is a powerful driver of productivity, economic growth, and innovation. Econometric modeling for both rich and poor countries suggests that an increase in learning achievement (as measured by test score data) of one standard deviation is associated on average with an increase in the long-run growth rate of around 2 percent per capita annually (Hanushek and Wößmann, 2010; Hanushek, 2009; Hanushek and Wößmann, 2008). Such evidence points to the critical role of education and learning in developing a skilled workforce. Countries in which large sections of the population are denied a quality education because of factors linked to parental wealth, gender, ethnicity, language and other markers for disadvantage are not just limiting a fundamental human right. They are also wasting a productive resource and weakening the human capital of the economy.
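To make the scale of that association concrete, the short calculation below compounds a hypothetical two-percentage-point difference in per capita growth over a few decades. The baseline growth rate and the horizon are assumptions chosen purely for illustration, not figures taken from the studies cited above.

# Illustrative compounding of a hypothetical growth differential.
# The baseline rate, the premium, and the horizon are assumptions only.

baseline_growth = 0.02   # assumed baseline per capita growth of 2% per year
learning_premium = 0.02  # ~2 percentage points linked to a +1 SD gain in test scores
years = 40               # assumed horizon

income_baseline = (1 + baseline_growth) ** years
income_boosted = (1 + baseline_growth + learning_premium) ** years

print(f"Relative income after {years} years at baseline growth: {income_baseline:.2f}x")
print(f"Relative income after {years} years with the learning premium: {income_boosted:.2f}x")
print(f"Ratio: {income_boosted / income_baseline:.2f}x")

Under these assumptions, the economy with the sustained learning premium ends up with per capita income more than twice that of the baseline after four decades, which is why even modest-sounding growth effects matter so much over the long run.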

International development commitments provide another rationale for equalizing educational opportunities. This is for two reasons. First, the commitments envisage education for all and achievement of universal primary education by 2015. Second, there is mounting evidence that inequality is acting as a brake on progress toward the 2015 goals. Since around 2005, the rate of decline in the out-of-school population has slowed dramatically. Based on current trends, there may be more children out of school in 2015 than there were in 2009. Caution has to be exercised in interpreting short-run trends, especially given the weakness of data. However, the past three editions of the UNESCO Education for All Global Monitoring Report (GMR) have highlighted the role of inequality in contributing to the slowdown with governments struggling to reach populations that face deeply entrenched disadvantages (UNESCO, 2008, 2010, 2011). Therefore, picking up the pace toward the 2015 goals requires a strengthened focus on equity and strategies that target the most marginalized groups and regions of the world (Sumner and Tiwari, 2010; UN-DESA, 2009; UNESCO, 2010). It should be added that disparities in education relate not just to access, but also to learning achievement levels.

Accelerated progress in education would generate wider benefits for the MDGs. Most of the world’s poorest countries are off-track for the 2015 MDG target of halving income poverty and a long way from reaching the targets on child survival, maternal health and nutrition. Changing this picture will require policy interventions at many levels. However, there is overwhelming evidence showing that education—especially of young girls and women—can act as a potent catalyst for change. On one estimate, if all of sub-Saharan Africa’s mothers attained at least some secondary education, there would be 1.8 million fewer child deaths in the region each year. Thus while education may lack the “quick fix” appeal of vaccinations, it can powerfully reinforce health policy interventions.

Downloads

Authors

      
 
 




academic and careers

Financing for a Fairer, More Prosperous Kenya: A Review of the Public Spending Challenges and Options for Selected Arid and Semi-Arid Counties


INTRODUCTION

In August, 2010 the government of Kenya adopted a new constitution. This followed a referendum in which an overwhelming majority of Kenyans voted for change. The decisive impetus for reform came from the widespread violence and political crisis that followed the 2007 election. While claims of electoral fraud provided the immediate catalyst for violence, the deeper causes were to be found in the interaction of a highly centralized ‘winner-take-all’ political system with deep social disparities based in part on group identity (Hanson 2008).

Provisions for equity figure prominently in the new constitution. Backed by a bill of rights that opens the door to legal enforcement, citizenship rights have been strengthened in many areas, including access to basic services. ‘Equitable sharing’ has been introduced as a guiding principle for public spending. National and devolved governments are now constitutionally required to redress social disparities, target disadvantaged areas and provide affirmative action for marginalized groups.

Translating these provisions into tangible outcomes will not be straightforward. Equity is a principle that would be readily endorsed by most policymakers in Kenya and Kenya’s citizens have provided their own endorsement through the referendum. However, there is an ongoing debate over what the commitment to equity means in practice, as well as over the pace and direction of reform. Much of that debate has centered on the constitutional injunction requiring ‘equitable sharing’ in public spending.

There are compelling grounds for a strengthened focus on equity in Kenya. In recent years, the country has maintained a respectable, if less than spectacular, record on economic growth. Social indicators are also on an upward trend. On most measures of human development, Kenya registers average outcomes considerably above those for sub-Saharan Africa as a region. Yet the national average masks extreme disparities—and the benefits of increased prosperity have been unequally shared. Some regions and social groups face levels of deprivation that rank alongside the worst in Africa. Moreover, the deep fault lines running through society are widely perceived as a source of injustice and potential political instability.

High levels of inequality in Kenya raise wider concerns. There has been a tendency in domestic debates to see ‘equitable sharing’ as a guiding principle for social justice, rather than as a condition for accelerated growth and enhanced economic efficiency. Yet international evidence strongly suggests that extreme inequality—especially in opportunities for education—is profoundly damaging for economic growth. It follows that redistributive public spending has the potential to support growth.

The current paper focuses on a group of 12 counties located in Kenya’s Arid and Semi-Arid Lands (ASALs). They are among the most disadvantaged in the country. Most are characterized by high levels of income poverty, chronic food insecurity and acute deprivation across a wide range of social indicators.

Nowhere is the deprivation starker than in education. The ASAL counties account for a disproportionately large share of Kenya’s out-of-school children, pointing to problems in access and school retention. Gender disparities in education are among the widest in the country. Learning outcomes for the small number of children who get through primary school are for the most part abysmal, even by the generally low national average standards.

Unequal public spending patterns have played no small part in creating the disparities that separate the ASAL counties from the rest of Kenya—and ‘equitable sharing’ could play a role in closing the gap. But what would a more equitable approach to public spending look like in practice?

This paper addresses that question. It looks in some detail at education for two reasons. First, good quality education is itself a powerful motor of enhanced equity. It has the potential to equip children and youth with the skills and competencies that they need to break out of cycles of poverty and to participate more fully in national prosperity. If Kenya is to embark on a more equitable pattern of development, there are strong grounds for prioritizing the creation of more equal opportunities in education. Second, the education sector illustrates many of the wider challenges and debates that Kenya’s policymakers will have to address as they seek to translate constitutional provisions into public spending strategies. In particular, it highlights the importance of weighting for indicators that reflect need in designing formulae for budget allocations.

The paper is organized as follows. Part 1 provides an overview of the approach to equity enshrined in the constitution. While the spirit of the constitution is unequivocal, the letter is open to a vast array of interpretations. We briefly explore the implications of a range of approaches. Our broad conclusion is that, while Kenya clearly needs to avoid public spending reforms that jeopardize service delivery in wealthier counties, redistributive measures are justified on the grounds of efficiency and equity. Although this paper focuses principally on basic services, we caution against approaches that treat equity as a matter of social sector financing to the exclusion of growth-oriented productive investment.

Part 2 provides an analysis of some key indicators on poverty, health and nutrition. Drawing on household expenditure data, the report locates the 12 ASAL counties in the national league table for the incidence and depth of poverty. Data on health outcomes and access to basic services provide another indicator of the state of human development. While there are some marked variations across counties and indicators, most of the 12 counties register levels of deprivation in poverty and basic health far in excess of those found in other areas.

Part 3 shifts the focus to education. Over the past decade, Kenya has made considerable progress in improving access to basic education. Enrollment rates in primary education have increased sharply since the elimination of school fees in 2003. Transition rates to secondary school are also rising. The record on learning achievement is less impressive. While Kenya lacks a comprehensive national learning assessment, survey evidence points to systemic problems in education quality. In both access and learning, children in the ASAL counties—especially female children—are at a considerable disadvantage. After setting out the national picture, the paper explores the distinctive problems facing these counties.

In Part 4 we look beyond Kenya to wider international experience. Many countries have grappled with the challenge of reducing disparities between less-favored and more-favored regions. There are no blueprints on offer. However, there are some useful lessons and guidelines that may be of some relevance to the policy debate in Kenya. The experience of South Africa may be particularly instructive given the weight attached to equity in the post-apartheid constitution.

Part 5 of the paper explores a range of approaches to financial allocations. Converting constitutional principle into operational practice will require the development of formulae-based approaches. From an equitable financing perspective there is no perfect model. Any formula that is adopted will involve trade-offs between different goals. Policymakers have to determine what weight to attach to different dimensions of equity (for example, gender, income, education and health), the time frame for achieving stated policy goals, and whether to frame targets in terms of outcomes or inputs. These questions go beyond devolved financing. The Kenyan constitution is unequivocal in stipulating that the ‘equitable sharing’ provision applies to all public spending. We therefore undertake a series of formula-based exercises illustrating the allocation patterns that would emerge under different formulae, with specific reference to the 12 ASAL focus counties and to education.
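As a rough illustration of what such a formula-based exercise involves, the sketch below allocates a notional budget across counties in proportion to a weighted index of need indicators. The county names, indicator values, weights, and budget are entirely hypothetical and are not drawn from the analysis in this paper.

# Hypothetical sketch of a needs-weighted allocation formula.
# Counties, indicator values, weights, and the budget are illustrative only.

# Need indicators per county, each scaled 0-1 (higher = greater need).
counties = {
    "County A": {"poverty": 0.80, "out_of_school": 0.70, "health_gap": 0.60},
    "County B": {"poverty": 0.40, "out_of_school": 0.30, "health_gap": 0.50},
    "County C": {"poverty": 0.20, "out_of_school": 0.10, "health_gap": 0.20},
}

# Policy weights on each dimension of need (a choice for policymakers).
weights = {"poverty": 0.5, "out_of_school": 0.3, "health_gap": 0.2}

total_budget = 1_000_000  # notional budget

# Composite need index per county, then allocate the budget proportionally.
need_index = {
    name: sum(weights[k] * v for k, v in indicators.items())
    for name, indicators in counties.items()
}
total_need = sum(need_index.values())
allocation = {name: total_budget * idx / total_need for name, idx in need_index.items()}

for name, amount in allocation.items():
    print(f"{name}: {amount:,.0f}")

Changing the weights, adding a per-capita base allocation, or framing the indicators in terms of outcomes rather than inputs would shift the resulting pattern substantially, which is precisely the trade-off the formula-based exercises in Part 5 are designed to expose.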

Downloads

Authors

Image Source: © Thomas Mukoya / Reuters
      
 
 




academic and careers

Learning First: A Research Agenda for Improving Learning in Low-Income Countries


EXECUTIVE SUMMARY

Parents, educators, government ministers and policymakers in all contexts and countries around the world are concerned with learning and how to improve it. There are many reasons for this, but none is more important than the fact that learning is at the heart of success at the individual, community and global levels. Learning First is the title of this report, with the strong implication that learning should be the foremost goal of education policies worldwide.

The present review seeks not only to explain why this is the case but also focuses on what we need to know—that is, what research is needed—in order to improve learning in the decades to come, particularly among those children most in need. This question is addressed in the following six sections.

  1. Learning Goals and Research. The first section begins with a historical synopsis of international education goals put forward in 1990 at the World Conference on Education for All in Jomtien (Thailand), in 2000 at the Education for All conference in Dakar, and later in 2000 as a part of the UN Millennium Development Goals for 2015. In 2011, the Center for Universal Education at the Brookings Institution published A Global Compact on Learning: Taking Action on Education in Developing Countries, which stated that there is a “global learning crisis—which affects children and youth who are out of school with limited learning opportunities and those who are in school but not learning the skills they need for their futures.” The present review of learning research in low-income countries follows from that report. The overall purpose is to explore the most pressing learning issues today that require further research attention in the years to come.
     
  2. Learning Definitions and Contexts. This section reviews how the field of education has defined learning over the years. Here, learning is defined as a modification of behavior due to experience—such as in knowledge, skills, attitudes and values. Three main principles of effective learning are suggested: individual active involvement, social participation, and meaningful engagement. As a way to emphasize the importance of learning contexts, three individual stories—Illa, a four-year-old Quechua-speaking girl in Peru; Pawan, an eight-year-old primary school student in urban India; and Rachida, a young illiterate woman in rural Morocco—are provided in order to better explain the importance of learning as a culturally specific phenomenon. These stories help to illustrate a more general learning framework, encompassing the relationship between two dimensions of learning—its processes and contexts. A discussion follows concerning the need to disaggregate learners and their learning contexts—between countries and within countries—as a way to overcome frequent and simplistic generalizations about how the “average” child learns.
     
  3. Global Change and the Contexts of Learning. This section considers the issue of global change on how learning and learning contexts are being transformed around the world. For example, researchers need to pay more attention to the impact of migration on children’s learning and on educational systems more broadly. In each instance of translocation, children confront the challenges of adapting to a new environment that may include different languages, dialects or cultures within the nonformal learning contexts of daily life. Similarly, in formal education contexts, student migrants have to cope with contrasts in culture, lifestyle and language of schooling, and demonstrate skills and achievement that may vary dramatically with their culture of origin. Other changes due to globalization include increased multilingualism in schools, growing overcrowding in classrooms, inability to keep up with teacher training, changes in intergenerational learning, and the growing importance of 21st-century skills. Based on these observations, it is suggested that learning contexts and needs should be understood as a shifting target.

Downloads

Authors

  • Daniel A. Wagner
  • Katie M. Murphy
  • Haley De Korne
Image Source: © Soe Zeya Tun / Reuters
      
 
 




academic and careers

The Education Link: Why Learning is Central to the Post-2015 Global Development Agenda


INTRODUCTION

With fewer than three years until the planned end-date of the United Nations Millennium Development Goals (MDGs), attention is rapidly turning to what will follow. The elaboration of the next global development agenda is a complex, multi-pronged process that is academic, political and practical, involving experts from a myriad of social and economic sectors and representing a cross-section of constituencies. While the formal U.N. process is still in the early stages, the ongoing discourse (predominantly occurring in the global north, but not exclusively) has introduced several potential frameworks for this agenda. This paper describes the leading frameworks proposed for the post-2015 global development agenda and discusses how education and learning fit within each of those frameworks. While many within the education community are working to develop a cohesive movement to advance an “access plus learning” agenda, it remains equally important to engage proactively with the broader development community to ensure that education fits within the agreed upon overarching organizing framework.

The frameworks described below represent a snapshot of current thinking in 2012. On the road to 2015, the education community will need to refine and sharpen its thinking with respect to how learning is incorporated into the prevailing framework. The seven frameworks that will be addressed in this paper are:

  1. Ending Absolute Poverty
  2. Equity and Inclusion
  3. Economic Growth and Jobs
  4. Getting to Zero
  5. Global Minimum Entitlements
  6. Sustainable Development
  7. Well-Being and Quality of Life

Downloads

Authors

  • Anda Adams
Image Source: © Adriane Ohanesian / Reuters
      
 
 




academic and careers

From Enrollment to Learning: The Way Forward


INTRODUCTION

In an earlier policy brief, Where is the Learning? Measuring Schooling Efforts in Developing Countries, we drew attention to what was labeled “the global learning crisis.” While tremendous progress has been made over the past couple of decades to get tens of millions of additional children to enroll in school, progress in improving learning outcomes has been considerably less impressive. Although, shockingly, comprehensive learning outcome data are not available for most of the developing world, the many small scale, local or, in some cases, national studies that have been done show a dismal picture. For instance, Uwezo, an East African initiative, found that in Tanzania, only 44 percent of students in Grade 4 were able to read a basic story from Grade 2. Similarly, the Annual Status of Education Report (ASER) facilitated by Pratham found that in rural India, less than half of Grade 4 students were able to do basic subtraction. These examples demonstrate the gravity of “the global learning crisis” as students fail to master competencies appropriate for their grade level, hindering the development of life skills and success in further schooling, as well as performance in the labor market.

With about 61 million children in the developing world still not in school, it is too early to declare victory on the “enrollment agenda.” But we would do a disservice to the 250 million children around the world who fail to reach Grade 4 or to attain minimum learning standards if we don’t step up efforts to improve learning outcomes.

This policy brief is part of a larger effort to link resources in the education sector with outcome measures. As we have documented elsewhere, few countries systematically collect comprehensive financial data on education, although fortunately an increasing number of initiatives are trying to address this issue by producing, for instance, National Education Accounts (NEAs). When the focus of the sector changes from enrollment to enrollment plus learning, efforts to better grasp the size and use of financial resources should evolve accordingly. For instance, much learning takes place outside of the classroom, especially in the early years. For NEAs to be a useful tool for adjusting the allocation of scarce resources, the “learning” sector should be defined more broadly than the education or “schooling” sector. We will address this and related issues in a subsequent policy brief.

Once our focus becomes enrollment plus learning, we have to broaden our view and look at the entire environment in which a child develops skills, starting with the household into which the child is born. It has been known for many decades, and throughout the world, that among the best predictors of future school performance are basic household characteristics, such as income and the mother’s education level.

Data from international assessments also show a relationship between income and educational performance, exemplified by intra- and intercountry results. In Colombia, average Trends in International Mathematics and Science Study (TIMSS) math scores at Grade 8 for the richest quintile of students were close to 100 points higher than those for the poorest quintile. At the same time, the difference in average scores between the poorest quintile in the United States and the richest quintile in Colombia was about 50 points. Income is not the only predictor of success: in Peru, children whose mothers have completed primary school and whose maternal language is Spanish rather than an indigenous language have a greater probability of reaching the appropriate school grade for their age. In Kenya, Uwezo found that the higher a father's educational attainment, the more likely his children were to be able to read a Grade 3-level story or to attend extra tutoring sessions.

In addition, the larger environment (such as the village or the urban neighborhood) in which the young child grows up also has a major and lasting impact. In Tanzania, urban students in Grade 3 are three times more likely than their rural counterparts to meet standards in literacy and numeracy. Related to the impact of the larger environment, data from Nigeria suggest that girls are more disadvantaged in school attendance, as parents may be reluctant to send girls to school because of perceived fears for their safety while traveling and concerns about the physical strength required for walking the distance.

Clearly, especially in the early years, most learning takes place outside of the classroom. Consequently, children who grow up in deprived circumstances will start life with a disadvantage leading to a lack of learning in the early grades, which will have lifetime effects.

In the next section, we summarize the evidence that the early years (ages 0 to 5) are crucial for subsequent learning achievements. From this evidence we conclude that many of the problems with learning outcomes in the developing world (and in many developed countries) need to be addressed well before school age. Before delving into what happens in schools, we explore the relationship between enrollment, learning, and dropout; as the crux of this brief is to lay out the evidence on what contributes to learning, we must acknowledge the factors leading to low enrollment and dropout. Next, we turn our attention to what happens in schools, what can be done to improve those activities, and what the evidence says about the relationship between specific school-based inputs and learning outcomes. As it turns out, this evidence is in many cases rather feeble. We therefore first focus on school-level inputs that are necessary for a good learning environment, that is, inputs without which we cannot expect any learning to take place. Most of these inputs are rather obvious, but they are worth mentioning. Subsequently, we discuss additional inputs that have proven to contribute to learning outcomes in some cases but not in others; clearly, how these inputs are applied matters.

Next, we address factors that contribute to learning outside of a formal environment, after which we review issues in health and nutrition that are closely linked to learning outcomes. We then review the need for the collection and dissemination of learning assessments in order to drive further improvements in these areas, and we try to answer the question: what are the building blocks of an education sector that promotes learning? Finally, we explore needs for future research on learning.

Downloads

Authors

Image Source: © Swoan Parker / Reuters
      
 
 




academic and careers

A New Agenda for Education in Fragile States


In the 13 years since the dawn of the new millennium, significant progress has been made in addressing some of the world's most important problems. One billion fewer people live in extreme poverty, 3 million children's lives are saved annually, and 610 million children in developing countries are enrolled in primary school, more than ever before. However, this progress has not been shared evenly around the globe. Populations living under weak systems of governance and those suffering violence and disasters have systematically been left behind.

They are much less likely to enjoy progress toward any of the United Nations' Millennium Development Goals (MDGs), which include eradicating extreme poverty and hunger, improving children's and women's health, and enrolling children in school. No country classified as a "fragile state," for example, has met all eight of the MDGs. Children born in low-income, conflict-affected countries are twice as likely to die before the age of five, twice as likely to lack access to clean water, and more than three times as likely to be out of school as children living in peaceful, low-income countries. People living in poverty, many of whom are affected by conflict, are more vulnerable to the effects of climate change and disasters. Children are especially affected, and those from the poorest families are up to 10 times more likely to bear the brunt of environmental disasters linked to climate change.

The needs of people living in fragile states are an urgent priority for our time, and thus will almost certainly be prominent in the next round of global development goals. As the global community reflects on the new agenda that will replace the MDGs when they expire in 2015, it will do well to take stock of the existing strategies for supporting the needs of populations in fragile states. A range of strategies are undoubtedly needed, and there is good reason why there is a heavy emphasis on the economic, legal and security dimensions of development efforts in fragile states. However, efforts in the social sphere are equally needed, and education is one important strategy for supporting populations in fragile states that was often overlooked until recently.

This report provides a broad review of the field of education in fragile states and charts a new agenda for maximizing education’s contribution to the development and well-being of people living in these contexts. We hope it serves as a comprehensive introduction to the topic for those coming to this issue for the first time as well as provides new insights for those already actively engaged in the subject. The arguments we make here are based on evidence developed both from careful analysis and synthesis of the latest available data as well as primary research.


Downloads

Authors

Image Source: © Ahmad Masood / Reuters
      
 
 




academic and careers

What coronavirus teaches us about addressing climate change

On this episode, Andrea Risotto, the associate vice president of communications at Brookings, interviews William Burke-White and Todd Stern about the connection between the global coronavirus pandemic and the international response to climate change. Burke-White is the Richard Perry Professor at the University of Pennsylvania Law School and a visiting fellow in foreign policy at Brookings. Stern is…

       




academic and careers

What Americans think about President Trump’s response to the coronavirus pandemic

In this special edition of the podcast, Brookings Senior Fellows Bill Galston and Elaine Kamarck discuss President Trump’s handling of the coronavirus pandemic, his administration's response, and public opinion on that response. Also, what effect will the crisis and response to it have on the election in November? Galston is the Ezra K. Zilkha…

       




academic and careers

Why Boko Haram in Nigeria fights western education

The terrorist group Boko Haram has killed tens of thousands of people in Nigeria, displaced millions, and infamously kidnapped nearly 300 schoolgirls in 2014, many of whom remain missing. The phrase “boko haram” translates literally as “Western education is forbidden.” In this episode, the author of a new paper on Boko Haram talks about her research…

       




academic and careers

How US military services are responding to the coronavirus and the pandemic’s impact on military readiness

On this special edition of the podcast, four U.S. military officers who are participating in the 2019-2020 class of Federal Executive Fellows at Brookings share their expert insights about the effects that the coronavirus pandemic is having on the readiness of their respective services, and how their services are responding to the crisis. http://directory.libsyn.com/episode/index/id/14065544 Brookings…

       




academic and careers

How the US embassy in Prague aided Czechoslovakia’s Velvet Revolution

In late 1989, popular protests against the communist government in Czechoslovakia brought an end to one-party rule in that country and heralded the coming of democracy. The Velvet Revolution was not met with violent suppression as had happened in Prague in 1968. A new book from the Brookings Institution Press documents the behind the scenes…

       




academic and careers

Global China’s advanced technology ambitions

In this special edition of the Brookings Cafeteria Podcast, Lindsey Ford, a David M. Rubenstein Fellow in Foreign Policy, interviews two authors of the most recent release of papers in the Global China series focused on China's aspiration to be a global technology leader. Saif Khan and Remco Zwetsloot are both research fellows at the…

       




academic and careers

Kim Jong Un’s ascent to power in North Korea

In her new book, Becoming Kim Jong Un: A Former CIA Officer's Insights into North Korea's Enigmatic Young Dictator (Ballantine Books), Brookings Senior Fellow Jung Pak describes the rise of North Korea's ruler. In this episode, she is interviewed by Senior Fellow Michael O’Hanlon. Also on this episode, Senior Fellow Sarah Binder offers four lessons about how Congress…

       




academic and careers

Technology competition between the US and a Global China

In this special edition of the Brookings Cafeteria Podcast, Lindsey Ford, a David M. Rubenstein Fellow in Foreign Policy, interviews two scholars on some of the key issues in the U.S.-China technology competition, which is the topic of the most recent release of papers in the Global China series. Tom Stefanick is a visiting fellow…

       




academic and careers

Introducing Techstream: Where technology and policy intersect

On this episode, a discussion about a new Brookings resource called Techstream, a publication site on brookings.edu that puts technologists and policymakers in conversation. Chris Meserole, a fellow in Foreign Policy and deputy director of the Artificial Intelligence and Emerging Technology Initiative, explains what Techstream is and some of the issues it covers. Also on…

       




academic and careers

Addressing COVID-19 in resource-poor and fragile countries

Responding to the coronavirus as individuals, society, and governments is challenging enough in the United States and other developed countries with modern infrastructure and stable systems, but what happens when a pandemic strikes poor and unstable countries that have few hospitals, lack reliable electricity, water, and food supplies, don’t have refrigeration, and suffer from social…

       




academic and careers

Decoding declines in youth employment


Interpreting employment stats among young people can be tricky. No one expects employment rates among teens or people in their early 20s to reach those of prime-age workers. These are prime years for what economists call “investing in human capital,” an activity most people would describe as “going to school.”

Education requirements for good jobs are rising, so finishing high school and earning a post-secondary credential, whether a two- or four-year college degree, an apprenticeship, or a certification, is a top priority. But early work experience can also allow young people to learn new skills, gain experience, and expand their networks, and evidence suggests that it can improve employment prospects down the line. And the earlier that people are exposed to the workplace, the earlier they learn skills such as teamwork, communication, and dependability—skills that employers say are in short supply.

The employment rate for teens fell from 43 percent in 2000 to 26 percent in 2014, and for young adults aged 20 to 24, it fell from 70 to 62 percent. These are big drops. In a new analysis, I take a deeper look at employment trends among young people. When employment rates are broken out by age and race/ethnicity, you see the same downward pattern, but also substantial variation among whites, blacks, Latinos, and Asians.

Do these declines spell trouble? The answer is: it depends on how young people spend their time, what resources and support are available to them, and how the person making the judgment values academics and enrichment relative to employment.

Some argue that workplace experience provides key developmental opportunities that benefit all young people. Robert Halpern, for example, wryly notes that high school students are isolated from the adult world “at just the moment when [they] need to begin learning about participating in it.” 

Others say that employment matters more for some young people than others. For example, disadvantaged youth—those not on track to earn a post-secondary credential and without strong family or community networks to help them find jobs—can particularly benefit from formal programs that connect them to the labor market. As Jeylan Mortimer concluded about “low academic promise” high school students (those with poor grades and low educational goals): “[H]aving a positive work experience can help to turn you around.  For those who have a lot of disadvantages, any positive experience is likely to have a greater impact than on people with a lot of advantages already.” Research on Career Academies, high schools that combine academics with career development, supports this view. Career Academy students, disproportionately low-income, black, and Latino, posted significant earnings gains eight years after graduation, and young male graduates also had higher rates of marriage and custodial parenthood.

And some would say that it’s appropriate to prioritize education over employment, especially for teens, who are typically not responsible for supporting themselves and their families.

So what do the data tell us? Voluntarily dropping out of the labor force to concentrate on academics as a young person can pay off when people enter their prime working years, generally considered to be 25 to 54. Though education and work are not necessarily incompatible, employment rates are generally lower among students than among those not enrolled in school. Among teens and young adults, Asians have the lowest employment rates, but they also have the highest school enrollment rates. Ninety-two percent of Asian 16- to 19-year-olds and 63 percent of Asian 20- to 24-year-olds are in school, compared to 80 percent and 38 percent among all races. It follows, then, that Asians have high levels of educational attainment. In fact, 50 percent of 23- to 24-year-old Asians have a bachelor's degree, double the average rate. Given the strong correlation between education and employment, it is not a coincidence that prime-age Asians have high employment rates and low unemployment rates. Their low employment rates as young people do not, on the whole, seem to lead to problems as adults. (Of course, this is not to downplay the diversity of the Asian population or to suggest that all Asians are doing well economically.)

On the other hand, blacks have the second-lowest employment rates as teens and young adults, and the lowest rate as prime-age workers. They also have the highest unemployment rates, showing an active desire to work. Among black teens in 2014, the unemployment rate was 38 percent, compared to 23 percent overall, and it was 22 percent among black young adults, compared to 13 percent overall. The trend continues into prime working years: blacks have an unemployment rate of 11.4 percent, nearly double the overall rate of 6.2 percent. The low employment rate among young black people is not driven by school enrollment: Latinos have similar (below-average) enrollment levels but higher employment rates, and whites have much higher employment rates but only slightly higher enrollment levels. The weaker employment outcomes of blacks at all ages are probably related to multiple factors: relatively low levels of educational attainment, discrimination, and the neighborhood effects of living in concentrated poverty.

Blacks and Latinos are disproportionately represented among so-called “disconnected youth,” young people aged 16 to 24 who are neither working nor in school. 17 percent of black young adults aged 20 to 24 are disconnected, as are 13 percent of Latinos, 7 percent of whites, and 4 percent of Asians. Half of disconnected young adults have a high school credential and another 20 percent have taken some college courses, suggesting that getting these young people on a better path involves not only reducing the high school dropout rate, but also strengthening the transition from high school to post-secondary education and the labor market. 

In short, employment rates among young people tell different stories that often track by race and ethnicity. Some voluntarily withdraw from the labor market to focus on academics and extra-curricular activities, others would really like a job but can’t find one, and some—the most disadvantaged—are alienated from both school and the labor market.

Authors

Image Source: sruss
     
 
 




academic and careers

Amidst unimpressive official jobs report for May, alternative measures make little difference


May's job gains, released this morning, show that only 38,000 new jobs were added this May, down from an average of 178,000 over the first four months of the year and the fewest new jobs added in any month since September 2010.

This year's monthly job gains and losses can indicate how the economy is doing, but only once they have been corrected for the patterns we already expect, a process called seasonal adjustment. The approach the Bureau of Labor Statistics (BLS) currently uses puts very heavy weight on the current and previous two years of data in assessing the typical pattern for each month.

In my paper “Unseasonal Seasonals?” I argue that a longer window should be used to estimate seasonal effects. I found that using a different seasonal filter, known as the 3x9 filter, produces better results and more accurate forecasts by emphasizing more years of data. The 3x9 filter spreads weight over the most recent six years in estimating seasonal patterns, which makes them more stable over time than in the current BLS seasonal adjustment method.
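To make the mechanics concrete, the sketch below shows how a 3x9-style filter spreads weight across many years of same-month observations when estimating a seasonal factor. It is a simplified illustration under stated assumptions, not the X-13ARIMA-SEATS machinery BLS runs or the exact filter estimated in the paper, and every input value is hypothetical.

```python
# Simplified illustration of a 3x9-style seasonal filter: a weighted average of
# same-calendar-month values across years, with weight spread over many years
# rather than just the last two or three. Not the BLS procedure; data are hypothetical.
import numpy as np

# Symmetric 3x9 weights: a 3-term moving average of a 9-term moving average,
# i.e. uniform(3) convolved with uniform(9); the 11 weights sum to one.
W_3X9 = np.convolve(np.ones(3) / 3, np.ones(9) / 9)  # [1, 2, 3, ..., 3, 2, 1] / 27


def seasonal_factor(same_month_values, weights=W_3X9):
    """Weighted average of detrended same-month values, oldest first.

    If fewer years are available than weights, the tail of the weight vector is
    renormalized and used, a crude stand-in for the asymmetric end-filters
    applied to the most recent years in practice.
    """
    values = np.asarray(same_month_values, dtype=float)
    k = min(len(weights), len(values))
    w = weights[-k:] / weights[-k:].sum()
    return float(np.dot(w, values[-k:]))


# Hypothetical detrended May payroll deviations (thousands of jobs), oldest first.
may_deviations = [310, 290, 335, 280, 300, 320]
print(f"Estimated May seasonal factor: {seasonal_factor(may_deviations):.1f} thousand")
```

The point of the wider weighting is simply that any single unusual year moves the estimated seasonal pattern far less than it does under a short two- or three-year window.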

I calculate the month-over-month change in total nonfarm payrolls, seasonally adjusted by the 3x9 filter, for the most recent month. The corresponding data as published by the BLS are shown for comparison purposes. According to the alternative seasonal adjustment, the economy actually lost about 4,000 jobs in May (column Wright SA), compared to the official BLS total of 38,000 gained (column BLS Official).

In addition to seasonal effects, abnormal weather can also affect month-to-month fluctuations in job growth. In my paper “Weather-Adjusting Economic Data” I and my coauthor Michael Boldin implement a statistical methodology for adjusting employment data for the effects of deviations in weather from seasonal norms. This is distinct from seasonal adjustment, which only controls for the normal variation in weather across the year. We use several indicators of weather, including temperature and snowfall.
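As a rough illustration of the idea, the following sketch regresses monthly payroll changes on deviations of temperature and snowfall from seasonal norms and subtracts the fitted weather contribution from the latest month. It is a stylized stand-in for the methodology in the paper, not a reimplementation, and every number in it is hypothetical.

```python
# Stylized weather adjustment: regress monthly payroll changes on weather anomalies
# (deviations from seasonal norms), then remove the fitted weather contribution from
# the latest month. All figures below are hypothetical.
import numpy as np

# Hypothetical history of monthly payroll changes (thousands of jobs) ...
job_changes = np.array([180.0, 150.0, 95.0, 210.0, 170.0, 230.0, 120.0, 160.0])
# ... and the corresponding weather anomalies for those months.
temp_anomaly = np.array([1.2, -0.5, -3.0, 2.1, 0.3, 1.8, -2.5, 0.1])  # deg F vs. normal
snow_anomaly = np.array([-1.0, 0.5, 6.0, -0.5, 0.0, -1.0, 4.0, 0.2])  # inches vs. normal

# Least-squares fit: job_change ~ constant + temp_anomaly + snow_anomaly
X = np.column_stack([np.ones_like(temp_anomaly), temp_anomaly, snow_anomaly])
coef, *_ = np.linalg.lstsq(X, job_changes, rcond=None)

# The weather effect for the latest month is the fitted contribution of that month's
# anomalies (excluding the constant); subtracting it gives the weather-adjusted change.
latest_temp, latest_snow = 0.2, -0.3   # hypothetical anomalies for the latest month
weather_effect = coef[1] * latest_temp + coef[2] * latest_snow
official_change = 38.0                 # thousands of jobs, as reported
print(f"Estimated weather effect: {weather_effect:+.1f} thousand jobs")
print(f"Weather-adjusted change:  {official_change - weather_effect:.1f} thousand jobs")
```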

We calculate that weather in May had a negligible effect on employment, bringing up the total by only 4,000 jobs (column Weather Effect). Our weather-adjusted total, therefore, is 34,000 jobs added for May (column Boldin-Wright SWA). This is not surprising, given that weather in May was in line with seasonal norms.

Unfortunately, neither the alternative seasonal adjustment nor the weather adjustment makes today's jobs report any more hopeful. They make little difference and, if anything, make the picture gloomier.

a. Applies a longer window estimate of seasonal effects (see Wright 2013).
b. Includes seasonal and weather adjustments, where seasonal adjustments are estimated using the BLS window specifications (see Boldin & Wright 2015). The incremental weather effect in the last column is the BLS official number less the SWA number.

Authors

  • Jonathan Wright
Image Source: © Toru Hanai / Reuters
     
 
 




academic and careers

Mindsets for the 21st century and beyond


Editor’s note: In the "Becoming Brilliant" blog series, experts explore the six competencies that reflect how children learn and grow as laid out by Kathy Hirsh-Pasek and Roberta Golinkoff in their new book "Becoming Brilliant."

The world is morphing into a place that no one can foresee. How can we prepare students to live and work in that place?

Not long ago, people could learn job skills and use them indefinitely, but now jobs and skill sets are becoming obsolete at an alarming rate. This means that students, and later adults, need to expect and thrive on challenges and know how to turn failures into stepping stones to a brighter future.

When I was a beginning researcher I wanted to see how children coped with setbacks, so I gave 5th graders simple problems followed by hard problems—ones they couldn’t solve. Some hated the hard ones, some tolerated them, but, to my surprise, some relished them. One unforgettable child rubbed his hands together, smacked his lips, and declared, “I love a challenge!” Another said, “I was hoping this would be informative.” They didn’t think they were failing, they thought they were learning. Although this was years ago, they were already 21st century kids.

I knew then that I had to figure out their secret and, if possible, bottle it. With help from my graduate students, figure it out we did. And we are learning how to bottle it too. 

So, what was their secret?

Our research has shown that these children tend to have a “growth mindset.” They believe that their basic abilities, even their intelligence, can be developed through learning. That’s why they love challenges and remain confident through setbacks. Their more vulnerable counterparts, however, have more of a “fixed mindset.” They believe their basic abilities are just fixed—set in stone. So their key goal is to look and feel smart (and never dumb). To accomplish this they often seek easy over hard tasks. And when they do encounter setbacks, they tend to feel inept and lose confidence. Research shows that even exerting effort can make them feel unintelligent. If you’re really good at something, they believe, you shouldn’t have to work at it.

These mindsets make a difference. In one study we tracked hundreds of students across the difficult transition to seventh grade, akin to entering a new world with harder work, higher standards, and a whole new structure. Those who entered with more of a growth mindset (the belief that they could develop their intelligence) fared better. Their math grades quickly jumped ahead of those of students with a fixed mindset and the gap became wider and wider over the next two years. This was true even though the two groups entered with equivalent past achievement test scores.

Recently, we were able to study all the 10th-graders in the country of Chile. We found that at every socioeconomic level students with a growth mindset were outperforming their peers with a fixed mindset. What was most striking was that when the poorest students held growth mindsets they were performing at the level of far richer students with fixed mindsets.

What’s exciting is that we have been able to teach a growth mindset to students through carefully designed workshops. In these workshops, students learn that their brain can grow new, stronger connections when they take on hard learning tasks and stick to them. They learn to avoid categorical smart-dumb thinking and instead focus on their own improvement over time. They hear from other students who have benefitted from learning a growth mindset. And they learn how to apply growth mindset thinking to their schoolwork. In these workshops students also do exercises, such as mentoring a struggling peer using what they learned about the growth mindset. Such workshops have been delivered both in person and online and have typically led to an increase in students’ motivation and achievement, particularly among students who are encountering challenges—such as difficult courses, school transitions, or negative stereotypes.

We have also studied how teachers and parents can foster a growth mindset in children. Sadly, many do not—even many of those who hold a growth mindset themselves! This is because adults, in their eagerness to motivate children and build their confidence, can tend to do things that foster a fixed mindset.

Here is what we’ve found:

  • Praising children’s intelligence conveys that intelligence is fixed and promotes a fixed mindset and its vulnerabilities. Praising the children’s learning process—their strategies, hard work, and focus—and linking it to their progress conveys a growth mindset.
  • Reacting to children’s failures with anxiety, false reassurances, or comfort for their lesser ability (“Don’t worry, not everyone can be good at math”) can foster a fixed mindset. Reacting with compassionate questions and plans for future learning conveys a growth mindset.
  • Research shows that how math teachers react to their students when the students are stuck is critical. Teachers can help students develop growth mindsets by sitting with them, trying to understand their thinking, and then collaborating with them on how to move forward and what to try next.

But how can teachers themselves develop more of a growth mindset?

In some quarters, a growth mindset became a “requirement.” This led many educators to claim a growth mindset without really understanding what it is or how to develop it. We have suggested that educators understand, first, that a growth mindset is the belief that everyone can develop their abilities. It is not simply about being open-minded or flexible. Second, they must understand that all people have both mindsets and that many situations, such as struggles or setbacks, can trigger a fixed mindset. Finally, they must learn how their own fixed mindset is triggered so that they can work to stay in a growth mindset more often.

As we prepare students to thrive in the new world, we can influence whether they see that world as overwhelming and threatening or whether they greet it with the confident words “I love a challenge.” The latter are the ones who can make the world, whatever it’s like, a better place.  

Authors

  • Carol Dweck
     
 
 




academic and careers

Jobs report weighs on Fed


Federal Reserve Chair Janet Yellen just gave a speech in Philadelphia that acknowledged last Friday's weak jobs report. She rightly cautions us not to place too much weight on any single month's numbers. Overall, her talk seems to have raised confidence in the financial markets about where the economy is heading, based on a wider range of recent indicators beyond the jobs report, such as consumer spending and home construction, which suggest areas of continued strength in the economy.

Still, the report last Friday was really lousy, and not really a one-off event (and not just due to the Verizon strike). Since the end of 2015, the job growth numbers have been on a pretty steady downward trend, with growth above 200,000 in only one month (February). A few other indicators in the report were also quite disappointing: the numbers of workers in part-time jobs because they can’t find full-time work bounced back up; and labor force participation dropped considerably. Indeed, about 800,000 workers have left the labor force in the last 2 months, including many who are well below retirement age. Their exit reverses some of the progress in labor force participation we had seen late last year and early this year.

On the other hand, not all numbers in the report were terrible. Wage growth held up reasonably well. Over the past 2 months, wages have risen by over 3 percent (on an annualized basis), suggesting a labor market that is tight enough to finally generate some earnings improvements for workers who already have jobs.

Overall, the different indicators highlight two sets of forces driving the labor market: Demand-side problems, suggesting employers still need too few workers to generate full employment; and supply-side problems, where worker availability and skills are starting to constrain the amount of hiring going on.

Indeed, the latter problems might explain the peculiar combination of weak and strong indicators we see in the report. For instance, if the drop in labor force participation is permanent, then the 4.7 percent unemployment rate accurately suggests that the amount of slack in the labor market is lower than we thought. Employers are likely finding it a bit harder to hire the workers with the skills and experience they really want. Consequently, our long economic expansion is finally translating into higher wage growth for those already employed or about to be.

Another possibility, perhaps, is that productivity growth is finally rebounding a bit, after the dismal numbers we’ve observed in the past few years, where GDP growth has been modest but employment growth has been strong. Indeed, some recovery in productivity is a precondition for real wage growth to last, even in a tighter labor market. In this scenario, output growth would now be more robust than employment growth, reversing some of the striking declines we’ve observed in the productivity numbers.

Of course, what might be most convincing is a combination of the labor demand- and supply-side stories, along the following lines: the labor market is gradually approaching capacity, though it is not there yet. I don’t think it will get there until we are closer to 4 percent unemployment. But employer difficulties finding skilled workers matter more in this type of market than in the earlier years of recovery after the Great Recession. Such pressure raises wages for employed workers and those with appropriate skill levels. On the other hand, workers with weaker skills still face dismal prospects, and their exit from the labor force reflects their bleak prospects. The growth of labor demand might really be shrinking, as productivity rebounds a bit and as weakness in business investment and export demand are felt in the job market.

We won’t know for sure how much of this story is accurate until we get more jobs and productivity reports over the next few months. But, in the meantime, my bet is on just such a mixed reading, which suggests that a very gradual lifting of economic stimulus is appropriate, without, however, moving too quickly in the opposite direction.

Authors

Image Source: © Jose Luis Magaua / Reuters
     
 
 




academic and careers

Understanding Ghana’s growth success story and job creation challenges


Ghana attained middle-income status after rebasing its National Accounts, pushing the country's per capita gross domestic product (GDP) above $1,000 in 2007. Since the country recovered from economic recession in 1984, on account of the Bretton Woods-sponsored economic reforms introduced at that time, Ghana's growth has been remarkably strong, with its lowest economic growth rate, 3.3 percent, recorded in 1994. The country's growth rate peaked at 15 percent in 2011 on the back of the commencement of commercial oil production, making Ghana one of the fastest growing economies globally that year. This has translated into increased per capita income, which reached a high of about $1,900 in 2013.

The concern, however, has been the ability of the country to sustain this growth momentum given the level and quality of education and skills, and, more importantly, the failure of this strong growth performance to be translated into the creation of productive and decent jobs, improved incomes and livelihoods. The structure of the economy remains highly informal, with a shift in the country’s national output composition from agriculture to low-value service activities in the informal sector. The commencement of commercial production of oil raised the share of the industrial sector in national output. However, the continuous decline in manufacturing value added undermines Ghana’s economic transformation effort to promote high and secure incomes and improve the livelihoods of the people.

Structural change toward higher value added sectors, and the upgrading of technologies in existing sectors, is expected to allow for better conditions of work, better jobs, and higher wages. But the low level and quality of human resources not only diverts the economy from its structural transformation path of development but also makes it difficult for the benefits of growth to be spread through the creation of gainful and productive employment. Thus, productive structural economic transformation hinges on the level and quality of education and labor skills. A highly skilled, innovative, and knowledgeable workforce constitutes a key ingredient in the process of structural economic transformation, and as productive sectors apply more complex production technologies and expand research and development activities, the demand for education and skills increases. However, the observed weak human capital base does not provide a strong foundation for the structural economic transformation of Ghana.


There is also widespread concern about the quality of the country's growth in terms of employment and inequality, as well as general improvement in the livelihood of the people (see Alagidede et al. 2013; Aryeetey et al. 2014; Baah-Boateng 2013). A key indicator for measuring the extent to which macroeconomic growth results in gains in the welfare of the citizenry is the quality of jobs that the economy generates. Ghana's employment growth lags behind its economic growth, with an estimated employment elasticity of output of 0.47 (see Baah-Boateng 2013), suggesting that every 1 percent of annual economic growth yields 0.47 percent growth of total employment. Beyond the slow rate of job creation, vulnerable employment and working poverty dominate the labor market. In 2010, 7 out of 10 jobs were estimated to be vulnerable, while only 1 out of 5 jobs could be considered productive jobs that meet the standard of decent work (Baah-Boateng and Ewusi 2013). Workers in vulnerable employment tend to lack formal work arrangements as well as elements associated with decent employment, such as adequate social security and recourse to effective social dialogue mechanisms (Sparreboom and Baah-Boateng 2011). The working poverty rate also remains a challenge, with one out of every five employed persons belonging to poor households.
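For readers unfamiliar with the metric, the sketch below shows the arithmetic behind an employment elasticity of output. The growth figures in it are hypothetical and serve only to illustrate how an estimate of roughly 0.47 would be interpreted.

```python
# The arc employment elasticity of output: the ratio of employment growth to GDP
# growth over the same period. Figures are hypothetical, not the Ghanaian estimates.
def employment_elasticity(employment_growth_pct, gdp_growth_pct):
    """Percent growth in employment per 1 percent of GDP growth."""
    return employment_growth_pct / gdp_growth_pct


# Example: GDP grows 6.0 percent while employment grows 2.8 percent in a year,
# implying roughly 0.47 percent of employment growth per 1 percent of output growth.
print(f"Employment elasticity of output: {employment_elasticity(2.8, 6.0):.2f}")
```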

The article seeks to provide an analytical assessment of Ghana's economic growth, as one of Africa's growth giants over a period of more than two decades, and the implications for labor market and livelihood outcomes. Growth of labor productivity at the national and sectoral level is examined, as well as the sectoral contribution to aggregate productivity growth. The article also analyzes the effect of growth on employment and the employment-poverty linkage, in terms of elasticity, within the growth-employment-poverty nexus in Ghana. It also delves into a discussion of the constraints on growth and productive employment from both demand and supply perspectives, and identifies skills gaps and the opportunities offered in the country, which has experienced strong growth performance. Following this introduction, Section 2 gives an overview of Ghana's economic growth performance. Section 3 reviews developments in the labor market, specifically employment, unemployment, poverty, and inequality. The growth-employment-poverty linkage analysis is carried out in Section 4, followed by a discussion of constraints to growth and employment generation in Section 5. Section 6 provides a summary and conclusion, with some policy suggestions for the future.

Downloads

Authors

  • Ernest Aryeetey
  • William Baah-Boaten
     
 
 




academic and careers

African Lions: Ghana’s job creation successes and obstacles


Over the past two decades, Ghana's economy experienced an average annual growth rate of 5.8 percent and became a lower-middle-income country in 2007. Though Ghana's average annual employment growth between 1993 and 2013 has been higher than sub-Saharan Africa's—3.7 percent versus 3.0 percent—its overall employment growth has not kept up with its economic growth. Notably, Ghana's impressive economic growth has largely stemmed from crude oil exports, mining, and financial intermediation—all sectors and subsectors in which labor absorption is low. Given these trends, Ghana's ability to transform its growth gains into better livelihoods for its citizens is being challenged.

In their paper, Understanding Ghana’s growth success story and job creation challenges, Ernest Aryeetey and William Baah-Boateng examine the sustainability of the high growth Ghana has experienced over the last two decades and advise Ghanaian policymakers to rethink their growth strategy.

For a more in-depth look at these and related topics, such as labor productivity, you can read the full paper here.

Ghana’s labor trends

As in many other sub-Saharan African countries, the agriculture sector is the largest employer in Ghana, though its employment share has decreased from 61.1 percent in 1984 to 44.7 percent in 2013. In addition, while industry's employment share has grown slightly over the same period, from 13.7 percent to 14.6 percent (with the manufacturing subsector's share falling from 10.9 percent to 9.1 percent), the services sector's share has grown from 25 percent to 40.9 percent, leading to what the authors refer to as a "missing middle."

As noted above, the authors emphasize that the sectors driving Ghana's growth, namely mining, oil extraction, and finance, are not labor-intensive. While labor has been moving from agriculture to services, the authors note that this trend "may not reflect a structural and productive transformation," largely because the jobs created in the services sector are mostly informal and have low productivity. Indeed, the services sector recorded the lowest average annual growth in labor productivity between 1992 and 2013. As part of these shifts, informal employment, which represented 88 percent of Ghanaian employment in 2013, grew by 3.7 percent on average, while formal employment grew by only 2.6 percent over this period.

Unemployment in Ghana remains low, at 5.2 percent, though it has swung significantly, from 2.8 percent in 1984 to 10.4 percent in 2000 and back down to 3.1 percent in 2006. The authors note, though, that these numbers may be deceptive given the high numbers of informal, vulnerable, and "discouraged" workers (those who are jobless and available for work but do not actively seek it, for various reasons) in Ghana. In fact, they state that in 2006, after accounting for discouraged workers, the unemployment rate more than doubled, from 3.1 percent to 6.5 percent.
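The adjustment the authors describe is straightforward arithmetic: discouraged workers are added to both the count of the jobless and the labor force. The sketch below illustrates the mechanics with hypothetical figures, not the underlying Ghanaian survey data.

```python
# How counting discouraged workers changes the measured unemployment rate: they are
# added to both the unemployed count and the labor force. Figures are hypothetical
# and only illustrate the direction and rough size of the effect.
def unemployment_rate(unemployed, employed):
    """Unemployed as a share of the labor force (employed + unemployed), in percent."""
    return 100 * unemployed / (unemployed + employed)


employed = 9_500_000     # hypothetical number of employed persons
unemployed = 304_000     # hypothetical; standard definition (jobless and actively seeking)
discouraged = 350_000    # hypothetical; want work but have stopped actively searching

print(f"Standard rate:  {unemployment_rate(unemployed, employed):.1f}%")
print(f"Broadened rate: {unemployment_rate(unemployed + discouraged, employed):.1f}%")
```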

Vulnerable employment and the working poor

Despite Ghana’s relatively low unemployment rate, many laborers still live in poverty: According to the authors, 22 percent of working people are poor. Many others work in “vulnerable employment”—which the authors define as “a measure of people employed under relatively precarious circumstances indicated by their status in employment. It consists of own account and contributing family work that are less likely to have formal work arrangements, access to benefit or social protection programs, and are more ‘at risk’ to economic cycles (ILO 2009).” This definition is opposed to “productive employment,” or “paid employment and self-employed with employees.” Vulnerable workers are usually found in the informal sector and tend to have lower earnings—a situation exacerbating the ever-widening earnings gap and growing income inequality.

According to the authors, working poverty is closely linked with vulnerable employment, which accounts for seven out of 10 jobs in Ghana (Table 1). Policies that could combat working poverty have proven somewhat ineffective in reducing it: For example, Ghana has consistently raised its minimum wage, keeping it largely above the rate of inflation, but this policy tends to affect only those in the formal sector, leaving out workers in the informal sector. This pattern has also contributed to rising inequality in Ghana: The Gini coefficient increased from 35.4 percent in 1987/88 to 42.3 percent in 2013.

Table 1: Quantity and quality of employment (percent of employed)

                                          1984   1992   1999   2000   2006   2010   2013
Employment-to-population ratio (SSA)         —   64.3   64.1   64.1   64.9   65.2   65.5
Employment-to-population ratio (Ghana)    80.2   72.9   73.9   66.9   67.7   67.4   75.4

Economic sector
  Agriculture                             61.1   62.2   55.0   53.1   54.9   41.6   44.7
  Industry                                13.7   10.0   14.0   15.5   14.2   15.4   14.6
    Manufacturing (part of industry)      10.9    8.2   11.7   10.7   11.4   10.7    9.1
  Services                                25.2   27.8   31.0   31.5   30.9   43.0   40.9

Institutional sector
  Public                                  10.2    8.4    6.2    7.2    5.7    6.4    5.9
  Private                                  6.0    6.1    7.5    8.9    7.0    7.4    6.1
  Informal                                83.8   85.5   86.1   83.9   87.3   86.2   88.0

Type of employment
  Paid employees                          16.2   16.8   13.8   16.0   17.5   18.2   22.5
  Self-employment                         69.6   81.3   68.7   73.4   59.5   60.8   52.6
  Contributing family worker              12.5    1.9   17.2    6.8   20.4   11.6   22.3
  Other                                    1.7      —    0.3    3.8    2.6    9.4    2.6

Quality of employment
  Gainful/productive employment*          20.9      —      —   21.2   22.0   23.1   28.7
  Vulnerable employment**                 77.4   82.5   80.8   74.9   75.4   67.5   68.7
  Working poverty                            —   48.7   35.4      —   25.6      —   22.3

Notes: * Gainful/productive employment comprises paid employment and self-employed with employees.

** Vulnerable employment comprises own-account and contributing family work.

Source: Computed from Ghana Living Standards Survey (GLSS) 3, 4, 5, and 6; Population Census 1984, 2000, and 2010.

Overall, though, Ghana has made great strides: Vulnerable employment has been declining and productive employment has risen, gains to which the authors attribute the drop in working poverty, down from 48.7 percent in 1992 to 22.3 percent two decades later (Table 1). However, they also suggest that these improvements could have been even larger had job growth been concentrated in paid employment and self-employment with employees.

The skills gap

In their paper, the authors posit that job creation has occurred in less productive sectors because of a lack of skills and education in the workforce, with skill-intensive vacancies instead being filled by foreign workers. While the proportion of the labor force with no formal education has fallen significantly, from 44.1 percent in 1992 to 25.6 percent in 2013, post-primary attainment has barely risen: from 5.7 percent to 12.1 percent over the same period for secondary, vocational, and technical education, and from 2 percent to just 5.4 percent for tertiary education. Ghanaian universities have not been training the engineers, scientists, and technical workers who could raise productivity and grow the industrial sector, and a shortage of technical and vocational skills further limits that sector. Thus, the authors note, employers are forced to look outside the country for workers with the skills required to do the job. The authors emphasize:

[P]roductive structural economic transformation hinges on the level and quality of education and labour skills. A highly skilled, innovative, and knowledgeable workforce constitutes a key ingredient in the process of structural economic transformation, and as productive sectors apply more complex production technologies and research and development activities increase the demand for education and skills. However, the observed weak human capital base does not provide a strong foundation for structural economic transformation of Ghana.

At the same time, the more educated in Ghana also tend to be more likely to be unemployed, owing to limited job creation for them in the formal sector. In 2013, the unemployment rate for those with secondary education and above (including tertiary) was over 6 percent, while the rate for those with basic education or less was under 3.3 percent. The authors suggest that this pattern arises because those with less education are more likely to take an informal job, while more educated workers struggle to find jobs in the small formal sector.

Recommendations

Though Ghana has outperformed many of its sub-Saharan neighbors in terms of job creation and growth, its challenges with declining manufacturing, high informal employment, and low educational attainment endanger its momentum. To tackle these obstacles, the authors recommend:

  1. Adjust the priorities of the growth strategy to promote manufacturing, and reconsider the goal of economic growth for growth’s sake by acknowledging that sustainable growth must be coupled with generation of productive and high-earning jobs for all.
  2. Create a manufacturing and business-friendly environment by addressing the country’s high interest rates, high taxes, and chronic energy problems, among others.
  3. Enact policies to enhance the high-productivity, high-labor-absorbing agricultural sector, such as improving agricultural extension and developing irrigation plans.
  4. Develop policies to increase the number of secondary school graduates as well as students studying science, technology, engineering, and math.

For further discussion and recommendations, read the full paper here.

Note: The African Lions project is a collaboration among United Nations University-World Institute for Development Economics Research (UNU-WIDER), the University of Cape Town’s Development Policy Research Unit (DPRU), and the Brookings Africa Growth Initiative that provides an analytical basis for policy recommendations and value-added guidance to domestic policymakers in the fast-growing economies of Africa, as well as for the broader global community interested in the development of the region. The six papers, covering Mozambique, Kenya, Ghana, South Africa, Ethiopia, and Nigeria, explore the key constraints facing African economies as they attempt to maintain a long-run economic growth and development trajectory.

Authors

  • Christina Golubski
     
 
 




academic and careers

Help wanted: Better pathways into the labor market


Employment is down among everyone between the ages of 16 and 64—particularly among teens, but with a great deal of variation by geography, race, and education. The disparity between blacks and whites is especially stark. For example, unemployment among white young adults peaked at 14% in 2010—still considerably lower than unemployment rates for black young adults at any point in the 2008 to 2014 time period. Unemployment for black 20- to 24-year-olds rose to 29.5% in 2010 and fell to 22.3% in 2014, compared to 10.3% among whites in 2014.

While there is no silver bullet, higher levels of education and work experience clearly improve job prospects down the line for young people. There are multiple strategies local and regional leaders can use to build more structured pathways into employment.

Teens and young adults (referring to 16- to 19-year-olds and 20- to 24-year-olds, respectively) are not monolithic populations. Age is an obvious differentiator, but so are a number of other factors, such as educational attainment, skill level, interests, parental support, and other life circumstances.  Schools, families, and neighborhoods all play a role in a young person’s trajectory—both positive and negative. But at the most basic level, a program for a 17-year-old high school student is likely not appropriate for a 23-year-old, regardless of educational attainment. Successful programs integrate education, training, work-readiness, and youth development principles, but the particular blend of these elements and settings vary: more school-based and educationally focused programs for younger youth, and more community-based and career-focused programs with strong ties to education for older youth.

An admittedly non-comprehensive review includes the following types of promising and proven programs:  

For high school students:

For out-of-school youth and young adults:

  • Highly structured programs offering work readiness and technical skills development, often in partnership with community colleges, and coupled with paid internships, such as Year Up, i.c.stars, npower, and Per Scholas
  • Programs that offer stipends and combine academics, job training, mentoring, and supportive services while carrying out community improvement projects, such as YouthBuild and Youth Corps

The sobering fact is that promoting employment and economic security among young people is not a straightforward proposition. To succeed in today's economy and earn middle-class wages, a young person needs to complete several steps: graduate from high school or earn an alternate credential; enroll in and complete some post-secondary education or job training; preferably gain meaningful work experience; and enter the labor market with in-demand skills. (A decent economy and some luck help, too.) There are many points along that path at which a young person can get off track, particularly young people of color and those from high-poverty neighborhoods. And while high youth unemployment is increasingly in the news these days, the difficulties that youth without college degrees face in finding good jobs have been a problem for decades.

Programs such as the ones listed above are part of the solution. But they are not enough, given the magnitude of the problem. In order to produce better employment outcomes at scale, leaders from all sectors and levels of government need to make broader shifts in how education and workforce programs are designed, and how they interact with each other and employers. That is a heavy lift, but it is worth it to address the high costs imposed by the status quo: high unemployment, poverty, and untapped potential.  

Authors

Image Source: © Brian Snyder / Reuters
     
 
 




academic and careers

Syrian refugees and the promise of work permits


Issuing work permits to refugees in return for donor support for jobs is seen as a “win-win-win” for refugees, host countries, and the international community. It would stem the flow of refugees to Europe, decrease the dangers of radicalization, and prevent the exploitation of refugees as a source of cheap labor. At last February’s “Supporting Syria and the Region” conference co-hosted by the U.K., Germany, Kuwait, Norway, and the United Nations, former British Foreign Secretary David Miliband called for a million work permits to be made available to Syrians, 200,000 each in Jordan and Lebanon and 600,000 in Turkey.

Turkey issued a decree in January 2016 allowing work permits for Syrians. Jordan also agreed to provide work permits for up to 200,000 Syrians over a number of years in exchange for aid and the opening of European markets to goods produced in special economic zones, all of which is meant to lead to jobs for a million Jordanians as well, once other aid and spending are added in. Lebanon, whose fragile confessional politics makes the more than one million Sunni refugees a more palpable threat, has chosen not to issue work permits. Yet, according to the International Labor Organization (ILO), "around half of (working age) Syrian refugees are economically active and just one-third have access to overwhelmingly informal and low-skilled employment." That is around 165,000 employed informally in Lebanon; the figure is around 160,000 in Jordan, with 1.3 million Syrians, and over 400,000 in Turkey, with 2.7 million Syrian refugees.

In Turkey and Jordan, as elsewhere, work permits are tied to employers who apply on behalf of employees once residency, registration, and health requirements are met. In both countries, employers must pay the legal minimum wage and social security payments. The permits are renewed annually. But, for the majority of Syrians working in labor markets with an abundance of local and foreign low-skill, low-wage workers, the pay is nowhere near the minimum wage. As to the promised jobs in the special zones, those will take time to materialize, and we already know that, at least in the garment sector, up to 80 percent of the workers are young women from South Asia, largely residing in dorms but at least receiving the minimum wage. Whether Syrians can adapt to this model remains to be seen. In both Jordan and Turkey, there are certain limits on the percentage of Syrians versus locals in many manufacturing and services jobs; in Jordan there is some evidence that “ghost” Jordanian workers are used to get around this requirement.

Jordan already has over 240,000 foreign workers with work permits, mainly Egyptians and Asians; including those working illegally, the total may be as high as a million. There is a move to have Syrians replace foreign workers who hold permits, but that seems uncertain: employers are unlikely to be eager to replace employees, often of long standing, for whom they have already gone to the expense of obtaining work permits. In Turkey, with fewer foreign workers, many locals work informally, though they tend to be paid significantly more than Syrians. The chances of employers raising wages to legalize Syrian employees, whether in Jordan or Turkey, are slim, and the record to date appears to confirm this.

In Jordan, the government provided a three-month grace period for workers to receive permits free of charge, yet fewer than 2,000 permits had been granted by April. An ILO survey in Jordan of workers in the construction and agriculture sectors found that while 90 percent of workers had heard about the grace period, none in agriculture and only 85 percent in construction had work permits, though almost all knew that getting caught without one might mean detention at the Azraq refugee camp. An inability to pay social security contributions constituted another major barrier, as did the requirement to go through employers to obtain the permit.

In Turkey, the numbers are not encouraging either: By May, only 10,000 had actually registered for work permits. Refugees International reports that Turkey's work permit program may end up benefiting 40,000 Syrians, or roughly 10 percent of those actually working. The government, though, expects the program eventually to help all those currently working informally.

The ILO, the United Nations High Commissioner for Refugees, and Refugees International have praised the Jordanian and Turkish governments for granting work permits. The decision was not easy and was politically charged in both countries. But the political and psychological significance of providing an opening for Syrians to slowly integrate themselves and move toward a stable future is certainly worth pursuing, even if it doesn't bring immediate rewards. Already, Turkey allows Syrian doctors and medical personnel to work in health centers serving refugees. Over 4,000 Syrian teachers have received stipends from a Ministry of Education program funded by UNICEF and Western donors. And agricultural workers no longer need work permits so long as provincial governors give their approval.

Eventually, delinking work permits from employers will help, and the ILO urges Jordan to do so for agricultural and construction workers. In both Jordan and Turkey, lowering social security payments would also smooth the transition, and more support for vocational training, health care, and children's education are other ideas being pursued. Making work permits available is not the same as a blanket "right to work" for refugees, a right protected under the 1951 U.N. Refugee Convention but not accepted in full by either Jordan or Turkey (the key international treaty that protects the right to work in binding form is the International Covenant on Economic, Social and Cultural Rights, to which both countries are signatories). Still, this is an opening, and one that the international community should monitor and support. Aside from the February conference, other agreements, such as the one between the EU and Turkey and the upcoming EU deal with Lebanon and Jordan, provide suitable platforms for improving on this initial phase.

Authors

  • Omer Karasapan
     
 
 




academic and careers

Where are the nonworking prime-age men?


On Monday, the White House Council of Economic Advisers (CEA) released a report examining the long-term decline in the share of prime-age men (aged 25 to 54) who are either working or actively looking for work. What economists call the labor force participation rate for this population decreased from 98 percent in 1954 to 88 percent today, the second largest decrease among OECD countries. This trend has raised concerns not only for its impact on economic growth, but also because it seems to track an increase in mortality over that time, particularly among white males, as economists Anne Case and Angus Deaton have found.

The CEA report documents a number of possible explanations for this trend, including increasing rates of women in the workforce, rising disability insurance claims, falling demand for less-skilled workers, and barriers to employment for those with criminal records.

The report’s national analysis alone, however, obscures tremendous variation across the United States in employment among this critical group.

According to data from the Census Bureau’s American Community Survey, in 2014, 81 percent of prime-age men nationwide were employed (this statistic differs from the labor force participation rate in that it omits those who are looking for, but not in, work). Yet among the nation’s 374 metropolitan areas for which data are available, that rate ranged from over 93 percent in the oil boomtown of Midland, TX, to just over 50 percent in Kings County in California’s Central Valley.
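To make the distinction concrete, here is a minimal sketch using purely hypothetical counts (not ACS figures) of how the employment-to-population ratio cited above differs from the labor force participation rate discussed in the CEA report.

```python
# Hypothetical counts for the prime-age (25-54) male population of one metro area.
# These numbers are illustrative only, not drawn from the American Community Survey.
population = 100_000          # all prime-age men
employed = 81_000             # currently working
unemployed_looking = 4_000    # not working but actively seeking work

# Employment-to-population ratio: omits active job seekers,
# as in the metro-level figures cited above.
employment_rate = employed / population

# Labor force participation rate: counts both the employed and active job seekers.
participation_rate = (employed + unemployed_looking) / population

print(f"Employment rate:    {employment_rate:.1%}")    # 81.0%
print(f"Participation rate: {participation_rate:.1%}") # 85.0%
```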

There are clear regional patterns to this important statistic. Many of the metro areas with the highest employment rates for prime-age men are smaller places located in the middle of the country, from the Upper Midwest, to energy-rich areas in Texas and the Plains states, to the Intermountain West. In several large, economically dynamic metro areas such as Denver, Houston, Minneapolis, San Jose, and Washington, D.C., rates of work among prime-age men are also very high.

Of much greater concern is the large number of metropolitan regions with very low rates of work among prime-age men. These include many small former industrial centers in states like Michigan, Indiana, and Ohio; areas of West Virginia and Louisiana that rely on declining-employment industries like mining; and long-struggling agricultural economies in Arkansas, Texas, and inland California.

These patterns echo findings from the CEA report that falling demand for labor is an important part of the long-term decline in prime-age male employment. In many places where a high school diploma alone once provided the gateway to a middle-class job, nearly one-third of men in this age group are out of work. This is also evident in the local relationship between educational attainment and work: where educational attainment rates are higher among prime-age men, members of that group are more likely to be employed. A 10-percentage-point difference in employment rates separates the most highly educated quarter of metro areas from the least highly educated quarter.

Beyond education, size seems to matter, too. Large metro areas exhibit higher rates of work among prime-age men than small metro areas. Across the 100 largest U.S. metro areas, 83.2 percent of prime-age men are employed, compared with 79.8 percent in the 274 smaller metro areas. This relationship partly reflects the fact that men in large metro areas have higher rates of educational attainment than those in small metro areas. Yet even men with no more than a high school diploma work at higher rates in large metro areas (64 percent) than similarly educated men in smaller ones (62 percent). Larger regional economies with greater economic diversity may stimulate stronger demand for workers at lower skill levels.
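The sketch below illustrates why the within-education comparison matters. The shares and rates are hypothetical, not taken from the Census data discussed above; they simply show how an overall gap between large and small metro areas can be wider than the gap within any single education group when large metros also have a more educated workforce.

```python
# Illustrative composition effect; all shares and employment rates are hypothetical.
groups = {
    # education group: (share of prime-age men, employment rate)
    "large_metros": {"hs_or_less": (0.30, 0.64), "some_college_plus": (0.70, 0.91)},
    "small_metros": {"hs_or_less": (0.45, 0.62), "some_college_plus": (0.55, 0.90)},
}

for area, mix in groups.items():
    # Overall rate is the education-share-weighted average of group rates.
    overall = sum(share * rate for share, rate in mix.values())
    print(f"{area}: overall employment rate {overall:.1%}")

# The overall gap between the two areas (about 5.5 points here) exceeds the
# 1-2 point gaps within each education group, because large metros also have
# a more educated mix of workers.
```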

Several of the policies that the CEA report recommends to improve prime-age male labor force participation, such as bolstering investment in public infrastructure, reforming unemployment insurance, and boosting educational attainment, could help raise rates of work in lagging U.S. metro areas. However, none directly addresses the fact that problems in male employment disproportionately affect small and often economically isolated U.S. regions. This evidence suggests that policies to help dislocated workers relocate to larger, more economically dynamic metro areas, particularly by improving the supply of affordable housing in those regions, should be part of a comprehensive strategy to help reverse the troubling long-term decline in men's work.

Authors

     
 
 




academic and careers

When globalization goes digital


American voters are angry. But while the ill effects of globalization top their list of grievances, nobody is well served when complex economic issues are reduced to bumper-sticker slogans – as they have been thus far in the presidential campaign.

It is unfair to dismiss concerns about globalization as unfounded. America deserves to have an honest debate about its effects. In order to yield constructive solutions, however, all sides will need to concede some inconvenient truths – and to recognize that globalization is not the same phenomenon it was 20 years ago.

Protectionists fail to see how the United States’ eroding industrial base is compatible with the principle that globalization boosts growth. But the evidence supporting that principle is too substantial to ignore.

Recent research by the McKinsey Global Institute (MGI) echoes the findings of other academics: global flows of goods, foreign direct investment, and data have increased global GDP by roughly 10% compared to what it would have been had those flows never occurred. The extra value provided by globalization amounted to $7.8 trillion in 2014 alone.

And yet, the shuttered factories dotting America’s Midwestern “Rust Belt” are real. Even as globalization generates aggregate growth, it produces winners and losers. Exposing local industries to international competition spurs efficiency and innovation, but the resulting creative destruction exacts a substantial toll on families and communities.

Economists and policymakers alike are guilty of glossing over these distributional consequences. Countries that engage in free trade will find new channels for growth in the long run, the thinking goes, and workers who lose their jobs in one industry will find employment in another.

In the real world, however, this process is messy and protracted. Workers in a shrinking industry may need entirely new skills to find jobs in other sectors, and they may have to pack up their families and pull up deep roots to pursue these opportunities. It has taken a popular backlash against free trade for policymakers and the media to acknowledge the extent of this disruption.

That backlash should not have come as a surprise. Traditional labor-market policies and training systems have not been equal to the task of dealing with the large-scale changes caused by the twin forces of globalization and automation. The US needs concrete proposals for supporting workers caught up in structural transitions – and a willingness to consider fresh approaches, such as wage insurance.

Contrary to campaign rhetoric, simple protectionism would harm consumers. A recent study by the US President’s Council of Economic Advisers found that middle-class Americans gain more than a quarter of their purchasing power from trade. In any event, imposing tariffs on foreign goods will not bring back lost manufacturing jobs.

It is time to change the parameters of the debate and recognize that globalization has become an entirely different animal: The global goods trade has flattened for a variety of reasons, including plummeting commodity prices, sluggishness in many major economies, and a trend toward producing goods closer to the point of consumption. Cross-border flows of data, by contrast, have grown by a factor of 45 during the past decade, and now generate a greater economic impact than flows of traditional manufactured goods.

Digitization is changing everything: the nature of the goods changing hands, the universe of potential suppliers and customers, the method of delivery, and the capital and scale required to operate globally. It also means that globalization is no longer exclusively the domain of Fortune 500 firms.

Companies interacting with their foreign operations, suppliers, and customers account for a large and growing share of global Internet traffic. Already half of the world's traded services are digitized, and 12% of the global goods trade is conducted via international e-commerce. E-commerce marketplaces such as Alibaba, Amazon, and eBay are turning millions of small enterprises into exporters. This remains an enormous untapped opportunity for the US, where fewer than 1% of companies export, a far lower share than in any other advanced economy.

Despite all the anti-trade rhetoric, it is crucial that Americans bear in mind that most of the world’s customers are overseas. Fast-growing emerging economies will be the biggest sources of consumption growth in the years ahead.

This would be the worst possible moment to erect barriers. The new digital landscape is still taking shape, and countries have an opportunity to redefine their comparative advantages. The US may have lost out as the world chased low labor costs; but it operates from a position of strength in a world defined by digital globalization.

There is real value in the seamless movement of innovation, information, goods, services, and – yes – people. As the US struggles to jump-start its economy, it cannot afford to seal itself off from an important source of growth.

US policymakers must take a nuanced, clear-eyed view of globalization, one that addresses its downsides more effectively, not only when it comes to lost jobs at home, but also when it comes to its trading partners' labor and environmental standards. Above all, the US needs to stop relitigating the past and start focusing on how it can compete in the next era of globalization.

Editor's note: This piece first appeared on Project-Syndicate.org.

Publication: Project Syndicate
     
 
 




academic and careers

Metropolitan Lens: Youth employment in the Washington, D.C. region


In a recent analysis, I highlighted how employment and disconnection among young people vary by age, race, and place. In this podcast, I dig deeper into the data on the Washington, D.C. region. Although the area generally performs well on employment measures, not all young people are faring equally well.

Listen to the full podcast segment here: 

Authors

Image Source: © Keith Bedford / Reuters