But Will It Work?: Implementation Analysis to Improve Government Performance
Executive Summary: Problems that arise in the implementation process make it less likely that policy objectives will be achieved in many government programs. Implementation problems may also damage the morale and external reputations of the agencies in charge of implementation. Although many implementation problems occur repeatedly across programs and can be predicted in advance, legislators…

The Collapse of Canada?
America's northern neighbor faces a severe constitutional crisis. Unprecedented levels of public support for sovereignty in the predominantly French-speaking province of Quebec could lead to the breakup of Canada. This crisis was precipitated by two Canadian provinces' failure in 1990 to ratify the Meech Lake Accord, a package of revisions to Canada's constitution that addressed…

Policy Leadership and the Blame Trap: Seven Strategies for Avoiding Policy Stalemate
Editor's Note: This paper is part of the Governance Studies Management and Leadership Initiative.
Negative messages about political opponents increasingly dominate not just election campaigns in the United States but the policymaking process as well. And politics dominated by negative messaging (also known as blame-generating) tends to result in policy stalemate. Negative messaging is attractive…
Technology Transfer: Highly Dependent on University Resources
Published: Tue, 04 Mar 2014

Policymakers at all levels of government, federal, state, and local, are placing great faith in innovation as a driver of economic growth and job creation. In the knowledge economy, universities have been called to play a central role as knowledge producers. Universities are actively seeking to accommodate those public demands, and many have engaged in an ongoing review of their educational programs and research portfolios to make them more attuned to industrial needs. Technology transfer is a function that universities are seeking to make more efficient in order to better engage with the economy. By law, universities can elect to take title to patents from federally funded research and then license them to the private sector. For years, the dominant model of technology transfer has been to market university patents with commercial promise to prospective partners in industry. Under this model, very few universities have been able to command high licensing fees, while the vast majority has never won the lottery of a "blockbuster" patent. Most technology transfer offices are cost centers for their universities. However, upon further inspection, the winners of this apparent lottery seem to be an exclusive club. Over the last decade, only 37 universities have shuffled in and out of the top 20 of the licensing revenue ranking. What is more, 5 of the top 20 were barely covering the expenses of their tech transfer offices; the rest were not even making ends meet.[i] It may seem that the blockbuster patent lottery is rigged (see more detail in my Brookings report). That appearance is due to the fact that landing a patent of high commercial value depends heavily on the resources available to universities. Federal research funding is a good proxy for those resources.
Figure 1 below shows, side by side, federal funding and the net operating income of tech transfer offices. If high licensing revenues are a lottery, then it is one in which only universities with the highest federal funding can participate. Commercial patents may require a critical mass of investment to build the capacity to produce breakthrough discoveries that are, at the same time, mature enough for private investors to take an interest.

Figure 1. A rigged lottery? High federal research funding is the ticket to enter the blockbuster patent lottery. Source: Author's elaboration with AUTM data (2013)[ii]

But now let's turn to another view of the asymmetry of resources and licensing revenues across universities: the geographical dimension. In Figure 2 we can appreciate the degree of dispersion (or concentration) of both federal research investment and licensing revenue across the states. It is easy to recognize the well-funded universities on the East and West coasts receiving most federal funds, and it is easy to observe as well that it is around those same regions, albeit more scattered, that licensing revenues are high. If policymakers are serious about fostering innovation, it is time to discuss the asymmetries of resources among universities across the nation. Licensing revenue is a poor measure of technology transfer activity, because universities engage in a number of interactions with the private sector that do not involve patent licensing contracts. However, these data hint at a larger challenge: if universities are expected to be engines of growth for their regions and if technology transfer is to be streamlined, federal support must be allocated by mechanisms that balance needs across states. This is not to suggest that research funding should be reallocated from top universities to the rest; that would be misguided policy. But it does suggest that without reform, the engines of growth will not roar throughout the nation, only in a few places.
Figure 2. Tech Transfer Activities Depend on Resources. Bubbles are based on Metropolitan Statistical Areas and are proportional to the size of the variable.

[i] These figures are my calculation based on Association of University Technology Managers survey data (AUTM, 2013). In 2012, 155 universities reported data to the survey, a majority of the 207 universities classified by Carnegie as having high or very high research activity.

[ii] Note that patenting data is reported by some universities at the state-system level (e.g., the UC system). The corresponding federal funding was aggregated across the same reporting universe.

Authors: Walter D. Valdivia
Image Source: © Ina Fassbender / Reuters
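The net-operating-income calculation behind the "cost center" claim in note [i] can be made concrete with a minimal sketch. The university names and dollar figures below are invented for illustration; the AUTM survey reports comparable quantities (gross licensing income and the legal and staffing costs of running a tech transfer office):

```python
# Hypothetical sketch of the tech transfer net-income calculation.
# All names and figures are invented; AUTM reports comparable fields.

offices = [
    # (university, licensing_income, office_expenses, legal_fees), $ millions
    ("U1", 120.0, 8.0, 25.0),  # a "blockbuster" winner
    ("U2", 15.0, 6.0, 7.0),    # barely covering expenses
    ("U3", 3.0, 4.0, 2.0),     # a cost center
]

def net_operating_income(income, expenses, legal_fees):
    """Licensing income minus the costs of running the tech transfer office."""
    return income - (expenses + legal_fees)

# Rank offices by net income, highest first, mirroring a licensing-revenue ranking.
ranked = sorted(
    ((name, net_operating_income(i, e, l)) for name, i, e, l in offices),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, net in ranked:
    status = "self-sustaining" if net > 0 else "cost center"
    print(f"{name}: net ${net:.1f}M ({status})")
```

In this toy ranking only the top entry earns large net revenue, while the bottom office loses money, which is the pattern the AUTM figures suggest for most universities.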
Can We Design a Good Technical Fix?
Published: Thu, 08 May 2014

Wouldn't it be great if complex social problems could be solved by technology? Alvin Weinberg suggested in 1967 that technical engineering could work better than social engineering; the argument advocated quick fixes to the most urgent problems of humanity, at least to alleviate pain while more complete solutions were worked out. However controversial this idea was, our reliance on technology has only increased since then. Still, over the same period we have also come to better appreciate the unanticipated consequences of technological advancement. In light of our experience leaping forward, as well as our tripping and tumbling along the way, we should weigh two considerations in designing a technological fix.

Consideration 1: Serious attention to unwanted consequences

A first-order consideration is the study of unwanted effects and tradeoffs introduced by the technology. Take, for instance, nanoparticles (particles in the range of one to a hundred nanometers) that enable new properties in the materials in which they are mixed; for instance, maintaining permeability in fine-particle filtration to make available inexpensive water purification devices for vulnerable populations. Once these nano-enabled filters reach the end of their usable life and are discarded, those minuscule particles could be released into the environment and exponentially increase the toxicity of the resulting waste. No less important than health and environmental effects are social, economic, and cultural consequences. Natural and social sciences are thus partners in the design of this kind of technological solution, and transdisciplinary research is needed to improve our understanding of the various dimensions relevant to these projects.
What is more, the incremental choices that set a particular technology along a developmental pathway demand a different kind of knowledge, because those choices are not merely technical; they involve values and preferences.

Consideration 2: Stakeholder engagement

But whose values and preferences matter? Surely everyone with a stake in the problem the fix is trying to solve will want to answer that question. If tech fixes are meant to address a specific social problem, those who will live with the consequences must have a say in the development of the solution. This prescription does not imply completely doing away with the current division of labor in technological development. Scientists and engineers need a degree of autonomy to work productively. Yet input from and participation by stakeholders must occur far in advance of the completion of the development process, because along the way a host of questions arise as to what trade-offs are acceptable. Non-experts are perfectly capable of answering questions about their values and preferences. The market system provides, to some extent, this kind of check for technologies advancing incrementally. In an ideal market scenario, one of high competition, the stakeholders on the demand side vote with their wallets, and companies refine their products to gain market share. But the development of a technological fix is neither incremental nor distributed in that manner. It is generally concentrated in a few hands and it is, by design, disruptive and revolutionary. That is why stakeholders must have a say in key developmental decisions, so as to carefully calibrate those technologies to the values and preferences of the very people they intend to help.

Translating these considerations into policy

In 1989, the federal government first funded a program for the analysis of the Ethical, Legal, and Social Implications (ELSI) of research within the Human Genome Project.
The influence this program had on the direction and key decisions of the HGP was at best modest; rather, it practically institutionalized a separation between the hard science and the understanding of the human and social dimensions of that science.[i] By the time the National Nanotechnology Initiative was launched in 1999, some ELSI-type programs sought to bridge the separation. With grants from the National Science Foundation, two centers for the study of nanotechnology in society were established, at the University of California, Santa Barbara and at Arizona State University. CNS-UCSB and CNS-ASU have become hubs for research on the governance of technological development that integrates the technical, social, and human dimensions. One such effort is a pilot program of real-time technology assessment (RTTA) that achieved a more robust engagement with the various stakeholders of emerging nanotechnologies (see citizens tech-forum) and tested interventions at several points in the research and development of nanotechnologies to integrate concerns from the social sciences and humanities (see socio-technical integration). Building upon those experiences, future federal funding of technological fixes must include ELSI analyses more like the aforementioned RTTA program, which, rather than being addenda to technical programs, are fully integrated into the decision structure of research and development efforts. Whenever emerging technologies such as additive manufacturing, synthetic biology, big data, or climate engineering are considered as the kernel of a technological fix, developers must understand that engineering the artifact itself does not suffice. An effective solution also requires the careful analysis of unwanted effects and a serious effort at stakeholder engagement, lest the solution be worse than the problem.

[i] See the ELSI Research Planning and Evaluation Group (ERPEG) final report published in 2000.
ERPEG was created in 1997 by the NIH's National Advisory Council for Human Genome Research (NACHGR) and the DOE's Biological and Environmental Research Advisory Committee (BERAC) to evaluate ELSI within the HGP and propose new directions for the 1998 five-year plan. After the final report, NIH and DOE ran ELSI programs separately, although with the ostensible intention of coordinating efforts. The separation between the technical and the social/human dimensions of scientific advancement institutionalized by the HGP ELSI program, and the radical alternative to it proposed by RTTA within the NNI, are elegantly described in Brice Laurent's "The Constitutional Effect of the Ethics of Emerging Technologies" (2013, Ethics and Politics XV(1), 251-271).

Authors: Walter D. Valdivia
Image Source: © Suzanne Plunkett / Reuters
The Study of the Distributional Outcomes of Innovation: A Book Review
Published: Mon, 05 Jan 2015

Editor's Note: This post is an extended version of a previous post.

Cozzens, Susan and Dhanaraj Thakur (Eds). 2014. Innovation and Inequality: Emerging Technologies in an Unequal World. Northampton, Massachusetts: Edward Elgar.

Historically, the debate on innovation has focused on the determinants of the pace of innovation, on the premise that innovation is the driver of long-term economic growth. Analysts and policymakers have taken less interest in how innovation-based growth affects income distribution. Even less attention has been given to the question of how innovation affects other forms of inequality, such as inequality in economic opportunity, social mobility, access to education, healthcare, and legal representation, or inequalities in exposure to insalubrious environments, be these physical (through exposure to polluted air, water, food, or harmful work conditions) or social (neighborhoods ridden with violence and crime). The relation between innovation, equal political representation, and the right of people to have a say in the collective decisions that affect their lives can also be added to the list of neglected topics. But neglect has not been universal. A small but growing group of analysts has been working for at least three decades to produce a more careful picture of the relationship between innovation and the economy. A distinguished vanguard of this group has recently published a collection of case studies that illuminates our understanding of innovation and inequality, which is also the title of the book. The book is edited by Susan Cozzens and Dhanaraj Thakur. Cozzens is a professor in the School of Public Policy and Vice Provost of Academic Affairs at Georgia Tech. She studied innovation and inequality long before inequality was a hot topic, and she led the group that collaborated on this book.
Thakur is a faculty member of the College of Public Service and Urban Affairs at Tennessee State University (while writing the book he taught at the University of the West Indies in Jamaica). He is an original and sensible voice in the study of the social dimensions of communication technologies. We'd like to highlight three aspects of the book: the research design, the empirical focus, and the conceptual framework developed from its case studies. Edited volumes are all too often a collection of disparate papers, but not in this case. This book is patently the product of a research design that probes the evolution of a set of technologies across a wide variety of national settings and, at the same time, examines the different reactions to new technologies within specific countries. The second part of the book devotes five chapters to five emerging technologies (recombinant insulin, genetically modified corn, mobile phones, open-source software, and tissue culture), observing the contrasts and similarities of their evolution in different national environments. In turn, part three considers the experience of eight countries: four of high income (Canada, Germany, Malta, and the U.S.) and four of medium or low income (Argentina, Costa Rica, Jamaica, and Mozambique). The stories in part three tell how these countries assimilated these diverse technologies into their economies and policy environments. The second aspect to highlight is the deliberate choice of elements for empirical focus. First, the object of inquiry is not all of technology but a discrete set of emerging technologies, a choice that gains a specificity that would otherwise be lost in handling the unwieldy concept of "technology" broadly construed. At the same time, this choice reveals the policy orientation of the book, because these new entrants have just started to shape the socio-technical spaces they inhabit, while the spaces of older technologies have likely ossified.
Second, the study offers ample variance in the jurisdictions under study, i.e., countries of all income levels, a decision that makes theory construction more difficult but the testing of general premises more robust.[i] We can add that the book avoids sweeping generalizations. Third, the authors focus on technological projects and their champions, a choice that increases the rigor of the empirical analysis. This choice naturally narrows the space of generality, but the lessons are more precise and the conjectures are presented with corresponding modesty. The combination of a solid design and a clear empirical focus allows the reader to obtain a sense of general insight from the cases taken together that could not be derived from any individual case standing alone. Economic and technology historians have tackled the effects of technological advancement, from the steam engine to the Internet, but those lessons are not easily applicable to the present, because emerging technologies intimate a different kind of reconfiguration of economic and social structures. It is still too early to know the long-term effects of new technologies like genetically modified crops or mobile-phone cash transfers, but this book does a good job of providing useful concepts that begin to form an analytical framework. In addition, the mix of country case studies subverts the disciplinary separation between the economics of innovation (devoted mostly to high-income countries) and development studies (interested in middle- and low-income economies). As a consequence of these selections, the reader can draw lessons that are likely to apply to technologies and countries other than the ones discussed in this book. The third aspect we would like to underscore in this review is the conceptual framework. Cozzens, Thakur, and their colleagues have done a service to anyone interested in pursuing the empirical and theoretical analysis of innovation and inequality.
For these authors, income distribution is only one part of the puzzle. They observe that inequalities are also part of social, ethnic, and gender cleavages in society. Frances Stewart, of Oxford University, introduced the notion of horizontal inequalities, or inequalities at the social-group level (for instance, across ethnic groups or genders). She developed the concept in contrast to vertical inequalities, or inequalities operating at the individual level (such as household income or wealth). The authors of this book borrow Stewart's concept, pay attention to horizontal inequalities in the technologies they examine, and observe that new technologies enter marketplaces that are already configured by historical forms of exclusion. A dramatic example is the lack of access to recombinant insulin in the U.S., because it is expensive and minorities are less likely to have health insurance (see Table 3.1 on p. 80).[ii] Another example is how innovation opens opportunities for entrepreneurs but closes them for women in cultures that systematically exclude women from entrepreneurial activities. Another key concept is that of complementary assets. A poignant example is the failure of recombinant insulin to reach poor patients in Mozambique, who are sent home with the old medicine even though insulin is subsidized by the government. The reason doctors deny the poor the new treatment is that these patients lack the literacy and household resources (e.g., a refrigerator, a clock) necessary to preserve the shots, inject themselves periodically, and read blood sugar levels. Technologies aimed at fighting poverty require complementary assets to be already in place; in their absence, they fail to mitigate suffering and, ultimately, to ameliorate inequality. Another illustration of the importance of complementary assets is given by the case of open-source software.
This technology has a nominal price of zero; however, only individuals who have computers and the time, disposition, and resources to learn how to use open-source operating systems benefit. Likewise, companies without the internal resources to adapt open software will not adopt it and will remain economically tied to proprietary software. These observations lead to two critical concepts elaborated in the book: distributional boundaries and inequalities across technological transitions. Distributional boundaries refer to the reach of the benefits of new technologies, boundaries that could be geographic (as in urban/suburban or center/periphery) or drawn across social cleavages or income levels. Standard models of technological diffusion assume the entire population will gradually adopt a new technology, but in reality, the authors observe, several factors intervene to limit the scope of diffusion to certain groups. The most insidious factors are monopolies that exercise sufficient control over markets to levy high prices. In these markets, the price becomes an exclusionary barrier to diffusion. This is quite evident in the case of mobile phones (see Table 5.1, p. 128), where monopolies (or oligopolies) have the market power to create and maintain a distributional boundary between post-pay, high-quality service for middle- and high-income clients and pre-pay, low-quality service for poor customers. This boundary renders pre-pay plans doubly regressive, because the per-minute rates are higher than post-pay rates and phone expenses represent a far larger percentage of poor people's income. Another example of exclusion happens with GMOs: in some countries, subsistence farmers cannot afford the prices of engineered seeds, a disadvantage that compounds their cost and health problems as they have to use more, and stronger, pesticides. A technological transition, as used here, is an inflection point in the adoption of a technology that reshapes its distributional boundaries.
When smart phones were introduced, a new market for second-hand or hand-me-down phones was created in Maputo; people who could not access the top technology got stuck with a sub-par system. By looking at tissue culture, the authors find that "whether it provides benefits to small farmers as well as large ones depends crucially on public interventions in the lower-income countries in our study" (p. 190). In fact, farmers in Costa Rica enjoy much better protections compared to those in Jamaica and Mozambique, because the governmental program created to support banana tissue culture was designed and implemented as an extension program aimed at disseminating know-how among small farmers, not exclusively at large multinational-owned farms. When the same technology was introduced, because of this different policy environment, the distributional boundaries were made much more extensive in Costa Rica. This is a book devoted to presenting the complexity of the innovation-inequality link. The authors are generous in their descriptions, punctilious in the analysis of their case studies, and cautious and measured in their conclusions. Readers who seek an overarching theory of inequality, a simple story, or a test of causality are bound to be disappointed. But those readers may find the highest reward in carefully reading all the case studies presented in this book, not only because of the edifying richness of the detail herein but also because they will be invited to rethink the proper way to understand and address the problem of inequality.[iii]

[i] These are clearly spelled out: "we assumed that technologies, societies, and inequalities co-evolved; that technological projects are always inherently distributional; and that the distributional aspects of individual projects and portfolios of projects are open to choice." (p. 6)

[ii] This problem has been somewhat mitigated since the Affordable Care Act entered into effect.

[iii] Kevin Risser contributed to this posting.
Authors: Walter D. Valdivia
Image Source: © Akhtar Soomro / Reuters
Innovation and manufacturing labor: a value-chain perspective
Published: Fri, 06 Mar 2015

Policies and initiatives to promote U.S. manufacturing would be well advised to take a value-chain perspective of this economic sector. Currently, our economic statistics do not include pre-production services to manufacturing, such as research and development or design, or post-production services, such as repair and maintenance or sales. Yet manufacturing firms invest heavily in these services because they are crucial to the success of their business. In a new paper, Kate Whitefoot and Walter Valdivia offer fresh insight into the sector's labor composition and trends by examining employment in manufacturing from a value-chain perspective. While the manufacturing sector shed millions of jobs in the 2002-2010 period, a period that included the Great Recession, employment in upstream services expanded: 26 percent for market analysis, 13 percent for research and development, and 23 percent for design and technical services. Average wages for these services increased over 10 percent in that period. Going forward, this pattern is likely to be repeated. Technical occupations, particularly in upstream segments, are expected to have the largest increases in employment and wages. In light of the findings, the authors offer the following recommendations:

Federal manufacturing policy: Expand PCAST's Advanced Manufacturing Partnership recommendations (specifically, for developing a national system of certifications for production skills and establishing a national apprenticeship program for skilled trades in manufacturing) to include jobs outside the factory, such as those in research and development, design and technical services, and market analysis.
Higher education: Institutions of higher education should consider some adjustment to their curricula, taking a long view of the coming changes to high-skill occupations, particularly with respect to problem identification and the management of uncertainty in highly automated work environments. In addition, universities and colleges should disseminate information among prospective and current students about the occupations where the largest gains in employment and the highest wage premiums are expected.

Improve national statistics: Supplement the North American Industry Classification System (NAICS) with data that permit tracking the entire value chain, including the development of a demand-based classification system. This initiative could benefit from adding survey questions that replicate the data collection of countries with a value-added tax (without introducing the tax, that is), allowing in this manner a more accurate estimation of the value added by each participant in a production network.

Whitefoot and Valdivia stress that any collective efforts aimed at invigorating manufacturing must seize the opportunities throughout the entire value chain, including upstream and downstream services to production.

Authors: Kate S. Whitefoot, Walter D. Valdivia, Gina C. Adam
Image Source: © Jeff Tuttle / Reuters
Technology transfer in an open society
Published: Mon, 23 Mar 2015

Recently, the University of Massachusetts Amherst courted controversy when it announced that it would not admit Iranian students into some programs in the College of Engineering and the College of Natural Sciences. The rule sought to comply with sanctions on Iran, but facing strong criticism from faculty and students, the university reversed itself and replaced the ban with a more flexible policy that would craft a special curriculum for Iranian students in the fields relevant to the ban. It is not yet clear how that policy will be implemented, but what has become patently clear is that a blanket ban on students by national origin is a transgression of the principles of an open society, including academic freedom. Very rarely will the knowledge created and taught at universities present a security risk that justifies the outright exclusion of an entire nationality from participating in the research and learning enterprise.

A controversial ban

Section 501 of the Iran Threat Reduction and Syria Human Rights Act of 2012 explicitly denies visas to Iranian nationals seeking to study in fields related to nuclear engineering or the energy sector. After the controversy, and in consultation with the State Department, the university replaced the ban with a policy of "individualized study plans" for Iranian students in the sanctioned fields. Questions remain as to the practicality of crafting study plans that exclude the kind of knowledge Iranians are not supposed to learn. One can imagine the inherent difficulty of asking some students to skip a few chapters of the textbook or to take a coffee break outside the lab when certain experiments are conducted. In a recent column, philosopher Behnam Taebi reminded us of a similar controversy, when the Dutch government tried to restrict the admission of Iranian students.
He offers a valuable lesson from both experiences: "the Iranian academic community has traditionally been a bastion of reformism—a tendency Western governments and universities have every interest in encouraging" and correctly concludes that a ban on Iranian students is self-defeating.

Universities export knowledge and values

The costs of constraining technology transfer could indeed outweigh its benefits, because study programs entail technical and cultural exchange at the same time. American universities export knowledge and technology, but they also export American values. Surely, not all values for export are exactly the height of civilization. Skeptics may point out that conspicuous consumption and reality TV are not worth disseminating, but these critics would do well to recall that neither social posing nor voyeurism was invented in the U.S.; what we see here are just new bottles for very old wine. In contrast, the best values for export are those of the American political tradition. Living in the U.S. affords international students regular exposure to that tradition in informal settings, such as community life and churchgoing, and in more formal ones, through the stupendous collections of university libraries and the campus curriculum on American history and political thought. Aside from the lofty and the frivolous, however, there are a few values that are inherent to university life. Of course, the U.S. does not have a monopoly on those values (they are inherent to all universities in stable democracies), but they are certainly part of the experience of any international student. Consider these three:

Stability: Students appreciate the relative quietude of university life. In the U.S., most campuses are physically designed as a refuge from the frantic pace of modern life and provide the peace and safety necessary to allow the mind to concentrate, grow, and discover.
Students coming from countries troubled by political instability and conflict are able to stop worrying about questions of subsistence or survival and can devote their attention to solving the puzzles of nature and society.

Meritocracy: Another value characteristic of academia is meritocracy. The system has its flaws, but academia, more than other walks of life, assigns rewards based on clear standards of performance. There are systemic problems and no absence of prejudice, but hard work and talent tend to be given their due.

Social awareness: A third value is a collective concern with public affairs in the local, national, and global spheres. Not everyone in the academic community is socially engaged, but on campus there is a steady supply of debate on contemporary issues and ample opportunity for voluntary work. Visitors will find it easy to engage friends and colleagues in relevant debates and to join them in meaningful action on and off campus.

Technology transfer is good diplomacy

Many international students remain in the U.S. after concluding their training, but they also keep ties to their families and scientific communities in their countries of origin. Others return home and may seek to reproduce there the stability, meritocracy, and engagement with social issues that were constitutive of their time at an American university. Some will seek reform within their own universities, and a few will go further and press for reform of their country's political system. Spreading the values of academic life in democratic societies is a legitimate and powerful approach to spreading democratic values around the world. Technology transfer as a term of art has evolved to recognize the two-way exchange of knowledge between research and industrial organizations. Likewise, values move both ways, and international students enrich American life by bringing their own values for export into their spheres.
The policy of American universities of remaining open to all nationalities is both an instrument and a symbol of an open society. Technology transfer by means of advanced training is indeed good diplomacy. Authors Walter D. Valdivia, Marga Gual Soler Image Source: © Christian Hartmann / Reuters Full Article
c University-industry partnerships can help tackle antibiotic resistant bacteria By webfeeds.brookings.edu Published On :: Wed, 25 Mar 2015 07:30:00 -0400 Last January, an academic-industrial partnership published in the prestigious journal Nature the results of the development of the antibiotic teixobactin. The reported work is still at an early preclinical stage, but it is nevertheless good news. Over the last decades, the introduction of new antibiotics has slowed nearly to a halt, and over the same period we have seen a dangerous increase in antibiotic-resistant bacteria. Such is the magnitude of the problem that it has attracted the attention of the U.S. government. Accepting several recommendations presented by the President’s Council of Advisors on Science and Technology (PCAST) in its comprehensive report, the Obama administration issued an Executive Order last September establishing an interagency Task Force for combating antibiotic-resistant bacteria and directing the Secretary of Health and Human Services (HHS) to establish an Advisory Council on this matter. More recently, the White House issued a strategic plan to tackle this problem. Etiology of antibiotic resistance Infectious diseases have been a major cause of morbidity and mortality from time immemorial. The early discovery of sulfa drugs in the 1930s and then antibiotics in the 1940s significantly aided the fight against these scourges. Following World War II, society experienced extraordinary gains in life expectancy and overall quality of life. During that period, marked by optimism, many people presumed victory over infectious diseases. However, overuse of antibiotics and a slowdown of innovation allowed bacteria to develop resistance at such a pace that some experts now speak of a post-antibiotic era. The problem is manifold: overuse of antibiotics, slow innovation, and bacterial evolution.
The overuse of antibiotics in both humans and livestock also facilitated the emergence of antibiotic-resistant bacteria. Responsibility falls on health care providers who prescribed antibiotics liberally and on patients who did not complete their prescribed dosages. Acknowledging this problem, the medical community has been training physicians to resist pressure from patients (and, in the case of children, their parents) to prescribe antibiotics for infections that are likely to be viral in origin. Educational efforts are also underway to encourage patients to complete the full course of every prescribed antibiotic and not to halt treatment when symptoms ease. The excessive use of antibiotics in food-producing animals is perhaps less manageable because it affects the bottom line of farm operations. For instance, the FDA reported that even though farmers were aware of the risks, antibiotic use in livestock increased by 16 percent from 2009 to 2012. The development of antibiotics—perhaps a more adequate term would be anti-bacterial agents—indirectly contributed to the problem by being incremental and by nearly stalling two decades ago. Many revolutionary innovations in antibiotics were introduced in a first period of development that started in the 1940s and lasted about two decades. Building upon scaffolds and mechanisms discovered theretofore, a second period of incremental development followed over three decades, through the 1990s, with roughly three new antibiotics introduced every year. High competition and little differentiation rendered antibiotics less and less profitable, and over a third period covering the last 20 years pharmaceutical companies have cut development of new antibiotics down to a trickle. The misguided overuse and misuse of antibiotics, together with the economics of antibiotic innovation, compounded the problem taking place in nature: bacteria evolve and adapt rapidly.
Current policy initiatives The PCAST report recommended federal leadership and investment to combat antibiotic-resistant bacteria in three areas: improving surveillance, increasing the longevity of current antibiotics through more judicious usage, and picking up the pace of development of new antibiotics and other effective interventions. To implement this strategy, PCAST suggested an oversight structure that includes a Director for National Antibiotic Resistance Policy, an interagency Task Force for Combating Antibiotic-Resistant Bacteria, and an Advisory Council to be established by the HHS Secretary. PCAST also recommended increasing federal support from $450 million to $900 million for core activities such as surveillance infrastructure and development of transformative diagnostics and treatments. In addition, it proposed $800 million in funding for the Biomedical Advanced Research and Development Authority to support public-private partnerships for antibiotics development. The Obama administration took up many of these recommendations and directed their implementation with the aforementioned Executive Order. More recently, it announced a National Strategy for Combating Antibiotic-Resistant Bacteria to implement the recommendations of the PCAST report.
The national strategy has five pillars: First, slow the emergence and spread of resistant bacteria by curbing the overuse of antibiotics in health care as well as in farm animals; second, establish national surveillance efforts that build surveillance capability across human and animal environments; third, advance the development and use of rapid and innovative diagnostics to provide more accurate care delivery and data collection; fourth, accelerate the invention process for new antibiotics, other therapeutics, and vaccines across all stages, including basic and applied research and development; finally, emphasize the importance of international collaboration and endorse the World Health Organization Action Plan to address antimicrobial resistance. University-Industry partnerships An important cause of our antibiotic woes, therefore, seems to be economic logic. On one hand, pharmaceutical companies have by and large abandoned investment in antibiotic development; competition and high substitutability have led to low prices, and in their financial calculations, pharmaceutical companies cannot justify new development efforts. On the other hand, farmers have found the use of antibiotics highly profitable and thus have no financial incentive to halt their use. There is nevertheless a mirror explanation of a political character. The federal government allocates about $30 billion for research in medicine and health through the National Institutes of Health. The government does not seek to crowd out private research investment; rather, the goal is to fund research the private sector would not conduct because the financial return of that research is too uncertain. Economic theory prescribes government intervention to address this kind of market failure. However, it is also government policy to privatize patents on discoveries made with public monies in order to facilitate their transfer from public to private organizations.
An unanticipated risk of this policy is the rebalancing of the public research portfolio to accommodate the growing demand for the kind of research that feeds into attractive market niches. The risk is that the more aligned public research and private demand become, the less research attention will be directed to medical needs without great market prospects. The development of new antibiotics seems to be just that kind of neglected medical public need. If antibiotics are unattractive to pharmaceutical companies, antibiotic development should be a research priority for the NIH. We know that it is unlikely that Congress will increase public spending for antibiotic R&D in the proportion suggested by PCAST, but the NIH could step in and rebalance its own portfolio to increase antibiotic research. Increasing NIH funding for antibiotics and rebalancing the NIH's own portfolio are both political decisions that are sure to meet organized resistance even stronger than antibiotic resistance. The second mirror explanation is that farmers have a well-organized lobby. It is no surprise that the Executive Order gingerly walks over recommendations for the farming sector and avoids any hint of an outright ban on antibiotic use, lest the administration be perceived as heavy-handed. Considering the magnitude of the problem, a political solution is warranted. Farmers' cooperation in addressing this national problem will have to be traded for subsidies and other extra-market incentives that compensate for lost revenues or higher costs. The administration would do well to work out the politics with farmer associations first, before they organize in strong opposition to any measure to curb antibiotic use in livestock. Addressing this challenge adequately will thus require working out solutions to both the economic and political dimensions of the problem.
Public-private partnerships, including university-industry collaboration, could prove to be a useful mechanism to balance the two sides of the equation. The development of teixobactin mentioned above is a good example of this prescription, as it resulted from a collaboration between the University of Bonn in Germany, Northeastern University, and NovoBiotic Pharmaceuticals, a start-up in Cambridge, Mass. If the NIH cannot secure an increase in research funding for antibiotic development and cannot substantially rebalance its portfolio, it can at least encourage Cooperative Research and Development Agreements as well as university start-ups devoted to developing new antibiotics. To promote public-private and university-industry partnerships, policy coordination is advised. The nascent enterprises will be greatly assisted if the government can help them raise capital by connecting them to venture funding networks or by implementing a loan guarantee program specific to antibiotics. It can also allow for expedited FDA approval, which would lessen the regulatory burden. Likewise, farmers may be convinced to discontinue the risky practice if innovation in animal husbandry can effectively replace antibiotic use. Public-private partnerships, particularly through university extension programs, could provide an adequate framework to test alternative methods, scale them up, and subsidize the transition to new sustainable practices that are not financially painful to farmers. Yikun Chi contributed to this post. More TechTank content available here. Authors Walter D. Valdivia, Michael S. Kinch Image Source: © Reuters Staff / Reuters Full Article
c Responsible innovation: A primer for policymakers By webfeeds.brookings.edu Published On :: Tue, 05 May 2015 00:00:00 -0400 Technical change is advancing at breakneck speed while the institutions that govern innovative activity slog forward trying to keep pace. The lag has created a need for reform in the governance of innovation. Reformers who focus primarily on the social benefits of innovation propose to unmoor the innovative forces of the market. Conversely, those who deal mostly with innovation's social costs wish to constrain it by introducing regulations in advance of technological developments. In this paper, Walter Valdivia and David Guston argue for a different approach to reforming the governance of innovation that they call "Responsible Innovation," because it seeks to imbue in the actors of the innovation system a more robust sense of individual and collective responsibility. Responsible innovation appreciates the power of free markets in organizing innovation and realizing social expectations but is self-conscious about the social costs that markets do not internalize. At the same time, the actions it recommends do not seek to slow down innovation: they do not constrain the set of options for researchers and businesses; they expand it. Responsible innovation is not a doctrine of regulation, much less an instantiation of the precautionary principle. Innovation and society can evolve down several paths, and the path forward is to some extent open to collective choice. The aim of a responsible governance of innovation is to make that choice more consonant with democratic principles.
Valdivia and Guston illustrate how responsible innovation can be implemented with three practical initiatives: Industry: Incorporating values and motivations into innovation decisions beyond the profit motive could help industry take a long view of those decisions and better manage its own costs associated with liability and regulation, while reducing the social cost of negative externalities. Consequently, responsible innovation should be an integral part of corporate social responsibility, considering that the latter has already become part of the language of business, from the classroom to the board room, and that it is effectively shaping, in some quarters, corporate policies and decisions. Universities and National Laboratories: Centers for Responsible Innovation, fashioned after the institutional reform of Institutional Review Boards to protect human subjects in research and the Offices of Technology Transfer created to commercialize academic research, could organize existing responsible innovation efforts at university and laboratory campuses. These Centers would formalize the consideration of the impacts of research proposals on legal and regulatory frameworks, economic opportunity and inequality, sustainable development and the environment, as well as ethical questions beyond the integrity of research subjects. Federal Government: Federal policy should improve its protections and support of scientific research while providing mechanisms of public accountability for research funding agencies and their contractors. Demanding a return on investment for every research grant is a misguided approach that devalues research and undermines trust between Congress and the scientific community. At the same time, scientific institutions and their advocates should improve public engagement and demonstrate their willingness and ability to be responsive to societal concerns and expectations about the public research agenda.
Second, if scientific research is a public good, by definition markets are not effective at commercializing it. New mechanisms to develop practical applications from federal research with little market appeal should be introduced to counterbalance the emphasis the current technology transfer system places on research that is ready for the market. Third, federal innovation policy needs to be better coordinated with other federal policy, including tax, industrial, and trade policy as well as regulatory regimes. It should also improve coordination with initiatives at the local and state level to improve the outcomes of innovation for each region, state, and metro area. Downloads Download the paper Authors Walter D. Valdivia, David H. Guston Full Article
c NASA considers public values in its Asteroid Initiative By webfeeds.brookings.edu Published On :: Tue, 19 May 2015 07:30:00 -0400 NASA's Asteroid Initiative encompasses efforts for the human exploration of asteroids—as well as the Asteroid Grand Challenge—to enhance asteroid detection capabilities and mitigate their threat to Earth. The human space flight portion of the initiative primarily includes the Asteroid Redirect Mission (ARM), which is a proposal to put an asteroid in orbit around the moon and send astronauts to it. The program originally contemplated two alternatives for closer study: capturing a small asteroid about 10 meters in diameter versus simply recovering a boulder from a much larger asteroid. Late in March, NASA offered an update of its plans. It has decided to retrieve a boulder from an asteroid near Earth's orbit—candidates are the asteroids 2008 EV5, Bennu, and Itokawa—and will place the boulder in lunar orbit for further study. This mission will help NASA develop a host of technical capabilities. For instance, Solar Electric Propulsion uses solar electric power to ionize and accelerate propellant atoms—in the absence of gravity, even a modicum of force can alter the trajectory of a body in outer space. Another related capability under development is the gravity tractor, which is based on the notion that even the modest mass of a spacecraft can exert sufficient gravitational force on an asteroid to ever so slightly change its orbit. The ARM spacecraft's mass could be further increased by capturing a boulder from the asteroid, enabling a test of how humans might steer threatening objects clear of the Earth in the future. Thus, NASA will have a second test of how to deflect near-Earth objects on a hazardous trajectory. The first test, implemented as part of the Deep Impact Mission, is a kinetic impactor; that is, crashing a spacecraft into an approaching object to change its trajectory.
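The gravity-tractor idea can be made concrete with a back-of-the-envelope calculation. The sketch below uses purely illustrative numbers (spacecraft mass, asteroid size and density, hover distance are all assumptions, not ARM mission specifications) to show why a heavier spacecraft matters: the tug it exerts is tiny, but sustained over a year it adds up to a measurable change in the asteroid's velocity.

```python
import math

# Back-of-the-envelope gravity-tractor estimate.
# All values below are illustrative assumptions, not actual ARM parameters.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

m_craft = 2.0e4        # assumed spacecraft-plus-boulder mass: 20 metric tons, kg
radius = 50.0          # assumed asteroid radius, m (a ~100 m diameter object)
density = 2.0e3        # assumed rocky-asteroid bulk density, kg/m^3
m_asteroid = density * (4.0 / 3.0) * math.pi * radius**3   # roughly 1e9 kg

hover = 200.0          # assumed distance between centers of mass, m

# Newtonian gravitational force between spacecraft and asteroid
force = G * m_craft * m_asteroid / hover**2    # newtons

# Acceleration imparted to the asteroid, and delta-v after one year of towing
accel = force / m_asteroid                     # m/s^2 (= G * m_craft / hover^2)
year = 365.25 * 24 * 3600                      # seconds in a year
delta_v = accel * year                         # m/s

print(f"force on asteroid: {force:.3e} N")
print(f"delta-v after one year: {delta_v * 1000:.2f} mm/s")
```

With these assumed numbers the tug is a few hundredths of a newton and the one-year delta-v is on the order of a millimeter per second. That sounds negligible, but applied years in advance it shifts an asteroid's arrival time enough to turn a potential hit into a miss, which is why adding the boulder's mass to the spacecraft strengthens the test.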
The Asteroid Initiative is a partner of the agency's Near Earth Object Observation (NEOO) program. The goal of this program is to discover and monitor space objects traveling on a trajectory that could pose the risk of hitting Earth with catastrophic effects. The program also seeks to develop mitigation strategies. The capabilities developed by ARM could also support other NASA programs, such as the manned exploration of Mars. NEOO has recently enjoyed an uptick in public support. It used to be funded at about $4 million in the 1990s, and in 2010 it was allocated a paltry $6 million. But then a redirection of priorities—linked to the transition from the Bush to the Obama administrations—increased funding for NEOO to about $20 million in 2012 and $40 million in 2014, and NASA is seeking $50 million for 2015. It is clear that NASA officials made a compelling case for the importance of NEOO; in fact, what they are asking for seems quite a modest amount if indeed asteroids pose an existential risk to life on Earth. At the same time, the instrumental importance of the program and the public funds devoted to it raise the question of whether taxpayers should have a say in the decisions NASA is making about how to proceed with the program. NASA has done something remarkable to help answer this question. Last November, NASA partnered with the ECAST network (Expert and Citizen Assessment of Science and Technology) to host a citizen forum assessing the Asteroid Initiative. ECAST is a consortium of science policy and advocacy organizations that specializes in citizen deliberations on science policy. The forum consisted of a dialogue with 100 citizens in Phoenix and Boston who learned more about the Asteroid Initiative and then commented on various aspects of the project. The participants, who were selected to approximate the demographics of the U.S. population, were asked to assess mitigation strategies to protect against asteroids.
They were introduced to four strategies: civil defense, gravity tractor, kinetic impactor, and nuclear blast deflection. As part of the deliberations, they were asked to consider the two aforementioned approaches to performing ARM. A consensus emerged around the boulder retrieval option, primarily because citizens thought that option offered better prospects for developing planetary defense technologies. This preference existed despite the excitement of capturing a full asteroid, which could potentially have additional economic impacts. The participants showed interest in promoting the development of mitigation capabilities at least as much as they wanted to protect traditional NASA goals such as the advancement of science and space flight technology. This is not surprising given that concerns about doomsday should reasonably take precedence over traditional research and exploration concerns. NASA could have decided to set ARM along the path of boulder retrieval exclusively on technical merits, but having conducted a citizen forum, the agency is now able to claim that this decision is also socially robust, which is to say, responsive to public values. In this manner, NASA has shown a promising method by which federal research agencies can increase their public accountability. In the same spirit of responsible research and innovation, a recent Brookings paper I authored with David Guston—who is a co-founder of ECAST—proposes a number of other innovative ways in which the innovation enterprise can be made more responsive to public values and social expectations. Kudos to NASA for being at the forefront of innovation in space exploration and public accountability. Authors Walter D. Valdivia Image Source: © Handout . / Reuters Full Article
c The politics of federal R&D: A punctuated equilibrium analysis By webfeeds.brookings.edu Published On :: Wed, 17 Jun 2015 00:00:00 -0400 The fiscal budget has become a casualty of political polarization, and even functions that had enjoyed bipartisan support, like research and development (R&D), are becoming divisive issues on Capitol Hill. As a result, federal R&D is likely to grow pegged to inflation or, worse, decline. With the size of the pie fixed or shrinking, requests for R&D funding increases will trigger an inter-agency zero-sum game that will play out as pointless comparisons of agencies' merit or, worse, as a contest to attract the favor of Congress or the White House. This insidious politics will be made worse still by the growing tendency to equate public accountability with the measurement of performance. Political polarization, tight budgets, and pressure for quantifiable results threaten to undermine the sustainability of public R&D. The situation raises the question: What can federal agencies do to deal with the changing politics of federal R&D? In a new paper, Walter D. Valdivia and Benjamin Y. Clark apply punctuated equilibrium theory to examine the last four decades of federal R&D, both at the aggregate and the agency level. Valdivia and Clark observe a general upward trend driven by gradual increases. In turn, budget leaps or punctuations are few and far between and do not appear to have lasting effects. As the politics of R&D are stirred up, federal departments and agencies are sure to find that proposing punctuations is becoming more costly and risky. Consequently, agencies would be well advised to secure stable growth in their R&D budgets over the long run rather than pushing for short-term budget leaps.
While appropriations history would suggest that the stability of R&D spending resulted from the character of budget politics, in the future stability will need the stewardship of R&D champions who work to institutionalize gradualism, this time in spite of the politics. Downloads Download the paper Authors Walter D. Valdivia, Benjamin Y. Clark Full Article
c Patent infringement suits have a reputational cost for universities By webfeeds.brookings.edu Published On :: Tue, 10 Nov 2015 07:30:00 -0500 Universities cash in handsome awards in infringement cases Last month, a jury found Apple Inc. guilty of infringing a patent of the University of Wisconsin-Madison (UW) and ordered the tech giant to pay $234 million. The university scored a big financial victory, but this hardly meant any gain for the good name of the university. The plaintiffs argued successfully in court that Apple infringed their 1998 patent on a predictor circuit that greatly improved the efficiency of the microchips used in the popular iPhone 5s, 6, and 6 Plus. Apple first responded by challenging the validity of the patent, but the U.S. Patent and Trademark Office ruled in favor of the university. Apple plans to appeal, but the appellate court is not likely to reverse the lower court's decision. This is not the first time this university has asserted its patent rights (UW sued Intel in 2008 over this exact same patent and reportedly settled for $110 million). Nor is this the first time universities in general have taken infringers to court. Prominent cases in recent memory include Boston University, which sued several companies for infringement of a patent for blue light-emitting diodes and settled out of court with most of them, and Carnegie Mellon, which was awarded $237 million by the federal appellate court in its infringement suit against Marvell, a semiconductor company, for its use of an enhanced detector of data in hard drives called Kavcic detectors. Means not always aligned with aims in patent law When university inventions emerge from federal research grants, universities can also sue the infringers, but in those cases they would be testing the accepted interpretations of current patent law.
The Bayh-Dole Act of 1980 extended patent law and gave small businesses and universities the right to take title to patents from federal grants—later it was amended to extend that right to all federal grantees regardless of size. The ostensible aim of this act is “to promote the utilization of inventions arising from federally supported research or development.” Under the law, a condition for universities to keep their exclusive rights on those patents is that they or their licensees take “effective steps to achieve practical application” of those patents. Bayh-Dole was not designed to create a new source of revenue for universities. If companies are effectively using university technologies, Bayh-Dole's purpose is served without need of the patents. To understand this point, consider a counterfactual: What if the text of Bayh-Dole had originally been composed to grant a conditional right to patents for federal research grantees? The condition could be stated like this: “This policy seeks to promote the commercialization of federally funded research and to this end it will use the patent system. Grantees may take title to patents if and only if other mechanisms for disseminating and developing those inventions into useful applications prove unsuccessful.” Under this imagined text, universities could still take title to patents on their inventions if they or the U.S. Patent and Trademark Office were not aware that the technologies were being used in manufactures. But no court would find their infringement claim meritorious if the accused companies could demonstrate that, absent willful infringement, they had in fact used the technologies covered by university patents in their commercial products. In this case, other mechanisms for disseminating and developing the technologies would have proven successful indeed.
The reality that Bayh-Dole did not mandate such a contingent assignment of rights creates a contradiction between its aims and the means chosen to advance those aims for the subset of patents that were already in use by industry. I should clarify that the predictor circuit, the blue-light diode, and the Kavcic detectors are not in that subset of patents. But even if they were, there is no indication that the University of Wisconsin-Madison would have exercised its patent rights with any less vigor just because the original research was funded by public funds. Today, universities are fully expected to aggressively assert their patent rights regardless of the source of funding for the original research. You can have an answer for every question and still lose the debate It is this litigious attitude that puts off many observers. While the law may very well allow universities to be litigious, universities could still refuse to exercise their rights under circumstances in which those rights are not easily reconciled with the public mission of the university. University administrators, tech transfer personnel, and particularly the legal teams winning infringement cases have legitimate reasons to wonder why universities are publicly scorned. After all, they are acting within the law and simply protecting their patent rights; they are doing what any rational person would do. They may be truly surprised when critics accuse universities of becoming allies of patent trolls, or of aiding and abetting their actions. Such accusations are unwarranted. Trolls are truants; universities are venerable institutions. Patent trolls exploit the ambiguities of patent law and the burdens of due process to their own benefit and to the detriment of truly productive businesses and persons. In stark contrast, universities are long-established partners of democracy, respected across ideological divides for their abundant contributions to society.
The critics may not be fully considering the intricacies of patent law. Or they may forget that universities are in need of additional revenue—higher education has not seen public financial support increase in recent years, with federal grants roughly stagnant and state funding falling drastically in some states. Critics may also ignore that revenues collected from licensing of patents, favorable court rulings, and out-of-court settlements are to a large extent (usually two-thirds of the total) plugged back into the research enterprise. University attorneys may have an answer for every point that critics raise, but the overall concern of critics should not be dismissed outright. Given that many if not most university patents can be traced back to research funded by tax dollars, there is a legitimate reason for observers to expect universities to manage their patents with a degree of restraint. There is also a legitimate reason for public disappointment when universities do not seem to endeavor to balance the tensions between their rights and duties. Substantive steps to improve the universities' public image Universities can become more responsive to public expectations about their character not only by promoting their good work, but also by taking substantive steps to correct misperceptions. First, when universities discover a case of proven infringement, they should take companies to court only as a measure of last resort. If a particular company refuses to negotiate in good faith and an infringement case ends up in court, universities should be prepared to demonstrate to the court of public opinion that they have tried, with sufficient insistence and time, to negotiate a license and even made concessions in pricing the license.
In the case of the predictor circuit patent, it seems that the University of Wisconsin-Madison tried to license the technology and Apple refused, but the university would be in a much better position if it could demonstrate that the licensing deals offered to Apple would have turned out to be far less expensive for the tech company. Second, universities would be well advised not to join any efforts to lobby Congress for stronger patent protection. At least two reasons substantiate this suggestion. First, as a matter of principle, the dogmatic belief that without patents there is no innovation is wrong. Second, as a matter of material interest, universities as a group do not have a financial interest in patenting. It's worth elaborating these points a bit more. Neither historians nor social science researchers have settled the question about the net effects of patents on innovation. While there is evidence of social benefits from patent-based innovation, there is also evidence of social costs associated with patent monopolies, and even more evidence of momentous innovations that required no patents. What's more, the net social benefit varies across industries and over time. Research shows economic areas in which patents do spur innovation and economic sectors where they actually hinder it. This research explains, for instance, why some computer and Internet giants lobby Congress in the opposite direction to the biotech and big pharma industries. Rigorous industrial surveys of the 1980s and 1990s found that companies in most economic sectors did not use patents as their primary tool to protect their R&D investments. Yet patenting has increased rapidly over the past four decades. This increase includes industries that were once uninterested in patents. Economic analyses have shown that this new patenting is a business strategy against patent litigation. Companies are building patent portfolios as a defensive strategy, not because they are innovating more.
The university’s public position on patent policy should acknowledge that the debate on the impact of patents on innovation is not settled and that this impact cannot be observed in the aggregate, but must be considered in the context of each specific economic sector, industry, or even market. From this vantage point, universities could then turn up or down the intensity with which they negotiate licenses and pursue compensation for infringement. Universities would better assert their commitment to their public mission if they compute on a case by case basis the balance between social benefits and costs for each of its controversial patents. As to the material interest in patents, it is understandable that some patent attorneys or the biotech lobby publicly espouse the dogma of patents, that there is no innovation without patents. After all, their livelihood depends on it. However, research universities as a group do not have any significant financial interest in stronger patent protection. As I have shown in a previous Brookings paper, the vast majority of research universities earn very little from their patent portfolios and about 87% of tech transfer offices operate in the red. Universities as a group receive so little income from licensing and asserting their patents relative to the generous federal support (below 3%), that if the federal government were to declare that grant reviewers should give a preference to universities that do not patent, all research universities would stop the practice at once. It is true that a few universities (like the University of Wisconsin-Madison) raise significant revenue from their patent portfolio, and they will continue to do so regardless of public protestations. But the majority of universities do not have a material interest in patenting. Time to get it right on anti-troll legislation Last year, the House of Representative passed legislation closing loopholes and introducing disincentives for patent trolls. 
Just as mirror legislation was about to be considered in the Senate, Sen. Patrick Leahy withdrew it from the Judiciary Committee. It was reported that Sen. Harry Reid forced the hand of Mr. Leahy to kill the bill in committee. In the public sphere, the shrewd lobbying efforts to derail the bill were perceived to be pro-troll interests. The lobbying came from pharmaceutical companies, biotech companies, patent attorneys, and, to the surprise of everyone, universities. Little wonder that critics overreacted and suggested universities were in partnership with trolls: even if they were wrong, these accusations stung. University associations took that position out of a sincere belief in the dogma of patents and out of fear that the proposed anti-troll legislation limited their ability to sue patent infringers. However, their convictions stand on shaky ground and their material interests are not those of the vast majority of universities. A reversal of that position is not only possible, but would be timely. When anti-troll legislation is again introduced in Congress, universities should distance themselves from efforts to protect the policy status quo that so benefits patent trolls. It is not altogether improbable that Congress sees fit to exempt universities from some of the requirements that the law would impose. University associations could show Congress the merit of such exemptions in consideration of the universities’ constant and significant contributions to states, regions, and the nation. However, no such concessions could ever be expected if the universities continue to place themselves in the company of those who profit from patent management. No asset is more valuable for universities than their prestige. It is the ample recognition of their value in society that guarantees tax dollars will continue to flow into universities. While acting legally to protect their patent rights, universities are nevertheless toying with their own legitimacy. 
Let those universities that stand to gain from litigation act in their self-interest, but do not let them speak for all universities. When university associations advocate for stronger patent protection, they do the majority of universities a disservice. These associations should better represent the interests of all their members by advocating a more neutral position about patent reform, by publicly praising universities’ restraint on patent litigation, and by promoting a culture and readiness in technology transfer offices to appraise each patent not by its market value but by its social value. At the same time, the majority of universities that obtain neither private nor social benefits from patenting should press their political representatives to adopt a more balanced approach to policy advocacy, lest they squander the reputation of the entire university system. Authors Walter D. Valdivia Image Source: © Stephen Lam / Reuters Full Article
c Patent infringement suits have a reputational cost for universities By webfeeds.brookings.edu Published On :: Fri, 04 Dec 2015 07:30:00 -0500 This post originally appeared on the Center for Technology Innovation’s TechTank blog.

Universities cash in on handsome awards in infringement cases

This October, a jury found Apple Inc. guilty of infringing a patent held by the University of Wisconsin-Madison (UW) and ordered the tech giant to pay $234 million. The university scored a big financial victory, but the verdict hardly did any favors for its good name. The plaintiffs argued successfully in court that Apple infringed their 1998 patent on a predictor circuit that greatly improved the efficiency of the microchips used in the popular iPhone 5s, 6, and 6 Plus. Apple first responded by challenging the validity of the patent, but the US Patent and Trademark Office ruled in favor of the university. Apple plans to appeal, but the appellate court is not likely to reverse the lower court’s decision.

This is not the first time this university has asserted its patent rights (UW sued Intel in 2008 over this very same patent and reportedly settled for $110 million). Nor is it the first time universities in general have taken infringers to court. Prominent cases in recent memory include Boston University, which sued several companies for infringement of a patent on blue light-emitting diodes and settled out of court with most of them, and Carnegie Mellon, which was awarded $237 million by the federal appellate court in its infringement suit against Marvell, a semiconductor company, for its use of an enhanced detector of data in hard drives called Kavcic detectors.

Means not always aligned with aims in patent law

When university-patented inventions emerge from federal research grants, infringement suits test the accepted interpretations of current patent law.
The Bayh-Dole Act of 1980 extended patent law and gave small businesses and universities the right to take title to patents from federal research grants—later it was amended to extend the right to all federal grantees regardless of size. The ostensible aim of the act is “to promote the utilization of inventions arising from federally supported research or development.” Under the law, a condition for universities (or any other government research performers) to keep their exclusive rights to those patents is that they or their licensees take “effective steps to achieve practical application” of those patents. Bayh-Dole was not designed to create a new source of revenue for universities. If companies are effectively using university technologies, Bayh-Dole’s purpose is served without any need for patents.

To understand this point, consider a counterfactual: What if the text of Bayh-Dole had been originally composed to grant a conditional right to patents for federal research grantees? The condition could be stated like this: “This policy seeks to promote the commercialization of federally funded research and to this end it will use the patent system. Grantees may take title to patents if and only if other mechanisms for disseminating and developing those inventions into useful applications prove unsuccessful.” Under this imagined text, universities could still take title to patents on their inventions if they or the U.S. Patent and Trademark Office were not aware that the technologies were already being used in industry. But no court would find their infringement claim meritorious if the accused companies could demonstrate that, absent willful infringement, they had in fact used the technologies covered by university patents in their commercial products. In that case, other mechanisms for disseminating and developing the technologies would have proven successful indeed.
The reality that Bayh-Dole did not mandate such a contingent assignment of rights creates a contradiction between its aims and the means chosen to advance those aims for the subset of patents already in use by industry. I should note that UW’s predictor circuit resulted from grants from NSF and DARPA, and there is no indication that the university exercised its patent rights with any less vigor because the original research was publicly funded. In fact, universities are fully expected to assert their patent rights aggressively regardless of the source of funding for the original research.

You can have an answer for every question and still lose the debate

It is this litigious attitude that puts off many observers. While the law may very well allow universities to be litigious, universities could still refuse to exercise their rights under circumstances in which those rights are not easily reconciled with the public mission of the university. University administrators, tech transfer personnel, and particularly the legal teams winning infringement cases have legitimate reasons to wonder why universities are publicly scorned. After all, they are acting within the law and simply protecting their patent rights; they are doing what any rational actor would do. They may be genuinely surprised when critics accuse universities of becoming allies of patent trolls, or of aiding and abetting their actions. Such accusations are unwarranted. Trolls are truants; universities are venerable institutions. Patent trolls exploit the ambiguities of patent law and the burdens of due process to their own benefit and to the detriment of truly productive businesses and persons. In stark contrast, universities are long-established partners of democracy, respected across ideological divides for their abundant contributions to society.

The critics may not be fully considering the intricacies of patent law.
Or they may forget that universities are in need of additional revenue—public financial support for higher education has not increased in recent years, with federal grants roughly stagnant and state funding falling drastically in some states. Critics may also ignore that revenues collected from patent licensing, favorable court rulings, and out-of-court settlements are to a large extent (usually two thirds of the total) plowed back into the research enterprise. University attorneys may have an answer for every point that critics raise, but the overall concern of critics should not be dismissed outright. Given that many if not most university patents can be traced back to research funded by tax dollars, observers have a legitimate reason to expect universities to manage their patents with a degree of restraint. There is also a legitimate reason for public disappointment when universities do not appear to be balancing their rights against their duties.

Substantive steps to improve the universities’ public image

Universities can become more responsive to public expectations about their character not only by promoting their good work, but also by taking substantive steps to correct misperceptions. First, when universities discover a case of proven infringement, they should take companies to court only as a measure of last resort. If a particular company refuses to negotiate in good faith and an infringement case ends up in court, universities should be prepared to demonstrate to the court of public opinion that they tried, with sufficient insistence and time, to negotiate a license and even made concessions in pricing it.
In the case of the predictor circuit patent, it seems that the University of Wisconsin-Madison tried to license the technology and Apple refused, but the university would be in a much better position if it could demonstrate that the licensing deals offered to Apple would have turned out to be far less expensive for the tech company.

Second, universities would be well advised not to join any efforts to lobby Congress for stronger patent protection. At least two reasons support this suggestion. First, as a matter of principle, the dogmatic belief that without patents there is no innovation is wrong. Second, as a matter of material interest, universities as a group do not have a financial stake in patenting. These points are worth elaborating.

Neither historians nor social science researchers have settled the question of the net effects of patents on innovation. While there is evidence of social benefits from patent-based innovation, there is also evidence of social costs associated with patent monopolies, and even more evidence of momentous innovations that required no patents. What’s more, the net social benefit varies across industries and over time. Research identifies economic sectors in which patents do spur innovation and others in which they actually hinder it. This research explains, for instance, why some computer and Internet giants lobby Congress in the opposite direction from the biotech and big pharma industries. Rigorous industrial surveys of the 1980s and 1990s found that companies in most economic sectors did not use patents as their primary tool to protect their R&D investments. Yet patenting has increased rapidly over the past four decades, including in industries that once were uninterested in patents. Economic analyses have shown that this new patenting is a business strategy against patent litigation: companies are building patent portfolios as a defensive measure, not because they are innovating more.
The university’s public position on patent policy should acknowledge that the debate on the impact of patents on innovation is not settled, and that this impact cannot be observed in the aggregate but must be considered in the context of each specific economic sector, industry, or even market. From this vantage point, universities could then turn up or down the intensity with which they negotiate licenses and pursue compensation for infringement. Universities would better assert their commitment to their public mission if they computed, on a case-by-case basis, the balance of social benefits and costs for each of their controversial patents.

As to the material interest in patents, it is understandable that some patent attorneys or the biotech lobby publicly espouse the dogma of patents, that there is no innovation without patents. After all, their livelihood depends on it. However, research universities as a group do not have any significant financial interest in stronger patent protection. As I have shown in a previous Brookings paper, the vast majority of research universities earn very little from their patent portfolios, and about 87% of tech transfer offices operate in the red. Universities as a group receive so little income from licensing and asserting their patents relative to generous federal support (below 3%) that if the federal government were to declare that grant reviewers should give preference to universities that do not patent, all research universities would stop the practice at once. It is true that a few universities (like the University of Wisconsin-Madison) raise significant revenue from their patent portfolios, and they will continue to do so regardless of public protestations. But the majority of universities do not have a material interest in patenting.

Time to get it right on anti-troll legislation

Last year, the House of Representatives passed legislation closing loopholes and introducing disincentives for patent trolls.
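The revenue arithmetic behind the universities' weak material interest is simple enough to sketch. The figures below are entirely hypothetical, chosen only to echo the orders of magnitude cited above (licensing income below 3% of federal support, roughly two thirds of it returned to the research enterprise); they are not actual data for any institution.

```python
# Hypothetical sketch of the licensing-revenue arithmetic discussed above.
# All figures are invented for illustration; they merely mirror the text's
# orders of magnitude.

federal_support = 40_000.0   # assumed annual federal research funding, $ millions
licensing_income = 1_000.0   # assumed gross licensing revenue, $ millions

# Licensing income as a share of federal support (the text puts it below 3%)
share = licensing_income / federal_support
print(f"Licensing income / federal support: {share:.1%}")

# If two thirds of licensing revenue is plowed back into the research
# enterprise, only one third remains as discretionary income.
discretionary = licensing_income / 3
print(f"Discretionary licensing income: ${discretionary:,.0f}M")
```

Even under these generous assumptions, the discretionary residual is a rounding error next to federal support, which is the article's point: as a group, universities have little financial reason to fight for stronger patent protection.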
Just as mirror legislation was about to be considered in the Senate, Sen. Patrick Leahy withdrew it from the Judiciary Committee. It was reported that Sen. Harry Reid forced Mr. Leahy’s hand to kill the bill in committee. In the public sphere, the shrewd lobbying efforts to derail the bill were perceived as serving pro-troll interests. The lobbying came from pharmaceutical companies, biotech companies, patent attorneys, and, to the surprise of everyone, universities. Little wonder that critics overreacted and suggested universities were in partnership with trolls: even if they were wrong, these accusations stung.

University associations took that position out of a sincere belief in the dogma of patents and out of fear that the proposed anti-troll legislation would limit the universities’ ability to sue patent infringers. However, their convictions stand on shaky ground, and only a few universities sue for infringement. In taking that policy position, university associations are representing neither the interests nor the beliefs of the vast majority of universities. A reversal of that position is not only possible but would be timely. When anti-troll legislation is again introduced in Congress, universities should distance themselves from efforts to protect the policy status quo that so benefits patent trolls. It is not altogether improbable that Congress would see fit to exempt universities from some of the requirements the law would impose. University associations could show Congress the merit of such exemptions in consideration of the universities’ constant and significant contributions to states, regions, and the nation. However, no such concessions can be expected if universities continue to place themselves in the company of those who profit from patent management.

No asset is more valuable for universities than their prestige. It is the ample recognition of their value to society that guarantees tax dollars will continue to flow into universities.
While acting legally to protect their patent rights, universities are nevertheless toying with their own legitimacy. Let those universities that stand to gain from litigation act in their self-interest, but do not let them speak for all universities. When university associations advocate for stronger patent protection, they do the majority of universities a disservice. These associations should better represent the interests of all their members by advocating a more neutral position about patent reform, by publicly praising universities’ restraint on patent litigation, and by promoting a culture and readiness in technology transfer offices to appraise each patent not by its market value but by its social value. At the same time, the majority of universities that obtain neither private nor social benefits from patenting should press their political representatives to adopt a more balanced approach to policy advocacy, lest they squander the reputation of the entire university system. Editor's Note: The post was corrected to state that UW’s predictor circuit did originate from federally funded research. Authors Walter D. Valdivia Image Source: © Stephen Lam / Reuters Full Article
c Stuck in a patent policy rut: Considerations for trade agreements By webfeeds.brookings.edu Published On :: Thu, 17 Dec 2015 07:30:00 -0500 International development debates of the last four decades have ascribed ever greater importance to intellectual property rights (IPRs). There has also been a significant effort on the part of the U.S. to encourage its trade partners to introduce and enforce patent law modeled after American intellectual property law. Aside from the debate over the impact of patents on innovation, international harmonization has some important consequences for the obduracy of the terms of trade agreements.

The position of the State Department on patents when negotiating trade agreements has consistently been one of defending stronger patent protection. However, the high-tech sector is under reorganization, and the most innovative industries today disagree strongly about the value of patents for innovation. This situation raises the question of why the national posture on patent law so consistently favors industries such as pharmaceuticals or biotech to the detriment of software developers and Internet-based companies. The State Department defends this posture by arguing that the U.S. has a comparative advantage in sectors dependent on patent protection; therefore, to promote exports, our national trade policy should create incentives for partners to come into line with American patent law. This posture will become problematic when America’s competitive advantage shifts to sectors that find patents a hindrance to innovation, because too much effort will already have been invested in twisting the arm of our trade partners. It will be hard to undo those chapters in trade agreements, particularly after our trade partners have taken pains to pass laws aligned with American law.
Related to the previous concern, the policy inertia and inflexibility effect applies to domestic policy as much as it does to trade agreements. When other nations adopt policy regimes following the American model, advocates of stronger patent protection will use international adoption as an argument for keeping the domestic policy status quo. The pressure we place on our trade partners to strengthen patent protection (via trade agreements and other mechanisms like the Special 301 Report) will be forgotten. Advocates will present those trade partners as having adopted the enlightened laws of the U.S., and ask why American lawmakers would wish to change law that inspires international emulation. Innovation scholar Timothy Simcoe has correctly suggested that harmonization creates inflexibility in domestic policy. Indeed, in a not-too-distant future, the rapid transformation of the economy, new big market players, and emerging business models may give policymakers the feeling that we are stuck in a patent policy rut whose usefulness has expired.

In addition, there are indirect economic effects of projecting national patent law onto trade agreements. If we assume that a club of economies (such as the OECD) generates most of the innovation worldwide while the rest of the world simply adopts new technologies, the innovation club would control the global supply of high value-added goods and services and be able to preserve a terms-of-trade advantage. In this scenario, stronger patent protection may be in the interest of the innovation club to the extent that its competitive advantage remains in industries dependent on patent protection. But should the world economic order change, with the innovation club specializing in digital services while the rest of the world takes on larger segments of manufacturing, the advantage may shift outside the innovation club. This is not a far-fetched scenario.
Emerging economies have increased their service economies in addition to their manufacturing capacity; overall, they are better integrated into global supply chains. What is more, these emerging economies have growing consumption markets that will become ever more relevant globally as they continue to grow faster than rich economies. Nor is the innovation club likely to retain a monopoly on global innovation for long. Within the emerging economies, another club is making great investments in developing innovative capacity. In particular, China, India, Brazil, Mexico, and South Africa (and possibly Russia) have strengthened their innovation systems by expanding public investments in R&D and introducing institutional reforms to foster entrepreneurship. The innovation of this second club may, in a world of harmonized patent law, increase these countries’ competitive advantage by securing monopolistic control of key high-tech markets. As industries less reliant on patents flourish and the digital economy transforms US markets, an inflexible patent policy regime may actually be detrimental to American terms of trade.

I should stress that these kinds of political and economic effects of America’s posture on IPRs in trade policy are not merely speculative. Just as manufacturing displaced the once-dominant agricultural sector, and services in turn took over as the largest sector of the economy, we can fully expect that the digital economy—with its preference for limited use of patents—will become not only more economically relevant, but also more politically influential. The tensions observed in international trade, and especially the considerations above, merit revisiting the rationale for America’s posture on intellectual property policy in trade negotiations. Elsie Bjarnason contributed to this post. Authors Walter D. Valdivia Image Source: © Romeo Ranoco / Reuters Full Article
c State of the Union’s challenge: How to make tech innovation work for us? By webfeeds.brookings.edu Published On :: Thu, 14 Jan 2016 07:30:00 -0500 Tuesday night, President Obama presented four critical questions about the future of America, and I should like to comment on the first two: how to produce equal opportunity, emphasizing economic security for all, and, in his words, “how do we make technology work for us, and not against us,” particularly to meet the “urgent challenges” of our days.

The challenges the president wishes to meet by means of technological development are climate change and cancer. Let’s consider cancer first. There are plenty of reasons to be skeptical: this is not the first presidential war on cancer. President Nixon tried that once and, alas, cancer still has the upper hand. It is ironic that Mr. Obama chose this particular “moonshot,” because not only are the technical aspects of cancer more uncertain than those of space travel, but political support for the project is vastly different, and we cannot be sure that even another Democrat in the White House would see this project to fruition. In effect, neither Mr. Obama nor his appointed “mission control,” Vice President Biden, has time in office to see fruits from their efforts on this front.

The second challenge the president wishes to address with technology is problematic beyond technical and economic feasibility (producing renewable energy at competitive prices): curbing carbon emissions has become politically intractable. The president correctly suggested that being leaders in the renewable energy markets of the future makes perfect business sense, even for global warming skeptics. Nevertheless, markets have a political economy, and current energy giants have a material interest in not allowing any changes to the rules that so favor them (including significant federal subsidies).
Only when the costs of exploration, extraction, and distribution of fossil fuels rise above those of renewable sources can we expect policy changes enabling an energy transition to become feasible. And when renewables are competitive on a large scale, it is not very likely that their production will be controlled by new industrial players. Such is the political economy of free markets. What’s more, progressives should be wary of standard solutions that would raise the cost of energy (such as a tax on carbon emissions), because low-income families are quite sensitive to energy prices; the cost of electricity, gas, and transportation is a far larger proportion of their income than it is for their wealthier neighbors.

It’s odd that the president proposes technological solutions to challenges that call for a political solution. Again, in saying this, I’m allowing the assumption that the technical side is manageable, which is not necessarily sound. The technical and economic complexity of these problems should only compound the political hurdles.

If I’m skeptical that technological fixes will curb carbon emissions or cure cancer, I am simply vexed by the president’s answer to the question of economic opportunity and security: expand the safety net. It is not that it wouldn’t work; it worked wonders creating prosperity and enlarging the middle class in the post-World War II period. The problem is that enacting welfare-state policies promises to be a hard political battle that, even if won, could end in Pyrrhic victories. Mr. Obama’s greatest achievement in expanding the safety net was, of course, the Affordable Care Act. But this policy success came at a very high cost: a majority of voters have questions about the legitimacy of that policy. Even its eponymous name, Obamacare, was coined as a term of derision. It is bizarre that opposition to this reform is often found among people who benefit from it.
We can blame the systematic campaign against it in every electoral contest, the legal subterfuges brought up to dismantle it (which the ACA survived, severely bruised), and the AM radio vitriol, but even controlling for the dirty war on healthcare reform, passing such monumental legislation strictly along party lines has made it the lightning rod of distrust in government. Progressives are free to try to increase economic opportunity following the welfare-state textbook. They will meet the same opposition that Mr. Obama encountered.

However, where progressives and conservatives could agree is on increasing opportunities for entrepreneurs, and nothing gives an edge to free enterprise more than innovation. Market competition is the selection mechanism by which an elite of enterprises rises from the legion created in any given year; this elite, equipped with a new productive platform, can arm-wrestle markets from the old guard of incumbents. This is not the only way innovation takes place: monopolies and cartels can produce innovation too, but with different outcomes. In competitive markets, innovation is the instrument of product differentiation; therefore, it improves quality and cuts consumer prices. In monopolistic markets, innovation also takes place, but generally as a monopolist’s effort to raise barriers to entry and secure high profits. Innovation can take place while preserving social protections for the employees of the new industries, or it can undermine the job security of the labor force (a concern with the sharing economy). These different modes of innovation are a function of the institutions that govern innovation, including industrial organization and labor and consumer protections.

What the President did not mention is that question two can answer question one: technological development can improve economic opportunity and security, and that is likely to be more politically feasible than addressing the challenges of climate change and cancer.
Shaping the institutions that govern innovative activity to favor modes of innovation that benefit a broad base of society is an achievable goal, and could indeed be a standard by which his and future administrations are measured. This is so because these institutions are not the province of the welfare state. They are policy domains that have historically enjoyed bipartisan consensus (such as federal R&D funding and private R&D tax credits) or low contestation (support for small business, tech transfer, loan guarantees). As Mr. Obama himself suggested, technology can indeed be made to work for us, all of us. Authors Walter D. Valdivia Image Source: © POOL New / Reuters Full Article
c Why should I buy a new phone? Notes on the governance of innovation By webfeeds.brookings.edu Published On :: Fri, 22 Jan 2016 20:00:00 -0500 A review essay of “Governance of Socio-technical Systems: Explaining Change”, edited by Susana Borrás and Jakob Edler (Edward Elgar, 2014, 207 pages).

Phasing out a useful and profitable technology

I own a Nokia 2330; it’s a small brick phone that fits comfortably in the palm of my hand. People have feelings about this: mostly, they marvel at my ability to survive without a smartphone. Concerns go beyond my wellbeing; once a friend protested that I should be aware of the costs I impose on my friends, for instance, by asking them for precise directions to their houses. Another suggested that I cease trying to be smarter than my phone. But my reason is simple: I don’t need a smartphone. Most of the time, I don’t even need a mobile phone. I can take and place calls from my home or my office. And who really needs a phone during their commute?

Still, my device will meet an untimely end. My service provider has informed me via text message that it will phase out all 2G service and has explicitly encouraged me to acquire a 3G or newer model. There is a correct if simplistic explanation for this announcement: my provider is not making enough money with my account, and should I switch to a newer device, it will be able to sell me a data plan. The more accurate and more complex explanation is that my mobile device is part of a communications system that is integrated with other economic and social systems. As those other systems evolve, my device is becoming incompatible with them; my carrier has determined that I should be integrated. The system integration is easy to understand from a business perspective.
My carrier may very well be able to make a profit keeping my account as is, and the accounts of the legion of elderly and low-income customers who use similar devices, and still they may not find it advantageous in the long run to allow 2G devices in their network. To understand this business strategy, we need to go back no farther than the introduction of the iPhone, which, in addition to being the most marketable mobile phone, set a new standard platform for mobile devices. Its introduction accelerated a trend underway in the core business of carriers: the shift from voice communication to data streaming, because smartphones can support layers of overlapping services that depend on fast and reliable data transfer. These services include sophisticated log capabilities, web search, geo-location, connectivity to other devices, and, more recently, bio-monitoring. All those services are part of systems of their own, so it makes perfect business sense for carriers to seamlessly integrate mobile communications with all those other systems. Still, the economic rationale explains only a fraction of the systems integration underway. The communication system of mobile telephony is also integrated with regulatory, social, and cultural systems. Consider the most mundane examples: It’s hard to imagine anyone who, having shifted from paper-and-pencil to an electronic agenda, decided to switch back afterwards. We are increasingly dependent on GPS services; while GPS may have once served tourists who did not wish to learn how to navigate a new city, it is now a necessity for many people who, without it, are lost in their home town. Not needing to remember phone numbers, the time of our next appointment, or how to go back to that restaurant we really liked, is a clear example of the integration of mobile devices into our value systems.
There are coordination efforts and mutual accommodation taking place: tech designers seek to adapt to changing values and we update our values to the new conveniences of slick gadgets. Government officials are engaged in the same mutual accommodation. They are asking how many phone booths must be left in public places, how to reach more people with public service announcements, and how to provide transit information in real-time when commuters need it. At the same time, tech designers are considering all existing regulations so their devices are compliant. Communication and regulatory systems are constantly being re-integrated. The will behind systems integration The integration of technical and social systems that results from innovation demands an enormous amount of planning, effort, and conflict resolution. The people involved in this process come from all quarters of the innovation ecology, including inventors, entrepreneurs, financiers, and government officials. Each of these agents may not be able to contemplate the totality of the system integration problem but they more or less understand how their respective system must evolve so as to be compatible with interrelated systems that are themselves evolving. There is a visible willfulness in the integration task that scholars of innovation call the governance of socio-technical systems. Introducing the term governance, I should emphasize that I do not mean merely the actions of governments or the actions of entrepreneurs. Rather, I mean the effort of all agents involved in the integration and re-integration of systems triggered by innovation; I mean all the coordination and mutual accommodation of agents from interrelated systems. And there is no single vehicle to transport all the relevant information for these agents. A classic representation of markets suggests that prices carry all the relevant information agents need to make optimal decisions. 
But it is impossible to project this model onto innovation because, as I suggested above, it does not adhere exclusively to economic logic; cultural and political values are also at stake. The governance task is therefore fragmented into pieces and assigned to each of the participants of the socio-technical systems involved, and they cannot resolve it as a profit-maximization problem. Instead, the participants must approach governance as a problem of design where the goal could be characterized as reflexive adaptation. By adaptation I mean seeking to achieve inter-system compatibility. By reflexive I mean that each actor must realize that their actions trigger adaptation measures in other systems. Thus, they cannot passively adapt; rather, they must anticipate the sequence of accommodations in the interaction with other agents. This is one of the most important aspects of the governance problem, because all too often neither technical nor economic criteria will suffice; quite regularly coordination must be negotiated, which is to say, innovation entails politics. The idea of governance of socio-technical systems is daunting. How do we even begin to understand it? What kinds of modes of governance exist? What are the key dimensions for understanding the integration of socio-technical systems? And perhaps most pressing, who prevails in disputes about coordination and accommodation? Fortunately, Susana Borrás, from the Copenhagen Business School, and Jakob Edler, from the University of Manchester, both distinguished professors of innovation, have collected a set of case studies that shed light on these problems in an edited volume entitled Governance of Socio-technical Systems: Explaining Change. What is more, they offer a very useful conceptual framework of governance that is worth reviewing here.
While this volume will be of great interest to scholars of innovation—and it is written in scholarly language—I think it has great value for policymakers, entrepreneurs, and all agents involved in a practical manner in the work of innovation. Organizing our thinking on the governance of change The first question that Borrás and Edler tackle is how to characterize the different modes of governance. They start out with a heuristic typology across the two central categories: what kinds of agents drive innovation and how the actions of these agents are coordinated. Agents can represent the state or civil society, and actions can be coordinated via dominant or non-dominant hierarchies. The resulting two-by-two typology:

                                          | Change led by state actors                                              | Change led by societal actors
Coordination by dominant hierarchies      | Traditional deference to technocratic competence: command and control. | Monopolistic or oligopolistic industrial organization.
Coordination by non-dominant hierarchies  | State agents as primus inter pares.                                    | More competitive industries with little government oversight.

Source: Adapted from Borrás and Edler (2015), Table 1.2, p. 13.

This typology is very useful to understand why different innovative industries have different dynamics; they are governed differently. For instance, we can readily understand why consumer software and pharmaceuticals are so at odds regarding patent law. The strict (and very necessary) regulation of drug production and commercialization, coupled with the oligopolistic structure of that industry, creates the need and opportunity to advocate for patent protection, which is equivalent to a government subsidy. In turn, the highly competitive environment of consumer software development and its low level of regulation foster an environment where patents hinder innovation. Government intervention is neither needed nor wanted; the industry wishes to regulate itself.
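Because the typology is just a cross of two dimensions, it can be encoded as a simple lookup table. The sketch below (Python, purely illustrative; the cell labels are taken from the table above, while the function name and example sectors are my own) shows how the two dimensions jointly determine a mode of governance.

```python
# Minimal encoding of the Borrás-Edler typology as a lookup table.
# Keys: (who drives change, how actions are coordinated).
GOVERNANCE_MODES = {
    ("state", "dominant"): "Traditional deference to technocratic competence: command and control",
    ("societal", "dominant"): "Monopolistic or oligopolistic industrial organization",
    ("state", "non-dominant"): "State agents as primus inter pares",
    ("societal", "non-dominant"): "More competitive industries with little government oversight",
}

def classify(driving_agents, coordination):
    """Look up the mode of governance for a sector.
    driving_agents: 'state' or 'societal'; coordination: 'dominant' or 'non-dominant'."""
    return GOVERNANCE_MODES[(driving_agents, coordination)]

# The two sectors contrasted in the text (illustrative assignments):
print(classify("societal", "dominant"))      # e.g. pharmaceuticals
print(classify("societal", "non-dominant"))  # e.g. consumer software
```

The point of the exercise is only that the framework is discrete and exhaustive: every sector lands in exactly one of the four cells, which is what makes the quadrant comparisons in the discussion below possible.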
This typology is also useful to understand why open source applications have gained currency much faster in the consumer segment than in the contractor segment of software producers. Examples of the latter are industry-specific software (e.g. to operate machinery, the stock exchange, and ATMs) or software to support national security agencies. These contractors demand proprietary software and depend on the secrecy of the source code. The software industry is not monolithic, and while highly innovative in all its segments, the innovation taking place varies greatly by its mode of governance. Furthermore, we can understand the inherent conflicts in the governance of science. In principle, scientists are led by curiosity and organize their work in a decentralized and organic fashion. In practice, most of science is driven by mission-oriented governmental agencies and is organized in a rigid hierarchical system. Consider the centrality of prestige in science and how it is awarded by peer review, a system controlled by the top brass of each discipline. There is a nearly irreconcilable contrast between the self-image of science and its actual governance. Using the Borrás-Edler typology, we could say that scientists imagine themselves as citizens of the south-east quadrant while they really inhabit the north-west quadrant. There are practical lessons from the application of this typology to current controversies. For instance, no policy instrument such as patents can have the same effect on all innovation sectors, because the effect will depend on the mode of governance of the sector. This corollary may sound intuitive, yet it really is at variance with the current terms of the debate on patent protection, where assertions of its effect on innovation, in either direction, are rarely qualified. The second question Borrás and Edler address is that of the key analytical dimensions to examine socio-technical change.
To this end, they draw from an ample selection of social theories of change. First, economists and sociologists fruitfully debate the advantage of social inquiry focused on agency versus institutions. Here, the synthesis offered is reminiscent of Herbert Simon’s “bounded rationality”, where the focus turns to agent decisions constrained by institutions. Second, policy scholars as well as sociologists emphasize the engineering of change. Change can be accomplished with discrete instruments such as laws and regulations, or diffuse instruments such as deliberation, political participation, and techniques of conflict resolution. Third, political scientists underscore the centrality of power in the adjudication of disputes produced by systems’ change and integration. Borrás and Edler have condensed these perspectives in an analytical framework that boils down to three clean questions: who drives change? (focus on agents bounded by institutions), how is change engineered? (focus on instrumentation), and why is it accepted by society? (focus on legitimacy). The case studies contained in this edited volume illustrate the deployment of this framework with empirical research. Standards, sustainability, incremental innovation Arthur Daemmrich (Chapter 3) tells the story of how the German chemical company BASF succeeded in marketing the biodegradable polymer Ecoflex. It is worth noting the dependence of BASF on government funding to develop Ecoflex, and on the German Institute for Standardization (DIN) in making a market by setting standards. With this technology, BASF capitalized on the growing demand in Germany for biodegradables, and with its intense cooperation with DIN helped establish a standard that differentiated Ecoflex from the competition. By focusing on the enterprise (the innovation agent) and its role in engineering the market for its product by setting standards that would favor it, this story reveals the process of legitimation of this new technology.
In effect, the certification of DIN was accepted by agribusinesses that sought to utilize biodegradable products. If BASF is an example of innovation by standards, Allison Loconto and Marc Barbier (Chapter 4) show the strategies of governing by standards. They take the case of the International Social and Environmental Accreditation and Labelling alliance (ISEAL). ISEAL, an advocate of sustainability, positions itself as a coordinating broker among standard-developing organizations by offering “credibility tools” such as codes of conduct, best practices, impact assessment methods, and assurance codes. The organization advocates what is known as the tripartite system regime (TSR) around standards. TSR is a system of checks and balances to increase the credibility of producers complying with standards. The TSR regime assigns standard-setting, certification, and accreditation of the certifiers to separate and independent bodies. The case illustrates how producers, their associations, and broker organizations work to bestow upon standards their most valuable attribute: credibility. The authors are cautious not to conflate credibility with legitimacy, but there is no question that credibility is part of the process of legitimizing technical change. In constructing credibility, these authors focus on the third question of the framework (legitimizing innovation) and, from that vantage point, they illuminate the role of actors and instruments that will guide innovations in sustainability markets. While standards are instruments of non-dominant hierarchies, the classical instrument of dominant hierarchies is regulation. David Barberá-Tomás and Jordi Molas-Gallart tell the story of an innovation in hip-replacement prostheses that went terribly wrong, with tragic consequences. It is estimated that about 30,000 replaced hips failed.
The FDA, under the 1976 Medical Device Amendments, allows incremental improvements in medical devices to go into the market after only laboratory trials, assuming that any substantive innovations have already been tested in regular clinical trials. This policy was designed as an incentive for innovation, a relief from high regulatory costs. However, the authors argue, when products have been constantly improved for a number of years after an original release, any marginal improvement comes at a higher cost or higher risk—a point they refer to as the late stage of the product life-cycle. This has tilted the balance in favor of risky improvements, as illustrated by the hip prosthesis case. The story speaks to the integration of technical and cultural systems: the policy that encourages incremental innovation may alter the way medical device companies assess the relative risk of their innovations, precisely because they focus on incremental improvements over radical ones. Returning to the analytical framework, the vantage point of regulation—instrumentation—elucidates the particular complexities and biases in agents’ decisions. Two additional case studies discuss the discontinuation of the incandescent light bulb (ILB) and the emergence of translational research, both in Western Europe. The first study, authored by Peter Stegmaier, Stefan Kuhlmann and Vincent R. Visser (Chapter 6), focuses on a relatively smooth transition. There was wide support for replacing ILBs that translated into political will and a market willing to purchase new energy-efficient bulbs. In effect, the new technical system was relatively easy to re-integrate into a changing social system—public values had shifted in Europe to favor sustainable consumption—and the authors are thus able to emphasize how agents make sense of the transition.
Socio-technical change does not have a unique meaning: for citizens it means living in congruence with their values; for policy makers it means accruing political capital; for entrepreneurs it means new business opportunities. The case by Etienne Vignola-Gagné, Peter Biegelbauer and Daniel Lehner (Chapter 7) offers a similar lesson about governance. My reading of their multi-site study of the implementation of translational research—a management movement that seeks to bridge laboratory and clinical work in medical research—reveals how the different agents involved make sense of this organizational innovation. Entrepreneurs see a new market niche, researchers strive to increase the impact of their work, and public officials align their advocacy for translation with the now regular calls for rendering publicly funded research more productive. Both chapters illuminate a lesson that is as old as it is useful to remember: technological innovation is interpreted in as many ways as the number of agents that participate in it. Innovation for whom? The framework and illustrations of this book are useful for those of us interested in the governance of system integration. The typology of different modes of governance and the three vantage points from which empirical analysis can be deployed are very useful indeed. Further development of this framework should include the question of how political power is redistributed by the effects of innovation and the system integration and re-integration that it triggers. The question is pressing because the outcomes of innovation vary as power structures are reinforced or debilitated by the emergence of new technologies—not to mention ongoing destabilizing forces such as social movements. Put another way, the framework should be expanded to explain in which circumstances innovation exacerbates inequality.
The expanded framework should probe whether the mutual accommodation is asymmetric across socio-economic groups, which is the same as asking: are poor people asked to do more of the adapting to new technologies? These questions have great relevance in contemporary debates about economic and political inequality. I believe that Borrás and Edler and their colleagues have done us a great service in organizing a broad but dispersed literature and offering an intuitive and comprehensive framework to study the governance of innovation. The conceptual and empirical parts of the book are instructive, and I look forward to the papers that will follow, testing this framework. We need to better understand the governance of socio-technical change and the dynamics of systems integration. Without a unified framework of comparison, the ongoing efforts in various disciplines will not amount to a greater understanding of the big picture. I also have a selfish reason to like this book: it helps me make sense of my carrier’s push for integrating my value system into their technical system. If I decide to adapt to a newer phone, I could readily do so because I have time and other resources. But that may not be the case for many customers of 2G devices who have neither the resources nor the inclination to learn to use more complex devices. For that reason alone, I’d argue that this sort of innovation-led systems integration could be done more democratically. Still, I could meet the decision of my carrier with indifference: when the service is disconnected, I could simply try to get by without the darn toy. Note: Thanks to Joseph Schuman for an engaging discussion of this book with me. Authors Walter D. Valdivia Image Source: © Dominic Ebenbichler / Reuters Full Article
c The fair compensation problem of geoengineering By webfeeds.brookings.edu Published On :: Tue, 23 Feb 2016 09:00:00 -0500 The promise of geoengineering is to place average global temperature under human control; it is thus considered a powerful instrument for the international community to deal with global warming. While great energy has been devoted to learning more about the natural systems that it would affect, questions of a political nature have received far less consideration. Taking as a given that regional effects will be asymmetric, the nations of the world will only give their consent to deploying this technology if they can be given assurances of a fair compensation mechanism, something like an insurance policy. The question of compensation reveals that the politics of geoengineering are far more difficult than the technical aspects. What is Geoengineering? In June 1991, Mount Pinatubo exploded, throwing a massive amount of volcanic sulfate aerosols into the high skies. The resulting cloud dispersed over weeks throughout the planet and cooled its average temperature by about 0.5° Celsius over the next two years. If this kind of natural phenomenon could be replicated and controlled, the possibility of engineering the Earth’s climate is then within reach. Spraying aerosols in the stratosphere is one method of solar radiation management (SRM), a class of climate engineering that focuses on increasing the albedo, i.e. reflectivity, of the planet’s atmosphere. Other SRM methods include brightening clouds by increasing their content of sea salt. A second class of geoengineering efforts focuses on carbon removal from the atmosphere and includes carbon sequestration (burying it deep underground) and increasing land or marine vegetation.
Of all these methods, SRM is appealing for its effectiveness and low costs; a recent study put the cost at about $5 to $8 billion per year.1 Not only is SRM relatively inexpensive, but we already have the technological pieces that, assembled properly, would inject the skies with particles that reflect sunlight back into space. For instance, a fleet of modified Boeing 747s could deliver the necessary payload. Advocates of geoengineering are not too concerned about developing the technology to effect SRM, but about its likely consequences, not only in terms of slowing global warming but also its effects on regional weather. And there lies the difficult question for geoengineering: the effects of SRM are likely to be unequally distributed across nations. Here is one example of these asymmetries: Julia Pongratz and colleagues at the department of Global Ecology of the Carnegie Institution for Science estimated a net increase in yields of wheat, corn, and rice from SRM-modified weather. However, the study also found a redistributive effect, with equatorial countries experiencing lower yields.2 We can then expect that equatorial countries will demand fair compensation to sign on to the deployment of SRM, which leads to two problems: how to calculate compensation, and how to agree on a compensation mechanism. The calculus of compensation What should be the basis for fair compensation? One view of fairness could be that, every year, all economic gains derived from SRM are pooled together and distributed among the regions or countries that experience economic losses. If the system pools gains from SRM and distributes them in proportion to losses, the scheme will go unquestioned only in years in which gains and losses are about the same. But if losses are far greater than the gains, then this would be a form of insurance that cannot underwrite some of the incidents it intends to cover.
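To make the arithmetic of this pooling scheme concrete, here is a minimal sketch in Python. The country labels and dollar figures are invented for illustration only (they are not estimates from the studies cited): all SRM gains are pooled and paid out in proportion to each losing country's loss, and a negative balance marks a year in which the pool cannot cover the losses it is meant to insure.

```python
def settle_compensation(net_effects):
    """net_effects: dict mapping country -> net economic effect of SRM
    (positive = gain, negative = loss). Returns (payouts, balance)."""
    # Pool all the gains for the year.
    pool = sum(v for v in net_effects.values() if v > 0)
    # Collect the losses (stored as positive magnitudes).
    losses = {c: -v for c, v in net_effects.items() if v < 0}
    total_loss = sum(losses.values())
    if total_loss == 0:
        return {}, pool  # nothing to compensate; the pool is pure surplus
    # Each losing country receives a share of the pool proportional to its loss.
    payouts = {c: pool * loss / total_loss for c, loss in losses.items()}
    balance = pool - total_loss  # negative: the "insurance" cannot cover its incidents
    return payouts, balance

# An illustrative year (hypothetical numbers): gains in temperate countries A and B,
# losses in equatorial countries C and D, with losses exceeding gains.
effects = {"A": 40.0, "B": 25.0, "C": -30.0, "D": -50.0}
payouts, balance = settle_compensation(effects)
print(payouts)  # C and D split the pool of 65 in a 30:50 ratio
print(balance)  # -15: losses exceed pooled gains
```

The sketch shows why the text's two failure modes are structural: a persistently negative balance means losers are under-compensated, while a persistently positive one invites the winners to demand lower contributions. And this is before the far harder problem of attributing each country's net effect to SRM in the first place.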
People will not buy such an insurance policy, which is to say, some countries will not authorize SRM deployment. Conversely, if the pool has a large balance left after paying out compensations, then winners of SRM will demand lower compensation taxes. Further complicating the problem is the question of how to separate gains or losses that can be attributed to SRM from regional weather fluctuations. Separating the SRM effect could easily become an intractable problem, because regional weather patterns are themselves affected by SRM. For instance, in any year that El Niño is particularly strong, the uncertainty about the net effect of SRM will increase dramatically, because SRM could affect the severity of the oceanic oscillation itself. Science can reduce uncertainty, but only to a certain degree, because the better we understand nature, the more we understand the contingency of natural systems. We can expect better explanations of natural phenomena from science, but it would be unfair to ask science to reduce greater understanding to a hard figure that we can plug into our compensation equation. Still, greater complexity arises when separating SRM effects from policy effects at the local and regional level. Some countries will surely organize better than others to manage this change, and preparation will be a factor in determining the magnitude of gains or losses. Inherent to the problem of estimating gains and losses from SRM is the inescapable subjective element of assessing preparation. The politics of compensation Advocates of geoengineering tell us that their advocacy is not about deploying SRM; rather, it is about better understanding the scientific facts before we even consider deployment. It’s tempting to believe that the accumulating science on SRM effects would be helpful. But when we consider the factors I just described above, it is quite possible that more science will also crystallize the uncertainty about exact amounts of compensation.
The calculus of gain or loss, that is, the difference between reality and a counterfactual of what regions and countries would have experienced, requires certainty, but science yields only irreducible uncertainty about nature. The epistemic problems with estimating compensation are only to be compounded by the political contestation of those numbers. Even within the scientific community, different climate models will yield different results, and since economic compensation is derived from those models’ output, we can expect a serious contestation of the objectivity of the science of SRM impact estimation. Who should formulate the equation? Who should feed the numbers into it? A sure way to alienate scientists from the peoples of the world is to ask them to assert their cognitive authority over this calculus. What’s more, other parts of the compensation equation, those related to regional efforts to deal with SRM effects, are inherently subjective. We should not forget the politics of asserting compensation commensurate with preparation effort; countries that experience low losses may also want compensation for their efforts in preparing for and coping with natural disasters. Not only would a compensation equation be a sham, it would be unmanageable. Its legitimacy would always be in question. The calculus of compensation may seem a way to circumvent the impasses of politics and define fairness mathematically. Ironically, it is shot through with subjectivity; it is truly a political exercise. Can we do without compensation?
Technological innovations are similar to legislative acts, observed Langdon Winner.3 Technical choices made at the earliest stage of design quickly “become strongly fixed in material equipment, economic investment, and social habit, [and] the original flexibility vanishes for all practical purposes once the initial commitments are made.” For that reason, he insisted, “the same careful attention one would give to the rules, roles, and relationships of politics must also be given to such things as the building of highways, the creation of television networks, and the tailoring of seemingly insignificant features on new machines.” If technological change can be thought of as legislative change, we must consider how such a momentous technology as SRM can be deployed in a manner consonant with our democratic values. Engineering the planet’s weather is nothing short of passing an amendment to Planet Earth’s Constitution. One pesky clause in that constitutional amendment is a fair compensation scheme. It seems so small a clause in comparison to the extent of the intervention, the governance of deployment and consequences, and the international commitments to be made as a condition for deployment (such as emissions mitigation and adaptation to climate change). But in the short consideration afforded here, we get a glimpse of the intractable political problem of setting up a compensation scheme. And yet, if the clause were not approved by a majority of nations, a compensation scheme would have little hope of being consonant with democratic aspirations.

1. McClellan, Justin, David W. Keith, and Jay Apt. 2012. "Cost analysis of stratospheric albedo modification delivery systems." Environmental Research Letters 7(3): 1-8.
2. Pongratz, Julia, D. B. Lobell, L. Cao, and K. Caldeira. 2012. "Crop yields in a geoengineered climate." Nature Climate Change 2: 101-105.
3. Winner, Langdon. 1980. "Do artifacts have politics?" Daedalus 109(1): 121-136.

Authors Walter D. Valdivia Image Source: © Antara Photo Agency / Reuters Full Article
c Gene editing: New challenges, old lessons By webfeeds.brookings.edu Published On :: Tue, 15 Mar 2016 07:30:00 -0400 It has been hailed as the most significant discovery in biology since polymerase chain reaction allowed for the mass replication of DNA samples. CRISPR-Cas9 is an inexpensive and easy-to-use gene-editing method that promises applications ranging from medicine to industrial agriculture to biofuels. Currently, applications to treat leukemia, HIV, and cancer are under experimental development.1 However, new technical solutions tend to be fraught with old problems, and in this case, ethical and legal questions loom large over the future. Disagreements on ethics The uptake of this method has been so fast that many scientists have started to worry about inadequate regulation of research and its unanticipated consequences.2 Consider, for instance, the disagreement on research on human germ cells (eggs, sperm, or embryos), where an edited gene is passed on to offspring. Since the emergence of bioengineering applications in the 1970s, the scientific community has eschewed experiments to alter the human germline, and some governments have even banned them.3 Regulatory regimes are, as one might expect, not uniform: for instance, China bans the implantation of genetically modified embryos in women but not research with embryos. Last year, a group of Chinese researchers conducted gene-editing experiments on non-viable human zygotes (fertilized eggs) using CRISPR.4 News that these experiments were underway prompted a group of leading U.S. geneticists to meet in March 2015 in Napa, California, to begin a serious consideration of the ethical and legal dimensions of CRISPR; they called for a moratorium on research editing genes in the human germline.5 Disregarding that call, the Chinese researchers published their results later in the year, largely reporting a failure to precisely edit targeted genes without accidentally editing non-targets. CRISPR is not yet sufficiently precise.
CRISPR reignited an old debate on human germline research, which was one of the central motivations (but surely not the only one) for an international summit on gene editing hosted by the U.S. National Academies of Sciences, the Chinese Academy of Sciences, and the U.K.'s Royal Society in December 2015. About 500 scientists, as well as experts in the legal and ethical aspects of bioengineering, attended.6 Rather than consensus, the meeting highlighted the significant contrasts among participants about the ethics of inquiry and, more generally, about the governance of science. Illustrative of these contrasts are the views of prominent geneticists Francis Collins, Director of the National Institutes of Health, and George Church, professor of genetics at Harvard. Collins argues that the “balance of the debate leans overwhelmingly against human germline engineering.” In turn, Church, while a signatory of the moratorium called by the Napa group, has nevertheless suggested reasons why CRISPR is shifting the balance in favor of lifting the ban on human germline experiments.7 The desire to speed up discovery of cures for heritable diseases is laudable. But tinkering with the human germline is truly a human concern and cannot be presumed to be the exclusive jurisdiction of scientists, clinicians, or patients. All members of society have a stake in the evolution of CRISPR and must be part of the conversation about what kind of research should be permitted, what should be discouraged, and what disallowed. To relegate lay citizens to react to CRISPR applications—i.e.
to vote with their wallets once applications hit the market—is to reduce their citizenship to consumer rights, and public participation to purchasing power.8 Yet, neither the NAS summit nor the earlier Napa meeting sought to solicit the perspectives of citizens, groups, and associations other than those already tuned in to the CRISPR debates.9 The scientific community has a bond to the larger society in which it operates. In its most basic form, it is the bond of the scientist to her national community: the notion that the scientist is a citizen of society before she is a denizen of science. This bond entails liberties and responsibilities that transcend the ethos and telos of science and, consequently, subordinates science to the social compact. It is worth recalling this old lesson from the history of science as we continue the public debate on gene editing. Scientists are free to hold specific moral views and prescriptions about the proper conduct of research and the ethical limits of that conduct, but they are not free to exclude the rest of society from weighing in on the debate with their own values and moral imaginations about what should be permitted and what should be banned in research. The governance of CRISPR is a question of collective choice that must be answered by means of democratic deliberation and, when irreconcilable differences arise, by the due process of democratic institutions. Patent disputes More heated than the ethical debate is the legal battle for key CRISPR patents, which has embroiled prominent scientists involved in perfecting this method. The U.S. Patent and Trademark Office initiated a formal contestation process, called interference, in March 2016 to adjudicate the dispute. The process is likely to take years, and appeals are expected to extend it further. Challenges are also expected to patents filed internationally, including those filed with the European Patent Office.
To put this dispute in perspective, it is instructive to consider the history of CRISPR authored by one of the celebrities in gene science, Eric Lander.10 This article ignited a controversy because it understated the role of one of the parties to the patent dispute (Jennifer Doudna and Emmanuelle Charpentier), while casting the other party as truly culminating the development of this technology (Feng Zhang, who is affiliated with Lander’s Broad Institute). Some gene scientists accused Lander of tendentious inaccuracies and of trying to spin a story in a manner that favors the legal argument (and economic interest) of Zhang. Ironically, the contentious article could be read as an argument against any particular claim to the CRISPR patents, as it implicitly questions the fairness of granting exclusive rights to an invention. Lander tells a genesis story of CRISPR that extends over two decades and several countries, in which the protagonists are the many researchers who contributed to the cumulative knowledge in the ongoing development of the method. The very title of Lander’s piece, “The Heroes of CRISPR,” highlights that the technology has not one but a plurality of authors. A patent is a legal instrument that recognizes certain rights of the patent holder (individual, group, or organization) and at the same time denies those rights to everyone else, including those other contributors to the invention. Patent rights are thus arbitrary in the light of history. I am not suggesting that the bureaucratic rules to grant a patent or to determine its validity are arbitrary; they have logical rationales anchored in practice and precedent. I am suggesting that in principle any exclusive assignment of rights that does not include the entire community responsible for the invention is arbitrary and thus unfair. 
The history of CRISPR highlights this old lesson from the history of technology: an invention does not belong to its patent holder, except in a court of law. Some scientists may be willing to accept with resignation the unfair distribution of recognition granted by patents (or prizes like the Nobel) and find consolation in the fact that their contribution to science has real effects on people’s lives as it materializes in things like new therapies and drugs. Yet patents are also instrumental in distributing those real effects quite unevenly. Patents create monopolies that sell their innovations at high prices, benefiting only those who can afford them. The regular response to this charge is that without the promise of high profits, there would be no investment in innovation and no advances in life-saving medicine. What’s more, the biotech industry reminds us that start-ups will secure capital injections only if they have exclusive rights to the technologies they are developing. Yet Editas Medicine, a biotech start-up that seeks to exploit commercial applications of CRISPR (and in which Zhang is a stakeholder), was able to raise $94 million in its February 2016 initial public offering. That some of Editas’ key patents are disputed and were entering interference at the USPTO was patently not a deterrent for those investors.

Towards a CRISPR democratic debate

Neither the governance of gene-editing research nor the management of CRISPR patents should be the exclusive responsibility of scientists. Yet scientists do enjoy an advantage in public deliberations on gene editing that is derived from their technical competence and from the authority ascribed to them by society. They can use this advantage to close the public debate and monopolize its terms, or they can turn it into stewardship of a truly democratic debate about CRISPR. The latter choice can benefit from three steps. 
A first step would be openness: a public willingness to consider and internalize public values that are not easily reconciled with research values. A second step would be self-restraint: publicly affirming a self-imposed ban on research on the human germline and discouraging research practices that are contrary to received norms of prudence. A third useful step would be a public service orientation in the use of patents: scientists should pressure their universities, which hold title to their inventions, to preserve some degree of influence over research commercialization so that the dissemination of and access to innovations are consonant with the noble aspirations of science and the public service mission of the university. Openness, self-restraint, and an orientation to service from scientists will go a long way toward making CRISPR a true servant of society and an instrument of democracy. Other reading: See media coverage compiled by the National Academies of Sciences. 1Nature: an authoritative and accessible primer. A more technical description of applications in Hsu, P. D. et al. 2014. Cell, 157(6): 1262–1278. 2For instance, see this reflection in Science, and this in Nature. 3More about ethical concerns on gene editing here: http://www.geneticsandsociety.org/article.php?id=8711 4Liang, P. et al. 2015. Protein & Cell, 6: 363–372. 5Science: A prudent path forward for genomic engineering and germline gene modification. 6Nature: NAS Gene Editing Summit. 7While Collins and Church participated in the summit, their views quoted here are from StatNews.com: A debate: Should we edit the human germline. See also Sciencenews.org: Editing human germline cells sparks ethics debate. 8Hurlbut, J. B. 2015. Limits of Responsibility, Hastings Center Report, 45(5): 11-14. 9This point is forcefully made by Sheila Jasanoff and colleagues: CRISPR Democracy, 2015 Issues in S&T, 22(1). 10Lander, E. 2016. The Heroes of CRISPR. Cell, 164(1-2): 18-28. Authors Walter D. 
Valdivia Image Source: © Robert Pratta / Reuters Full Article
c Alternative perspectives on the Internet of Things By webfeeds.brookings.edu Published On :: Fri, 25 Mar 2016 07:30:00 -0400 Editor's Note: TechTakes is a new series that collects the diverse perspectives of scholars around the Brookings Institution on technology policy issues. This first post in the series features contributions from Scott Andes, Susan Hennessey, Adie Tomer, Walter Valdivia, Darrell M. West, and Niam Yaraghi on the Internet of Things. In the coming years, the number of devices around the world connected to the Internet of Things (IoT) will grow rapidly. Sensors located in buildings, vehicles, appliances, and clothing will create enormous quantities of data for consumers, corporations, and governments to analyze. Maximizing the benefits of IoT will require thoughtful policies. Given that IoT policy cuts across many disciplines and levels of government, who should coordinate the development of new IoT platforms? How will we secure billions of connected devices from cyberattacks? Who will have access to the data created by these devices? Below, Brookings scholars contribute their individual perspectives on the policy challenges and opportunities associated with the Internet of Things.

The Internet of Things will be everywhere

Darrell M. West is vice president and director of Governance Studies and founding director of the Center for Technology Innovation. Humans are lovable creatures, but prone to inefficiency, ineffectiveness, and distraction. They like to do other things when they are driving, such as listening to music, talking on the phone, texting, or checking email. Judging from the frequency of accidents, though, many individuals believe they are more effective at multi-tasking than is actually the case. The reality of these all too human traits is encouraging a movement from communication between computers to communication between machines. 
Driverless cars soon will appear on the highways in large numbers, and not just as a demonstration project. Remote monitoring devices will transmit vital signs to health providers, who then can let people know if their blood pressure has spiked or heart rhythm has shifted in a dangerous direction. Sensors in appliances will let individuals know when they are running low on milk, bread, or cereal. Thermostats will adjust their energy settings to the times when people actually are in the house, thereby saving substantial amounts of money while also protecting natural resources. With the coming rise of a 5G network, the Internet of Things will unleash high-speed devices and a fully connected society. Advanced digital devices will enable a wide range of new applications from energy and transportation to home security and healthcare. They will help humans manage the annoyances of daily lives such as traffic jams, not being able to find parking places, or keeping track of physical fitness. The widespread adoption of smart appliances, smart energy grids, resource management tools, and health sensors will improve how people connect with one another and their electronic devices. But they also will raise serious security, privacy, and policy issues.

Implications for surveillance

Susan Hennessey is Fellow in National Security in Governance Studies at the Brookings Institution. She is the Managing Editor of the Lawfare blog, which is devoted to sober and serious discussion of "Hard National Security Choices.” As the debate over encryption and diminished law enforcement access to communications enters the public arena, some posit the growing Internet of Things as a solution to “Going Dark.” A recently released Harvard Berkman Center report, “Don’t Panic,” concludes in part that losses of communication content will be offset by the growth of IoT and networked sensors. 
It argues IoT provides “prime mechanisms for surveillance: alternative vectors for information-gathering that could more than fill many of the gaps left behind by sources that have gone dark – so much so that they raise troubling questions about how exposed to eavesdropping the general public is poised to become.” Director of National Intelligence James Clapper agrees that IoT has some surveillance potential. He recently testified before Congress that “[i]n the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.” But intelligence gathering in the Internet age is fundamentally about finding needles in haystacks – IoT is poised to add significantly more hay than needles. Law enforcement and the intelligence community will have to develop new methods to isolate and process the magnitude of information. And Congress and the courts will have to decide how laws should govern this type of access. For now, the unanswered question remains: How many refrigerators does it take to catch a terrorist?

IoT governance

Scott Andes is a senior policy analyst and associate fellow at the Anne T. and Robert M. Bass Initiative on Innovation and Placemaking, a part of the Centennial Scholar Initiative at the Brookings Institution. As with many new technology platforms, the Internet of Things is often approached as revolutionary, not evolutionary, technology. The refrain is that some scientific Rubicon has been crossed and the impact of IoT will come soon regardless of public policy. Instead, the role of policymakers is to ensure this new technology is leveraged within public infrastructure and doesn’t adversely affect national security or aggravate inequality. While these goals are clearly important, they all assume technological advances of IoT are squarely within the realm of the private sector and do not justify policy intervention. 
However, as with almost all new technologies that catch the public’s eye—robotics, clean energy, autonomous cars, etc.—hyperbolic news reporting overstates the market readiness of these technologies, further lowering the perceived need for policy support. The problem with this perspective is twofold. First, greater scientific breakthroughs are still needed. The current rate of improvement in processing power and data storage, miniaturization of devices, and more energy-efficient sensors only begins to scratch the surface of IoT’s full potential. Advances in next-generation computational power, autonomous devices, and interoperable systems still require scientific breakthroughs and are nowhere near deployment. Second, even if the necessary technological advances in IoT have been made, it’s not clear the U.S. economy will be the prime recipient of its economic value. Nations that lead in advanced manufacturing, like Germany, may already be better poised to export IoT-enabled products. Policymakers in the United States should view technological advancements in IoT as a global economic race that can be won through sound science policies. These should include: accelerating basic engineering research; helping that research reach the market; supporting entrepreneurs’ access to capital; and training a science and engineering-ready workforce that can scale up new technologies.

IoT will democratize innovation

Walter D. Valdivia is a fellow in the Center for Technology Innovation at Brookings. The Internet of Things could be a wonderful thing, but not in the way we imagine it. Today, the debate is dominated by cheerleaders or worrywarts. But their perspectives are merely two sides of the same coin: technical questions about reliability of communications and operations, and questions about system security. Our public imagination about the future is being narrowly circumscribed by these questions. 
However, as the Internet of Things starts to become a thing—or multiple things, or a networked plurality—it is likely to intrude so intensely into our daily lives that alternative imaginations will emerge and will demand a hearing. A compelling vision of the future is necessary to organize and coordinate the various market and political agents who will integrate IoT into society. Technological success is usually measured in terms set by the purveyor of that vision. Traditionally, this is a small group with a financial stake in technological development: the innovating industry. However, the intrusiveness and pervasiveness of the Internet of Things will prompt ordinary citizens to augment that vision. Citizen participation will deny any group a monopoly on that vision of the future. Such a development would be a true step in the direction of democratizing innovation. It could make IoT a wonderful thing indeed.

Applications of IoT for infrastructure

Adie Tomer is a fellow at the Brookings Institution Metropolitan Policy Program and a member of the Metropolitan Infrastructure Initiative. The Internet of Things and the built environment are a natural fit. The built environment is essentially just a collection of physical objects—from sidewalks and streets to buildings and water pipes—that all need to be managed in some capacity. Today, we measure our shared use of those objects through antiquated analog or digital systems. Think of the electricity meter on a building, or a person manually counting pedestrians on a busy city street. Digital, Internet-connected sensors promise to modernize measurement, relaying a whole suite of indicators to centralized databases tweaked to make sense of such big data. But let’s not fool ourselves. Simply outfitting cities and metro areas with more sensors won’t solve any of our pressing urban issues. 
Without governance frameworks to apply the data towards goals around transportation congestion, more efficient energy use, or reduced water waste, these sensors could be just another public investment that doesn’t lead to public benefit. The real goal for IoT in the urban space, then, is to ensure our built environment supports broader economic, social, and environmental objectives. And that’s not a technology issue—that’s a question around leadership and agenda-setting.

Applications of IoT for health care

Niam Yaraghi is a fellow in the Brookings Institution's Center for Technology Innovation. Health care is one of the most exciting application areas for IoT. Imagine that your Fitbit could determine if you fall, are seriously hurt, and need to be rushed to the hospital. It automatically pings the closest ambulance and sends a brief summary of your medical status to the EMT personnel so that they can prepare for your emergency care even before they reach the scene. On the way, the ambulance will not need to use sirens to make way, since the other autonomous vehicles have already received a notification about the approaching ambulance and cleared the way, while the red lights automatically turn green. IoT will definitely improve the efficiency of health care services by reducing medical redundancies and errors. This dream will come true sooner than you think. However, if we do not appropriately address the privacy and security issues of healthcare data, then IoT can be our next nightmare. What if terrorist organizations (who are becoming increasingly technology savvy) find a way to hack into Fitbit and send false information to an EMT? Who owns our medical data? Can we prevent Fitbit from selling our health data to third parties? Given these concerns, I believe we should design a policy framework that encourages accountability and responsibility with regard to health data. 
The framework should precisely define who owns data; who can collect, store, mine, and use it; and what penalties will be enforced if entities act outside of it. Authors Jack Karsten Full Article
c The benefits of a knives-out Democratic debate By webfeeds.brookings.edu Published On :: Thu, 20 Feb 2020 13:31:50 +0000 Stop whining about Democrats criticizing each other. The idea that Democrats attacking Democrats is a risk and an avenue that will deliver reelection to Donald Trump is nonsense. Democrats must attack each other and attack each other aggressively. Vetting presidential candidates, highlighting their weaknesses and the gaps in their record is essential to building a… Full Article
c Bernie Sanders’s failed coalition By webfeeds.brookings.edu Published On :: Tue, 10 Mar 2020 11:00:33 +0000 Throughout Bernie Sanders’s presidential campaigns in 2016 and 2020, he promised to transform the Democratic Party and American politics. He promised a “revolution” that would resonate with a powerful group of Americans who have not normally participated in politics: young voters, liberal voters, and new voters. He believed that once his call went out and… Full Article
c It is time for a Cannabis Opportunity Agenda By webfeeds.brookings.edu Published On :: Mon, 23 Mar 2020 13:49:32 +0000 The 2020 election season will be a transformative time for cannabis policy in the United States, particularly as it relates to racial and social justice. Candidates for the White House and members of Congress have put forward ideas, policy proposals, and legislation that have changed the conversation around cannabis legalization. The present-day focus on cannabis… Full Article
c In administering the COVID-19 stimulus, the president’s role model should be Joe Biden By webfeeds.brookings.edu Published On :: Tue, 07 Apr 2020 20:24:12 +0000 As America plunges into recession, Congress and President Donald Trump have approved a series of aid packages to assist businesses, the unemployed, and others impacted by COVID-19. The first three aid packages will likely be supplemented by at least a fourth package, as the nation’s leaders better understand the depth and reach of the economic… Full Article
c With Sanders out, what’s next for the Democratic presidential race? By webfeeds.brookings.edu Published On :: Wed, 08 Apr 2020 21:44:21 +0000 Following the withdrawal of Sen. Bernie Sanders from the 2020 presidential race, the Democrats' presumptive nominee for president will be former Vice President Joe Biden. Senior Fellow John Hudak examines how Sanders and other progressives have shifted mainstream Democratic positions, and the repercussions for the Democratic convention in August. He also looks at the leadership… Full Article
c Inspectors general will drain the swamp, if Trump stops attacking them By webfeeds.brookings.edu Published On :: Thu, 16 Apr 2020 21:46:46 +0000 Over the past month, President Trump has fired one inspector general, removed an acting inspector general set to oversee the pandemic response and its more than $2 trillion dollars in new funding, and publicly criticized another from the White House briefing room. These sustained attacks against the federal government’s watchdogs fly in the face of… Full Article
c ‘Essential’ cannabis businesses: Strategies for regulation in a time of widespread crisis By webfeeds.brookings.edu Published On :: Sun, 19 Apr 2020 18:32:19 +0000 Most state governors and cannabis regulators were underprepared for the COVID-19 pandemic, a crisis that is affecting every economic sector. But because the legal cannabis industry is relatively new in most places and still evolving everywhere, the challenges are even greater. What’s more, there is no history that could help us understand how the industry will endure the current economic situation. And so, in many… Full Article
c Let’s resolve to stop assuming the worst of each other in 2016 By webfeeds.brookings.edu Published On :: Mon, 28 Dec 2015 09:00:00 -0500 Even before the eruption of anti-Muslim rhetoric in the past several weeks, I had a privileged position from which to observe the deep current of Islamophobia that ran beneath the crust of mainstream politics over the fourteen years since 9/11. Because I work on Islamist extremism, my dad often forwards emails about American Muslims he receives from friends to ask if they are true. I don’t blame him for asking: they’re truly scary. Muslims imposing Sharia law over the objections of their fellow Americans. Muslims infiltrating the U.S. government to subvert it. And so on. But as with most Internet rumors circulated over email, the vast majority of the scary reports aren’t true. Take a peek at the “25 Hottest Urban Legends” on the rumor-busting website Snopes and you’ll see what I mean. The 11th on the list is about Muslim passengers on an AirTran flight that attempted a dry run to bring down a plane (they didn’t). The 15th is about an American Muslim who oversees all U.S. immigration (she just coordinates special naturalization ceremonies). The underlying message is that American Muslims are not to be trusted because of their religion. One reason these rumors have currency is that most Americans don’t know many of their Muslim neighbors. For all the worry of a Muslim takeover, there are only around 4 million in this country, a little over 1 percent of the total population. Most of them do not live in Republican strongholds, where they are most feared. Of course, familiarity does not always lessen fears or tensions. But it does complicate easy stories about an unfamiliar culture and those who identify with it. For example, because I’ve worked on counterterrorism in the U.S. 
government, I’ve never bought the story that American Muslims are infiltrating the U.S. government to subvert it. I’ve simply met too many Muslims in the government working impossible hours to keep this country and its Constitution safe. American Muslims have their own easy stories to tell about non-Muslims that could use some complicating. Several of my Muslim friends have been surprised at the number of non-Muslim strangers who’ve come up to them and voiced their support. They’re surprised, presumably, because they assume that most non-Muslims in this country agree with Trump's rhetoric, which they don’t. Some American Muslims view Islamophobia as a natural outgrowth of white American racism, religious bigotry, and xenophobia. That easy story may account for some Islamophobia, but it ignores something major: actions by Muslims to deliberately set non-Muslims against them. Jihadist groups like al-Qaida and ISIS carry out attacks in this country to create popular backlash against Muslims in hopes of recruiting those who are angered by the backlash. Even though most Muslims reject the siren call of the jihadists, the backlash still leads some Muslims to expect the worst of nonbelievers and of the American government. Like the anti-Muslim rumor mill, they spread half-truths about Christian vigilante violence and government plots. For example, at least one prominent religious leader in the American Muslim community has insinuated that the San Bernardino attackers were patsies in a government conspiracy against Muslims. Since it’s the holiday season, I shall indulge in a wish for the New Year. My hope is that we’ll all try to be a little less suspicious of one another’s motives and a little more suspicious of the easy stories we tell. 
I know the wish is fanciful given the current political climate but I’ve been struck by the number of Americans—Muslim and non-Muslim—who have been willing to confront their biases over the past few weeks and see things from the other side. If our enemies succeed by eroding our empathy for one another, we will succeed by reinforcing and expanding it. Authors William McCants Full Article
c Experts Weigh In: What is the future of al-Qaida and the Islamic State? By webfeeds.brookings.edu Published On :: Thu, 07 Jan 2016 10:57:00 -0500 Will McCants: As we wind down another year in the so-called Long War and begin another, it’s a good time to reflect on where we are in the fight against al-Qaida and its bête noire, the Islamic State. Both organizations have benefited from the chaos unleashed by the Arab Spring uprisings, but they have taken different paths. Will those paths converge again, or will the two organizations continue to remain at odds? Who has the best strategy at the moment? And what political changes might happen in the coming year that will reconfigure their rivalry for leadership of the global jihad? To answer these questions, I’ve asked some of the leading experts on the two organizations to weigh in. The first is Barak Mendelsohn, an associate professor of political science at Haverford College and a senior fellow at the Foreign Policy Research Institute (FPRI). He is the author of the brand-new The al-Qaeda Franchise: The Expansion of al-Qaeda and Its Consequences. Barak Mendelsohn: Al-Qaida attacked the U.S. homeland on 9/11, unprepared for what would follow. There was a strong disconnect between al-Qaida’s meager capabilities and its strategic objectives of crippling the United States and of bringing about change in the Middle East. To bridge that gap, Osama bin Laden conveniently and unrealistically assumed that the attack on the United States would lead the Muslim masses and all other armed Islamist forces to join his cause. The collapse of the Taliban regime and the decimation of al-Qaida’s ranks quickly proved him wrong. Yet over fourteen years later, al-Qaida is still around. 
Despite its unrealistic political vision and considerable setbacks—above all the rise of the Islamic State that upstaged al-Qaida and threatened its survival—it has branches in North Africa, the Arabian Peninsula, the Levant, Central Asia, and the Horn of Africa.

Down, but not out

Two factors explain al-Qaida’s resilience: changes in the environment due to the Arab revolutions and the group’s ability to take advantage of new opportunities by learning from past mistakes. The Arab awakening initially undercut al-Qaida’s original claims that change in Muslim countries cannot come peacefully or without first weakening the United States. Yet, the violence of regimes against their people in Syria, Libya, and elsewhere created new opportunities for al-Qaida to demonstrate its relevance. Furthermore, involved citizens determined to shape their own future presented al-Qaida with a new opportunity to recruit. But favorable conditions would be insufficient to explain al-Qaida’s resilience without changes in the way al-Qaida operates. Learning from its bitter experience in Iraq, al-Qaida opted to act with some moderation. It embedded itself among rebel movements in Syria and Yemen, thus showing it could be a constructive actor, attentive to the needs of the people and willing to cooperate with a wide array of groups. As part of a broader movement, al-Qaida’s affiliates in these countries also gained a measure of protection from external enemies reluctant to alienate the group’s new allies. At present, the greatest threat to al-Qaida is not the United States or the Arab regimes; it’s the group’s former affiliate in Iraq, the Islamic State. ISIS is pressuring al-Qaida’s affiliates to defect—while it has failed so far to shift their allegiance, it has deepened cracks within the branches and persuaded small groups of al-Qaida members to change sides. 
Even if al-Qaida manages to survive the Islamic State’s challenge, in the long term it still faces a fundamental problem that is unlikely to change: even after showing some moderation, al-Qaida’s project is still too extreme for the overwhelming majority of Muslims.

Up, but not forever

With the United States seeking retrenchment and Middle Eastern regimes weakening, the Islamic State came to prominence under more favorable conditions and pursued a different strategy. Instead of wasting its energy on fighting the United States first, ISIS opted to establish a caliphate on the ruins of disintegrating Middle Eastern states. It has thrived on the chaos of the Arab rebellions. But in contrast to al-Qaida, it went beyond offering protection to oppressed Sunni Muslims by promoting a positive message of hope and pride. It does not merely empower Muslims to fend off attacks on their lives, property, and honor; the Islamic State offers its enthusiastic followers an historic chance to build a utopian order and restore the early Islamic empire or caliphate. The Islamic State’s leaders gambled that their impressive warfighting skills, the weakness of their opponents, and the reluctance of the United States to fight another war in the Middle East would allow the group to conquer and then govern territory. The gamble paid off. Not only did ISIS succeed in controlling vast territory, including the cities of Raqqa and Mosul; the slow response to its rise allowed the Islamic State’s propaganda machine to construct a narrative of invincibility and inevitability, which has, in turn, increased its appeal to new recruits and facilitated further expansion. And yet, the Islamic State’s prospects of success are low. Its miscalculations are threatening to undo much of its success. 
It prematurely and unnecessarily provoked an American intervention that, through a combination of bombings from the air and skilled Kurdish proxies on the ground, is limiting the Islamic State’s ability to expand and even reversing some of the group’s gains. ISIS could settle for consolidating its caliphate in the territories it currently controls, but its hubris and messianic zeal do not allow for such limited goals. It is committed to pursuing military expansion alongside its state-building project. This rigid commitment to two incompatible objectives is perhaps the Islamic State’s biggest weakness. Rather than pursue an economic plan that would guarantee the caliphate’s survival, the Islamic State has linked its economic viability to its military expansion. At present, ISIS relies on taxing its population and oil sales to support its flailing economy. But these financial resources cannot sustain a state, particularly one bent on simultaneously fighting multiple enemies on numerous fronts. Ironically, rather than taming its aspirations, the Islamic State sees conquest as the way to promote its state-building goals. Its plan for growing the economy is based on the extraction of resources through military expansion. While this plan worked well at first—when the Islamic State faced weak enemies—it is not a viable solution any longer, as the self-declared caliphate can no longer expand fast enough to meet its needs. Consequently, this strategy is undermining ISIS rather than strengthening it. Unfortunately, even if the Islamic State is bound to fail over the long run, it has had enough time to wreak havoc on other states in the neighborhood. 
And while its ability to govern is likely to continue diminishing, the terror attacks in Paris, Beirut, and Sinai suggest that the Islamic State will remain capable of causing much pain for a long time. Authors: Barak Mendelsohn, William McCants Full Article
c Why the United States can't make a magazine like ISIS By webfeeds.brookings.edu Published On :: Wed, 13 Jan 2016 10:07:00 -0500 Editors' Note: How can the U.S. government better counter ISIS propaganda? As the State Department overhauls its counter messaging program, Will McCants and Clint Watts examine what makes ISIS’s online magazine, Dabiq, so successful, and the obstacles to the U.S. government producing a publication that effective. This piece originally appeared on The Daily Beast. The Obama administration attributes much of ISIS’s success at communicating to its technological savvy, which has elevated the group to a global media and terrorist phenomenon. The president has gone so far as to say that the Paris attackers were a “bunch of killers with good social media.” Despite the praise heaped on the so-called Islamic State for its cutting-edge propaganda online, one of its most effective products is decidedly low tech. Dabiq, ISIS’s online news magazine, has a small but devoted readership that spans the globe. News of advances on the battlefield excites them—more evidence that God’s kingdom on earth has returned and grows. Stories of fighters inspire them—more models to emulate as they contemplate what role they can play in the divine drama unfolding. Journalists and analysts read it with almost the same intensity as ISIS fans; the contents of each volume fill newspapers and think-tank reports soon after it’s released. And no wonder: the magazine clearly states the organization’s goals; provides news of its activities that advance those goals; showcases personal stories of the people engaged in the activities; and announces major developments in the organization’s fight against its enemies. It’s a wealth of information presented between two covers every few months. Can you name a single U.S. government publication or online platform devoted to the anti-ISIS fight that is as informative or as widely read as Dabiq? 
Is there anything that tells us what all these air sorties are for? Who’s fighting this fight on the ground? What advances has the coalition made, and why should we care? We couldn’t come up with one either. That got us to thinking: why can’t the U.S. government publish something like Dabiq online? Lack of imagination isn’t the reason. A news magazine isn’t a very creative idea—Americans perfected the form, which ISIS copied. And if anything, folks inside the government have too many overly-imaginative ideas, most of them involving whiz-bang technology. If you’ve thought it, they’ve thought it. A social media campaign for youth to come up with ways to counter violent extremism? Check. Sock-puppetry? Check. The only real obstacle impeding the U.S. government is itself. The executive branch’s complicated bureaucracy, legal strictures, and sensitivity to criticism from media and Congress make it tough to publish a Dabiq-style magazine. To see what we mean, let’s look at two of Dabiq’s regular features and see what would happen if the U.S. government tried to mimic them: Attack Reports: Each issue of Dabiq details its attacks on its enemies. One entry in issue 12 chronicled ISIS’s efforts to capture an airbase in Dayr al-Zawr, Syria. Another described four suicide attacks on the Saudi-led coalition fighting in southern Yemen. Pictures accompany most entries, some quite gruesome. The U.S. government routinely writes these types of reports for internal consumption. But when they’re public—and thus under the scrutiny of the Congress that holds the purse strings and the media that holds the career strings—routine gives way to caution and quarreling. If the president asks his government to write attack reports for the public, the U.S. Department of State and the Department of Defense will quarrel about who will take the lead in writing and publishing them. Then they and the intelligence agencies will quarrel over which reports should be included. 
Will this report counter the president’s insistence that we have no boots on the ground? Will that report make it look like our Iraqi partners aren’t carrying their weight? Does this one tell the enemy too much about our game plan? Does that picture make U.S. soldiers look too menacing? Will this report later be discredited by the media? Will these battlefield successes be reversed in the future? Does anyone know if another agency has said this or its opposite? Will anyone trust what we’re saying? Shouldn’t someone else be saying this? When something finally slides off the serpentine conveyor belt months later, it will be a bland blob devoid of detail and relevance. Meanwhile, ISIS will have added twelve more volumes to its shelves. Biographies of Fighters: Dabiq sometimes profiles its fighters, including the young men on the front lines dying for ISIS’s cause. The fighters tell their stories and explain their reasons for fighting. In issue 8, for example, there is a Q&A with the man who murdered a prominent politician in Tunisia. He explains why he did it and how it advances the greater goals of the Islamic State. The United States military used to feature these sorts of stories, too—back when the American war in Iraq was a massive, overt affair. Now, that’s not the case. The identities of the Americans fighting in Syria and Iraq are a well-guarded secret because the government does not want them or their families to become targets. The government would also frown on them for nonchalantly talking about killing lest the American public get upset. And then there’s that boots on the ground thing. Without personal stories, we’re left with drones buzzing in the sky, and buzz-cut officers droning through stale Pentagon briefings. The human cost on both sides is reduced to numbers on slides, which means Americans can’t appreciate the true costs of war and foreigners can’t appreciate the sacrifices Americans are making on their behalf. 
Some readers might feel that the U.S. government should be constrained in these ways. They want the government to be sensitive to public opinion and exceedingly cautious when talking about war and violence. If so, they shouldn’t complain when the U.S. government explains its anti-ISIS fight in the vaguest possible terms—that’s the outcome of extreme caution compounded by bureaucratic bargaining on a mind-boggling scale. Others might feel we need to reform the way government does messaging. If so, don’t propose to change the system first. Rather, ask the system to perform a simple task like the one we’ve described and see where it breaks down. Then you’ll know what to fix. Making a news magazine probably isn’t the high-tech solution the government is looking for, at least judging by Friday’s pilgrimage of senior security officials to Silicon Valley and the revamping of the State Department’s online counter messaging campaign. But if our byzantine, poll-sensitive government can’t do something so basic, it won’t perform better when it’s tasked with something more complicated, no matter how much technology it uses. Authors: William McCants, Clint Watts Image Source: © Stringer / Reuters Full Article
c Experts weigh in (part 2): What is the future of al-Qaida and the Islamic State? By webfeeds.brookings.edu Published On :: Thu, 28 Jan 2016 12:47:00 -0500 Will McCants: As we begin another year in the so-called Long War, it’s a good time to reflect on where we are in the fight against al-Qaida and its bête noire, the Islamic State. Both organizations have benefited from the chaos unleashed by the Arab Spring uprisings but they have taken different paths. Will those paths converge again or will the two organizations continue to remain at odds? Who has the best strategy at the moment? And what political changes might happen in the coming year that will reconfigure their rivalry for leadership of the global jihad? To answer these questions, I’ve asked some of the leading experts on the two organizations to weigh in. First was Barak Mendelsohn, who contrasts al-Qaida’s resilience and emphasis on Sunni oppression with the Islamic State’s focus on building a utopian order and restoring the caliphate. Next is Clint Watts, a Fox fellow at the Foreign Policy Research Institute. He offers ways to avoid the flawed assumptions that have led to mistaken counterterrorism forecasts in recent years. Clint Watts: Two years ago today, counterterrorism forecasts focused on a “resurgent” al-Qaida. Debates over whether al-Qaida was again winning the war on terror ensued just a week before the Islamic State invaded Mosul. While Washington’s al-Qaida debates steamed away in 2013, Ayman al-Zawahiri’s al-Qaida suffered unprecedented internal setbacks from a disobedient, rogue affiliate formerly known as al-Qaida in Iraq (AQI). With terror predictions two years ago so far off the mark, should we even attempt to anticipate what the next two years of al-Qaida and ISIS will bring? 
Rather than prognosticate about how more than a dozen extremist groups operating on four continents might commit violence in the future, analysts might instead examine the flawed assumptions that resulted in the strategic surprise known as the Islamic State. Here are insights from last decade’s jihadi shifts that we should consider when making forecasts about al-Qaida and the Islamic State’s future in the coming decade. Loyalty is fleeting, self-interest is forever. Analysts who missed the Islamic State’s rise assumed that those who pledged allegiance to al-Qaida would remain loyal indefinitely. But loyalties change despite the oaths that bind them. Abu Bakr al-Baghdadi and the Islamic State’s leaders used technicalities to slip their commitments to al-Qaida. Boko Haram has rapidly gone from al-Qaida wannabe to Islamic State devotee. In short, jihadi pledges of loyalty should not be seen as binding or enduring, but instead temporary. When a group’s fortunes wane or leaders change, allegiance will rapidly shift to whatever strain of jihad proves most advantageous to the group or its leader. Prestige, money, manpower—these drive pledges of allegiance, not ideology. Al-Qaida and the Islamic State do not think solely about destroying the United States and its Western allies. Although global jihadi groups always call for attacks on the West, they don’t always deliver. Either they can’t or they have other priorities, like attacking closer to home. So jihadi propaganda alone does not tell us much about how the group is going to behave in the future. Zawahiri, for example, has publicly called on al-Qaida’s affiliates to carry out attacks on the West. But privately, he has instructed his affiliate in Syria to hold off. And for most of its history, the Islamic State focused on attacking the near enemy in the Middle East rather than the far enemy overseas, despite repeatedly vowing to hit the United States. 
Both groups will take advantage of any easy opportunity to strike the United States. However, continuing to frame future forecasts through an America-centric lens will yield analysis that’s off the mark and of questionable utility. Al-Qaida and the Islamic State don’t control all of the actions of their affiliates. News headlines lead casual readers to believe al-Qaida and the Islamic State command and control vast networks operating under a unified strategic plan. But a year ago, the Charlie Hebdo attack in Paris caught al-Qaida in the Arabian Peninsula (AQAP) completely by surprise—despite one of the attackers attributing the assault to the group. Al-Qaida in the Islamic Maghreb's (AQIM) recent spate of attacks in Mali and Burkina Faso was likely conducted independently of al-Qaida’s central leadership. While the Islamic State has clearly mobilized its network and inspired others to execute a broad range of international attacks, the group’s central leadership in Iraq and Syria closely manages only a small subset of these plots. At no time since the birth of al-Qaida have jihadi affiliates and networks operated with such independence. Since Osama bin Laden’s death, al-Qaida affiliates in Yemen, the Sahel, Somalia, and Syria all aggressively sought to form states—a strategy bin Laden advised against. Target selections and the rapid pace of plots by militants in both networks suggest local dynamics rather than a cohesive, global grand strategy drive today’s jihad. Accurately anticipating the competition and cooperation of such a wide array of terrorist affiliates with overlapping allegiances to both groups will require examination by teams of analysts with a range of expertise rather than single pundits. 
Both groups and their affiliates will be increasingly enticed to align with state sponsors and other non-jihadi, non-state actors. The more money al-Qaida and the Islamic State have, the more leverage they have over their affiliates. But when the money dries up—as it did in al-Qaida’s case and will in the Islamic State’s—the affiliates will look elsewhere to sustain themselves. Distant affiliates will seek new suitors or create new enterprises. Inevitably, some of the affiliates will look to states that are willing to fund them in proxy wars against their mutual adversaries. Iran, despite fighting the Islamic State in Syria, might be enticed to support Islamic State terrorism inside Saudi Arabia’s borders. Saudi Arabia could easily use AQAP as an ally against the Iranian-backed Houthis in Yemen. African nations may find it easier to pay off jihadi groups threatening their countries than face persistent destabilizing attacks in their cities. When money becomes scarce, the affiliates of al-Qaida and the Islamic State will have fewer qualms about taking money from their ideological enemies if they share common short-term interests. If you want to predict the future direction of the Islamic State and al-Qaida, avoid the flawed assumptions noted above. Instead, I offer these three notes: First, look to regional terrorism forecasts illuminating local nuances routinely overlooked in big global assessments of al-Qaida and the Islamic State. Depending on the region, either the Islamic State or al-Qaida may reign supreme, and their ascendance will be driven more by local than global forces. Second, watch the migration of surviving foreign fighters from the Islamic State’s decline in Iraq and Syria. Their refuge will be our future trouble spot. Third, don’t try to anticipate too far into the future. 
Since bin Laden’s death, the terrorist landscape has become more diffuse, a half dozen affiliates have risen and fallen, and the Arab Spring went from great hope for democracies to protracted quagmires across the Middle East. Today’s terrorism picture remains complex, volatile, and muddled. There’s no reason to believe tomorrow’s will be any different. Authors: Clint Watts, William McCants Full Article
c Experts Weigh In (part 3): What is the future of al-Qaida and the Islamic State? By webfeeds.brookings.edu Published On :: Wed, 24 Feb 2016 11:48:00 -0500 Will McCants: As we continue onwards in the so-called Long War, it’s a good time to reflect on where we are in the fight against al-Qaida and its bête noire, the Islamic State. Both organizations have benefited from the chaos unleashed by the Arab Spring uprisings but they have taken different paths. Will those paths converge again or will the two organizations continue to remain at odds? Who has the best strategy at the moment? And what political changes might happen in the coming year that will reconfigure their rivalry for leadership of the global jihad? To answer these questions, I’ve asked some of the leading experts on the two organizations to weigh in. First was Barak Mendelsohn, who analyzed the factors that explain the resilience and weaknesses of both groups. Then Clint Watts offered ways to avoid the flawed assumptions that have led to mistaken counterterrorism forecasts in recent years. Next up is Charles Lister, a resident fellow at the Middle East Institute, to examine the respective courses each group has charted to date and whether that's likely to change. Charles Lister: The world of international jihad has had a turbulent few years, and only now is the dust beginning to settle. The emergence of the Islamic State as an independent transnational jihadi rival to al-Qaida sparked a competitive dynamic that has heightened the threat of attacks in the West and intensified the need for both movements to demonstrate their value on local battlefields. Despite the trillions of dollars spent pushing back al-Qaida in Afghanistan and Pakistan and al-Qaida in Iraq, the jihadi threat we face today far eclipses that seen in 2000 and 2001. As has been the case for some time, al-Qaida is no longer a grand transnational movement, but rather a loose network of semi-independent armed groups dispersed around the world. 
Although al-Qaida’s central leadership appears to be increasingly cut off from the world, frequently taking many weeks to respond publicly to significant events, its word remains strong within its affiliates. For example, a secret letter from al-Qaida leader Ayman al-Zawahiri to his Syrian affiliate the Nusra Front in early 2015 promptly caused the group to cease plotting attacks abroad. While the eruption of the Arab Spring in 2010 challenged al-Qaida’s insistence that only violent jihad can secure political change, the subsequent repression and resulting instability provided an opportunity. What followed was a period of extraordinary strategic review. Beginning with Ansar al-Sharia in Yemen (in 2010 and 2011) and then with al-Qaida in the Islamic Maghreb (AQIM), Ansar al-Din, and the Movement for Unity and Jihad in West Africa (MUJAO) in Mali (2012), al-Qaida began developing a new strategy focused on slowly nurturing unstable and vulnerable societies into hosts for an al-Qaida Islamic state. Although a premature imposition of harsh Shariah norms caused projects in Yemen and Mali to fail, al-Qaida’s activities in Syria and Yemen today look to have perfected the new “long game” approach. In Syria and Yemen, al-Qaida has taken advantage of weak states suffering from acute socio-political instability in order to embed itself within popular revolutionary movements. Through a consciously managed process of “controlled pragmatism,” al-Qaida has successfully integrated its fighters into broader dynamics that, with additional manipulation, look all but intractable. Through a temporary renunciation of Islamic hudud (fixed punishments in the Quran and Hadith) and an overt insistence on multilateral populist action, al-Qaida has begun socializing entire communities into accepting its role within their revolutionary societies. 
With durable roots in these operational zones—“safe bases,” as Zawahiri calls them—al-Qaida hopes one day to proclaim durable Islamic emirates as individual components of an eventual caliphate. Breadth versus depth The Islamic State (or ISIS), on the other hand, has emerged as al-Qaida’s obstreperous and brutally rebellious younger sibling. Seeking rapid and visible results, ISIS worries little about taking the time to win popular acceptance and instead controls territory through force and psychological intimidation. As a militarily capable and administratively accomplished organization, ISIS has acquired a stranglehold over parts of Iraq and Syria—like Raqqa, Deir el-Zour, and Mosul—but its roots are shallow at best elsewhere in both countries. With effective and representative local partners, the U.S.-led coalition can and will eventually take back much of ISIS’s territory, but evidence thus far suggests progress will be slow. Meanwhile, ISIS has developed invaluable strategic depth elsewhere in the world, through its acquisition of affiliates—or additional “states” for its Caliphate—in Yemen, Libya, Algeria, Egypt, Afghanistan, Pakistan, Nigeria, and Russia. Although it will struggle to expand much beyond its current geographical reach, the growing importance of ISIS in Libya, Egypt, and Afghanistan-Pakistan in particular will allow the movement to survive pressures it faces in Syria and Iraq. As that pressure heightens, ISIS will seek to delegate some level of power to its international affiliates, while actively encouraging retaliatory attacks—both centrally directed and more broadly inspired—against high-profile Western targets. Instability breeds opportunity for groups like ISIS, so we should also expect it to exploit the fact that refugee flows from Syria towards Europe in 2016 look set to dramatically eclipse those seen in 2015. Charting a new course? 
That the world now faces threats from two major transnational jihadist movements employing discernibly different strategies makes today’s counterterrorism challenge much more difficult. The dramatic expansion of ISIS and its captivation of the world’s media attention has encouraged a U.S.-led obsession with an organization that has minimal roots in conflict-ridden societies. Meanwhile the West has become distracted from its long-time enemy al-Qaida, which has now grown deep roots in places like Syria and Yemen. Al-Qaida has not disappeared, and neither has it been defeated. We continue this policy imbalance at our peril. In recent discussions with Islamist sources in Syria, I’ve heard that al-Qaida may be further adapting its long-game strategy. The Nusra Front has been engaged in six weeks of on/off secret talks with at least eight moderate Islamist rebel groups, after proposing a grand merger with any interested party in early January. Although talks briefly came to a close in mid-January over the troublesome issue of the Nusra Front’s allegiance to al-Qaida, the group’s leader, Abu Mohammed al-Jolani, has now placed those ties on the table for negotiation. The fact that this sensitive subject is now reportedly open for discussion is a significant indicator of how far the Nusra Front is willing to stretch its jihadist mores for the sake of integration in Syrian revolutionary dynamics. However, Jolani is a long-time al-Qaida loyalist and doesn't fit the profile of someone willing to break a religious oath purely for the sake of an opportunistic power play. 
It is therefore interesting that this secret debate inside Syria comes amid whispers within Salafi-jihadi and pro-al-Qaida circles that Zawahiri is considering “releasing” his affiliates from their loyalty pledges in order to transform al-Qaida into an organic network of locally inspired movements—guided and loosely tied together by an overarching strategic idea. Whether al-Qaida and its affiliates ultimately evolve along this path or not, the threat they pose to local, regional, and international security is clear. When compounded by ISIS’s determination to continue expanding and to conduct more frequent and more deadly attacks abroad, jihadist militancy looks well placed to pose an ever-present danger for many years to come. Authors: Charles Lister, William McCants Full Article
c Beyond 2016: Security challenges and opportunities for the next administration By webfeeds.brookings.edu Published On :: Tue, 01 Mar 2016 09:00:00 -0500 Event Information March 1, 2016, 9:00 AM - 4:15 PM EST, Falk Auditorium, Brookings Institution, 1775 Massachusetts Avenue NW, Washington, DC 20036 Register for the Event The Center for 21st Century Security and Intelligence's seventh annual military and federal fellow research symposium. On March 1, the seventh annual military and federal fellow research symposium featured the independent research produced by members of the military services and federal agencies who are currently serving at think tanks and universities across the nation. Organized by the fellows themselves, the symposium provides a platform for building greater awareness of the cutting-edge work that America’s military and governmental leaders are producing on key national security policy issues. With presidential primary season well underway, it’s clear that whoever emerges in November 2016 as the next commander-in-chief will have their hands full with a number of foreign policy and national security choices. This year’s panels explored these developing issues and their prospects for resolution after the final votes have been counted. During their keynote conversation, the Honorable Michèle Flournoy discussed her assessment of the strategic threat environment with General John Allen, USMC (Ret.), who also provided opening remarks on strategic leadership and the importance of military and other federal fellowship experiences. Video: Opening remarks and The future of the All-Volunteer Force; The next generation of terrorism; Harnessing technology in the future force; Keynote discussion: Assessing the strategic environment; To intervene, or not to intervene? Audio: Beyond 2016: Security challenges and opportunities for the next administration Full Article
c The French connection: Explaining Sunni militancy around the world By webfeeds.brookings.edu Published On :: Fri, 25 Mar 2016 14:55:00 -0400 Editors’ Note: The mass-casualty terrorist attacks in Paris and now in Brussels underscore an unsettling truth: Jihadis pose a greater threat to France and Belgium than to the rest of Europe. Research by Will McCants and Chris Meserole reveals that French political culture may play a role. This post originally appeared in Foreign Affairs. The mass-casualty terrorist attacks in Paris and now in Brussels underscore an unsettling truth: Jihadists pose a greater threat to France and Belgium than to the rest of Europe. The body counts are larger and the disrupted plots are more numerous. The trend might be explained by the nature of the Islamic State (ISIS) networks in Europe or as failures of policing in France and Belgium. Both explanations have merit. However, our research reveals that another factor may be at play: French political culture. Last fall, we began a project to test empirically the many proposed explanations for Sunni militancy around the globe. The goal was to take common measures of the violence—namely, the number of Sunni foreign fighters from any given country as well as the number of Sunni terror attacks carried out within it—and then crunch the numbers to see which explanations best predicted a country’s rate of Sunni radicalization and violence. (The raw foreign fighter data came from The International Centre for the Study of Radicalisation and Political Violence; the original attack data came from the University of Maryland’s START project.) What we found surprised us, particularly when it came to foreign fighter radicalization. It turns out that the best predictor of foreign fighter radicalization was not a country’s wealth. Nor was it how well-educated its citizens were, how healthy they were, or even how much Internet access they enjoyed. 
Instead, the top predictor was whether a country was Francophone; that is, whether it currently lists (or previously listed) French as a national language. As strange as it may seem, four of the five countries with the highest rates of radicalization in the world are Francophone, including the top two in Europe (France and Belgium). Knowledgeable readers will immediately object that the raw numbers tell a different story. The English-speaking United Kingdom, for example, has produced far more foreign fighters than French-speaking Belgium. And fighters from Saudi Arabia number in the several thousands. But the raw numbers are misleading. If you view the foreign fighters as a percentage of the overall Muslim population, you see a different picture. Per Muslim resident, Belgium produces far more foreign fighters than either the United Kingdom or Saudi Arabia. So what could the language of love possibly have to do with Islamist violence? We suspect that it is really a proxy for something else: French political culture. The French approach to secularism is more aggressive than, say, the British approach. France and Belgium, for example, are the only two countries in Europe to ban the full veil in their public schools. They’re also the only two countries in Western Europe not to gain the highest rating for democracy in the well-known Polity score data, which does not include explanations for the markdowns. Adding support to this story are the top interactions we found between different variables. When you look at which combination of variables is most predictive, it turns out that the “Francophone effect” is actually strongest in the countries that are most developed: French-speaking countries with the highest literacy, best infrastructure, and best health system. 
This is not a story about French colonial plunder. If anything it’s a story about what happens when French economic and political development has most deeply taken root. An important subplot within this story concerns the distribution of wealth. In particular, the rates of youth unemployment and urbanization appear to matter a great deal too. Globally, we found that when between 10 and 30 percent of a country’s youth are unemployed, there is a strong relationship between a rise in youth unemployment and a rise in Sunni militancy. Rates outside that range don’t have an effect. Likewise, when urbanization is between 60 and 80 percent, there is a strong relationship. These findings seem to matter most in Francophone countries. Among the over 1,000 interactions our model looked at, those between Francophone and youth unemployment and Francophone and urbanization both ranked among the 15 most predictive. There’s broad anecdotal support for this idea: consider the rampant radicalization in Molenbeek, in the Paris banlieues, in Ben Gardane. Each of these places has produced a massively disproportionate share of foreign fighters, and each is an urban pocket with high youth unemployment. As with the Francophone finding overall, we’re left with guesswork as to why exactly the relationships between French politics, urbanization, youth unemployment, and Sunni militancy exist. We suspect that when there are large numbers of unemployed youth, some of them are bound to get up to mischief. When they live in large cities, they have more opportunities to connect with people espousing radical causes. And when those cities are in Francophone countries that adopt the strident French approach to secularism, Sunni radicalism is more appealing. For now, the relationship needs to be studied and tested by comparing cases within and between countries. 
We also found other interesting relationships—such as between Sunni violence and prior civil conflict—but they are neither as strong nor as compelling. Regardless, the latest attacks in Belgium are reason enough to share the initial findings. They may be way off, but at least they are based on the best available data. If the data is wrong or our interpretations skewed, we hope the effort will lead to more rigorous explanations of what is driving jihadist terrorism in Europe. Our initial findings should in no way imply that Francophone countries are responsible for the recent horrible attacks—no country deserves to have its civilians killed, regardless of the perpetrator’s motives. But the magnitude of the violence and the fear it engenders demand that we investigate those motives beyond just the standard boilerplate explanations. Authors: William McCants, Christopher Meserole Publication: Foreign Affairs Full Article
c Rethinking Political Islam By webfeeds.brookings.edu Published On :: Fri, 06 May 2016 14:10:00 -0400 Full Article
c American attitudes on refugees from the Middle East By webfeeds.brookings.edu Published On :: Mon, 13 Jun 2016 14:00:00 -0400 Event Information June 13, 2016, 2:00 PM - 3:30 PM EDT, The Brookings Institution, Falk Auditorium, 1775 Massachusetts Ave., N.W., Washington, DC 20036 With violence in the Middle East and the associated refugee crisis continuing unabated, these issues remain prominent in Washington policy debates. It is therefore increasingly important for U.S. policymakers, political candidates, and voters to understand the American public's attitudes toward the conflicts in the Middle East and the refugees fleeing those crises. On June 13, Brookings launched a new public opinion survey focusing on American attitudes toward refugees from the Middle East and from Syria in particular. Conducted by Nonresident Senior Fellow Shibley Telhami, the poll looks at a range of questions, from whether Americans feel the United States has a moral obligation to take in refugees to whether these refugees pose a threat to national security. The national poll takes into account an expanded set of demographic variables and includes an over-sized sample of millennials. Telhami was joined in discussion by POLITICO Magazine and Boston Globe contributor Indira Lakshmanan. William McCants, senior fellow and director of the Project on U.S. Relations with the Islamic World at Brookings, provided introductory remarks and moderated the discussion. This event launched the Brookings Refugees Forum, which will take place on June 14 and 15. Join the conversation on Twitter using #RefugeeCrisis. 
Video American attitudes on refugees from the Middle East - Part 1, American attitudes on refugees from the Middle East - Part 2 Audio American attitudes on refugees from the Middle East Transcript Uncorrected Transcript (.pdf) Event Materials 20160613_telhami_poll_presentation, 20160613_american_opinion_refugees_transcript Full Article
c Realist or neocon? Mixed messages in Trump advisor’s foreign policy vision By webfeeds.brookings.edu Published On :: Tue, 19 Jul 2016 08:00:00 -0400 Last night, retired Lieutenant General Michael Flynn addressed the Republican convention as a headline speaker on the subject of national security. One of Donald Trump’s closest advisors—so much so that he was considered for vice president—Flynn repeated many of the themes found in his new book, The Field of Fight: How We Can Win the Global War Against Radical Islam and Its Allies, which he coauthored with Michael Ledeen. (The book is published by St. Martin’s, which also published mine.) Written in Flynn’s voice, the book advances two related arguments: First, the U.S. government does not know enough about its enemies because it does not collect enough intelligence, and it refuses to take ideological motivations seriously. Second, our enemies are collaborating in an “international alliance of evil countries and movements that is working to destroy” the United States despite their ideological differences. Readers will immediately notice a tension between the two ideas. “On the surface,” Flynn admits, “it seems incoherent.” He asks: “How can a Communist regime like North Korea embrace a radical Islamist regime like Iran? What about Russia’s Vladimir Putin? He is certainly no jihadi; indeed, Russia has a good deal to fear from radical Islamist groups.” Flynn spends much of the book resolving the contradiction and proving that America’s enemies—North Korea, China, Russia, Iran, Syria, Cuba, Bolivia, Venezuela, Nicaragua, al-Qaida, Hezbollah, and ISIS—are in fact working in concert. No one who has read classified intelligence or studied international relations will balk at the idea that unlikely friendships are formed against a common enemy. 
As Flynn observes, the revolutionary Shiite government in Tehran cooperates with nationalist Russia and communist North Korea; it has also turned a blind eye (at the very least) to al-Qaida’s Sunni operatives in Iran and used them as bargaining chips when negotiating with Osama bin Laden and the United States. Flynn argues that this is more than “an alliance of convenience.” Rather, the United States’ enemies share “a contempt for democracy and an agreement—by all the members of the enemy alliance—that dictatorship is a superior way to run a country, an empire, or a caliphate.” Their shared goals of maximizing dictatorship and minimizing U.S. interference override their substantial ideological differences. Consequently, the U.S. government must work to destroy the alliance by “removing the sickening chokehold of tyranny, dictatorships, and Radical Islamist regimes.” Its failure to do so over the past decades gravely imperils the United States, he contends. Some of Flynn’s evidence for the alliance diverts into the conspiratorial—I’ve seen nothing credible to back up his assertion that the Iranians were behind the 1979 takeover of the Grand Mosque in Mecca by Sunni apocalypticists. And there’s an important difference between the territorially bounded ambitions of Iran, Russia, and North Korea, on the one hand, and ISIS’s desire to conquer the world on the other; the former makes alliances of convenience easier than the latter. Still, Flynn would basically be a neocon if he stuck with his core argument: tyrannies of all stripes are arrayed against the United States, so the United States should destroy them. But some tyrannies are less worthy of destruction than others. 
In fact, Flynn argues there’s a category of despot that should be excluded from his principle, the “friendly tyrants” like President Abdel-Fatah el-Sissi in Egypt and former president Zine Ben Ali in Tunisia. Saddam Hussein should not have been toppled, Flynn argues, and even Russia could become an “ideal partner for fighting Radical Islam” if only it would come to its senses about the threat of “Radical Islam.” Taken alone, these arguments would make Flynn a realist, not a neocon. The book thus offers two very different views of how to exercise American power abroad: spread democracy or stand with friendly strongmen. Neither is a sure path to security. Spreading democracy through the wrong means can bring to power regimes that are even more hostile and authoritarian; standing with strongmen risks the same. Absent some principle higher than just democracy or security for their own sakes, the reader is unable to decide between Flynn’s contradictory perspectives and judge when their benefits are worth the risks. It’s strange to find a book about strategy so at odds with itself. Perhaps the dissonance is due to the co-authors’ divergent views (Ledeen is a neocon and Flynn is comfortable dining with Putin.) Or perhaps it mirrors the confusion in the Republican establishment over the direction of conservative foreign policy. Whatever the case, the muddled argument offered in The Field of Fight demonstrates how hard it is to overcome ideological differences to ally against a common foe, regardless of whether that alliance is one of convenience or conviction. Authors William McCants Full Article
c Global economic and environmental outcomes of the Paris Agreement By webfeeds.brookings.edu Published On :: The Paris Agreement, adopted by the Parties to the United Nations Framework Convention on Climate Change (UNFCCC) in 2015, has now been signed by 197 countries. It entered into force in 2016. The agreement established a process for moving the world toward stabilizing greenhouse gas (GHG) concentrations at a level that would avoid dangerous climate… Full Article
c Policy insights from comparing carbon pricing modeling scenarios By webfeeds.brookings.edu Published On :: Carbon pricing is an important policy tool for reducing greenhouse gas pollution. The Stanford Energy Modeling Forum exercise 32 convened eleven modeling teams to project emissions, energy, and economic outcomes of an illustrative range of economy-wide carbon price policies. The study compared a coordinated reference scenario involving no new policies with policy scenarios that impose… Full Article
c Leading carbon price proposals: A bipartisan dialogue By webfeeds.brookings.edu Published On :: Fri, 07 Jun 2019 15:47:37 +0000 Economists overwhelmingly recommend a price on carbon as a way to control the risk of climatic disruption. A fee on carbon dioxide and other greenhouse gas emissions would shift the relative prices of different sources of energy and other goods by an amount that depends on how damaging they are to the earth’s climate. A… Full Article
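The price-shifting mechanism described above can be made concrete with a bit of arithmetic. The sketch below is purely illustrative and not drawn from the dialogue itself: the $40/ton fee is hypothetical, and the emissions intensities are rough, commonly cited approximations.

```python
# Illustrative sketch (not from the source): a carbon fee raises the
# cost of each energy source in proportion to its CO2 intensity,
# shifting relative prices toward cleaner sources.
CARBON_PRICE = 40.0  # hypothetical fee, dollars per metric ton of CO2

# Approximate emissions intensity of electricity generation,
# metric tons of CO2 per MWh (rough illustrative values).
emission_factors = {
    "coal": 1.0,
    "natural_gas": 0.4,
    "wind": 0.0,
}

def added_cost_per_mwh(fuel: str) -> float:
    """Extra cost per MWh implied by the carbon fee for a given fuel."""
    return CARBON_PRICE * emission_factors[fuel]

for fuel in emission_factors:
    print(f"{fuel}: +${added_cost_per_mwh(fuel):.2f}/MWh")
```

Because coal emits roughly two and a half times as much CO2 per MWh as natural gas in this sketch, the same fee penalizes it two and a half times as much, which is exactly the relative-price shift the blurb describes.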
c My Climate Journey podcast episode 17: Adele Morris By webfeeds.brookings.edu Published On :: Mon, 08 Jul 2019 15:23:14 +0000 Full Article
c The risk of fiscal collapse in coal-reliant communities By webfeeds.brookings.edu Published On :: EXECUTIVE SUMMARY If the United States undertakes actions to address the risks of climate change, the use of coal in the power sector will decline rapidly. This presents major risks to the 53,000 US workers employed by the industry and their communities. Twenty-six US counties are classified as “coal-mining dependent,” meaning the coal industry is… Full Article
c Columbia Energy Exchange: Coal communities face risk of fiscal collapse By webfeeds.brookings.edu Published On :: Mon, 15 Jul 2019 15:31:47 +0000 Full Article
c The risk of fiscal collapse in coal-reliant communities By webfeeds.brookings.edu Published On :: Wed, 17 Jul 2019 20:46:52 +0000 Full Article
c Why local governments should prepare for the fiscal effects of a dwindling coal industry By webfeeds.brookings.edu Published On :: Thu, 05 Sep 2019 15:36:41 +0000 Full Article
c The Neoliberal Podcast: Carbon Taxes ft. Adele Morris, David Hart & Philippe Benoit By webfeeds.brookings.edu Published On :: Wed, 09 Oct 2019 14:42:05 +0000 Full Article
c Adele Morris on BPEA and looking outside macroeconomics By webfeeds.brookings.edu Published On :: Thu, 12 Mar 2020 13:00:49 +0000 Adele Morris is a senior fellow in Economic Studies and policy director for Climate and Energy Economics at Brookings. She recently served as a discussant for a paper as part of the Spring 2019 BPEA conference.Her research informs critical decisions related to climate change, energy, and tax policy. She is a leading global expert on the design… Full Article
c A systematic review of systems dynamics and agent-based obesity models: Evaluating obesity as part of the global syndemic By webfeeds.brookings.edu Published On :: Fri, 19 Jul 2019 13:02:35 +0000 Full Article
c Modeling community efforts to reduce childhood obesity By webfeeds.brookings.edu Published On :: Mon, 26 Aug 2019 13:00:42 +0000 Why childhood obesity matters According to the latest data, childhood obesity affects nearly 1 in 5 children in the United States, a number which has more than tripled since the early 1970s. Children who have obesity are at a higher risk of many immediate health risks such as high blood pressure and high cholesterol, type… Full Article
c Simulating the effects of tobacco retail restriction policies By webfeeds.brookings.edu Published On :: Tue, 03 Sep 2019 13:00:00 +0000 Tobacco use remains the single largest preventable cause of death and disease in the United States, killing more than 480,000 Americans each year and incurring over $300 billion per year in costs for direct medical care and lost productivity. In addition, of all cigarettes sold in the U.S. in 2016, 35% were menthol cigarettes, which… Full Article