
A cross-linking mass spectrometry approach defines protein interactions in yeast mitochondria

Andreas Linden
Apr 24, 2020; 0:RA120.002028v1-mcp.RA120.002028
Research





Characterization of signaling pathways associated with pancreatic β-cell adaptive flexibility in compensation of obesity-linked diabetes in db/db mice

Taewook Kang
Apr 7, 2020; 0:RA119.001882v1-mcp.RA119.001882
Research





Proteome and phosphoproteome analysis of brown adipocytes reveals that RICTOR loss dampens global insulin/AKT signaling

Samuel W Entwisle
Apr 6, 2020; 0:RA120.001946v2-mcp.RA120.001946
Research





Large-scale Identification of N-linked Intact Glycopeptides in Human Serum using HILIC Enrichment and Spectral Library Search

Qingbo Shu
Apr 1, 2020; 19:672-689
Research





Selection of features with consistent profiles improves relative protein quantification in mass spectrometry experiments

Tsung-Heng Tsai
Mar 31, 2020; 0:RA119.001792v1-mcp.RA119.001792
Research





Profiling Cell Signaling Networks at Single-cell Resolution

Xiao-Kang Lun
May 1, 2020; 19:744-756
Review





Quantitative Profiling of the Human Substantia Nigra Proteome from Laser-capture Microdissected FFPE Tissue

Eva Griesser
May 1, 2020; 19:839-851
Research





DEqMS: a method for accurate variance estimation in differential protein expression analysis

Yafeng Zhu
Mar 23, 2020; 0:TIR119.001646v1-mcp.TIR119.001646
Technological Innovation and Resources





Virtual Issue: Technological Innovations

Anne-Claude Gingras
Apr 1, 2020; 19:572-573
Editorial





Compliance Checklists No Longer Required at Initial Manuscript Submission

Alma L. Burlingame
Apr 1, 2020; 19:571-571
Editorial





Improving Identification of In-organello Protein-Protein Interactions Using an Affinity-enrichable, Isotopically Coded, and Mass Spectrometry-cleavable Chemical Crosslinker

Karl A. T. Makepeace
Apr 1, 2020; 19:624-639
Research





An Improved Boosting to Amplify Signal with Isobaric Labeling (iBASIL) Strategy for Precise Quantitative Single-cell Proteomics

Chia-Feng Tsai
May 1, 2020; 19:828-838
Research





Acquiring and Analyzing Data Independent Acquisition Proteomics Experiments without Spectrum Libraries

Lindsay K Pino
Apr 20, 2020; 0:P119.001913v1-mcp.P119.001913
Perspective





A Compact Quadrupole-Orbitrap Mass Spectrometer with FAIMS Interface Improves Proteome Coverage in Short LC Gradients

Dorte B. Bekker-Jensen
Apr 1, 2020; 19:716-729
Technological Innovation and Resources





Proximity Dependent Biotinylation: Key Enzymes and Adaptation to Proteomics Approaches

Payman Samavarchi-Tehrani
May 1, 2020; 19:757-773
Review





Microsoft delivers fixes for 110 bugs in April, 2020 Patch Tuesday

For the April edition of Patch Tuesday, Microsoft repaired a total of 110 security vulnerabilities across its product line. Included in this count are 37 remote code execution bugs and 33 elevation-of-privilege bugs. The company rated 18 of the vulnerabilities “Critical.” This release’s most notable item is the follow-up to last month’s announcement, […]





Rising to the challenge in the worst circumstances

In this era when we need to do everything possible to keep essential businesses operational, the people of the information security community have banded together in an inspirational effort to protect us all.





Following the money in a massive “sextortion” spam scheme

Cryptocurrency profits from sextortion spam funneled into wallets tied to other cybercrime and dark web market activity.






Protecting the Cloud: Securing access to public cloud accounts

How Sophos Cloud Optix helps you secure access to your AWS, Azure and Google Cloud Platform accounts.





Protecting the Cloud: Securing user remote access to AWS

How to create secure access to services hosted in AWS with Sophos XG Firewall.






Mathematical light shines blindly on us

By William Yslas Vélez, Professor Emeritus, University of Arizona. “When I go to a Mexican restaurant I would gladly pay the musicians to stop playing.” John (not his real name) did not like the noise level. This statement came up …





A New Type of Learning Community

Setting high standards is expected of all educators. Yet I think I may have taken this to an extreme in my 2019 spring senior seminar course in algebraic combinatorics. Students walked into class, got a copy of the syllabus …





Finding belonging through mentorship

Guest blog by Stephen McKean. On my first day of college, I showed up an hour early to my very first class. The class was Math 2210, multivariable calculus. For some reason, I thought this was the highest math class …





Learning During the Pandemic: What we wish our professors and mentors knew

Student Authors: Mayleen Cortez, Brooke Keene-Gomez, Lucy Martinez, Amaury V. Miniño, Jenna Race, Kelemua Tesfaye, and Stephanie. Blog post compiled by Melissa Gutiérrez González, Pamela E. Harris, and Alicia Prieto Langarica. In this blog we center the voices of mathematics students as …





Strengthening Urban Preparedness and Resilience Against Biological Threats in Accra

Invitation Only Research Event

1 March 2019 - 10:30am to 2 March 2019 - 3:00pm

Chatham House, London

Capacity to contain and respond to biological threats varies considerably across the world. Yet such preparedness is vital for prevention, impact-reduction and resilience in the face of biological events, whether they be natural or deliberate outbreaks.

Chatham House is conducting a series of meetings to strengthen urban preparedness for, and resilience against, biological threats in African countries. This meeting will examine the preparedness and prevention mechanisms in Accra, reviewing the comprehensiveness of city-level preparedness.  

This meeting will focus on the formation and implementation of city-level action plans in the context of preparedness for managing biological threats. It will also explore how local authorities are contributing to this effort with their knowledge and expertise.

Attendance at this event is by invitation only.

Nilza Amaral

Project Manager, International Security Programme





Cybersecurity in the Commonwealth: Building the Foundations of Effective National Responses in the Caribbean

Invitation Only Research Event

8 March 2019 - 9:00am to 5:30pm

Bridgetown, Barbados

Event participants

Joyce Hakmeh, Cyber Research Fellow, International Security Department, Chatham House

This workshop is the second in a series in the 'Implementing the Commonwealth Cybersecurity Agenda' project. It aims to provide a multi-stakeholder, pan-Commonwealth platform to discuss how to take forward the implementation of the 'Commonwealth Cyber Declaration', with a focus on the declaration's second pillar: building the foundations of an effective national cybersecurity response through eight action points.

As such, the workshop gathers the different project implementers under the UK Foreign and Commonwealth Office’s Cyber Programme, together with other key stakeholders at the global level, to explore ongoing initiatives that aim to deliver one or more of pillar two’s action points.

The workshop addresses these issues from both a global and a Commonwealth perspective, and will include presentations from selected partners from different Commonwealth countries.

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





How Regulation Could Break the Internet: In Conversation with Andrew Sullivan

Research Event

19 June 2019 - 6:00pm to 7:00pm

Chatham House | 10 St James's Square | London | SW1Y 4LE

Event participants

Andrew Sullivan, President and CEO, Internet Society
Chair: Emily Taylor, Associate Fellow, International Security Department, Chatham House; Editor, Journal of Cyber Policy

Internet regulation is increasing around the world, creating positive obligations for internet providers and unintended negative consequences for the internet’s infrastructure. In some ways, most of this regulatory activity is justifiable: governments are concerned about the increased risk that use of the internet brings to their societies, and many have responded by making regulation their main approach to dealing with these concerns. The main challenge is that most of the current regulations are either ill-defined or unworkable.

On the one hand, several governments have established procedures that seek to analyze the impacts of new regulatory proposals before they are adopted. However, not enough attention has been paid to analyzing regulations after their adoption, and only a few governments have measures in place to evaluate the impacts of the procedures and practices that govern the regulatory process itself.

On the other hand, much of the regulation creates unintended consequences for the internet itself, undermining many of its fundamental properties and challenging the integrity and resilience of its infrastructure.

This event discusses current practices in internet-related regulation and the related challenges. Panellists will discuss how governments can enforce regulations that achieve their intended purpose while at the same time protecting the internet’s core infrastructure and its properties, including its openness, interoperability and global reach.

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





The Implications of Drone Proliferation in Europe

Invitation Only Research Event

20 May 2019 - 9:30am to 21 May 2019 - 12:30pm

Chatham House, London

Several countries in Europe are currently developing, increasing or arming their drone capabilities. This increase in drone capability is taking place through procurement by individual states, bilateral projects – such as the Franco-German Future Combat Air System – and multinational efforts that include the development of the Eurodrone – a project by Germany, France, Italy and Spain – and the nEUROn, which involves France, Greece, Italy, Spain, Sweden and Switzerland. In addition, the EU has recently approved the European Defence Fund which, along with funds for joint development of defence equipment and technology and for defence research, also includes direct funding to support the development of the Eurodrone.
  
Other recent developments include the UK obtaining the MQ-9B SkyGuardian – an updated variant of the MQ-9 Reaper – and Germany signing a leasing agreement for the Israeli-made Heron TP. Also, in March 2019, the US State Department approved the sale of the MQ-9B SkyGuardian to Belgium and announced the provision of MQ-9 Reaper drones to the Netherlands. While the UK and France are the only countries in Europe so far to possess armed drones, Italy is on track to acquire armed drones in the near future. However, out of the European countries currently seeking to obtain unarmed drones – including Spain, Poland and Greece – only Switzerland has so far ruled out weaponizing its drones in the future.
 
This Chatham House workshop will examine key issues pertaining to drone proliferation in Europe including: 
  • The risks and challenges of the increasing number and arming of drones. 
  • The risks, in terms of geostrategic interests and the future battlefield, for countries that choose not to acquire or arm drones.
  • Opportunities for developing common standards on drone transfers and deployment across EU member states.
  • Sharing and cooperation on drone use.
  • The legal and policy implications that might arise for European states as a result.

With Brexit on the horizon, participants will also consider what impact this may have on future drone developments in Europe.

Attendance at this event is by invitation only.

Nilza Amaral

Project Manager, International Security Programme





Protecting Children in Conflict: See Me Safe Symposium

Invitation Only Research Event

7 May 2019 - 10:00am to 5:00pm

Chatham House, London

Today there are 420 million children, or one-fifth of children worldwide, who live in conflict zones and are at risk of being killed or injured and denied access to education, healthcare and humanitarian assistance. From Myanmar and Syria to South Sudan and Yemen, the impact of conflict on children and their families is devastating. With conflicts becoming more protracted and urbanized, and international rules and norms being undermined, the risk to civilians is rapidly increasing.
 
The crisis in civilian protection is not only devastating children’s lives and risking a lost generation; it also threatens global stability and prosperity, contributing to the degradation of the international rules-based system and its institutions and undermining the ability to hold perpetrators accountable and to prevent these atrocities from happening.
 
This symposium will bring together practitioners, policymakers, business leaders, philanthropists and academics for a day of panel discussions on the protection of children in conflict. The aim of the event is to generate an informed debate and to deepen engagement with issues around protecting children in conflict as well as to inspire support to help rebuild children’s lives.
 
This event will be followed by a reception from 17:00-18:30.
 
Attendance is by invitation only.
 
Celebrating its centenary in 2020, Chatham House is partnering with Save the Children, in the charity’s own anniversary year, on this core area of its work.

Nilza Amaral

Project Manager, International Security Programme





Christopher Painter

Associate Fellow, International Security Programme

Biography

Christopher Painter is a globally recognized leader on cyber policy, cyber diplomacy, cybersecurity and combatting cybercrime.

He has been at the vanguard of cyber issues for over 27 years, first as a federal prosecutor handling some of the most high-profile cyber cases in the U.S., then as a senior official at the U.S. Department of Justice, the FBI, the White House National Security Council and, finally, as the world’s first cyber diplomat at the U.S. Department of State.

Among other things, Christopher currently serves as a commissioner on the Global Commission for the Stability of Cyberspace and chairs a working group on cyber capacity for the Global Forum for Cyber Expertise.

He is a frequent speaker on cyber issues, is regularly interviewed and quoted in the media, and has testified on numerous occasions before U.S. Congressional committees.

He has received a number of awards and honors including Japan’s Order of the Rising Sun, the RSA Security Conference Public Policy Award and the Attorney General’s Award for Exceptional Service. 

He received his B.A. from Cornell University and J.D. from Stanford Law School.

Areas of expertise

  • International aspects of cyber policy including cyber diplomacy
  • Deterrence and collective action in cyberspace
  • Combatting cybercrime and enhancing cybersecurity

Past experience

2019: William J. Perry Fellow, Center for Security and Cooperation, Stanford University
2017 - present: Board member, Center for Internet Security
2017 - present: Commissioner, Global Commission for the Stability of Cyberspace





Cyber Insurance for Civil Nuclear Facilities: Risks and Opportunities

8 May 2019

This paper sets out a roadmap for how organizations in the civil nuclear sector can explore their options and review their cyber risk exposure.

Éireann Leverett

Senior Risk Researcher, University of Cambridge


The control room inside the Paks nuclear power plant in Hungary, 10 April 2017. Photo: Getty Images
  • Civil nuclear facilities and organizations hold sensitive information on security clearances, national security, health and safety, nuclear regulatory issues and international inspection obligations. The sensitivity and variety of such data mean that products tailored for insuring the civil nuclear industry have evolved independently and are likely to continue to do so.
  • ‘Air-gaps’ – measures designed to isolate computer systems from the internet – need to be continually maintained for industrial systems. Yet years of evidence indicate that proper maintenance of such protections is often lacking (mainly because very real economic drivers exist that push users towards keeping infrastructure connected). Indeed, even when air-gaps are maintained, security breaches can still occur.
  • Even if a particular organization has staff who are highly trained, ready and capable of handling a technological accident, hacking attack or incident of insider sabotage, it still has to do business and/or communicate with other organizations that may not have the essentials of cybersecurity in place.
  • Regardless of whether the choice is made to buy external insurance or to put aside revenues in preparation for costly incidents, the approach to cyber risk calculation should be the same. Prevention is one part of the equation, but an organization will also need to consider the resources and contingency measures available to it should prevention strategies fail. Can it balance the likelihood of a hacker’s success against the maximum cost to the organization, and put aside enough capital and manpower to get it through a crisis? (An illustrative calculation follows this list.)
  • Any civil nuclear facility that does not already have a computer security incident response (CSIR) team in place should consider establishing one. The existence of a CSIR team will be a prerequisite for any facility seeking to obtain civil nuclear cyber insurance.
  • Preventing attacks such as those involving phishing and ransomware requires good cyber hygiene practices throughout the workforce. Reducing an organization’s ‘time to recovery’ takes training and dedication. Practising the necessary tasks in crisis simulations greatly reduces the likelihood of friction and the potential for error in a crisis.
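
The insurance point above turns on a simple quantitative comparison. As a minimal illustrative sketch – not taken from the paper, and with every figure invented – the following Python snippet shows the shape of that expected-loss arithmetic: weigh the annual likelihood of a successful attack against its maximum cost, then compare the result with the cost of external cover.

    # Illustrative sketch only; not from the Chatham House paper.
    # All probabilities, costs and the premium below are invented.

    def expected_annual_loss(p_success: float, max_cost: float) -> float:
        """Expected yearly loss from a single attack scenario."""
        return p_success * max_cost

    # Hypothetical scenario: a 5% yearly chance of a successful attack
    # costing at most 40 million in recovery, downtime and liabilities.
    eal = expected_annual_loss(0.05, 40_000_000)

    premium = 2_500_000  # hypothetical annual cyber insurance premium

    # Compare buying external insurance with reserving capital internally.
    if premium < eal:
        print(f"Premium {premium:,} < expected loss {eal:,.0f}: external cover looks cheaper")
    else:
        print(f"Premium {premium:,} >= expected loss {eal:,.0f}: self-insurance may suffice")

Whichever branch applies, the paper’s point stands: the same calculation should drive both the decision to buy cover and the size of any internal reserve.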





Understanding Cybercrime for Better Policing: Regional and Global Challenges

Research Event

18 June 2019 - 9:00am to 5:30pm

Chatham House | 10 St James's Square | London | SW1Y 4LE

In recent years, cybercrime has evolved from a niche technological concern into a prominent global issue with substantial preventative and remedial costs for businesses and governments alike. Despite heavy investment in sophisticated cybersecurity measures and the adoption of several legal, organizational and capacity-building measures, cybercrime remains a major threat which is evolving on a daily basis. Today’s cybercrime is more aggressive, more complex, more organized and – importantly – more unpredictable than ever before.

The challenges posed by cybercrime are experienced acutely by countries undergoing digital transformations: as the level of connectivity rises, so too does the potential for online theft, fraud and abuse. Cybercrime is pervasive, but governments can work to limit its impact by creating a resilient overall economy and robust institutions, and by appropriately equipping law enforcement and the justice system to navigate its novel challenges.

To advance the discourse surrounding these issues, this workshop will assess the current cyber threat landscape and how it is evolving. It will identify the main obstacles encountered by law enforcement, the judiciary and prosecutors in their fight against cybercrime. It will also compare national, regional and global approaches that countries can use to effectively curb cybercrime and tackle its emerging challenges.

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





Tackle the ‘Splinternet’

12 June 2019

Marjorie Buchser

Executive Director, Digital Society Initiative

Joyce Hakmeh

Senior Research Fellow, International Security Programme; Co-Editor, Journal of Cyber Policy
Competing governance visions are impairing efforts to regulate the digital space. To limit the spread of repressive models, policymakers in the West and elsewhere need to ensure the benefits of an open and well-run system are more widely communicated.

The development of governance in a wide range of digital spheres – from cyberspace to internet infrastructure to emerging technologies such as artificial intelligence (AI) – is failing to match rapid advances in technical capabilities or the rise in security threats. This is leaving serious regulatory gaps, which means that instruments and mechanisms essential for protecting privacy and data, tackling cybercrime or establishing common ethical standards for AI, among many other imperatives, remain largely inadequate.

A starting point for effective policy formation is to recognize the essential complexity of the digital landscape, and the consequent importance of creating a ‘common language’ for multiple stakeholders (including under-represented actors such as smaller and/or developing countries, civil society and not-for-profit organizations).

The world’s evolving technological infrastructure is not a monolithic creation. In practice, it encompasses a highly diverse mix of elements – so-called ‘high-tech domains’,[1] hardware, systems, algorithms, protocols and standards – designed by a plethora of private companies, public bodies and non-profit organizations.[2] Varying cultural, economic and political assumptions have shaped where and which technologies have been deployed so far, and how they have been implemented.

Perhaps the most notable trend is the proliferation of techno-national regimes and private-sector policy initiatives, reflecting often-incompatible doctrines in respect of privacy, openness, inclusion and state control. Beyond governments, the interests and ambitions of prominent multinationals (notably the so-called ‘GAFAM’ tech giants in the West, and their ‘BATX’ counterparts in China)[3] are significant factors feeding into this debate.

Cyberspace and AI – two case studies

Two particular case studies highlight the essential challenges that this evolving – and, in some respects, still largely unformed – policy landscape presents. The first relates to cyberspace. Since 1998, Russia has established itself as a strong voice in the cyberspace governance debate – calling for a better understanding, at the UN level, of ICT developments and their impact on international security.

The country’s efforts were a precursor to the establishment in 2004 of a series of UN Groups of Governmental Experts (GGEs), aimed at strengthening the security of global information and telecommunications systems. These groups initially succeeded in developing common rules, norms and principles around some key issues. For example, the 2013 GGE meeting recognized that international law applies to the digital space and that its enforcement is essential for a secure, peaceful and accessible ICT environment.

However, the GGE process stalled in 2017, primarily due to fundamental disagreements between countries on the right to self-defence and on the applicability of international humanitarian law to cyber conflicts. The breakdown in talks reflected, in particular, the divide between two principal techno-ideological blocs: one, led by the US, the EU and like-minded states, advocating a global and open approach to the digital space; the other, led mainly by Russia and China, emphasizing a sovereignty-and-control model.

The divide was arguably entrenched in December 2018, with the passage of two resolutions at the UN General Assembly. A resolution sponsored by Russia created a working group to identify new norms and look into establishing regular institutional dialogue.

At the same time, a US-sponsored resolution established a GGE tasked, in part, with identifying ways to promote compliance with existing cyber norms. Each resolution was in line with its respective promoter’s stance on cyberspace. While some observers considered these resolutions potentially complementary, others saw in them competing campaigns to cement a preferred model as the global norm. Outside the UN, there have also been dozens of multilateral and bilateral accords with similar objectives, led by diverse stakeholders.[4]

The second case study concerns AI. Emerging policy in this sector suffers from an absence of global standards and a proliferation of proposed regulatory models. The potential ability of AI to deliver unprecedented capabilities in so many areas of human activity – from automation and language applications to warfare – means that it has become an area of intense rivalry between governments seeking technical and ideological leadership of this field.

China has by far the most ambitious programme. In 2017, its government released a three-step strategy for achieving global dominance in AI by 2030. Beijing aims to create an AI industry worth about RMB 1 trillion ($150 billion)[5] and is pushing for greater use of AI in areas ranging from military applications to the development of smart cities. Elsewhere, the US administration has issued an executive order on ‘maintaining American leadership on AI’.

On the other side of the Atlantic, at least 15 European countries (including France, Germany and the UK) have set up national AI plans. Although these strategies are essential for the development of policy infrastructure, they are country-specific and offer little in terms of global coordination. Ominously, greater inclusion and cooperation are scarcely mentioned, and remain the least prioritized policy areas.[6]

Competing multilateral frameworks on AI have also emerged. In April 2019, the European Commission published its ethics guidelines for trustworthy AI. Ministers from Nordic countries[7] recently issued their own declaration on collaboration in ‘AI in the Nordic-Baltic region’. And leaders of the G7 have committed to the ‘Charlevoix Common Vision for the Future of Artificial Intelligence’, which includes 12 guiding principles to ensure ‘human-centric AI’.

More recently, OECD member countries adopted a set of joint recommendations on AI. While nations outside the OECD were welcomed into the coalition – with Argentina, Brazil and Colombia adhering to the OECD’s newly established principles – China, India and Russia have yet to join the discussion. Despite their global aspirations, these emerging groups remain largely G7-led or EU-centric, and again highlight the divide between parallel models. 

The importance of ‘swing states’

No clear winner has emerged from among the competing visions for cyberspace and AI governance, nor indeed from the similar contests for doctrinal control in other digital domains. Concerns are rising that a so-called ‘splinternet’ may be inevitable – in which the internet fragments into separate open and closed spheres and cyber governance is similarly divided.

Each ideological camp is trying to build a critical mass of support by recruiting undecided states to its cause. Often referred to as ‘swing states’, the targets of these overtures are still in the process of developing their digital infrastructure and determining which regulatory and ethical frameworks they will apply. Yet the policy choices made by these countries could have a major influence on the direction of international digital governance in the future.

India offers a case in point. For now, the country seems to have chosen a versatile approach, engaging with actors on various sides of the policy debate, depending on the technology governance domain. On the one hand, its draft Personal Data Protection Bill mirrors principles in the EU’s General Data Protection Regulation (GDPR), suggesting a potential preference for the Western approach to data security.

However, in 2018, India was the leading country in terms of internet shutdowns, with over 100 reported incidents.[8] India has also chosen to collaborate outside the principal ideological blocs, as evidenced by an AI partnership it has entered into with the UAE. At the UN level, India has taken positions that support both blocs, although more often favouring the sovereignty-and-control approach.

Principles for rule-making

Sovereign nations have asserted aspirations for technological dominance with little heed to the cross-border implications of their policies. This drift towards a digital infrastructure fragmented by national regulation has potentially far-reaching societal and political consequences – and implies an urgent need for coordinated rule-making at the international level.

The lack of standards and enforcement mechanisms has created instability and increased vulnerabilities in democratic systems. In recent years, liberal democracies have been targeted by malevolent intrusions in their election systems and media sectors, and their critical infrastructure has come under increased threat. If Western nations cannot align around, and enforce, a normative framework that seeks to preserve individual privacy, openness and accountability through regulation, a growing number of governments may be drawn towards repressive forms of governance.

To mitigate those risks, efforts to negotiate a rules-based international order for the digital space should keep several guiding principles in mind. One is the importance of developing joint standards, as well as the need for consistent messaging towards the emerging cohort of engaged ‘swing states’. Another is the need for persistence in ensuring that the political, civic and economic benefits associated with a more open and well-regulated digital sphere are made clear to governments and citizens everywhere.

Countries advocating an open, free and secure model should take the lead in embracing and promoting a common affirmative model – one that draws on human rights principles (such as the rights to freedom of opinion, freedom of expression and privacy) and expands their applications to the digital space.  

Specific rules on cyberspace and technology use need to include pragmatic policy ideas and models of implementation. As this regulatory corpus develops, rules should be adapted to reflect informed consideration of economic and social priorities and attitudes, and to keep pace with what is possible technologically.[9]

What needs to happen

  • Demystifying the salient issues, maintaining consistent messaging and creating a common discourse are key to advancing a well-informed debate on global digital governance.
  • The benefits associated with open and well-regulated digital governance should be clearly presented to all stakeholders. For example, the link between sustainable development, respect for human rights and a secure, free and open internet should take priority in the debate with developing countries.
  • International norms need to be updated and reinterpreted to assert the primacy of non-harmful applications of technologies and digital interactions.
  • This process should follow a multi-stakeholder approach to include under-represented actors, such as developing countries and civil society, and should adopt a gender-balanced approach.
  • The design of rules, standards and norms needs to take into account the essentially transnational nature of digital technologies. Rules, standards and norms need to be applicable consistently across jurisdictions.
  • Developing countries should be supported in building their digital infrastructure, and in increasing the capacity of governments and citizens to make informed policy decisions on technology.

Notes

[1] Including but not limited to AI and an associated group of digital technologies, such as the Internet of Things, big data, blockchain, quantum computing, advanced robotics, self-driving cars and other autonomous systems, additive manufacturing (i.e. 3D printing), social networks, the new generation of biotechnology, and genetic engineering.

[2] O’Hara, K. and Hall, W. (2018), Four Internets: The Geopolitics of Digital Governance, Centre for International Governance Innovation, CIGI Paper No. 206, https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance.

[3] GAFAM = Google, Amazon, Facebook, Apple and Microsoft; BATX = Baidu, Alibaba, Tencent and Xiaomi.

[4] Carnegie Endowment for International Peace (undated), ‘Cyber Norms Index’, https://carnegieendowment.org/publications/interactive/cybernorms (accessed 30 May 2019).

[5] Future of Life Institute (undated), ‘AI Policy – China’, https://futureoflife.org/ai-policy-china?cn-reloaded=1.

[6] Dutton, T. (2018), ‘Building an AI World: Report on National and Regional AI Strategies’, 6 December 2018, CIFAR, https://www.cifar.ca/cifarnews/2018/12/06/building-an-ai-world-report-on-national-and-regional-ai-strategies.

[7] Including Denmark, Estonia, Finland, the Faroe Islands, Iceland, Latvia, Lithuania, Norway, Sweden and the Åland Islands.

[8] Shahbaz, A. (2018), Freedom on the Net 2018: The Rise of Digital Authoritarianism, Freedom House, October 2018, https://freedomhouse.org/report/freedom-net/freedom-net-2018/rise-digital-authoritarianism.

[9] Google White Paper (2018), Perspectives on Issues in AI Governance, https://www.blog.google/outreach-initiatives/public-policy/engaging-policy-stakeholders-issues-ai-governance/.

This essay was produced for the 2019 edition of Chatham House Expert Perspectives – our annual survey of risks and opportunities in global affairs – in which our researchers identify areas where the current sets of rules, institutions and mechanisms for peaceful international cooperation are falling short, and present ideas for reform and modernization.





Peter Watkins

Associate Fellow, International Security Programme

Biography

Peter Watkins became an associate fellow of Chatham House in June 2019. Before that, from 2014 to 2018, he was a Director General (DG) in the UK Ministry of Defence (MoD), responsible for strategic defence policy, including key multilateral and bilateral relationships (such as NATO), nuclear, cyber, space and prosperity (latterly this post was known as DG Strategy and International).

Previously he served as DG of the Defence Academy, Director of Operational Policy, Director responsible for the UK share of the multinational Typhoon combat aircraft programme and as Defence Counsellor in the UK Embassy in Berlin.

He is a frequent participant in conferences on defence and security in the UK and overseas.

He was awarded the CB (2019) and CBE (2004) for services to defence. He has an MA from Cambridge University.

Areas of expertise

  • European security
  • Deterrence policy
  • Nuclear policy
  • Stabilisation and peacekeeping
  • International armaments cooperation

Past experience

2006-07: Fellow, Weatherhead Center for International Affairs, Harvard University
1993-94: Senior course member, NATO Defense College

 





Yasmin Afina

Research Assistant, International Security Programme

Biography

Yasmin Afina joined Chatham House as research assistant for the International Security programme in April 2019. She formerly worked for the United Nations Institute for Disarmament Research (UNIDIR)’s Security and Technology Programme, and the United Nations Office for Disarmament Affairs (UNODA).

Yasmin’s research at Chatham House covers projects related to nuclear weapons systems, strategic weapons systems, emerging technologies including cyber and artificial intelligence, and international law.

In her previous capacities, Yasmin’s research included international, regional and national cybersecurity policies, the international security implications of quantum computing, and algorithmic bias in autonomous technologies and law enforcement operations.

Yasmin holds an LL.M. from the Geneva Academy of International Humanitarian Law and Human Rights, an LL.B. from the University of Essex, and a French Bachelor of Laws and Postgraduate degree (Maîtrise) in International Law from the Université Toulouse I Capitole.

Areas of expertise

  • Cybersecurity of weapons systems, command control and communication systems
  • Cybersecurity policies and governance
  • Autonomous technologies (incl. artificial intelligence, machine learning)
  • International law (incl. international humanitarian law, international human rights law, jus ad bellum)
  • Nuclear weapons policy

Past experience

2018-19: Programme assistant, security and technology, United Nations Institute for Disarmament Research (UNIDIR)
2017-18: Project assistant, emerging security issues, United Nations Institute for Disarmament Research (UNIDIR)
2017: Weapons of Mass Destruction Programme, United Nations Institute for Disarmament Research (UNIDIR)
2017-18: LL.M., Geneva Academy of International Humanitarian Law and Human Rights (CH)
2016-17: Maîtrise, Université Toulouse I Capitole (FR)
2016: Convention on Certain Conventional Weapons Implementation Support Unit, United Nations Office for Disarmament Affairs (UNODA) Geneva Branch
2013-17: LL.B., University of Essex (UK)
2013-16: Licence (Bachelor of Laws), Université Toulouse I Capitole (FR)
2014: Volunteer, World YWCA





The Destabilizing Danger of Cyberattacks on Missile Systems

2 July 2019

Dr Patricia Lewis

Research Director, Conflict, Science & Transformation; Director, International Security Programme

Dr Beyza Unal

Senior Research Fellow, International Security Programme
‘Left-of-launch’ attacks that aim to disable enemy missile systems may increase the chance of them being used, not least because the systems are so vulnerable.


This undated photo released by North Korea's news agency in March 2017 shows the launch of four ballistic missiles during a military drill at an undisclosed location in North Korea. Photo: STR/AFP/Getty Images.

After President Trump decided to halt a missile attack on Iran in response to the downing of a US drone, it was revealed that the US had conducted cyberattacks on Iranian weapons systems to prevent Iran launching missiles against US assets in the region.

This ‘left-of-launch’ strategy – pre-emptive action to prevent an adversary from launching missiles – has been part of the US missile defence strategy for some time now. President George W Bush asked the US military and intelligence community to infiltrate the supply chain of North Korean missiles, and it was claimed that the US hacked the North Korean ballistic missile programme, causing a failed ballistic missile test in 2012.

It was not clear then – or now – whether these ‘left-of-launch’ cyberattacks aimed at North Korea were successful as described or whether they were primarily a bluff. But that is somewhat irrelevant; the belief in the possibility and the understanding of the potential impact of such cyber capabilities undermines North Korean or Iranian confidence in their abilities to launch their missiles. In times of conflict, loss of confidence in weapons systems may lead to escalation.

In other words, the adversary may be left with no option but to take the chance to use these missiles or to lose them in a conflict setting. ‘Left of launch’ is a dangerous game. If it is based on a bluff, it could be called upon and lead to deterrence failure. If it is based on real action, then it could create an asymmetrical power struggle. If the attacker establishes false confidence in the power of a cyber weapon, then it might lead to false signalling and messaging.

This is the new normal. The cat-and-mouse game has to be taken seriously, not least because missile systems are so vulnerable.

There are several ways an offensive cyber operation against missile systems might work. These include exploiting missile designs, altering software or hardware, or creating clandestine pathways to the missile command and control systems.

Missile systems can also be attacked in space, by targeting space assets and their links to strategic systems.

Most missile systems rely, at least in part, on digital information that comes from or via space-based or space-dependent assets such as: communication satellites; satellites that provide position, navigation and timing (PNT) information (for example GPS or Galileo); weather satellites to help predict flight paths, accurate targeting and launch conditions; and remote imagery satellites to assist with information and intelligence for the planning and targeting.

Missile launches themselves depend on 1) the command and control systems of the missiles, 2) the way in which information is transmitted to the missile launch facilities and 3) the way in which information is transmitted to the missiles themselves in flight. All these aspects rely on space technology.

In addition, the ground stations that transmit and receive data to and from satellites are also vulnerable to cyberattack – either through their known and unknown internet connectivity or through malicious use of flash drives that contain a deliberate cyber infection.

Non-space-based communications systems that use cable and ground-to-air-to-ground masts are likewise under threat from cyberattacks that find their way in via internet connectivity, proximity interference or memory sticks. Human error in introducing connectivity via phones, laptops and external drives, and in clicking on malicious links in sophisticated phishing lures, is common in facilitating inadvertent connectivity and malware infection.

All of these can create a military capacity able to interfere with missile launches. Malware might have been sitting on the missile command and control system for months or even years, remaining inactivated until a chosen time or by a trigger that sets in motion a disruption either to the launch or to the flight path of the missile. The country that launches the missile that either fails to launch or fails to reach the target may never know if this was the result of a design flaw, a common malfunction or a deliberate cyberattack.

States with these capabilities must exercise caution: cyber offence manoeuvres may prevent the launch of missile attacks against US assets in the Middle East or the Pacific, but they may also interfere with US missile launches in the future. As has recently been revealed, even US cyber weapons targeting an adversary may blow back and inadvertently infect US systems. Nobody is invulnerable.





Cybersecurity by Design in Civil Nuclear Power Plants

24 July 2019

Cyberattacks are increasingly challenging critical national infrastructure. This paper considers the security by design approach for civil nuclear power plants and analyses areas of risk and opportunities for the nuclear industry.

Dr Beyza Unal

Senior Research Fellow, International Security Programme

Roger Brunt

Managing Director, Grosmont Howe Ltd


An employee climbs into the cooling tower of the third and fourth unit at Mochovce nuclear power plant in Slovakia on 2 July 2019. Photo: Getty Images

Summary

  • The application of ‘security by design’ in nuclear new builds could provide operators with the opportunity to establish a robust and resilient security architecture at the beginning of a nuclear power plant’s life cycle. This will enhance the protection of the plant and reduce the need for costly security improvements during its operating life.
  • Security by design cannot fully protect a nuclear power plant from rapidly evolving cyberattacks, which expose previously unsuspected or unknown vulnerabilities.
  • Careful design of security systems and architecture can – and should – achieve levels of protection that exceed current norms and expectations. However, the sourcing of components from a global supply chain means that the integrity of even the most skilfully designed security regime cannot be guaranteed without exhaustive checks of its components.
  • Security by design may well include a requirement for a technical support organization to conduct quality assurance of cyber defences and practices, and this regime should be endorsed by a facility’s executive board and continued at regular intervals after the new build facility has been commissioned.
  • Given the years it takes to design, plan and build a new nuclear power plant, it is important to recognize that from the point of ‘design freeze’ onwards, the operator will be building in vulnerabilities, as technology continues to evolve rapidly while construction fails to keep pace with it. Security by design cannot be a panacea, but it is an important factor in the establishment of a robust nuclear security – and cybersecurity – culture.





Cyber Governance in the Commonwealth: Towards Stability and Responsible State Behaviour in Cyberspace

Invitation Only Research Event

7 October 2019 - 10:30am to 5:30pm

Addis Ababa, Ethiopia

This roundtable is part of a series under the project, 'Implementing the Commonwealth Cybersecurity Agenda', funded by the UK Foreign and Commonwealth Office (FCO). The roundtable aims to provide a multi-stakeholder, pan-Commonwealth platform to discuss how to implement the Commonwealth Cyber Declaration with a focus on its third pillar 'To promote stability in cyberspace through international cooperation'.

In particular, the roundtable focuses on points 3 and 4 of the third pillar which revolve around the commitment to promote frameworks for stability in cyberspace including the applicability of international law, agreed voluntary norms of responsible state behaviour and the development and implementation of confidence-building measures consistent with the 2015 report of the UNGGE. 

The workshop also focuses on the commitment to advance discussions on how existing international law, including the Charter of the United Nations and applicable international humanitarian law, applies in cyberspace.

The roundtable addresses the issue of global cyber governance from a Commonwealth perspective and will also include a discussion around the way forward, the needed capacity of the different Commonwealth countries and the cooperation between its members for better cyber governance.

Participants include UNGGE members from Commonwealth countries in addition to representatives to the UN Open-Ended Working Group from African countries as well as members from academia, civil society and industry.

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





Artificial Intelligence Prediction and Counterterrorism

9 August 2019

The use of AI in counterterrorism is not inherently wrong, and this paper suggests some necessary conditions for legitimate use of AI as part of a predictive approach to counterterrorism on the part of liberal democratic states.

Kathleen McKendrick

British Army Officer, Former Visiting Research Fellow at Chatham House


Surveillance cameras manufactured by Hangzhou Hikvision Digital Technology Co. at a testing station near the company’s headquarters in Hangzhou, China. Photo: Getty Images

Summary

  • The use of predictive artificial intelligence (AI) in countering terrorism is often assumed to have a deleterious effect on human rights, generating spectres of ‘pre-crime’ punishment and surveillance states. However, the well-regulated use of new capabilities may enhance states’ abilities to protect citizens’ right to life, while at the same time improving adherence to principles intended to protect other human rights, such as transparency, proportionality and freedom from unfair discrimination. The same regulatory framework could also contribute to safeguarding against broader misuse of related technologies.
  • Most states focus on preventing terrorist attacks, rather than reacting to them. As such, prediction is already central to effective counterterrorism. AI allows higher volumes of data to be analysed, and may perceive patterns in those data that would, for reasons of both volume and dimensionality, otherwise be beyond the capacity of human interpretation. The impact of this is that traditional methods of investigation that work outwards from known suspects may be supplemented by methods that analyse the activity of a broad section of an entire population to identify previously unknown threats. (A generic sketch of such large-scale pattern detection follows this list.)
  • Developments in AI have amplified the ability to conduct surveillance without being constrained by resources. Facial recognition technology, for instance, may enable the complete automation of surveillance using CCTV in public places in the near future.
  • The current way predictive AI capabilities are used presents a number of interrelated problems from both a human rights and a practical perspective. Where limitations and regulations do exist, they may have the effect of curtailing the utility of approaches that apply AI, while not necessarily safeguarding human rights to an adequate extent.
  • The infringement of privacy associated with the automated analysis of certain types of public data is not wrong in principle, but the analysis must be conducted within a robust legal and policy framework that places sensible limitations on interventions based on its results.
  • In future, broader access to less intrusive aspects of public data, direct regulation of how those data are used – including oversight of activities by private-sector actors – and the imposition of technical as well as regulatory safeguards may improve both operational performance and compliance with human rights legislation. It is important that any such measures proceed in a manner that is sensitive to the impact on other rights such as freedom of expression, and freedom of association and assembly.
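
As a purely generic illustration of the large-scale pattern detection described above – a hedged sketch on synthetic data, not drawn from the paper, and using scikit-learn’s IsolationForest as an assumed library choice – the snippet below flags a handful of unusual records in a dataset far too large and high-dimensional for human review.

    # Generic anomaly-detection sketch on synthetic data; illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, size=(100_000, 50))   # population-scale records
    unusual = rng.normal(4, 1, size=(10, 50))       # a few shifted profiles
    data = np.vstack([normal, unusual])

    # Fit an isolation forest and rank records by anomaly score.
    model = IsolationForest(contamination=1e-4, random_state=0).fit(data)
    scores = model.decision_function(data)          # lower = more anomalous
    flagged = np.argsort(scores)[:10]               # ten most anomalous rows
    print(flagged)                                  # indices for human review

The point of the sketch is scale: no analyst could inspect five million feature values by eye, which is exactly the volume-and-dimensionality argument the summary makes.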





The Future of EU–US Cooperation in Space Traffic Management and Space Situational Awareness

29 August 2019

As more space activities develop, there is an increasing requirement for comprehensive space situational awareness (SSA). This paper provides an overview of the current landscape in SSA and space traffic management as well as possible scenarios for EU–US cooperation in this area. 

Alexandra Stickings

Research Fellow for Space Policy and Security, RUSI


Deployment of the NanoRacks-Remove Debris Satellite from the International Space Station. Photo: Getty Images.

Summary

  • Space situational awareness (SSA) and space traffic management (STM) are essential for sustainable near-Earth orbit. International cooperation in SSA and STM is vital with the growing number of satellite operators and the increasingly complex space environment.
  • The various definitions of SSA and STM are ambiguous. Understanding the activities that fall under each term can better assist in finding areas for cooperation and collaboration.
  • SSA has historically been a military activity, leading to an incomplete public catalogue of its use and barriers to sharing information with other states and the commercial sector. The rise in private space actors has increased the number of commercial STM providers and, with plans in the US to move responsibility for STM to civilian control, there will likely be more opportunities for international collaboration, particularly through the EU Space Surveillance and Tracking (SST) programme.
  • Individual EU member states possess developed STM capabilities, but overall these are still some way behind those of allies such as the US. Further investment in STM infrastructure and programmes is required if the EU and individual European states are to be essential partners to the US and add value to the global effort.
  • There are worldwide challenges, both political and technical, to providing STM coverage, which may lead to a lack of collaboration and gaps in understanding of activities in orbit. Existing sensors have limitations in terms of the size of objects that can be detected and the precision with which their movements can be predicted. These capability gaps represent opportunities for the EU to contribute.
  • The EU can build on its tradition of support for openness and civil society by creating a system that fosters an environment of cooperation and collaboration involving industry, commercial STM providers and the wider international community.
  • Although collaboration in STM is vital, the EU should also aim to tackle issues within the wider definition of SSA including space weather, intelligence and the security of ground stations.
  • The EU is well placed to become a global leader in SSA and STM. However, it needs to take into consideration the current political and technical landscape when making decisions regarding investment in capabilities and the pursuit of international partnerships.





Examining Measures to Mitigate Cyber Vulnerabilities of Space-based Strategic Assets

Invitation Only Research Event

30 October 2019 - 9:30am to 4:00pm

Chatham House | 10 St James's Square | London | SW1Y 4LE

Event participants

Beyza Unal, Senior Research Fellow, International Security Department, Chatham House
Patricia Lewis, Research Director, International Security Department, Chatham House

Strategic systems that depend on space-based assets, such as command, control and communication, early warning systems, weapons systems and weapons platforms, are essential for conducting successful NATO operations and missions. Given the increasing dependency on such systems, the alliance and key member states would therefore benefit from an in-depth analysis of possible mitigation and resilience measures.

This workshop is part of the International Security Department’s (ISD) project on space security and the vulnerability of strategic assets to cyberattacks, which includes a recently published report. This project aims to create resilience in NATO and key NATO member states, building the capacity of key policymakers and stakeholders to respond with effective policies and procedures. This workshop will focus on measures to mitigate the cyber vulnerabilities of NATO’s space-dependent strategic assets. Moreover, participants will discuss the type of resilience measures and mechanisms required.

Attendance at this event is by invitation only. 

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





Who’s Afraid of Huawei? Understanding the 5G Security Concerns

9 September 2019

Emily Taylor

Associate Fellow, International Security Programme
Emily Taylor examines the controversy around the Chinese tech giant’s mobile broadband equipment and the different approaches taken by Western countries.


Huawei's Ox Horn campus in Dongguan, China. Photo: Getty Images.

As countries move towards the fifth generation of mobile broadband, 5G, the United States has been loudly calling out Huawei as a security threat. It has employed alarmist rhetoric and threatened to limit trade and intelligence sharing with close allies that use Huawei in their 5G infrastructure.

While some countries such as Australia have adopted a hard line against Huawei, others like the UK have been more circumspect, arguing that the risks of using the firm’s technology can be mitigated without forgoing the benefits.

So, who is right, and why have these close allies taken such different approaches?

The risks

Long-standing concerns relating to Huawei are plausible. There are credible allegations that it has benefitted from stolen intellectual property, and that it could not thrive without a close relationship with the Chinese state.

Huawei hotly denies allegations that users are at risk of its technology being used for state espionage, and says it would resist any order to share information with the Chinese government. But there are questions over whether it could really resist China’s stringent domestic legislation, which compels companies to share data with the government. And given China’s track record of using cyberattacks to conduct intellectual property theft, there may be added risks of embedding a Chinese provider into critical communications infrastructure.

In addition, China’s rise as a global technological superpower has been boosted by the flow of financial capital through government subsidies, venture and private equity, which reveal murky boundaries between the state and private sector for domestic darlings. Meanwhile, the Belt and Road initiative has seen generous investment by China in technology infrastructure across Africa, South America and Asia.

There’s no such thing as a free lunch or a free network – as Sri Lanka discovered when China assumed shares in a strategic port in return for debt forgiveness; or Mexico when a 1% interest loan for its 4G network came on the condition that 80% of the funding was spent with Huawei.

Aside from intelligence and geopolitical concerns, the quality of Huawei’s products represents a significant cyber risk, one that has received less attention than it deserves.

On top of that, 5G by itself will significantly increase the threat landscape from a cybersecurity perspective. The network layer will be more intelligent and adaptable through the use of software and cloud services. The number of network antennae will increase by a factor of 20, and many will be poorly secured ‘things’; there is no need for a backdoor if you have any number of ‘bug doors’.

Finally, the US is threatening to limit intelligence sharing with its closest allies if they adopt Huawei. So why would any country even consider using Huawei in their 5G infrastructure?

Different situations

The truth is that not every country is free to manoeuvre; 5G technology will sit on top of existing mobile infrastructure.

Australia and the US can afford to take a hard line: their national infrastructure has been largely Huawei-free since 2012. However, the Chinese firm is deeply embedded in other countries’ existing structures – for example, in the UK, Huawei has provided telecommunications infrastructure since 2005. Even if the UK decided tomorrow to ditch Huawei, it cannot just rip up existing 4G infrastructure. To do so would cost a fortune, risk years of delay in the adoption of 5G and limit competition in 5G provisioning.

As a result, the UK has adopted a pragmatic approach, grounded in years of oversight and analysis of Huawei equipment, during which it has never found evidence of malicious Chinese state cyber activity conducted through Huawei.

At the heart of this process is the Huawei Cyber Security Evaluation Centre, which was founded in 2010 as a confidence-building measure. Originally criticized for 'effectively policing itself', as it was run and staffed entirely by Huawei, the centre has since had its governance strengthened, with the National Cyber Security Centre chairing its oversight board.

The board's 2019 report makes grim reading, highlighting 'serious and systematic defects in Huawei's software engineering and cyber security competence'. But it does not accuse the company of serving as a platform for state-sponsored surveillance.

Similar evidence-based policy approaches are emerging in other countries such as Norway and Italy. They offer flexibility for governments, for example by limiting access to some contract competitions through legitimate and transparent means, such as security reviews during procurement. These approaches also elevate security concerns (both national and cyber) to a primary consideration when awarding contracts – something that was not always the case in the past, when price was the key driver.

The UK is also stressing the need to manage risk and increase vendor diversity in the ecosystem to avoid single points of failure. A further approach that is beginning to emerge is to draw a line between network ‘core’ and ‘periphery’ components, excluding some providers from the more sensitive ‘core’. The limited rollouts of 5G in the UK so far have adopted multi-provider strategies, and only one has reportedly not included Huawei kit.

Managing the risks to cyber security and national security will become more complex in a 5G environment. In global supply chains, bans based on the nationality of the provider offer little assurance. For countries that have already committed to Huawei, and that may not wish to be drawn into an outright trade war with China, these moderate approaches offer a potential way forward.




in

How Is New Technology Driving Geopolitical Relations?

Research Event

22 October 2019 - 6:00pm to 7:00pm

Chatham House, London

Event participants

Rt Hon Baroness Neville-Jones DCMG, Minister of State for Security and Counter Terrorism (2010-11)
Jamie Condliffe, Editor, DealBook Newsletter and Writer, Bits Tech Newsletter, The New York Times
Jamie Saunders, Partner, Wychwood Partners LLP; Visiting Professor, University College London
Chair: Dr Patricia Lewis, Research Director, International Security Department, Chatham House

New technologies such as 5G, artificial intelligence, nanotechnology and robotics have become, now more than ever, intertwined with geopolitical, economic and trade interests. Leading powers are using new technology to exert power and influence and to shape geopolitics more generally.

The ongoing race between the US and China around 5G technology is a case in point. Amid these tensions, the impact on developing countries is not sufficiently addressed.

Arguably, the existing digital divide will widen, leading developing countries to the early, if not hasty, adoption of new technology for fear of lagging behind. This could create opportunities but will also pose risks.

This panel discusses how new technology is changing the geopolitical landscape. It also discusses the role that stakeholders, including governments, play in the creation of standards for new technologies and what that means, both technically and financially, for their deployment in key markets.

Finally, the panel looks at the issue from the perspective of developing countries, addressing the choices that have to be made in terms of affordability, development priorities and security concerns.

This event was organized with the kind support of DXC Technology.

Nicole Darabian

Research Assistant, Cyber Policy, International Security Department




in

EU–US Cooperation on Tackling Disinformation

3 October 2019

Disinformation, as the latest iteration of propaganda suitable for a digitally interconnected world, shows no signs of abating. This paper provides a holistic overview of the current state of play and outlines how EU and US cooperation can mitigate disinformation in the future.

Sophia Ignatidou

Academy Associate, International Security Programme

A congressional staff member displays printouts of social media posts during a hearing before the House Select Intelligence Committee, 1 November 2017, in Washington, DC. Photo: Getty Images.
  • EU and US cooperation on tackling disinformation needs to be grounded in an international human rights framework in order to bridge the differences between the two parties and include other countries facing this challenge.
  • The disinformation debate needs to be reformulated to cover systemic issues rather than merely technical or security concerns. A lag in regulatory development has led to systemic vulnerabilities. In this context, policymakers need to push for more evidence-based analysis, which is only attainable if technology companies engage in honest debate and allow meaningful access to data – as determined by government-appointed researchers rather than the companies themselves – while respecting users' privacy.
  • Data governance needs to be the focus of attempts to tackle disinformation. Data's implications for information, market and power asymmetries feed into and exacerbate the problem.
  • Policymakers should focus on regulating the distribution of online content rather than the subject matter itself, regulation of which may have implications for freedom of speech.
  • Disinformation is mainly the result of inefficient gatekeeping by highly extractive digital companies. The old gatekeepers – journalists and their respective regulators – need to be actively engaged in devising the new regulatory framework.
  • Legacy media need to urgently consider the issue of ‘strategic silence’ and avoid being co-opted by political actors aiming to manipulate the accelerated, reactive news cycle by engaging in divisive ‘clickbait’ rhetoric verging on disinformation and propaganda. When strategic silence is not an option, contextual analysis is fundamental.
  • The EU delegation should assist the coordination of EU–US efforts to tackle disinformation by drawing on the work and expertise of the G7 Rapid Response Mechanism (RRM), the Transatlantic Commission on Election Integrity (TCEI), the European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE) and the High-level Panel on Digital Cooperation, and by working with the International Telecommunication Union (ITU) to foster a long-term interdisciplinary forum that harnesses technological innovation to support democracy and protect it from threats such as disinformation.
  • The EU and US must avoid rushed regulation that may condone enhanced surveillance or vilify journalism that scrutinizes those in power in the name of security.




in

Who Runs the Internet: Internet Consolidation and Control

Research Event

10 December 2019 - 6:00pm to 7:15pm

Chatham House, London

Event participants

Andrew Sullivan, President and CEO, Internet Society
Jennifer Cobbe, Research Associate, Department of Computer Science and Technology, University of Cambridge
Jesse Sowell, Assistant Professor, Department of International Affairs, Bush School of Government and Public Service, Texas A&M University
Chair: Emily Taylor, Associate Fellow, International Security, Chatham House; Editor, Journal of Cyber Policy

In recent years, there has been a growing debate around the influence of a few large internet technology companies on the internet’s infrastructure and over the popular applications and social media platforms that we use every day.

The internet, which was once widely viewed as a collective platform for limitless, permissionless innovation, competition and growth, is now increasingly viewed as a consolidated environment dominated by a few. Such market dominance threatens to undermine the internet's fundamental benefits as a distributed network in which no single entity has control.

The panel examines the risks of consolidation throughout the internet's technology stack, such as the impact on the complex supply chains that support applications, including cloud and 'as a service' provision.

It also explores the potential benefits: for example, when building out essential infrastructure to support faster and cheaper internet services in developing economies, consolidation can create economies of scale that bring the resource-intensive building blocks of the internet economy within the reach of new start-ups and innovators.

The panel provides an interdisciplinary perspective, exploring the relationship between consolidation and the evolution of the internet's infrastructure, as well as unpacking the policy implications.

This event supports a special issue of the Journal of Cyber Policy, produced as part of a collaboration between Chatham House and the Internet Society, which explores the impact of consolidation on the internet's fundamental architecture.

Nilza Amaral

Project Manager, International Security Programme




in

EU–US Relations on Internet Governance

14 November 2019

As internet governance issues emerge in the wake of innovations such as the Internet of Things and advanced artificial intelligence, there is an urgent need for the EU and US to establish a common, positive multi-stakeholder vision for regulating and governing the internet.

Emily Taylor

Associate Fellow, International Security Programme

Stacie Hoffmann

Internet Policy and Cybersecurity Consultant, Oxford Information Labs

Server room network cables in New York City, November 2014. Photo: Michael Bocchieri/Staff/Getty.
  • Political, economic, sociological and technological factors are poised to challenge EU and US ideological positions on internet governance, which will make it difficult to find consensus and common ground in the years to come.
  • The EU and US share core values and perspectives relating to internet governance, such as openness, freedom and interoperability, as well as a human rights framework for cybersecurity. There have been many examples of successful multi-stakeholder cooperation between the EU and US, including the Internet Assigned Numbers Authority (IANA) transition and the European Dialogue on Internet Governance (EuroDIG).
  • There are also subtle differences between the EU and US, and each has different reasons to support multi-stakeholderism. Cases that highlight growing tensions in EU–US coordination on internet governance include the controversies surrounding the EU General Data Protection Regulation (GDPR), the WHOIS system that governs domain name registration data, and the board of the Internet Corporation for Assigned Names and Numbers (ICANN), which undermined an independent cybersecurity review.
  • Internet governance is becoming more complex, with a multiplicity of actors and no obvious authority for important emerging issues. Additionally, the rise of China and its authoritarian vision for the future of the internet is a threat to the current internet governance institutions that have been shaped by and reflect Western values.
  • To bridge ideological gaps, the EU and US should build capacity between like-minded stakeholders, create a taskforce on effective multi-stakeholder internet governance, and work through non-governmental stakeholders to improve participation.




in

AI-driven Personalization in Digital Media: Political and Societal Implications

2 December 2019

The fallout from disinformation and online manipulation strategies has alerted Western democracies to the novel, nuanced vulnerabilities of our information society. This paper outlines the implications of the adoption of AI by legacy media, as well as by new media, focusing on personalization.

Sophia Ignatidou

Academy Associate, International Security Programme

The Reuters and other news apps seen on an iPhone, 29 January 2019. Photo: Getty Images.

Summary

  • Machine learning (ML)-driven personalization is fast expanding from social media to the wider information space, encompassing legacy media, multinational conglomerates and digital-native publishers; however, this is happening within a regulatory and oversight vacuum that needs to be addressed as a matter of urgency.
  • Mass-scale adoption of personalization in communication has serious implications for human rights, societal resilience and political security. Data protection, privacy and wrongful discrimination, as well as freedom of opinion and of expression, are some of the areas impacted by this technological transformation.
  • Artificial intelligence (AI) and its ML subset are novel technologies that demand novel ways of approaching oversight, monitoring and analysis. Policymakers, regulators, media professionals and engineers need to be able to conceptualize issues in an interdisciplinary way that is appropriate for sociotechnical systems.
  • Funding needs to be allocated to research into human–computer interaction in information environments, data infrastructure, technology market trends, and the broader impact of ML systems within the communication sector.
  • Although global, high-level ethical frameworks for AI are welcome, they are no substitute for domain- and context-specific codes of ethics. Legacy media and digital-native publishers need to overhaul their editorial codes to make them fit for purpose in a digital ecosystem transformed by ML. Journalistic principles need to be reformulated and refined in the current informational context in order to efficiently inform the ML models built for personalized communication.
  • Codes of ethics will not by themselves be enough, so current regulatory and legislative frameworks as they relate to media need to be reassessed. Media regulators need to develop their in-house capacity for thorough research into, and monitoring of, ML systems, and – when appropriate – impose proportionate sanctions on actors found to be employing such systems towards malign ends. Collaboration with data protection authorities, competition authorities and national electoral commissions is paramount for preserving the integrity of elections and of a political discourse grounded in democratic principles.
  • Upskilling senior managers and editorial teams is fundamental if media professionals are to be able to engage meaningfully and effectively with data scientists and AI engineers.




in

Building LGBTIQ+ Inclusivity in the Armed Forces, 20 Years After the Ban Was Lifted

16 January 2020

Will Davies

Army Chief of General Staff Research Fellow, International Security Programme
Change was slow to come but progress has since been swift. Not only can a continuing focus on inclusivity benefit service people and the organization, it is also an essential element of a values-based foreign policy.

Crew members from HMS Westminster march through Admiralty Arch as they exercise their freedom of the city in August 2019 in London. Photo: Getty Images.

The new UK government will conduct a review of foreign, security and defence policy in 2020. If the UK decides to use values as a framework for foreign policy, this needs to be reflected in its armed forces. One area where this is essential is continuing to deepen inclusivity for LGBTIQ+ personnel, building on the progress made since the ban on their service was lifted in 2000.

I witnessed the ban first-hand as a young officer in the British Army in 1998. As the duty officer, I visited soldiers being held in the regimental detention cells to check that all was well. One day a corporal, whom I knew, was there awaiting discharge from the army, having been convicted of being gay. On the one hand, here was service law in action, officially protecting the army's operational effectiveness and an authority not to be questioned at my level. On the other, here was an excellent soldier in a state of turmoil and public humiliation. How extreme this seems now.

On 12 January 2000 Tony Blair’s Labour government announced an immediate lifting of the ban for lesbian, gay and bisexual personnel (LGB) and introduced a new code of conduct for personal relationships. (LGB is the term used by the armed forces to describe those personnel who had been banned prior to 2000.) This followed a landmark ruling in a case taken to the European Court of Human Rights in 1999 by four LGB ex-service personnel – supported by Stonewall – who had been dismissed from service for their sexuality.

Up to that point, the Ministry of Defence's long-held position had been that LGB personnel had a negative impact on the morale and cohesion of a unit and damaged operational effectiveness. Service personnel were automatically dismissed if it was discovered they were LGB, even though homosexuality had been decriminalized in England and Wales in 1967.

That the armed forces had been lagging behind the rest of society was confirmed by the positive response to the change among service personnel, despite a handful of vocal political and military leaders who had foreseen negative impacts. The noteworthy service of LGBTIQ+ people in Iraq and Afghanistan only served to debunk any residual myths.

Twenty years on, considerable progress has been made and my memories from 1998 now seem alien. This is a story to celebrate – however, in the quest for greater inclusivity there is always room for improvement.

Defence Minister Johnny Mercer last week apologized following recent calls from campaign group Liberty for a fuller apology. In December 2019, the Ministry of Defence announced it was putting in place a scheme to return medals stripped from veterans upon their discharge.

The armed forces today have a range of inclusivity measures to improve workplace culture, including assessments of workplace climate and diversity networks supported by champions drawn from senior leadership.

But assessing the actual lived experience of LGBTIQ+ people is challenging because of its subjectivity. This has not been helped by low participation in the 2015 initiative encouraging personnel to declare their sexual orientation confidentially, which was designed to facilitate more focused and relevant policies. As of 1 October 2019, only 20.3 per cent of regular service people had declared a sexual orientation.

A measure of positive progress is the annual Stonewall Workplace Equality Index, the definitive benchmarking tool for employers to measure their progress on LGBTIQ+ inclusion in the workplace; 2015 marked the first year in which all three services were placed in the top 100 employers in the UK, and in 2019 the Royal Navy, British Army and Royal Air Force were placed joint 15th, joint 51st and 68th respectively.

Nevertheless, LGBTIQ+ service people and those in other protected groups still face challenges. The 2019 Ministry of Defence review of inappropriate behaviour in the armed forces, the Wigston Report, concluded that there is an unacceptable level of sexual harassment, bullying and discrimination. It found that 26–36 per cent of LGBTIQ+ service people had experienced negative comments or conduct at work because of their sexual orientation.

The Secretary of State for Defence accepted the report’s 36 recommendations on culture, incident reporting, training and a more effective complaints system. Pivotal to successful implementation will be a coherent strategy driven by fully engaged leaders.

Society is also expecting ever higher standards, particularly in public bodies. The armed forces emphasize their values and standards, including 'respect for others', as defining organizational characteristics; individuals are expected to live by them. Only in a genuinely inclusive environment can an individual thrive and operate confidently within a team.

The armed forces also recognize as a priority the need to connect with and reflect society more closely in order to attract and retain talent from all parts of society. The armed forces' active participation in UK Pride is helping to break down barriers in this area.

In a post-Brexit world, the UK's values, support for human rights and reputation for fairness are distinctive strengths that can have an impact on the world stage and offer a framework for future policy. The armed forces must continue to push and promote greater inclusivity in support of such a policy. When operating overseas with less liberal regimes, this will be sensitive and require careful handling; however, it will be an overt manifestation of a broader policy and a way to communicate strong and consistent values over time.

The armed forces were damagingly behind the times 20 years ago, but good progress has been made since. Inclusion initiatives must continue to be pushed to bring benefits to the individual and the organization, as well as to demonstrate a values-based foreign policy.