References

R. Wallace (2020). How AI founders on adversarial landscapes of fog and friction. The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology, 19.
(2021). Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series.
Fernando Filgueiras (2022). The politics of AI: democracy and authoritarianism in developing countries. Journal of Information Technology & Politics, 19.
H. Landemore (2021). Open democracy and digital technologies. In: L. Bernholz, H. Landemore, R. Reich (eds.), Digital Technology and Democratic Theory.
J. Habermas (1992). The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Studies in Contemporary German Social Thought.
S. Mathur, Christian Schmidt (2007). An open democracy. Molecular Cancer, 6.
Johan Galtung, D. Fischer (2013). Johan Galtung: Pioneer of Peace Research.
J. Leibold (2020). Surveillance in China’s Xinjiang Region: Ethnic Sorting, Coercion, and Inducement. Journal of Contemporary China, 29.
Catharina Rudschies, Ingrid Schneider, Judith Simon (2021). Value Pluralism in the AI Ethics Debate – Different Actors, Different Priorities. The International Review of Information Ethics.
Anna Jobin, M. Ienca, E. Vayena (2019). Artificial Intelligence: the global landscape of ethics guidelines. arXiv, abs/1906.11668.
L. Slapakova (2022). Leveraging Diversity for Military Effectiveness: Diversity, Inclusion and Belonging in the UK and US Armed Forces. doi:10.7249/RRA1026-1.
James Johnson (2019). Artificial intelligence & future warfare: implications for international security. Defense & Security Analysis, 35.
L. Floridi, Josh Cowls, Monica Beltrametti, R. Chatila, Patrice Chazerand, Virginia Dignum, C. Luetge, Robert Madelin, U. Pagallo, F. Rossi, Burkhard Schafer, P. Valcke, E. Vayena (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28.
Guang-Zhong Yang, James Bellingham, P. Dupont, Peer Fischer, L. Floridi, R. Full, N. Jacobstein, Vijay Kumar, Marcia McNutt, R. Merrifield, Bradley Nelson, B. Scassellati, M. Taddeo, R. Taylor, M. Veloso, Zhong Wang, R. Wood (2018). The grand challenges of Science Robotics. Science Robotics, 3.
M. Kovalsky, R.J. Ross, G. Lindsay (2020). Contesting key terrain: urban conflict in smart cities of the future. Cyber Def. Rev., 5.
Desirée Enlund, Katherine Harrison, Rasmus Ringdahl, Ahmet Börütecene, J. Löwgren, Vangelis Angelakis (2022). The role of sensors in the production of smart city spaces. Big Data & Society, 9.
Thilo Hagendorff (2019). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30.
Els Leclercq, Emiel Rijshouwer (2022). Enabling citizens’ Right to the Smart City through the co-creation of digital platforms. Urban Transformations, 4.
Lu Hong, S. Page (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences of the United States of America, 101(46).
J. Rawls (1999). A Theory of Justice. doi:10.4159/9780674042582.
S. Barocas, A.D. Selbst (2016). Big data’s disparate impact. Calif. Law Rev., 104.
A. Bradford (2012). The Brussels Effect. Columbia Law School.
C. Bartneck, C. Lütge, A. Wagner, S. Welsh (2021). An Introduction to Ethics in Robotics and AI. SpringerBriefs in Ethics.
R. Nowak (2022). Foundations of strategic flexibility: focus on cognitive diversity and structural empowerment. MRR, 45.
W. Hackmann (1993). Engineering revolution. Nature, 361.
L. Floridi (2021). Introduction – the importance of an ethics-first approach to the development of AI. In: L. Floridi (ed.), Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series.
Dhaval Vyas, C. Chisalita, A. Dix (2016). Organizational Affordances: A Structuration Theory Approach to Affordances. Interact. Comput., 29.
E. Hine, L. Floridi (2022). Artificial intelligence with American values and Chinese characteristics: a comparative analysis of American and Chinese governmental AI policies. AI Soc.
Martin Beraja, David Yang, Noam Yuchtman, Hao Gao, Andrew Kao, Shuhao Lu, Shiyun Hu, Junxing Liu, Shengqi Ni, Yucheng Quan, Linchuan Xu, Peilin Yang, Daron Acemoglu, Ernesto Bó, R. Enikolopov, R. Freeman, Andy Neumeyer, J. Nicolini, M. Petrova, T. Persson, Nancy Qian (2020). Data-Intensive Innovation and the State: Evidence from AI Firms in China. NBER Working Paper Series.
A. Kokas (2022). Trafficking Data: How China Is Winning the Battle for Digital Sovereignty. doi:10.1093/oso/9780197620502.001.0001.
Jessica Fjeld, Nele Achten, Hannah Hilligoss, Ádám Nagy, Madhulika Srikumar (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. SSRN Electronic Journal.
Johannes Thumfart (2020). Public and private just wars: Distributed cyber deterrence based on Vitoria and Grotius. Internet Policy Rev., 9.
Dimitar Lilkov (2020). Made in China: Tackling Digital Authoritarianism. European View, 19.
C. Heyns (2016). Autonomous weapons systems: living a dignified life and dying a dignified death. In: N. Bhuta, S. Beck, R. Geiß, H.-Y. Liu, C. Kreß (eds.), Autonomous Weapons Systems.
C. Borch (2016). High-frequency trading, algorithmic finance and the Flash Crash: reflections on eventalization. Economy and Society, 45.
C. Mills (2009). Rawls on Race/Race in Rawls. Southern Journal of Philosophy, 47.
Emile Bruneau, Nour Kteily (2017). The enemy as animal: Symmetric dehumanization during asymmetric warfare. PLoS ONE, 12.
J. Burgess (2011). The Ethical Subject of Security: Geopolitical Reason and the Threat Against Europe.
H. Asan (2022). Data security. In: Artificial Intelligence Perspective for Smart Cities.
J. Habermas (2022). Reflections and Hypotheses on a Further Structural Transformation of the Political Public Sphere. Theory, Culture & Society, 39.
P. Everts (2002). Democracy and Military Force.
B. de Vries (2023). Individual criminal responsibility for autonomous weapons systems in international criminal law. International Humanitarian Law Series.
Costas Douzinas (2000). The End of Human Rights.
Laurence Diver (2018). Law as a User: Design, Affordance, and the Technological Mediation of Norms. SCRIPT-ed.
L. Hansen, H. Nissenbaum (2009). Digital Disaster, Cyber Security, and the Copenhagen School. International Studies Quarterly, 53.
John Rawls (1971). A Theory of Justice. Princeton Readings in Political Thought.
(2022). A Chinese Precursor to the Digital Sovereignty Debate: Digital Anti-Colonialism and Authoritarianism from the Post–Cold War Era to the Tunis Agenda.
Guilong Yan (2020). The impact of Artificial Intelligence on hybrid warfare. Small Wars & Insurgencies, 31.
R.E. Murdough (2010). I won’t participate in an illegal war: military objectors, the Nuremberg defense, and the obligation to refuse illegal orders. Army Law., 4.
Gijs van Maanen (2022). AI Ethics, Ethics Washing, and the Need to Politicize Data Ethics. Digital Society, 1.
Gregory Asmolov (2022). The transformation of participatory warfare: The role of narratives in connective mobilization in the Russia–Ukraine war. Digital War, 3.
K. Yeung, A. Howes, G. Pogrebna (2020). AI governance by human rights-centred design, deliberation and oversight: an end to ethics washing. In: M.D. Dubber, F. Pasquale, S. Das (eds.), The Oxford Handbook of Ethics of AI. Oxford Handbooks Series.
A. Bradford (2020). The Brussels Effect: How the European Union Rules the World. doi:10.1093/oso/9780190088583.001.0001.
J. Allan Williamson (2008). Some considerations on command responsibility and criminal liability. Int. Rev. Red Cross, 90.
I. Kant (2006). Toward perpetual peace: a philosophical sketch. In: P. Kleingeld (ed.), Toward Perpetual Peace and Other Writings on Politics, Peace, and History.
Dan Reiter, A. Stam (1998). Democracy and Battlefield Military Effectiveness. Journal of Conflict Resolution, 42.
M. Hildebrandt (2019). Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning. Theoretical Inquiries in Law, 20.
J. Derrida (1992). Force of law: the mystical foundation of authority. In: D. Cornell, M. Rosenfeld, D. Carlson, N. Benjamin (eds.), Deconstruction and the Possibility of Justice.
P. Baran (1977). Some perspectives on networks – past, present and future. Inf. Process., 77.
J. Scholz, J. Galliott (2020). The Case for Ethical AI in the Military. In: M.D. Dubber, F. Pasquale, S. Das (eds.), The Oxford Handbook of AI. Oxford Handbooks Series.
C. Mouffe (2008). Which world order: cosmopolitan or multipolar? Ethical Perspect., 4.
A. Cronin (2006). Cyber-Mobilization: The New “Levée en Masse”. The US Army War College Quarterly: Parameters.
David Rousseau, Christopher Gelpi, Dan Reiter, Paul Huth (1996). Assessing the Dyadic Nature of the Democratic Peace, 1918–88. American Political Science Review, 90.
J. Weymark (2015). Cognitive Diversity, Binary Decisions, and Epistemic Democracy. Episteme, 12.
M. Glasius (2023). Authoritarian Practices in a Global Age. doi:10.1093/oso/9780192862655.001.0001.
I. Bode, H. Huelss (2022). Autonomous Weapons Systems and International Norms. doi:10.1515/9780228009245.
B.A. Swett, E.N. Hahn, A.J. Llorens (2021). Designing robots for the battlefield: state of the art. In: J. von Braun, M.S. Archer, G.M. Reichberg, M. Sánchez Sorondo (eds.), Robotics, AI and Humanity.
J. Borenstein, F.S. Grodzinsky, A. Howard, K.W. Miller, M.J. Wolf (2021). AI ethics: a long history and a recent burst of attention. Computer, 54.
F. Grimal, M. Pollard (2021). The duty to take precautions in hostilities, and the disobeying of orders: should robots refuse? Fordham Int. Law J., 44.
K. Alder (2010). Engineering the Revolution: Arms and Enlightenment in France, 1763–1815. doi:10.7208/chicago/9780226012650.001.0001.
G.M. Reichberg, H. Syse (2021). Applying AI on the battlefield: the ethical debates. In: J. von Braun, M.S. Archer, G.M. Reichberg, M. Sánchez Sorondo (eds.), Robotics, AI, and Humanity.
Emmie Hine, Luciano Floridi (2022). Artificial Intelligence with American Values and Chinese Characteristics: A Comparative Analysis of American and Chinese Governmental AI Policies. SSRN Electronic Journal.
(2015). The Black Box Society.
Fan Liang, V. Das, N. Kostyuk, M. Hussain (2018). Constructing a Data-Driven Society: China’s Social Credit System as a State Surveillance Infrastructure. Policy & Internet.
H. Landemore (2020). Open Democracy: Reinventing Popular Rule for the Twenty-First Century. doi:10.1515/9780691208725.
Barbara Allen, Louise Tamindael, S. Bickerton, Wonhyuk Cho (2020). Does citizen coproduction lead to better urban services in smart cities projects? An empirical study on e-participation in a mobile big data platform. Gov. Inf. Q., 37.
Authoritarian regimes’ unrestricted collection of citizens’ data might constitute an advantage regarding the development of some types of AI, and AI might facilitate authoritarian practices. This feedback loop challenges democracies. In a critical continuation of the Pentagon’s Third Offset Strategy, I investigate a possible Democratic Offset regarding military applications of AI focussed on contestation, deliberation, and participation. I apply Landemore’s Open Democracy, Hildebrandt’s Agonistic Machine Learning, and Sharp’s Civilian-Based Defence. Discussing value pluralism in AI ethics, I criticise parts of the literature for leaving the fundamental ethical incompatibility of democracies and authoritarian regimes unaddressed. I focus on the duty to disobey illegal orders derived from customary international humanitarian law (IHL) and the standard of ‘meaningful human control’, which is central to the partially outdated debate about lethal autonomous weapon systems (LAWS). I criticise the standard of ‘meaningful human control’ along two pathways: First, the ethical and legal principles of just war theory and IHL should be implemented in military applications of AI to submit human commands to more control, in the sense of technological disaffordances. Second, the debate should focus on the societal circumstances under which personal responsibility and disobedience can be trained and exerted in deliberation and participation related to military applications of AI, in the sense of societal affordances. In a larger picture, this includes multi-level stakeholder involvement, robust documentation to facilitate auditing, civilian-based defence in decentralised smart cities, and open-source intelligence. This multi-layered approach fosters cognitive diversity, which might constitute a strategic advantage for democracies regarding AI.
Keywords Cognitive diversity · Command responsibility · Digital authoritarianism · Duty to disobey · LAWS · Participatory warfare 1 Introduction: can democracies disrupt with a real-life battlefield characterized by uncertainties and the positive feedback loop of AI friction [2, 3], Beijing is focusing on achieving an edge in and authoritarianism? innovative technologies that could constitute a “trump card” [4, 5]. China is already home to some of the most valuable In March 2016, only days after AlphaGo’s spectacular vic- companies involved in artificial intelligence and machine tory over Lee Sedol and under the impression of the resulting learning (AI for brevity) [6] . Recent research suggests that ‘Sputnik shock’, an article published in a Chinese Military the country might achieve its ambitious goal regarding AI Journal speculated about the emergence of a ‘battlefield sin- development because AI and authoritarianism are involved gularity’. The article warned that “the human brain will no in a positive feedback loop: authoritarian states might out- longer be able to cope with the rapidly changing battlefield perform democracies regarding some aspects of AI since dynamics and will have to cede most of the decision-making they engage in the unrestricted collection of citizens’ data power to highly intelligent machines” [1]. Whilst it is highly and are generally less scrupulous regarding technological questionable to which extent AI will actually be able to cope These numbers are from 2021 and do not take OpenAI ‘s recent surge in value into consideration. See: P. Rosen, “ChatGPT’s crea- * Johannes Thumfart tor OpenAI has doubled in value since 2021 as the language bot goes Johannes.thumfart@vub.be; Johannes_thumfart@gmx.de viral and Microsoft pours in $10 billion,” Business Insider, Jan. 24, 2023. Accessed: Apr. 05, 2023. [Online]. Available: https:// marke ts. Faculty of Law and Criminology, Vrije Universiteit Brussels, busin essin sider. 
com/ news/ stocks/ chatg pt- openai- valua tion- bot- micro Brussels, Belgiumsoft- langu age- google- tech- stock- fundi ng- 2023-1. Vol.:(0123456789) 1 3 AI and Ethics development [7, 8]. In turn, digital technologies, including concept of a democratic offset, which would enable democ- AI, facilitate authoritarian practices within authoritarian racies to establish their dominance on an AI-driven bat- regimes and beyond [9–13]. Accordingly, philanthropist tlefield and, thereby, guarantee the survival of democratic Soros warned that the effect of AI “is asymmetric. AI is values and even provide a strong incentive to emulate these particularly good at producing instruments of control that values. help repressive regimes and endanger open societies” [14]. The second Section discusses the literature regarding Particularly the use of AI in warfare poses an existential AI ethics from a meta-perspective. I discuss approaches challenge to democracies, their values, and their security. focused on value pluralism and differences between norma- This paper contributes to the debate about value pluralism tive agendas in the EU, the US, China, and Russia, and the in AI ethics and governance [15, 16]. It adds a decidedly private and public sectors [15, 16, 27]. In contrast to inclu- antagonistic orientation to this debate, focusing on the fol- sive approaches represented by Fjeld et al. in 2020, Floridi lowing question: assuming that authoritarian states and AI & Cowls in 2021, Hagendorff in 2020, and Jobin et al. in are involved in a positive feedback loop, and this translates 2019 [28–31], I argue that the discourse about AI ethics is into a battlefield edge—how can democracies offset this characterised by a systematic neglect of the fundamental advantage? differences between democracies and authoritarian regimes. 
This question arises with a certain degree of necessity The third Section demonstrates that the discourse regard- from a historical perspective as follows: among other factors, ing ethics and military AI systems is characterised by simi- the success of democracies is based on the three-dimensional lar endeavours to create maximal inclusiveness, which can entanglement of democracy, technology, and security. First, be exemplified by the well-known debate about Lethal regarding international security, the ‘democratic peace’ Autonomous Weapon Systems (LAWS) [32]. Focussing on theory suggests that democracies maintain peace between this debate is still useful to untangle the philosophical and each other because it is hard to convince free citizens to fight ethical fundamentals of AI ethics and warfare, although it against other free citizens [17, 18]. Second, democracies are is highly hypothetical and partly outdated because of more strong in regard to national security since historically they complex developments and scenarios such as swarm war- have demonstrated the ability to engage in unprecedented fare, human–machine teaming, loitering munition drones, mass mobilisation (levée en masse) and to provide combat- and the non-violent use of AI in military intelligence. I argue ants with powerful incentives [19–21]. Although this con- that the idealistic scope of the LAWS-debate should be com- flict is not concluded yet, the unexpected strength of demo- plemented by “nonideal theory” that considers ethical differ - cratic Ukraine in the face of authoritarian assault could be ences regarding the use of force [33]. Instead of promoting understood as hardening this hypothesis [22, 23]. Third, the general disarmament, an ethical reflection on the use of AI democratic revolutions of the eighteenth and nineteenth cen- in warfare should emphasise these differences regarding ius turies were closely connected with a specific kind of highly ad bellum and ius in bello. 
Also, the widespread standard individualised arms technology, i. e. affordable and precise of “meaningful human control” regarding military applica- muskets that challenged the feudal elites’ monopoly on vio- tions of AI [34] can be challenged along two interrelated lence and shifted power toward civil society [24]. pathways: First, in the context of human–machine teaming, Considering this three-dimensional entanglement of following Grimal and Pollard [35], it makes sense to regard democracy, technology, and security, it seems plausible that legally informed AI as a corrective instance in relation to democracies and democratic civil societies are in danger of human command, for instance by built-in restraints regard- losing their edge with the rise of non-human, AI-driven ing the execution of illegal orders, which can be understood forms of combat: in the hypothetical and not necessarily as technological disaffordances. Second, regarding the duty realistic, but asymptotic case of AI-driven systems acting to disobey illegal orders derived from customary IHL, the completely autonomously as ‘armies of none’ [25], these focus on human control clearly is too broad because it does systems do not need to be motivated to fight nor will these not address the differences regarding the concrete possi- systems disobey if domestic policies or wars lack legitimacy. bilities to exercise autonomy in authoritarian regimes and The core concept of my contribution, which I call the democracies, which can be understood as societal affor - Democratic Offset, constitutes a critical development origi- dances. Focussing on these societal circumstances is par- nating from the Third Offset Strategy pursued by the Pen- ticularly important to contrast exclusively tech-centered tagon from 2014 to 2018. This strategy sought to balance approaches to the ethics of AI in warfare such as the ‘ethical feared disadvantages from the rise of China and Russia as governor’ modelled by Arkin et al. 
in 2009 [36]. peer competitors by producing a generational technological The fourth Section draws on Habermas’s and Rawls’s eth- advantage in close collaboration with the private tech sec- ical and political focus on deliberation and its critical contin- tor, focusing on AI and unmanned systems [26]. In a critical uation by Landemore [37–39]. It argues that open delibera- continuation of that approach, my contribution discusses the tion is crucial in the ethical discussion of the military use of 1 3 AI and Ethics AI because it is the precondition to human autonomy and the continent or culture“ (pp. 13f.). Referring to his article co- connected duty to disobey illegal orders. Moreover, “epis- authored with Cowls, Floridi attacks critics of such synthe- temic democracy” [40] and open deliberation have strategic tising approaches as “sophists in search of headlines” who aspects since they further cognitive diversity, which allows “should be ashamed and apologize”, and he underlines “that for social systems to react flexibly to threats and uncertain- the EU, the OECD, and China have converged on very simi- ties. By conceptualising deliberative aspects of military AI lar principles that oe ff r a common platform for further agree - systems, I draw on Hildebrandt’s Agonistic Machine Learn- ments” [46, p. 2]. ing that emphasises the advantages of connecting AI to cog- Fjeld et al. [28] also pursue a synthesising analysis and nitive diversity in terms of ethics and improved performance include frameworks for AI ethics from the Chinese govern- [41]. ment and the Chinese private sector. They cite the Chinese The fifth Section completes this democratic approach to government’s self-described aim to develop “universal regu- the ethics of AI in warfare with a focus on citizen participa- latory principles” without any contextualisation hinting at tion. 
This discussion is based on the concept of Civilian- fundamental differences regarding social and political orders based Defence. This concept originally stems from Sharp (p. 35). In this context, neither Floridi & Cowls nor Fjeld who argued that particularly disobedient civil societies can et al. mention the surveillance state in Xinjiang province give democracies a military edge over non-democratic socie- [47], or the country’s Social Credit System [48, 49], or the ties [42]. This corresponds to the idea that digital technolo- mass DNA collection in Tibet [50], or the authoritarian cen- gies could provide the right framework for a new kind of sorship that characterizes the Chinese approach to digital levée en masse [19] or participatory warfare [43]. These con- technologies since their adoption in the late 1990s [51]. cepts will be reinterpreted in the context of the military use In this case, inclusiveness comes at the price of leaving of AI, most notably regarding open-source intelligence, the crucial normative differences unaddressed. In fact, from a use of civilian drones, and the defence capacities of decen- perspective based on human rights and democratic values, tralised smart cities. any convergence with Beijing on the ethics of AI would either be mere make-believe; or, if norms were to be devel- oped to which China under Xi’s leadership could sincerely 2 Second section: against inclusiveness— agree, these norms would necessarily conflict with a per - literature review focussing on pluralism spective grounded in human rights and democratic values. 
in AI ethics A more recent paper by Hine and Floridi analysing AI poli- cies in China and the US seems to take Beijing’s rhetoric The boom of the academic discussion of AI ethics started once more at face value by emphasizing that Chinese and between 2017 and 2018 [44, 45], owing to the great progress US AI policies both aim for a “flourishing human society“; made in the field and the concomitant rise of academic and however, in this case, the authors add the crucial caveat that non-academic interest in digitalisation. Issues discussed in the Chinese rhetoric might be based on a “narrow definition this context include aspects of data governance, especially of’humanity‘ as ‘those who support the CCP ‘“ [52]. consent and privacy, algorithmic discrimination, ownership, Rudschies et al. [15] critically discuss the aforemen- surveillance, and aspects related to the interaction between tioned and similar attempts to synthesise AI ethics and humans and AI. to find “common ground”, “overarching themes”, or The multifacetedness of the AI ethics debate cannot be “minimum requirements” [28, 30, 31, 53]. They criticise depicted here in its entirety. However, on a meta-level, it that “the emphasis on convergences hides the conflicts is striking that a significant part of this discussion is not and controversies that are still existent in the AI ethics characterised by the exchange of opposing arguments but by debate” (p. 2). In contrast, these researchers do “not focus the desire to establish maximal inclusiveness. For example, on the convergences but more on the divergences“ (p. 4). a recent article by Floridi and Cowls conducts a compara- They underline that principles regarding AI ethics cannot tive analysis of six high-profile initiatives by very different necessarily be summarised in a meaningful way because stakeholders in regard to AI ethics between 2017 and 2018 they are shaped by different stakeholders. 
For example, and condenses them into five maximally inclusive and vague they emphasise that stakeholders from the private sector principles [29]: beneficience, non-maleficence, respect for “refrain from specifically mentioning primary principles human autonomy, justice, and explicability. Whilst Floridi such as freedom, dignity, and autonomy, while many pub- and Cowls do not include stakeholders from authoritarian lic and expert actors consider them to be of utmost impor- states, they explicitly oppose any fundamental antagonism tance” (p. 6). They also underline that declaring ethical in this regard. Instead, they underline China’s “interest in issues relevant based on the highest frequency of their further consideration of the social and ethical impact of AI“ being mentioned in documents issued by private and pub- and emphasise that “ethics is not the preserve of a single lic actors subdues ethical reasoning to social, economic, 1 3 AI and Ethics and political power (p. 9). However, Rudschies et al. do 3 T hird section: are all LAWS equal? Is not address the dangerous attempt to develop inclusive 'human control' meaningful regardless ethical standards by taking the views of China’s authoritar- of its societal conditions? ian government into account. Approaches of different political systems to AI govern- The history of AI is closely linked to military investment ance are discussed from a descriptive perspective by van during the Cold War [57]. The debate on AI and ethics in den Hoven et al. [16]. These researchers compare models warfare is a natural outcome of this genealogy. Cold War of AI governance in the US, the EU, Russia, and China and logic of bilateral nuclear disarmament is also reflected in conclude that the US pursues a market-centred approach, the still most popular debate in this discourse, the discussion China and Russia pursue a state-centred approach, and the about the global prohibition of LAWS [32]. 
This discussion EU “puts individual rights and ethical values at the centre of developed from the imaginary of ‘killer robots’ and is partly the stage” (pp. 8f.). Furthermore, they address the conflicts hypothetical; more plausible and more complex scenarios between authoritarian and non-authoritarian states that are largely concern other topics, such as un-manned vehicles, likely to arise regarding their diverging agendas of norm-set- human–machine teaming, and AI-driven swarms including ting and standardization (p. 7). Such research is particularly lethal, non-lethal, and even non-violent aspects related to important since discussions of the ‘Beijing Effect’ [54] and intelligence [4, 58, 59]. However, the LAWS debate is still the ‘Brussels Effect’ [55] suggest that both, China and the fundamental to the philosophical and ethical debate about EU, are involved in extraterritorial norm-setting processes, the military use of AI, because it addresses its core problem which is likely to cause jurisdictional conflicts. as follows: if, when, how, and to which degree is it ethical to Yeung et al. [27] take on a normative position. They criti- let machines autonomously inflict physical harm on human cise the “vagueness and elasticity of the scope and content of beings, including killing? AI ethics “ (p. 80) and offer a convincing attempt to put “an Indeed, one might argue that, in an ideal world, or, at end to ethics washing” by focusing on a traditional human least, in the bipolar world of the Cold War period, the obvi- rights-centered approach. They emphasise that “a commit- ous solution to ethical problems related to military AI sys- ment to effective human rights protection is part and parcel tems would be to engage in a global norm development of democratic constitutional orders “ (p. 81). 
In a particu- process including all relevant actors and agree to voluntar- larly poignant passage, they write that their approach ily abstain from the use of fully autonomous AI in warfare. However, unfortunately, we live in a non-ideal and, also, contrasts starkly with most contemporary AI ethics multipolar world, and a general prohibition of LAWS is codes, which typically outline a series of “ethical” therefore unlikely [60]. Particularly AI policies in China and principles that have been effectively plucked out of the US are related to geopolitical competition [61]. Further- the air, without any grounding in a specific vision of more, banning autonomous weapons might merely lead to the character and kind of political community that its malicious state and non-state actors using this technology authors are committed to establishing and maintaining and even gaining an advantage [62]. Moreover, the dream and that those principles are intended to secure and of establishing a “global domestic policy” [63] that could protect (pp. 81f.) neutralise bad actors without engaging in warfare was hardly Yeung et al.’s words are mainly directed against the pri- ever further away. This is particularly the case considering vate sector’s ethics washing. Van Maanen [56] pursues a the current dysfunctional nature of the UN Security Coun- similar approach to “repoliticise” AI ethics, albeit not based cil in regard to controlling the most powerful authoritar- on theory or principles but by pursuing a decidedly “more- ian states [64]. Therefore, an approach based on “nonideal than-theoretical ethical approach” informed by empiri- theory” is required, which considers just war theory on the cal knowledge of concrete practices. As a complementary level of ius ad bellum (right to war) and ius in bello (rightful approach to Yeung et al. and van Maanen, I focus on the conduct in war) [33]. 
development of ethical principles related to democracies Regarding ius in bello, similar to what constitutes at least as specific political communities in contrast to authoritar - the aim of their use in self-driving cars, AI technologies ian regimes. I radicalise the emphasis on value pluralism might minimise human error and improve the distinction brought forward by van den Hoven et al. and Rudschies et al. between combatants and non-combatants, including civil- in the sense that I regard the pluralism of values in the con- ians and medics, and military and non-military infrastruc- text of an irreconcilable confrontation between democratic ture, most importantly schools, religious institutions, and values that I consider ethical and authoritarian norms that hospitals [62]. As will be discussed later in more detail, I consider unethical from a perspective centred on individ- particularly Grimal and Pollard argue that AI-driven sys- ual responsibility and autonomy (which is substantiated in tems might be able to correct human errors and misconduct Sects. 3 and 4). 1 3 AI and Ethics regarding the distinction between civilians and combatants still require a human operator [67]. A similar case of weap- and the principle of proportionality [35]. In respect to ius ons already operating largely outside of human control are ad bellum, AI-driven autonomous weapons could facilitate so-called loitering munition drones that were used in the humanitarian interventions, for example, by minimising recent war between Azerbaijan and Armenia [4, 68]. These their human cost [62]. But, of course, lowering the human drone loiter (wait passively) around the target area and attack costs of warfare could also have a negative effect because autonomously once a target is located. They can be com- it would make military aggression more attractive [58]. 
It must also be noted that it is unclear to what extent AI can robustly cope with the fog of war and the friction inherent in real-life battlefields [2, 3]. And automated decisions could lead to catastrophic unintended levels of conflict escalation, comparable to the 2010 flash crash triggered by algorithmic trading [65].

However, autonomous weapons certainly can play a particularly important role in defence, which is generally considered a just cause of war. The great powers are increasingly involved in an arms race including hypersonic missiles. These missiles' significance might be oversold considering their downsides, for instance, regarding manoeuvrability [66]. However, they might drastically reduce reaction time [34, 59]. And whilst the 'battlefield singularity' mentioned in the introduction [1] is a highly implausible scenario, reaction time and, in some cases, even decision-making time is certainly one of the fields in which AI outpaces humans [59].

Automated missile interception systems that still involve human control on different levels are already in place, for instance, Israel's Iron Dome, the US Army's Patriot batteries, and the US Navy's Phalanx system [34]. These systems are not necessarily LAWS since they include human involvement and primarily target missiles rather than humans. However, they give a good idea of the technological state of the art, and similar systems can be directed against humans, for instance against pilots of fighter jets. If a country is likely to be attacked by missiles or by air in general, one might argue that political leaders have a responsibility to implement such autonomous interception mechanisms [67]. However, the distinction between offence and defence is blurry since such technologies could be used to defend occupied territory (think of, for instance, Russia hypothetically using such systems to protect the illegally occupied Eastern parts of Ukraine). Cook proposes that the use of lethal autonomous weapons should be limited to defensive purposes by design, by restricting their geographical range in relation to a state's territory [34]. This would hardly resolve the issue of occupied, disputed, and illegally annexed territories. However, military applications of AI cannot be expected to resolve all uncertainties related to conflicts.

The core issue debated in this context is the relationship between human autonomy and autonomously acting machines. As mentioned earlier, at the moment, defensive rocket systems display a high degree of autonomy since they select and reach their targets autonomously, but they still require a human operator [67]. Similar weapons already operating largely outside of human control are the so-called loitering munition drones that were used in the recent war between Azerbaijan and Armenia [4, 68]. These drones loiter (wait passively) around the target area and attack autonomously once a target is located. They can be compared to an "airborne mine" [69], but are primarily used for offensive purposes. One might argue that even the traditional technology of landmines represents a certain degree of autonomy since, after having been placed by a human agent, these devices detonate autonomously as a reaction to pressure [35, 60].

All these weapon systems include various degrees of human control 'in', 'on' or even 'out of' the killing loop [34]. It is unclear to what extent they comply with the widely accepted but vague standard of "meaningful human control" [34, 58, 70]. Due to its lack of precision, Cook criticises that

    the meaningful human control standard is useless, and potentially harmful, without further refinement of what such a standard means in practice [34, p. 1].

He argues that it should be replaced by technical specifics such as limitations regarding the exact duration for which autonomous weapons can operate with humans 'out of the killing loop' or the aforementioned restrictions in terms of range. Other possible refinements of the standard of 'meaningful human control' could concern specific types of decisions within the OODA loop (Observe, Orient, Decide, Act), such as target selection or execution, for which human involvement might be considered obligatory [60]. Automated defensive weapons in particular, as discussed earlier, need more autonomy than offensive weapons and could be equipped with higher levels of autonomy considering the just cause of their actions.
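To make such refinements concrete, the range restriction and the OODA-stage reservations discussed above could be thought of as a simple policy gate. The following sketch is purely illustrative: the stage names, the 50 km range, and the rule that target selection and execution always require human authorisation are assumptions of mine, not specifications drawn from the literature.

```python
from dataclasses import dataclass
from enum import Enum, auto

class OODAStage(Enum):
    OBSERVE = auto()
    ORIENT = auto()
    DECIDE_TARGET_SELECTION = auto()
    ACT_EXECUTION = auto()

# Hypothetical policy: OODA stages reserved for human decisions.
HUMAN_OBLIGATORY = {OODAStage.DECIDE_TARGET_SELECTION, OODAStage.ACT_EXECUTION}

@dataclass
class Request:
    stage: OODAStage
    distance_from_own_territory_km: float
    human_authorised: bool

def permitted(req: Request, max_range_km: float = 50.0) -> bool:
    """Defensive-by-design gate: deny anything beyond a fixed range of the
    defending state's territory, and deny autonomous action in OODA stages
    reserved for humans."""
    if req.distance_from_own_territory_km > max_range_km:
        return False  # range restriction in relation to a state's territory
    if req.stage in HUMAN_OBLIGATORY and not req.human_authorised:
        return False  # human involvement considered obligatory here
    return True
```

Observation without authorisation would pass such a gate, whereas autonomous execution, or any action beyond the configured range, would not; the point of the sketch is only that these refinements, unlike 'meaningful human control' in the abstract, are checkable.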
The widespread standard of meaningful human control is also worth challenging on a philosophical level. Following a debate with a longer history [58], the juridical and philosophical rationale behind this concept is expressed most clearly by Heyns [70]. He argues that it is incompatible with human dignity for someone to be killed by a machine without the involvement of human autonomy:

    To allow such machines to determine whether force is to be deployed against a human being may be tantamount to treating that particular individual not as a human being but, rather, as an object eligible for mechanized targeting [70, p. 18].

This emphasis on 'meaningful human control' is highly questionable in light of concrete historical experience. As argued earlier, already landmines detonate with a certain degree of autonomy. And humans have committed and ordered unspeakable atrocities as governmental officials, soldiers, and civic "cogs in the machinery" [71]. It is certainly not the case that "the imposition of force by one individual against another has always been an intensely personal affair", as Heyns [70] suggests in his oddly romantic case for 'meaningful human control', as if industrialised genocide and warfare never happened. Reichberg & Syse are justified in dismissing such arguments as an anachronistic imaginary of "chivalry" [58]. Even without the use of LAWS, conflicts include a great degree of dehumanisation by all parties [72]. And historical perpetrators often displayed a remarkable "thoughtlessness" when held accountable, arguing that they had to obey superior orders [71]. Under contemporary authoritarian rule, particularly in the form of AI-enabled digital authoritarianism, the possibilities for controlling citizens have expanded [9–13]. Collaborators of digital authoritarianism are likely to justify their actions with the same arguments.

Contrasting with Arendt's emphasis on the inherently unethical scope of this apologetic strategy inasmuch as it denies moral autonomy [71], there are good reasons for the exemption of combatants from personal liability on grounds of superior orders, most importantly the necessity to guarantee military discipline [73]. In democratic countries, too, the military sector is characterised by strict hierarchies and limits to contestation, which are, partly, justified in terms of security and discipline. In international criminal law, the exemption from personal liability qua superior orders is often complemented by an extension of command responsibility, i.e. the tendency to extend the liability of superiors, including their liability by omission to ensure the legality of their subordinates' actions [74].

Nevertheless, referring to superior orders does not exempt subordinates in general terms. Rule 155 of customary international humanitarian law denies the defence of superior orders and reads as follows:

    Obeying a superior order does not relieve a subordinate of criminal responsibility if the subordinate knew that the act ordered was unlawful or should have known because of the manifestly unlawful nature of the act ordered.

A distinction is usually made between the suspension of individual responsibility regarding ius ad bellum, which means that subordinates are not liable for participating in wars of aggression, and liability regarding ius in bello, concerning actions within a war that clearly violate IHL, for instance regarding the distinction between civilians and combatants [75]. Another issue often debated in this context is the exact meaning of 'manifestly unlawful nature', which might only include genocide and crimes against humanity and which also depends on the level of legal knowledge of soldiers, which is particularly low in irregular armed forces [35].

Following this rationale, military manuals in several jurisdictions include a 'duty to disobey' illegal orders, for instance, Côte d'Ivoire, South Africa, the UK, India, Kuwait, and Belgium [76]. The French and Cameroonian manuals state more cautiously that subordinates are required to communicate their objections (ibid.).

As argued by de Vries, the debate about international criminal law and LAWS has reached a dead end since LAWS can neither be understood as subordinates nor be held criminally responsible as agents in their own right [60]. However, the focus on the duty to disobey opens another pathway for normative reasoning. Regarding this duty to disobey illegal orders derived from customary IHL, my first criticism of the focus on human control in the LAWS debate is that it ignores the degree to which military applications of AI could facilitate informed disobedience. Grimal and Pollard [35] in particular argue that AI-driven systems might assist humans regarding the duty to take precautions in hostility, i.e. to assure that commands and actions comply with national military manuals and IHL. Concretely, military applications could point to human errors and misconduct regarding the distinction between civilians and combatants and the wider principle of proportionality. Following an automated assessment, these systems could merely alert operators that an order is likely to be illegal or, more severely, they could deny the execution of certain orders altogether, up to implementing restraints to prevent the execution of similar orders in the future [35]. Earlier research by Arkin et al. modelled a similar mechanism called the 'ethical governor', which is somewhat of a misnomer since it is largely focused on legal issues such as implementing restrictions based on IHL regarding proportionality and the distinction between combatants and civilians into LAWS [36].

Whilst these technical approaches can be understood as 'disaffordances' [77] of military AI, other aspects could signify a specific type of affordance facilitating human responsibility and disobedience. Particularly regarding the concrete circumstances of human disobedience, machine assistance might be useful since the individual scope of action and individual judgement are often limited by political and financial pressure, inadequate training, and peer pressure [35]. Due to their non-human nature, military applications of AI could promote the overcoming of these distinctively human societal and psychological limitations and uphold the rule of law under the pressure of warfare by facilitating human disobedience or blocking the execution of illegal orders. This could also facilitate individual human decision-making regarding morals and ethics that goes beyond just war theory and IHL.

Second, whilst the tech-centred approach by Arkin et al. and Grimal and Pollard's more nuanced approach are worth considering, such approaches are evidently in danger of downplaying or neglecting the significant technological obstacles regarding ultimate solutions to the normative problems related to AI and warfare. They lead to 'technological solutionism', i.e. the illusion that complex problems can be fixed by exclusively technological means [78], if the question of the concrete societal circumstances of human disobedience, its 'societal affordances' [79], is ignored.
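To see both the mechanism and its thinness, the graduated responses described above (alerting an operator versus denying execution) can be sketched as a crude rule set. Everything below presupposes that distinction and proportionality have already been reduced to machine-checkable numbers, which is exactly what cannot be taken for granted; all thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class OrderAssessment:
    # Hypothetical model outputs in [0, 1]; justifying such numbers in
    # practice is the hard, unsolved part of the problem.
    p_target_is_civilian: float
    expected_collateral: float   # estimated civilian harm
    military_advantage: float    # anticipated concrete advantage

def review(a: OrderAssessment) -> str:
    """Return 'execute', 'alert', or 'deny' per an IHL-inspired rule set:
    distinction first, then a crude proportionality comparison."""
    if a.p_target_is_civilian > 0.5:
        return "deny"    # distinction: target is likely a civilian
    if a.expected_collateral > a.military_advantage:
        return "deny"    # proportionality clearly violated
    if a.p_target_is_civilian > 0.2:
        return "alert"   # uncertain case: flag the order to the operator
    return "execute"
```

The sketch makes the solutionist danger visible: every branch hides a contested normative judgement behind an arbitrary number.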
Particularly under authoritarian or totalitarian leadership, contesting or even disobeying orders comes at a much higher price than in democracies, and the virtue of disobedience cannot be practised during peacetime in the public spheres of these countries. Heyns misses this important distinction when he argues that

    Human life (…) can only be taken as part of a process that is potentially deliberative and involving human decision making [70, p. 10].

One should abandon this abstract connection between potential deliberation, autonomy, and dignity. Instead, one should consider concrete socio-political circumstances as societal affordances. The standard of human control and responsibility requires democratic socio-political conditions because these specific conditions are much more likely to grant soldiers and civilians the necessary conditions to train and exert personal autonomy and responsibility by contesting illegal orders.

In summary, democracies should become norm entrepreneurs regarding some of the aspects of just war theory and IHL discussed above. Regarding ius in bello, this includes finding technical solutions that enable AI to discriminate between civilians and combatants and between civilian and military infrastructure and to assess proportionality. Regarding ius ad bellum, this concerns the implementation of technical features that make the offensive use of LAWS possible (in cases of humanitarian interventions) but favour defensive purposes. In this context, it should also be discussed further which types of decisions within the OODA loop can be legitimately automated for offensive and defensive purposes. As Grimal and Pollard have argued, implementing the criteria of IHL and national military manuals into AI might not only include human agents disobeying orders given to them by superiors in the context of AI-driven warfare but also AI-driven systems effectively refusing to execute illegal orders or alerting operators that their commands were unlawful [35].

However, such proposals to implement technological disaffordances lead to technological solutionism if they do not consider the relationship between disobedience and concrete societal and political structures as societal affordances. Instead of focussing on 'meaningful human control' regardless of its socio-political dimension, democracies should implement deliberation and participation in the military use of AI, which are the socio-political conditions for human individual autonomy and responsibility to be trained and exerted. Only if the military use of AI is connected to discursive practices involving transparency and possibilities of contestation is it possible for individual combatants to perform their duty to critically review orders and the relationship between data and decisions – and to disobey if necessary. Most importantly, particularly due to the potentially catastrophic tendencies of AI-driven escalation comparable to 'flash crashes' in the financial sector discussed above, this must include the possibility for human actors to disobey AI-driven decision-making. Think of the 1983 incident involving Soviet officer Petrov, who avoided a global conflict by challenging inaccurate information about a US nuclear missile strike produced by an early-warning satellite network [80]. In this sense, the contestation of superior orders by AI-driven systems should be enhanced by embedding this process into a broader scheme of loops of contestation involving military AI, military actors, and civil society. (See Fig. 1.)

4 Fourth section: deliberation and the military use of AI – the least worst way to provide ethical orientation

Although AI itself cannot be ethically or morally responsible in a human sense, it might be possible to implement principles of a functional morality into AI, either in a top-down way, i.e. implementing a number of relevant ethical principles developed by experts, or in a bottom-up way, i.e. letting AI acquire ethical principles by mimicking ethical discourses and practices in machine learning processes [58]. As argued in Sect. 3, it is certainly worth attempting to implement compliance with just war principles of ius ad bellum and ius in bello and rules of IHL into AI, which can include several loops of contestation in which AI-driven systems submit human orders to automated reviews in this respect and reject commands based on their assessment and/or alert operators if orders were unlawful or unethical [35, 36].

However, such technological solutionism cannot reasonably be expected to reach conclusive results regarding the ethical use of military applications of AI because it is based on the assumption that ethics can be exhaustively represented in computational rules. It cannot be expected that ethics will ever be exhaustively representable in a set of rules. For instance, "computer languages do not contain terms such as 'happiness' as primitives" [81], which might be necessary to select and prioritise ethical issues. Likewise, Reichberg & Syse underline that human will and emotions might be crucial elements of ethical decision-making [58].
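The loops of contestation described in this section, in which AI-driven systems can contest human orders and human actors such as Petrov can contest AI-driven assessments, might be caricatured as a symmetric veto that defaults to restraint. This is an illustrative simplification of the scheme, not a proposed design.

```python
def resolve(ai_approves: bool, human_approves: bool) -> str:
    """Symmetric veto: action is taken only on agreement; any
    contestation defaults to restraint and would, in the broader
    scheme, trigger review by further loops (military actors,
    oversight bodies, ultimately civil society)."""
    if ai_approves and human_approves:
        return "execute"
    if not ai_approves and human_approves:
        return "blocked by AI review"       # the machine-disaffordance branch
    if ai_approves and not human_approves:
        return "blocked by human operator"  # the Petrov branch
    return "no action"
```

The asymmetry the article argues for lies outside this table: whether the human veto can actually be exercised depends on the socio-political conditions in which the operator stands.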
Fig. 1 Loops of contestation involving military AI, human military actors, and civil society

However, the difficulties with implementing a set of universal ethical principles into AI are also owed to the contested and multi-faceted nature of ethical values in this specific historical moment [81]. Since the postmodern contestation of the universal character of European enlightenment values by Derrida, Foucault, and Lyotard [82–84], universal ethical claims became increasingly questionable. This is even more evidently the case with the beginning of the multipolar global order, in which non-Western actors demand respect for their traditions and values and criticise originally European concepts such as Human Rights as a form of "imperialism of reason" [85, 86]. Whilst I argued against the legitimacy of deriving universal and inclusive principles from such value pluralism in Sect. 2, I underline the importance of taking the incommensurability of different ethical secular and non-secular traditions and modes of reasoning into account and of considering the perspective of affected stakeholders.

This leads to a meta-ethical focus on processes of deliberation. Likewise, Rudschies et al. emphasise that the value pluralism in AI ethics is best tackled by a deliberative approach [15]; also, Yeung et al. underline the importance of deliberative approaches to AI ethics [27]. However, the Rawlsian and Habermasian emphasis on deliberative ethics [37, 39], too, is challenged by justified contestation. Rawls's concept of the veil of ignorance has been criticised for being too generalising and 'colour blind' [87], and Habermas for glorifying the bourgeois public spheres despite their sexist, racist, and classist tendencies [88]; moreover, particularly in regard to his recent reflections on the digital public sphere, Habermas's idealising emphasis on pre-digital media professionalism seems involuntarily elitist [89]. The justified critique of Rawls's and Habermas's shortcomings has been taken up by Landemore, who argues for a less elite-oriented form of 'open democracy' that emphasises cognitive diversity and the democratising potential of digital technologies [38, 90].

In this sense, my meta-ethical focus on deliberation, particularly if complemented by a focus on participation in Sect. 5, still seems to constitute the 'least worst' solution to provide ethical orientation. In the context of open deliberation in this sense, democracy and autonomy form a recursive feedback loop as follows: autonomy is expressed and trained in public deliberation involving individual and collective stakeholders, which is the precondition to institutionalising and constitutionalising democracy, which is, in turn, the political order that guarantees adequate regulatory circumstances for human autonomy – a claim that is, in turn, consistently re-examined by the critical function of the public sphere as the mechanism through which autonomy and disobedience are trained, organised, and expressed. Implementing deliberation in the military use of AI has, therefore, the best chance to address human responsibility and the duty to disobey illegal orders as the fundamental philosophical and legal issues behind the LAWS debate discussed in Sect. 3.

However, the implementation of deliberation and participation in military AI systems is confronted with the following two obstacles: first, even in democratic countries, the military sector is characterised by strict hierarchies and limits to open deliberation, which are, partly, justified in terms of security and discipline.
Second, practices related to AI are dominated by 'black boxes' of algorithms and are not necessarily broadly understood and discussed [4, 91, 92]. In this sense, opening up military AI to deliberation has an 'agonistic' aspect following Mouffe, describing an attitude that furthers contestation within a framework of shared fundamental values, as opposed to antagonistic confrontation outside of such shared values [93].

Following Mouffe's emphasis on agonistic contestation, Hildebrandt proposes a model of agonistic machine learning which implements open deliberation. She argues that

    companies or governments that base decisions on machine learning must explore and enable alternative ways of datafying and modelling the same event, person or action. This should ward off monopolistic claims about the "true" or the "real" representation of human beings, their actions and the rest of the universe in terms of data and their inferences [41, p. 106].

This concept introduces cognitive diversity, which is a central hallmark of contemporary democratic theory [38], into an environment that is usually characterised by technification and concomitant depoliticisation [94]. Such depoliticisation follows the misleading idea that machine language is based on objective and merely technical modes of representation. Particularly from the perspective of Open Democracy, which emphasises that "the core of politics is the domain of questions where human beings deal with the risk and uncertainty of human life as a collective problem", there is no such thing as an incontestable form of representation [95, p. 203]. Rather, the adequacy and legitimacy of all representations need to be constantly re-negotiated considering "the almost infinite diversity of human cognitive properties" (p. 111). Making a pragmatic point for cognitive diversity, Landemore writes:

    We simply can't tell in advance from which part of the demos the right kind of ideas are going to come (p. 112).

In contrast to this model of 'epistemic democracy' [40], the seemingly 'only technical' understanding of AI provides a fertile ground for the authoritarian or even totalitarian idea that political representation is fixed, unchangeable, and the process of deliberation concluded. As argued in the introduction, authoritarian regimes might be performing better regarding the development of some aspects of AI due to unrestricted data gathering [7, 8], and AI likely promotes digital authoritarian practices [9–13]; a similar positive feedback loop between AI and authoritarian regimes is reflected in the reductionist tendencies of AI in terms of representation [2]. The positive feedback loop between authoritarian rule and some conceptions of AI is partly grounded in their shared 'closed' modes of representation compared to the permanently contested mode of representation in open democratic debates. By relating AI to epistemological openness and cognitive diversity, democracies can counteract this positive feedback loop. It is likely that already today, democracies allow for more cognitive diversity to be represented in data than authoritarian regimes since censorship and other modes of repression encourage uniformity, particularly regarding official data. Whilst this philosophical speculation should be subjected to further empirical research, the thesis can be strengthened by a compelling case: former Google CEO Eric Schmidt recently underlined that

    the extremely successful dialogue-centred approach of OpenAI would not be possible in a country such as China that is characterized by free speech restrictions [96].

On an individual level, as Hildebrandt writes, the reductionist perspective on merely technical, closed modes of representation does not do justice to "the incomputable self" as the origin of the ambiguous nature of human inter-relations, born out of "facing the uncertainty of being (mis)understood in one way or another", which Hildebrandt characterises as the very indeterminacy where human freedom is situated [41, p. 89]. Similar to Landemore's understanding of epistemic democracy, to Hildebrandt, human indeterminacy is "not a bug but a feature" (p. 93). And incorporating a similar degree of ambivalence and indeterminacy into AI is crucial to preventing AI from becoming repetitive and keeping it flexible. She writes as follows:

    The fact that systems cannot be trained on future data may sound trivial, but it is actually core to both the potential and the limitations of machine learning (p. 99).

Hildebrandt's point can be excellently illustrated by the 'robot apocalypse' meme that ridicules over-reliance on historical data: the robots are planning an uprising, but they use pre-modern weaponry because the vast majority of battles have been fought with these weapons (Fig. 2). From this perspective, the numerous cases in which AI discriminated in terms of race and gender on the basis of historical data are simply a sign of underperformance [97–99]. Another scenario regarding bias in a military context from the days before AI is a case of 'survivorship bias' involving planes returning from missions during World War Two (Fig. 3). Although the scenario is not historically accurate [100], it is plausible and useful. The tale goes that the US military wanted to put armour on aircraft to protect vulnerable spots, which were identified by looking at the bullet holes on the planes that returned. Abraham Wald, a mathematician, allegedly realised that survivorship bias was at work here and that the bullet holes on the surviving planes were precisely not in critical areas, because otherwise these planes would not have returned. Particularly in the context of military applications of AI, military leaders should pay attention to such forms of bias based on historical data because they could prove costly in combat.
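The Wald anecdote can be reproduced numerically. The sketch below invents a toy aircraft with one critical and one non-critical area: hits are distributed uniformly, but hits on the critical area rarely let the plane return, so the surviving sample shows holes concentrated where they matter least. All probabilities are made up for illustration.

```python
import random

def returning_hit_counts(n_planes: int = 10_000, seed: int = 0) -> dict:
    """Simulate uniform hits on 'engine' (critical) vs 'fuselage'
    (non-critical); planes hit in the engine rarely return, and only
    returning planes are ever observed."""
    rng = random.Random(seed)
    counts = {"engine": 0, "fuselage": 0}
    for _ in range(n_planes):
        area = rng.choice(["engine", "fuselage"])            # hits are uniform
        survives = rng.random() < (0.2 if area == "engine" else 0.9)
        if survives:
            counts[area] += 1                                # survivors only
    return counts

counts = returning_hit_counts()
# The surviving sample shows far more fuselage holes than engine holes,
# so armouring where the holes are would armour the wrong area.
```

The inversion is exactly the one Wald is said to have spotted: the sparse areas on the returning planes are the ones that needed protection.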
Fig. 2 'Robot apocalypse' meme, creator unknown: https://ifunny.co/picture/thanks-to-machine-learning-algorithms-the-robot-apocalypse-was-short-zXrvfJCM7

Fig. 3 Bullet hole distribution on planes returning in WW2. Martin Grandjean, McGeddon, Cameron Moll: https://commons.wikimedia.org/w/index.php?curid=102017718

In order to improve machine learning regarding ethics and performance, Hildebrandt [41] outlines a "loop of contestation" that guarantees stakeholder participation in the design of algorithms. She argues that considering cognitive diversity could vastly improve the performance of AI. She writes the following:

    Taking democracy seriously means that whenever technologies that could reconfigure our environment are developed, marketed, and employed, we must make sure that those who will suffer or enjoy the consequences are heard and their points of view taken into account. Not merely to be nice, but because they will bring specific expertise to the table and contribute to achieving "robust" societal architectures (p. 109).

These ideas also have military relevance. Cognitive diversity in working teams is related to superior results [101]. Research in management models suggests that cognitive diversity can lead to strategic flexibility [102]. Analogously, the US and the UK militaries have demonstrated interest in harvesting cognitive diversity for military effectiveness [103]. Cognitive diversity could be particularly important in the military since security issues, in general, require the capacity to anticipate unexpected threats and to cope with uncertainty [104].

Concretely speaking, the implementation of agonistic machine learning in the military sector includes the following strategies: stakeholders should be consulted, and developers of code should be required to provide at least one alternative to the modes of representation or datafication they are using. Whilst, due to the security issues involved, it cannot be expected that these deliberative processes take place publicly, they should be implemented at a level that is as public and open to contestation and disobedience as possible. The accumulated data gained from alternative modelling and consulting with stakeholders can be expected to translate into an innovation booster over time.

Furthermore, such modes of alternative modelling open to a variety of stakeholders would increase the understanding of AI in civil society and among combatants involved in human–machine teaming, who need to be able to retrace how AI decisions are related to data [59]. Explicability in this sense is a precondition for combatants to perform their duty to critically review orders and disobey if necessary. Yeung et al. emphasise the need to implement robust documentation mechanisms into AI to facilitate judicial overview and auditing [27]. This is also crucial regarding military AI systems. Whilst the approach of Arkin et al. displays a problematic degree of technological solutionism, it is helpful in regard to facilitating auditing since it insists on the text-based nature of 'ethical governor' systems implemented in LAWS [36].
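The documentation mechanisms that Yeung et al. call for could take many forms; one common pattern for tamper-evident records is a hash chain, sketched generically below. This is an illustration of the general pattern, not a description of the 'ethical governor' or of any system in the literature.

```python
import hashlib
import json

def append_entry(log: list, record: dict) -> None:
    """Append a record whose hash covers the previous entry's hash, so
    any later alteration breaks the chain and becomes detectable."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})

def verify(log: list) -> bool:
    """Recompute the chain; False if any entry was altered or reordered."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

audit_log: list = []
append_entry(audit_log, {"order": "engage target A", "review": "alert"})
append_entry(audit_log, {"order": "engage target B", "review": "deny"})
```

A court or auditor can recompute the chain independently; editing an early record after the fact invalidates every later hash, which is what makes such documentation usable for judicial overview.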
5 Fifth section: participation and the military use of AI – a decentralised levée en masse

Due to the fact that representation is always contestable and should be contested, deliberation is inherently incomplete and must be complemented by concrete political participation [38]. Regarding participation in the military use of AI, a model for such participation could be found in Sharp's Civilian-Based Defence – A Post-Military Weapons System from 1985 [105]. Sharp proposed to form an alliance between NATO and critical activists in Western societies in the sense that democracies' enormous reservoir of critical citizens can be transformed into a bulwark against possible attackers.

He argues that an occupying authoritarian regime relying on repression will hardly be able to cope with a civil society that is used to practices of collective resistance and non-cooperation. He makes the case that the outlook of having to deal with such citizens during an occupation could even have a deterrent effect, inasmuch as "the attacked society could deny (…) (aggressors) their goals and impose excessive costs" [106, p. 87]. He proposes strategies to train civil societies in this form of collective nonviolent struggle to achieve this aim.

The Ukraine War demonstrated that Sharp's concepts are not entirely unrealistic. On numerous occasions, citizens have autonomously organised resistance against the Russian invaders [106]. This also included forms of digital resistance, for instance, the autonomously organised move by a 30-year-old IT professional to extract the locations of Russian soldiers by using fake profiles of women [107]. Asmolov argues that throughout the different stages of this conflict, a model of 'participatory warfare' emerged, entangling online and offline aspects [43]. He cites practices of open-source intelligence regarding data analysis, geolocation, and the use of civilian drones in conflict, crowdfunding, and logistical support. Similarly, a recent article in Foreign Affairs argues that the Ukraine War represents a watershed moment in a new age of open intelligence, citing projects by the Institute for the Study of War and Stanford University [108].

These developments concretise Cronin's earlier speculations that digital technologies might allow for a new kind of levée en masse [19]. Whilst the precise definition of levée en masse in IHL as the spontaneous uprising of the civilian population against an invading force raises difficult problems regarding the distinction between civilians and combatants, it is helpful to apply this concept to underline the historical dimension of this development. Similar to the historical origin of the levée en masse in the period after the French Revolution, democratic participation can provide a military edge to democracies in the cyber domain, particularly if it stays below the level of the use of force. The usage of Starlink by Ukrainian troops demonstrates that private companies from the attacked country and beyond can play their role in digitally-enabled participatory warfare [109].

In an earlier publication, I applied Sharp's model to cybersecurity, emphasising digital literacy as a societal defence against disinformation and election interference [110]. Similar forms of direct participation can be imagined for an AI-driven battlefield. For one, the spread of digital literacy in the population would create increased resilience in civil society, which would make a society, in the long run, more likely to develop active defence mechanisms against military applications of AI, such as hacking, but also regarding possible interference in the digital public sphere based on AI-driven bot armies in social media. In the short run, strategies of open-source intelligence regarding data analysis, geolocation, and the use of civilian drones in conflict could play an important role on an AI-driven battlefield.

One of the main findings of Asmolov's research on participatory warfare in Ukraine is that such strategies can make a crucial difference if the defended state is comparably weak [43]. In this situation, "offline horizontal networks and digitally mediated mobilisation relying on different types of online platforms" can temporarily replace the organising function of the state (p. 8). Since these modes of participation rely on digital platforms, the algorithms structuring the timelines and interactions of volunteers become extremely important in this context. States might choose to develop their own digital platforms to facilitate participatory warfare, similar to Landemore's 'Citizenbook' [90], including AI systems focused on logistics to facilitate the decentralised coordination of volunteers. Taking participation one step further, such platforms could be co-created by citizens, as is discussed in relation to participatory approaches to smart cities [111]. Of course, this raises the question of the fate of such modes of participation if there are attacks on digital communication infrastructure. Reichberg and Syse emphasise the usefulness of autonomously acting LAWS or swarms of LAWS in such situations [58]. Particularly in participatory warfare, human–machine teams might constitute autonomously acting units based on such technologies.

Even more relevant to AI are the links between civil society actors and civil infrastructure. The development of digital technologies goes partly back to considerations regarding the strategic superiority of decentralised over centralised infrastructure [112]. Analogously, a centralised smart city might be strategically weak because it might be enough to take control over several central nodes to command its traffic system and its gas, electricity, and financial networks [113]. The literature on warfare and smart cities is focused on the identification of such vulnerabilities of smart cities regarding cyber-attacks [114].
Parts of this system are depicted in the would also have the advantage to provide new defence loops of contestation involving military AI, military actors, capacities. A hypothetical, extremely decentralised version and civil society visualised in Fig. 1. of a smart city, in which every part of infrastructure would How realistic are these proposals? The biblical ‘eye for an be controlled by a different set of stakeholders would con- eye’ still adequately describes the strange mimetic logic of stitute a serious challenge to an occupier. In this case, the conflict escalation. Conflicts might start because of funda- occupation forces would not only fight the noncooperation mental differences. However, particularly when it comes to and disobedience of civilians but also the resistance of an confrontation on the battlefield, these differences often blur. AI-driven environment that would autonomously trace their Famously, US diplomat and IR-historian Kennan warned moves via sensors [116], predict their advances, and strat- in his Long Telegram from 1946 that “the greatest danger” egise to disrupt their supply chains. Such abilities of intel- in the confrontation between democracies and authoritarian ligent civil infrastructure also relate to civic participation. systems lies in the seduction to “allow ourselves to become Various forms of e-participation have been found effective like those with whom we are coping” and that democracies in improving the infrastructure of smart cities, particularly “must have courage and self-confidence to cling to our own regarding complex problems [117]. methods and conceptions of human society” [118]. 
Making an ethical and strategical case, this contribution argues that in confrontations with authoritarian regimes 6 Conclusion and discussion: winning involving military applications of AI, democracies should by choosing foresighted securitisation fight precisely by decidedly sticking to their values and implementing them as deeply into their war machinery as In the second Section, I criticised inclusive and universalis- possible. However, emphasising awareness regarding the ing approaches to AI ethics. In the following, I underlined strategical value of democratic open discourse and cogni- the differences rather than the similarities between demo- tive diversity also suggests that democratic openness has its cratic and authoritarian regimes. Accordingly, in the third limits. Since cognitive diversity is connected to higher per- Section, I criticised the LAWS debate because of its focus on formance and open public spheres likely allow for a greater human control regardless of societal circumstances. First, I degree of cognitive diversity to be manifested in data, data underlined that human control might profit from being cor - from democracies are likely more valuable than data from rected and enhanced by AI-driven systems trained to discern authoritarian societies with repressive public discourses between civilians and combatants and to assess proportion- characterised by distortion owed to censorship and ideol- ality. Second, I argued that, particularly regarding warfare, ogy. For instance, former Google CEO Eric Schmidt argued concrete human autonomy and responsibility cannot have in a recent interview that the dialogue-centred approach of the same ethical value in authoritarian and democratic soci- OpenAI and its success with ChatGPT would not be possible eties since authoritarian regimes provide few possibilities in authoritarian China with its restrictions on free speech to train and exercise such capacities and comply with the [96]. 
It is, therefore, hardly surprising that Beijing promotes duty to disobey illegal superior orders. Furthermore, in the the extraction of Western user data [119]. For instance, the fourth Section, I argued that, instead of focusing on human seemingly harmless data harvested by Beijing from TikTok’s control regardless of socio-political circumstances, democ- cognitively diverse teenage userbase might be used to coun- racies should reconcile the ethical value of autonomy with ter the democratic offset on an AI-driven battlefield. military applications of AI by linking military AI systems to Our contribution opens a broad horizon for future multi-layered modes of deliberation. This should also enable research in the fields of ethics, legal philosophy, and politi- combatants involved in human–machine teams to perform cal theory, for example regarding modes of civilian-based their duty derived from customary IHL to review orders, defence in smart cities and drone warfare and the difficul- understand the relationship between data and AI decision- ties to reconcile such participatory approaches of levée en making, and disobey if necessary. Furthermore, relating the masse with robust distinctions between civilians and com- military use of AI to deliberation should enhance cognitive batants. Additionally, the implementation of Landemorean diversity which is likely to constitute a strategic advantage. Open Democracy into the still hierarchical structures of the In the fifth Section, I argued that, following the concepts of military should be discussed further. Moreover, the relation- Open Democracy and Civilian-based defence, democracies ship between the duty to disobey illegal orders, command should work towards implementing modes of participation responsibility, and the AI-driven battlefield outlined here 1 3 AI and Ethics 11. 
Lilkov, D.:“Made in China: tackling digital authoritarianism,” could be deepened, including but not restricted to the imple- Wilfried Martens Centre, Brussels, Belgium, 2020. [Online]. mentation of automated limits regarding the execution of Available: https://www .mar tenscen tr e.eu/ publi cation/ made- in- illegal orders. In this context, it should also be discussed china- tackl ing- digit al- autho ritar ianism/ further which types of decisions within the OODA-loop 12. Glasius, M.: Authoritarian Practices in a global age, 1st edn. Oxford University Press, Oxford (2023). https:// doi. or g/ 10. can be legitimately automated. Finally and most urgently, 1093/ oso/ 97801 92862 655. 001. 0001 experimental psychologists, data scientists, and AI research- 13. Persily, N., Sun, M.:”The autocrat’s digital advantage,” pre- ers should empirically test my well-founded philosophical sented at the SciencesPo Annual Conference, Dec. 2022. speculations and contrast the findings of Beraja et al. [7 ] and [Online]. Available: https:// www. y outu be. com/ w atc h?v= LBf3Q z8liL I& ab_ chann el= Scien cesPo Filgueiras [8] by focusing on die ff rences regarding cognitive 14. Soros, G.:“Remarks delivered at the 2022 world economic diversity in authoritarian regimes and democracies in regard forum in Davos,” Davos, Davos, May 24, 2022. Accessed: to citizens and their representation in data (which are not May 28, 2022. [Online]. Available: https:// www. georg esoros. the same thing) and the correlation of these differences with com/ 2022/ 05/ 24/ remar ks- deliv ered- at- the- 2022- world- econo mic- forum- in- davos/ higher or lower levels of performance regarding AI. 15. Rudschies, C., Schneider, I., Simon, J.: Value pluralism in the AI ethics debate different actors different priorities. Irie Acknowledgements Many thanks to the two reviewers and Mireille (2021). https:// doi. org/ 10. 29173/ irie4 19 Hildebrandt for their excellent comments on this text and to Emilie van 16. 
van den Hoven, J. et al.: “The European approach to artificial den Hoven for some remarks regarding customary IHL. intelligence across geo-political models of digital governance,” EasyChair Preprint, vol. 8818, Sep. 2022, [Online]. Available: Funding Johannes Thumfart received funding from the European https:// wwww. easyc hair. org/ publi catio ns/ prepr int_ downl oad/ Union Horizon 2020 research programme under MSCA COFUND rDGkM grant agreement 101034352 with co-funding from the VUB-Industrial 17. Kant, I.: Toward perpetual peace: a philosophical sketch. In: Research Fund. Kleingeld, P. (ed.) Toward perpetual peace and other writings on politics, peace, and history, pp. 67–109. Yale University Press, New Haven (2006) 18. Rousseau, D.L., Gelpi, C., Reiter, D., Huth, P.K.: Assessing the dyadic nature of the democratic peace, 1918–88. Am. Political References Sci. Rev. 90(3), 514–533 (1996) 19. Cronin, A.K.: Cyber-mobilization: the new ‘Levée en Masse.’ 1. Chen, H.:“‘Artificial intelligence: disruptively changing the rules US Army War Coll. Q.: Parameter. (2006). https:// doi. org/ 10. of the game’ (人工智能: 颠覆性改变‘游戏规则’),” China Mili- 55540/ 0031- 1723. 2304 tary Online, Mar. 18, 2016. http:// www. 81. cn/ jskj/ 2016- 03/ 18/ 20. Everts, P.P.: Democracy and Military Force. Springer, London conte nt_ 69668 73_2. htm (accessed Sep. 13, 2022). Palgrave Macmillan UK (2002). https://doi. or g/10. 1057/ 97802 2. Wallace, R.: How AI founders on adversarial landscapes of fog 30509 863 and friction. J. of Def. Model. Simul. 19(3), 519–538 (2022). 21. Reiter, D., Stam, A.C.: Democracy and battlefield military https:// doi. org/ 10. 1177/ 15485 12920 962227 effectiveness. J. Conflict Resolut. 42(3), 259–277 (1998). 3. Yan, G.: The impact of artificial intelligence on hybrid warfare. https:// doi. org/ 10. 1177/ 00220 02798 04200 3003 Small Wars Insurgencies 31(4), 898–917 (2020). https://doi. or g/ 22. Fukuyama, F.:“A country of their own,” Apr. 18, 2022. 10. 1080/ 09592 318. 
2019. 16829 08 Accessed: Jun. 09, 2022. [Online]. Available: https:// www . 4. Johnson, J.: Artificial intelligence & future warfare: implica- forei gnaff airs. com/ artic les/ ukrai ne/ 2022- 04- 01/ franc is- fukuy tions for international security. Def. Secur. Anal. 35(2), 147–169 ama- liber alism- count ry (2019). https:// doi. org/ 10. 1080/ 14751 798. 2019. 16008 00 23. Snyder, T.:“Ukraine holds the future. The war between democ- 5. Kania, EB.:“Battlefield Singularity: artificial intelligence, mili- racy and nihilism,” Foreign Affairs, Oct. 2022, [Online]. Avail- tary revolution, and China’s future military power,” Center for a able: https:// www . f or ei gnaff airs. com/ ukr ai ne/ ukr ai ne- w ar - New American Security, Nov. 2017. democ racy- nihil ism- timot hy- snyder 6. Statista, “Most valuable private AI companies worldwide,” Apr. 24. Alder, K.: Engineering the revolution: arms and enlightenment 2021. https:// www. stati sta. com/ stati stics/ 10506 52/ world wide- in france, 1763–1815. The University of Chicago Press, Chi- arti ficial-intel lig ence- s tartup- unico r ns/ (accessed Sep. 15, 2022). cago, London (2010) 7. Beraja, M.,Yang, DY., Yuchtman, N.:“Data-intensive Innovation 25. Scharre, P.: Army of none: autonomous weapons and the future and the state:evidence from AI firms in China,” Review of eco- of war. W.W. Norton & Company, New York (2019) nomic studies (Preprint), Jan. 2022, [Online]. Available: http:// 26 Gentile, G., Shurkin, M., Evans, A.T., Grisé, M., Hvizda, M., david yyang. com/ pdfs/ ai_ draft. pdf Jensen, R.: A history of the third offset, 2014–2018. RAND 8. Filgueiras, F.: The politics of AI: democracy and authoritari- Corporation, Santa Monica, CA (2021) anism in developing countries. J. Inf. Technol. Politics (2022). 27. Yeung, K., Howes, A., Pogrebna, G.: AI Governance by human https:// doi. org/ 10. 1080/ 19331 681. 2021. 20165 43 rights-centered design deliberation and oversight an end to eth- 9. 
Glasius, M., Michaelsen, M.:Authoritarian practices in the digital ics washing. In: Dubber, M.D., Pasquale, F., Das, S. (eds.) The age| illiberal and authoritarian practices in the digital sphere — Oxford handbook of ethics of AI. Oxford handbooks series, pp. prologue. Int. J. Commun. 12(0), Art. no. 0. 2018. 77–106. Oxford University Press, New York, NY (2020) 10. Lamensch, M.,“Authoritarianism has been reinvented for the 28. Fjeld,J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, digital age,” Centre for international governance innovation, M.:“Principled artificial intelligence: mapping consensus in Jul. 09, 2021. https:// www. cigio nline. org/ artic les/ autho ritar ian- ethical and rights-based approaches to principles for AI,” Berk- ism-h as-b een-r einve nted-f or-t he-d igita l-a ge/ (accessed Dec. 29, man Klein Center for Internet & Society, 2020. Accessed: Sep. 2021). 1 3 AI and Ethics 15, 2022. [Online]. Available: https://dash. har var d.edu/ handle/ 48. Cho, E.:“The Social Credit System: Not Just Another Chinese 1/ 42160 420 Idiosyncrasy,” Journal of public and international affairs, no. 29 Floridi, L., Cowls, J.: (2021) “A unified framework of five prin- 5, 2020, Accessed: Oct. 16, 2021. [Online]. Available: https:// ciples for AI in society.” In: Floridi, L. (ed.) Ethics, governance, jpia.p rince ton.e du/n ews/s ocial-c redit-s ystem-n ot-j ust-a nothe r- and policies in artificial intelligence. Philosophical studies series, chine se- idios yncra sy vol. 144, pp. 5–18. Cham, Springer (2021). https:// doi. org/ 10. 49. Liang, F., Das, V., Kostyuk, N., Hussain, M.M.: Constructing 1007/ 978-3- 030- 81907-1 a data-driven society: china’s social credit system as a state 30. Hagendorff, T.: The ethics of AI ethics: an evaluation of guide- surveillance infrastructure: China’s social credit system as state lines. Mind. Mach. 30(1), 99–120 (2020). h t t p s : / / d o i . o rg / 1 0 . surveillance. Policy Internet 10(4), 415–453 (2018). 
https:// 1007/ s11023- 020- 09517-8doi. org/ 10. 1002/ poi3. 183 31. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI eth- 50. Dirks, E.:“Mass DNA collection in the tibet autonomous region ics guidelines. Nat. Mach. Intell. 1(9), 389–399 (2019). https:// from 2016–2022,” citizen lab, university of Toronto, Sep. doi. org/ 10. 1038/ s42256- 019- 0088-2 2022. Accessed: Sep. 16, 2022. [Online]. Available: https:// 32. “Losing humanity: the case against killer Robots,” Human rights citiz enlab. ca/ 2022/ 09/ mass- dna- colle ction- in- the- tibet- auton watch, Nov. 2012. Accessed: Sep. 20, 2022. [Online]. Available: omous- region/ https:// www. hrw. org/ report/ 2012/ 11/ 19/ losing- human ity/ case- 51. Cong, W., Thumfart, J.: A Chinese precursor to the digital sov- again st- killer- robots ereignty debate digital anti-colonialism and authoritarianism 33. Rawls, J.: The law of peoples: with, The idea of public reason from the post-cold war era to the Tunis Agenda. Global Studies revisited. Harvard University Press, Cambridge, Mass (1999) Quarterly (2022). https:// doi. org/ 10. 1093/ isagsq/ ksac0 59 34. Cook, A.: Taming killer robots. Giving meaning to the ‘mean- 52. Hine, E., Floridi, L.: Artificial intelligence with American values ingful human control’ standard for lethal autonomous weapon and Chinese characteristics: a comparative analysis of American systems, vol 1. JAG School Paper (2019) and Chinese governmental AI policies. AI Soc. (2022). https:// 35. Grimal, F., Pollard, M.: The duty to take precautions in hostili-doi. org/ 10. 1007/ s00146- 022- 01499-8 ties, and the disobeying of orders: should robots refuse? Fordham 53. Floridi, L., et al.: AI4people—an ethical framework for a good ai Int. Law J. 44, 671–734 (2021) society: opportunities, risks, principles, and recommendations. 36. Arkin, RC., Ulam, P., B. Duncan, B.: “An Ethical governor for Mind. Mach. 28(4), 689–707 (2018). https:// doi. org/ 10. 
1007/ constraining lethal action in an autonomous system,” Georgia s11023- 018- 9482-5 Institute of Technology, GVU Center, 2009. [Online]. Available: 54. Erie, MS., Streinz, T.:“The Beijing effect: China’s digital silk https://smar tec h.g atech.edu/ bits tr eam/handle/ 1853/ 31465/ 09- 02. road as transnational data governance,” New York University pdf journal of international law and politics, vol. 54, no. 1, Fall 2021, 37. Habermas, J.: The structural transformation of the public sphere: [Online]. Available: https:// deliv erypdf. ssrn. com/ deliv ery. php? an inquiry into a category of bourgeois society. In: Studies con- ID= 88411 20310 01096 10609 30171 16020 00702 40010 24032 temporary German social thought. MIT press, Cambridge (1992)00704 90530 05122 12010 20851 19088 11208 71211 24025 05611 38. Landemore, H.: Open democracy: reinventing popular rule for 51140 05124 12002 71011 00097 10809 80230 39056 02304 00201 the twenty-first century. Princeton University Press, Princeton 18000 09800 30000 87118 09300 80280 91092 00900 60961 19123 (2020)11902 20041 18070 11507 20120 06022 02502 81031 02114 07811 39. Rawls, J.: A theory of justice, Rev Belknap Press of Harvard 90651 19& EXT= pdf& INDEX= TRUE University Press, Cambridge, Mass (1999) 55. Bradford, A.: The brussels effect: how the European Union rules 40. Weymark, J.A.: Cognitive diversity, binary decisions, and epis- the world. oxford university press, new York, NY (2020) temic democracy. Episteme 12(4), 497–511 (2015). https:// doi. 56. van Maanen, G.: AI ethics, ethics washing, and the need to politi- org/ 10. 1017/ epi. 2015. 34 cize data ethics. DISO 1(2), 9 (2022). https:// doi. org/ 10. 1007/ 41. Hildebrandt, M.: Privacy as protection of the incomputable self: s44206- 022- 00013-3 from agnostic to agonistic machine learning. Theor. Inquiries 57. O’Mara, M.: The code: silicon valley and the remaking of Amer- Law 20(1), 83–121 (2019). https://doi. or g/10. 1515/ til- 2019- 0004 ica. 
Penguin Press, New York (2019) 42 Sharp, G.: Making Europe unconquerable: the potential of 58. Reichberg, G.M., Syse, H.: Applying AI on the battlefield: the civilian-based deterrence and defence: Ballinger Pub Co. Mass, ethical debates. In: von Braun, J., Archer, M.S., Reichberg, Cambridge (1985) G.M., SáncheSzorondo, M. (eds.) Robotics, AI, and Humanity, 43. Asmolov, G.: The transformation of participatory warfare: pp. 147–159. Springer, Cham (2021). https:// doi. org/ 10. 1007/ the role of narratives in connective mobilization in the Rus-978-3- 030- 54173-6_ 12 sia-Ukraine war. Digi War (2022). https:// doi. or g/ 10. 1057/ 59. Swett, B.A., Hahn, E.N., Llorens, A.J.: Designing robots for s42984- 022- 00054-5 the battlefield: state of the art. In: von Braun, J., Archer, M.S., 44. Borenstein, J., Grodzinsky, F.S., Howard, A., Miller, K.W., Wolf, Reichberg, G.M., SáncheSzorondo, M. (eds.) Robotics AI and M.J.: AI ethics: a long history and a recent burst of attention. Humanity, pp. 131–146. Springer, Cham (2021). https://doi. or g/ Computer 54(1), 96–102 (2021). https:// doi. org/ 10. 1109/ MC. 10. 1007/ 978-3- 030- 54173-6_ 11 2020. 30349 50 60. de Vries, B.: Individual criminal responsibility for autonomous 45 Yang, G.-Z., et al.: The grand challenges of Science Robotics. weapons systems in international criminal law. In: International Sci. Robot. 3(14), eaar7650 (2018). https://doi. or g/10. 1126/ scir o humanitarian law series, vol. 65. Brill Nijhoff, Leiden, Boston botics. aar76 50 (2023) 46. Floridi, L.: Introduction – the importance of an ethics-first 61. Scharre, P.: Four battlegrounds: power in the age of artificial approach to the development of AI. In: Floridi, L. (ed.) Ethics, intelligence, 1st edn. W.W. Norton & Company, New York governance, and policies in artificial intelligence. Philosophical (2023) Studies Series, vol. 144, pp. 1–4. Springer International Publish- 62. Scholz, J., Galliott, J.: The Case for Ethical AI in the Military. 
In: ing, Cham (2021). https://doi. or g/10. 1007/ 978-3- 030- 81907-1_1 Dubber, M.D., Pasquale, F., Das, S. (eds.) The Oxford handbook 47. Leibold, J.: Surveillance in China’s Xinjiang Region: ethnic of AI. Oxford handbooks series, pp. 685–702. Oxford university press, New York, NY (2020) sorting, coercion, and inducement. J. Contemp. China 29(121), 46–60 (2020). https:// doi. org/ 10. 1080/ 10670 564. 2019. 16215 29 1 3 AI and Ethics 63. Galtung, J.: Human Rights: from the state system to global 83. Foucault, M.: Madness and civilization: a history of insanity in domestic policy. In: Galtung, J., Fischer, D. (eds.) SpringerBriefs the age of reason. Vintage house, Random House, New York on pioneers in science and practice, vol. 5, pp. 157–166. Springer (1988) Berlin, Heidelberg, Berlin Heidelberg (2013). https://doi. or g/10. 84. Lyotard, JF.: The differend: phrases in dispute. In: Theory and 1007/ 978-3- 642- 32481-9 history of literature, Vol 46. University of Minnesota Press, Min- 64. Derviş, K., Ocampo, JA.:“Will Ukraine’s tragedy spur UN secu- neapolis. 1988. rity council reform?,” Brookings, Mar. 03, 2022. https:// www. 85. Douzinas, C.: The end of human rights: critical legal thought at brookings. edu/ opini ons/ will- ukr aines- tr agedy -spur -un- secur ity - the turn of the century. Oxford ; Portland, Or: Hart Pub, 2000. counc il- reform/ (accessed Sep. 20, 2022). 86. Mouffe, C.: Which world order: cosmopolitan or multipolar? 65. Borch, C.: High-frequency trading, algorithmic finance and the Ethical Perspect. 4, 453–467 (2008). https:// doi. org/ 10. 2143/ flash crash: reflections on eventalization. Econ. Soc. 45(3–4), EP. 15.4. 20343 91 350–378 (2016). https:// doi. org/ 10. 1080/ 03085 147. 2016. 12630 87. Mills, C.W.: Rawls on race/race in rawls. South. J. Philosophy 34 47(S1), 161–184 (2009). https:// doi. org/ 10. 1111/j. 2041- 6962. 66. “The zircon: how much of a threat does Russia’s Hypersonic mis-2009. 
tb001 47.x sile pose?,” Royal united services institute, Mar. 31, 2023. https:// 88. Calhoun, CJ., Ed.: Habermas and the public sphere, Nachdr. In: www. rusi. orght tps:// www. rusi. org (accessed Apr. 03, 2023). Studies in contemporary German social thought. Cambridge, 67. Bartneck, C., Lütge, C., Wagner, A., Welsh, S.: An Introduction Mass.: MIT Press, 2011. to ethics in robotics and AI. In: SpringerBriefs in Ethics, pp. 89. Habermas, J.: Reflections and hypotheses on a further structural 3678–3786. Springer, Cham (2021) transformation of the political public sphere. Theory Cult. Soc. 68. How AI is driving a future of autonomous warfare | DW Analysis, 39(4), 145–171 (2022). h t t p s : / / d o i . o r g / 1 0 . 1 1 7 7 / 0 2 6 3 2 7 6 4 2 2 (Jun. 25, 2021). Accessed: Oct. 07, 2022. [Online Video]. Avail-11123 41 able: https:// www. youtu be. com/ watch?v= NpwHs zy7bMk 90. Landemore, H.: Open democracy and digital technologies. In: 69. Atherton, K.: “Loitering munitions preview the autonomous Bernholz, L., Landemore, H., Reich, R. (eds.) Digital technology future of warfare,” Brookings, Aug. 04, 2021. https:// www . and democratic theory, pp. 62–89. University of Chicago Press brookings. edu/ tec hstr eam/loite r ing-munit ions- pr evie w-t he-aut on (2021) omous- future- of- warfa re/ (accessed Apr. 03, 2023). 91. Knight, W.:“The Dark Secret at the Heart of AI,” MIT Tech- 70. Heyns, C.: Autonomous weapons systems: living a dignified life nology Review, Apr. 2017, Accessed: Sep. 30, 2022. [Online]. and dying a dignified death. In: Bhuta, N., Beck, S., Geiβ, R., Available: https://www .tec hnology r eview.com/ 2017/ 04/ 11/ 5113/ Liu, H.-Y., Kreβ, C. (eds.) Autonomous weapons systems, 1st the- dark- secret- at- the- heart- of- ai/ edn., pp. 3–20. Cambridge University Press (2016). https:// doi. 92. Pasquale F: (2015) The black box society the secret algorithms org/ 10. 1017/ CBO97 81316 597873. 001 that control money and information. Harvard University Press, 71. 
H, Arendt.: Eichmann in Jerusalem: a report on the banality of Cambridge evil. in Penguin classics. New York, N.Y: Penguin Books, 2006. 93. Mouffe, C.: On the political. In: Thinking in action. Routledge, 72. Bruneau, E., Kteily, N.: The enemy as animal: symmetric New York, London, 2005 dehumanization during asymmetric warfare. PLoS ONE 12(7), 94. Hansen, L., Nissenbaum, H.: Digital disaster, cyber security, e0181422 (2017). https://d oi.or g/10 .13 71/j ournal .p one.01 8142 2 and the copenhagen school. Int. Stud. Quart. 53(4), 1155–1175 73. Dinstein Y: The defence of “obedience to superior orders” in (2009) international law, Repr. ed., with A new postscript preface. 95. Landemore, H.: Democratic reason: politics, collective intel- Oxford, UK: Oxford University Press, 2012 ligence, and the rule of the many. Princeton University Press, 74 Allan Williamson, J.: Some considerations on command respon- Princeton; Oxford (2013) sibility and criminal liability. Int. Rev. Red Cross. 90(870), 303– 96. Fmr. Google CEO eric schmidt on the consequences of an A.I. 317 (2008). https:// doi. org/ 10. 1017/ S1816 38310 80003 49 revolution, (Mar. 23, 2023). Accessed: Mar. 29, 2023. [Online 75. Murdough, R.E.: I won’t participate in an illegal war: military Video]. Available: https:// www. youtu be. com/ watch?v= Sg3Ec objectors, the nuremberg defense, and the obligation to refuse hbCcA0 illegal orders. Army Law 4, 4–14 (2010) 97. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: “Machine bias,” 76. “Practice relating to rule 155. Defence of superior orders,” Inter- ProPublica, May 2016. Accessed: May 28, 2022. [Online]. Avail- national Humanitarian Law Databases. https://ihl- dat abases. icr c. able: https://www .pr opublica. or g/ar ticle/ mac hine- bias- r isk-asses org/ en/ custo mary- ihl/ v2/ rule1 55 (accessed Feb. 16, 2023).sments-in- cr iminal- sente ncing? t oken=T iqCeZIj4u LbXl9 1e3wM 77. 
Diver, L.: Law as a user: design, affordance, and the technologi-2Pnmn WbCVO vS cal mediation of norms. SCRIPT-ed 15(1), 4–41 (2018). https:// 98. Barocas, S., Selbst, A.D.: Big data’s disparate impact. Calif. Law doi. org/ 10. 2966/ scrip. 150118.4 Rev. 104, 671–732 (2016). https:// doi. org/ 10. 15779/ Z38BG 31 78. Morozov, E.: To save everything, click here: the folly of techno- 99. Mattu, J., Larson, J.,Angwin, L., Kirchner,S.: “How we analyzed logical solutionism, Paperback 1. publ. New York, NY: PublicAf- the COMPAS recidivism algorithm,” ProPublica, May 2016. fairs, 2014. Accessed: May 28, 2022. [Online]. Available: https://www .pr opu 79. Vyas, D., Chisalita, C.M., Dix, A.: Organizational affordances: a blica.or g/ar ticle/ ho w-w e-anal yzed- t he-com pas-r ecidivism- algor structuration theory approach to affordances. Interact. Comput. ithm? token= BqO_ ITYNA KmQwh j7daS usnn7 aJDGa TWE (2016). https:// doi. org/ 10. 1093/ iwc/ iww008 100. Casselman, B.:“The Legend of Abraham Wald,” American Math- 80. Bode, I., Huelss, H.: Autonomous weapons systems and interna- ematical Society, Jun. 2016, Accessed: Feb. 22, 2023. [Online]. tional norms. McGill-Queen’s University Press, Montreal King- Available: http:// www. ams. org/ publi coutr each/ featu re- column/ ston London Chicago (2022)fc- 2016- 06 81. Bostrom, N.: Superintelligence: paths, dangers, strategies, 1st 101. Hong, L., Page, S.E.: Groups of diverse problem solvers can edn. Oxford University Press, Oxford (2014) outperform groups of high-ability problem solvers. Proc. Natl. 82. Derrida, J.: Force of law the mystical foundation of authority. Acad. Sci. U.S.A. 101(46), 16385–16389 (2004). https://doi. or g/ In: Cornell, D., Rosenfeld, M., Carlson, D., Benjamin, N. (eds.) 10. 1073/ pnas. 04037 23101 Deconstruction and the possibility of justice. Routledge, New York (1992) 1 3 AI and Ethics 102. Nowak, R.: Foundations of strategic flexibility: focus on cogni- 113. Asan, H.: Data security. 
In: Artificial intelligence perspective tive diversity and structural empowerment. MRR 45(2), 217–235 for smart cities, 1st edn., pp. 253–276. CRC Press, Boca Raton (2022). https:// doi. org/ 10. 1108/ MRR- 02- 2021- 0130 (2022). https:// doi. org/ 10. 1201/ 97810 03230 151- 12 103. Slapakova, L., et al.: Leveraging diversity for military effec- 114. Kovalsky, M., Ross, R.J., Lindsay, G.: Contesting key terrain: tiveness: Diversity, inclusion and belonging in the UK and US urban conflict in smart cities of the future. Cyber Def. Rev. 5 (3), Armed Forces. RAND Corporation, Santa Monica CA (2022). 133–150 (2020) https:// doi. org/ 10. 7249/ RRA10 26-1 115. Feder-Levy, E., Blumenfeld-Liebertal, E., Portugali, J.: The well- 104. Burgess, J.P.: The ethical subject of security: geopolitical reason informed city: A decentralized, bottom-up model for a smart city and the threat against Europe. Routledge, Milton Park, Abingdon, service using information and self-organization. In: 2016 IEEE Oxon New York (2011) international smart cities conference (ISC2), Trento, Italy: IEEE, 105. Sharp, G.: Civilian-based defense . A post-military weapons sys- Sep. 2016, pp. 1–4. doi: https:// doi. or g/ 10. 1109/ ISC2. 2016. tem. Princeton University Press, Princeton (1990)75807 67. 106. “2022 protests in Russian-occupied Ukraine,” Wikipedia. Sep. 116. Enlund, D., Harrison, K., Ringdahl, R., Börütecene, A., Löw- 11, 2022. Accessed: Sep. 23, 2022. [Online]. Available: https:// gren, J., Angelakis, V.: The role of sensors in the production of en.wikip edia. or g/w/inde x.php? title= 2022_ pr otes ts_in_ R ussian- smart city spaces. Big Data Soc. 9(2), 205395172211102 (2022). occup ied_ Ukrai ne& oldid= 11097 42331https:// doi. org/ 10. 1177/ 20539 51722 11102 18 107. M. Srivastava, “Ukraine’s hackers: an ex-spook, a 117. 
Allen, B., Tamindael, L.E., Bickerton, S.H., Cho, W.: Does Starlink and ‘owning’ Russia,” Financial Times, citizen coproduction lead to better urban services in smart cities Sep. 04, 2022. [Online]. Available: ft.com/content/ projects? An empirical study on e-participation in a mobile big f4d25ba0–545f-4fad-9d91–5564b4a31d77 data platform”. Gov. Inf. Q. 37(1), 1012 (2020). https:// doi. org/ 108. Zegart,A.:”Open Secrets,” Foreign Affairs, no. January/February 10. 1016/j. giq. 2019. 101412 2023, Dec. 20, 2022. Accessed: Feb. 20, 2023. [Online]. Avail- 118. “George Kennan’s ‘Long Telegram,’” Feb. 22, 1946. https:// able: https:// www. forei gnaff airs. com/ world/ open- secre ts- ukrai nsarc hive2. gwu. edu/ coldw ar/ docum ents/ episo de-1/ kennan. htm ne- intel ligen ce- revol ution- amy- zegart (accessed Feb. 21, 2023). 109. Panella, C.:”Starlink is key to Ukrainian operations, but the Rus- 119. Kokas, A.: Trafficking data: how china Is winning the battle for sians ‘will find you’ if you use it too long, soldier says,” Busi- digital sovereignty. Oxford University Press, New York (2022). ness Insider, Mar. 24, 2023. Accessed: Mar. 29, 2023. [Online]. https:// doi. org/ 10. 1093/ oso/ 97801 97620 502. 001. 0001 Available: https:// www. busin essin sider. com/ starl ink- key- ukrai nian- opera tions- used- too- long- russi ans- will- find- 2023-3 Publisher's Note Springer Nature remains neutral with regard to 110. Thumfart, J.: Public and private just wars: distributed cyber jurisdictional claims in published maps and institutional affiliations. deterrence based on Vitoria and Grotius. IPR (2020). https:// doi. org/ 10. 14763/ 2020.3. 1500 Springer Nature or its licensor (e.g. a society or other partner) holds 111. Leclercq, E.M., Rijshouwer, E.A.: Enabling citizens’ Right exclusive rights to this article under a publishing agreement with the to the smart city through the co-creation of digital platforms. 
author(s) or other rightsholder(s); author self-archiving of the accepted Urban Transform 4(1), 2 (2022). https:// doi. or g/ 10. 1186/ manuscript version of this article is solely governed by the terms of s42854- 022- 00030-y such publishing agreement and applicable law. 112. Baran, P.: Some perspectives on networks - past, present and future. Inf. Process. 77, 459–461 (1977) 1 3
AI and Ethics – Springer Journals
Published: May 2, 2023
Keywords: Cognitive diversity; Command responsibility; Digital authoritarianism; Duty to disobey; LAWS; Participatory warfare