BCI, PERSONAL IDENTITY AND AUTONOMY. ULYSSES ON THE SHIP OF THESEUS

VANESA MORENTE PARRA1

Abstract: This article analyzes the ethical and legal challenges of Brain-Computer Interfaces (BCIs), focusing on their impact on personal identity and autonomy. It examines the transformative potential of neurotechnology, especially write-in BCIs that intervene directly in brain activity. Drawing from Ortega y Gasset’s zoon technikon and Parfit’s theory of personal identity, it reflects on how these systems may blur agency and self-continuity. The text also addresses the EU regulatory framework and the theory of neurorights. Using Ulysses on the Ship of Theseus as metaphor, it advocates preventive governance based on ethics, human rights, and informed, autonomous consent.

Keywords: Brain Computer Interfaces, Freedom of Thought, Mental Privacy, AI Act, Neurorights.

Summary: 1. Approach to the phenomenon of brain-computer interfaces or BCI. 1.1. What do we talk about when we talk about BCI? 1.2. Types of BCI and Current Practical Applications. 2. Ethical and Legal Framework Applicable to BCI Systems. 2.1. Legal Regulation of BCIs in the European Union. 2.2. Theory of Neurorights: Is Their Recognition Necessary? 3. Ethical and Legal Challenges of BCI. Ulysses on the Ship of Theseus. 3.1. The Interface Market: Cognitive Enhancement, Autonomy of Will, and Personal Identity. 3.2. The State as a “Behaviour Enhancer” Through BCIs. 4. Some Conclusions.

1. Approach to the phenomenon of brain-computer interfaces or BCI

This article addresses a central research question: how do write-in BCIs, as opposed to read-out BCIs, challenge autonomy, personal identity, and freedom of thought? While read-out BCIs extract information from neural activity, write-in BCIs intervene in brain processes, raising distinctive ethical and legal concerns. The contribution of this work is to situate these risks within a legal-philosophical framework, showing that informed consent alone may be insufficient to safeguard individual agency. The analysis unfolds in four parts: (1) theoretical and conceptual foundations, (2) taxonomy of BCIs, (3) Ulysses and the Ship of Theseus as a metaphor for identity alteration, and (4) proposals for preventive governance.

José Ortega y Gasset, in Meditations on Technique, states that there is no man without technique: our “technical acts” are not a supervening phenomenon in the history of humanity; rather, technique is a constitutive element of human nature itself. It would be as if to say, imitating Aristotle’s zoon politikon, that in Ortega’s view the human being is a zoon technikon. “Technical acts” are those to which the human being devotes their efforts with the primary aim of inventing and then executing a plan of action, one that allows them both to satisfy basic needs with minimal effort –to change our circumstances– and to create new possibilities by producing objects that do not exist in nature. Man and technique are so deeply intertwined that one could argue that the human being begins with the development of technique (Ortega y Gasset, 1965, pp. 32–33).

Perhaps BCI systems (Brain-Computer Interfaces) best represent, more than any other “technical act,” the hybridization between the human being and technique, as they constitute the highest expression of a symbiotic relationship between the human, the machine, and the virtual world.2 This is also the idea expressed by Javier Echeverría when he refers to “techno-persons” and their life in the third environment. The “person” has until now lived in two environments, the first being the natural environment and the second the political one, which, in any case, is artificial –except for Aristotle–.3 However, for some time now, the person has become a “techno-person” because their life unfolds, in addition to the natural and political-social environments, in the digital environment or third environment. “Techno-personality” is divided into three specific types: the first type is embodied by the human person who develops much of their life in the third environment; the second type of techno-person would be technological artifacts and software, that is, purely digital personalities; and finally, the techno-persons of fiction, such as those appearing in the film Blade Runner, i.e., fictional characters. In turn, the first type of techno-person –the category to which we humans belong– may be graded according to the time and space –techno-time and techno-space– of their life dedicated to the third environment to the detriment of the first two, that is, how much of their vital time unfolds in the digital environment and how much in the natural and social or political one. According to the thesis of Echeverría and Almendros, if in the not-so-distant future the interface market becomes popular due to its high immersive capacity in virtual realities or parallel digital worlds, it is very likely that the human being will fully become a techno-person and cease to be a “mere person” (Echeverría & Almendros, 2023, pp. 89–91 and 105).
Thus, the prediction of the science fiction novel Ready Player One by Ernest Cline, published in 2011 and set in 2044, would be fulfilled.

For now, we can only affirm that BCI systems are the most relevant techniques among the so-called “neurotechnologies,” which in turn refer to the devices and procedures used to access, control, investigate, evaluate, manipulate and/or emulate the structure and function of the neural systems of animals or human beings.4 Currently, human brain-machine interfaces (computers, robots or any other device) are being developed more vigorously in the healthcare field, due to their therapeutic potential, especially in relation to certain types of disability, neurodegenerative and psychiatric diseases.5 However, BCI systems also have an undeniable neuro-perfective or meliorative potential, which is of course being explored in the entertainment and “human enhancement” markets.6

A priori, the use of BCI systems raises two questions on which this article focuses. The first is that BCIs can significantly affect the “autonomy of the will.” Although the free and informed expression of the will must be the legitimating cause of interface use, a possible abuse of BCIs could even bring about the loss of personal autonomy. The second question is what kind of use can be made of BCIs, both in the public and private spheres, so that such use is legitimate and does not lead to abuse that may violate the right to the free development of personality and, consequently, the right to build one’s own personal identity.7

1.1. What do we talk about when we talk about BCI?

In September 2024, the EU Council published a report titled “From Vision to Reality: Promises and Risks of Brain Computer Interface,”8 which defined BCI systems as the technique encompassing a series of neurotechnologies that enable direct communication between the human brain and an external device. BCIs detect and interpret the brain’s electrical signals and convert neural impulses into execution commands that control connected devices solely through thought, i.e., without any need for movement.9 Therefore, it is not an exaggeration to state that the advancement of BCIs represents one of the most transformative innovations of the 21st century, opening new possibilities for interaction between humans and technology. Two of the sectors in which BCIs are expected to develop significantly are the healthcare sector—with concrete applications in medical rehabilitation—and the human enhancement sector, particularly cognitive enhancement. In fact, a BCI may be the only viable communication system for users with severe disabilities who cannot speak or use keyboards, mice, or other traditional interfaces. For this reason, their initial applications focused on assisting people with severe disabilities.10 However, their use has now extended toward human enhancement in healthy individuals, as will be analyzed later.

Although this technique has expanded rapidly in the current century, the first steps towards decoding brain activity through possible connections between the human brain and machines were already taken in the 20th century. In fact, we have been able to detect and transcribe the brain’s electrical activity for almost a century: in 1929, a German psychiatrist named Hans Berger developed a system capable of recording the brain’s bioelectrical activity, now known as the electroencephalogram (EEG). Long before Berger’s discovery, other researchers –Fritsch, Hitzig, Caton, and Cybulski, among others– had already demonstrated the electrophysiological nature of the brain through studies with animals. Thanks to the intelligent use of the technology available at the time –string galvanometers, dual-coil galvanometers, and oscilloscopes– Berger managed to transcribe brain activity by placing electrodes on the scalp, thus creating the electroencephalogram technique (La Vaque, 1999, p. 3/9).

Half a century later, in 1977, a Belgian researcher named Jacques Vidal, working at UCLA, coined the term Brain-Computer Interface to describe the technique of decoding brain signals in a dialogue between humans and machines (Lotte et al., 2018, p. 2).11 However, it was not until the dawn of the 21st century that research on the brain through BCIs gained real academic importance, evidenced by the first International Congress held in the U.S. in 1999. It was at this international meeting that an academic definition of BCIs was adopted, referring to them as systems that allow the brain to interact with the environment without the intervention of the normal mechanisms of peripheral nerves and muscles. This is achieved by monitoring brain activity in order to translate the user’s intentions into commands for a device.12

From the 1990s to the present, the field of BCI research has expanded significantly. In 2013, the journal Brain-Computer Interfaces was established, and in 2015, the International BCI Society was founded to promote research and the development of technologies that enable individuals to interact with the world through brain signals (Lotte et al., 2018, pp. 3–4). The most recent World Congress on BCI was the 11th International Meeting, held in 2023, which brought together over 500 participants from around the world. This intensification of academic and research activity is evidence of the extraordinary potential that BCIs hold –and not only in the medical and therapeutic fields.

Currently, many technologists agree that in the not-too-distant future, interface applications will be highly diverse. Among the future applications envisioned by engineers and technologists is thought decoding, which could be undeniably useful in criminal law, where determining whether a suspected offender is telling the truth would be very valuable.13 Another future possibility is the establishment of Brain-Brain Interfaces (BBI), a recent field of scientific exploration with enormous potential. Lastly, perhaps the most outlandish possibility proposed is that of technologist Ray Kurzweil, who asserts that in the near future –in fact, he places his predictions in our present, between 2020 and 2030– our minds could be “scanned” and “downloaded” into any software (Kurzweil, 2005, p. 200). This thesis is rooted in the theory of “dataism,” which posits that our minds are composed solely of information that can be translated into binary code –that is, it can be computerized– so it could be downloaded into any digital storage repository.14

There is no doubt that one of the most technically interesting applications currently being developed is the possible connection between the brain and generative AI, known as Artificial Brain Technology (ABT).15 The most imaginative technologists have already predicted a rather disturbing use of this system, as they understand that an interface with generative AI could be used to artificially reprogram human brains. In this case, the brain would be the passive element or recipient of commands originating from the generative AI, which would act as the active and creative element. However, not all predictions are so disturbing; in fact, some uses have a very benevolent purpose, such as interface applications in the fields of automotive technology and home automation, which could be of great help in the daily lives of people with disabilities (Maiseli et al., 2023, p. 4/16).

It is true that the application of BCIs is very promising for technologists and, above all, for individuals who may benefit from them to gain greater autonomy in their daily lives. However, academia cannot overlook the fact that all practical applications of BCIs have undeniable ethical and legal consequences that must be analysed. Therefore, some of these ethical, social, and legal consequences constitute the object of critical analysis in this article.

1.2. Types of BCI and Current Practical Applications

Considering the above, we are now able to offer a canonical definition of BCI, understood as those technological systems that enable direct communication between the human brain and an external device, such as computers, prostheses or robots. Once the electrical signals generated by neuronal activity are detected, they are processed and translated into commands intelligible to the connected devices. This technology is based on neuroscientific principles and uses tools such as electroencephalography (EEG), magnetoencephalography (MEG), and electrocorticography (ECoG). Strictly speaking, a BCI is a technique that measures the activity of the central nervous system (hereinafter CNS) and converts it into artificial output that replaces, restores, enhances or supplements the natural output of the CNS and, therefore, alters the ongoing interactions between the CNS and its external or internal environment.16

Regardless of whether BCIs are defined in a more restrictive or extensive manner, they must always include three essential components (Maiseli et al., 2023, pp. 2–3/16):

A) Acquisition of the electrical signal emitted by the brain. This first stage consists of capturing electrophysiological signals that represent specific brain activities. The capture of brain electrical signals requires the extracranial placement –this being the most common system– of an electronic device with electrodes, as used in electroencephalography (EEG) and magnetoencephalography (MEG). Neuroimaging techniques (fMRI) are also employed. In all cases, the captured signals are subjected to a process of filtering (elimination of “noise”), amplification, and digitization.

B) Processing of the captured signals. In this stage, the BCI extracts critical electrophysiological features from the acquired signals to define brain activity and, consequently, decode the user’s intention.

C) Practical applications. The final stage consists of translating the electrical signals into commands capable of controlling external devices, such as a robotic arm.

Although there are different types of BCI, they all follow a standard functional pattern. Any BCI system must include “sensors” to record the brain’s electrical activity; it also includes an algorithm that functions as a “decoder,” transforming the brain’s electrical activity into a command. This command or signal is sent to an “actuator,” which is the third and final element of the BCI and may consist of a simple computer or a robotic arm (Monasterio Astobiza et al., 2019, p. 31). Moreover, as has already been noted, we should now be speaking not only of BCIs, but also of BBIs (Brain-Brain Interfaces) and CCIs (Computer-Computer Interfaces) (Coin et al., 2020, p. 2/9), and of course, the already mentioned ABT systems (Lyreskog et al., 2023, p. 13–14). At first glance, ABTs were born with a benevolent purpose: to assist individuals in their personal development, not only by decoding electrical signals, but also by interpreting conscious and unconscious thoughts that turn thinking into action. However, as will be analysed later, ABTs have such a transformative potential that they require deep and measured ethical and legal reflection.
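The sensor, decoder and actuator pattern described above can be illustrated with a minimal sketch. Everything in it is invented for illustration –the simulated samples, the moving-average filter, the 0.5 threshold and the command names– and merely mirrors the three-stage structure; real BCIs rely on dedicated signal-processing pipelines and trained classifiers.

```python
# Minimal illustrative sketch of the standard BCI pipeline described above:
# sensor (signal acquisition) -> decoder (feature extraction + translation)
# -> actuator (command execution). All values are invented placeholders.

def acquire(raw_samples):
    """Stage A: capture and clean the signal. A crude moving-average
    filter stands in for real filtering, amplification and digitisation."""
    window = 3
    return [sum(raw_samples[i:i + window]) / window
            for i in range(len(raw_samples) - window + 1)]

def decode(filtered):
    """Stage B: extract a feature (here, mean amplitude) and map it to an
    intention. The 0.5 threshold is an arbitrary placeholder."""
    mean_amplitude = sum(filtered) / len(filtered)
    return "MOVE_ARM" if mean_amplitude > 0.5 else "REST"

def actuate(command):
    """Stage C: forward the decoded command to an external device
    (here, simply a string describing the resulting action)."""
    actions = {"MOVE_ARM": "robotic arm moves", "REST": "no action"}
    return actions[command]

signal = [0.9, 1.1, 0.8, 1.0, 0.7, 0.95]   # simulated EEG-like samples
print(actuate(decode(acquire(signal))))     # -> robotic arm moves
```

The point of the sketch is structural: whatever the device, a BCI always chains these three stages, and each stage can fail or be abused independently, which is why the later ethical analysis distinguishes where in the chain an intervention occurs.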

The different purpose or aim of each interface system gives rise to another classification of BCIs (Brunner et al., 2015, pp. 2-3/10).

Restorative BCI. It consists of recovering a lost or diminished function. It is intended for cases involving the loss of functionality in a limb due to an accident or an acquired disability. From the brain to the limb (command direction).

Replacement BCI. These are neuroprostheses (prostheses controlled mentally). Applied to individuals who have lost a limb due to an accident or acquired disability.

Meliorative or perfective BCI (for example, enhancing immersive experience in a game; virtual reality). In this case, the application has the purpose of “enhancing” a standard functionality. It is not so much about “collecting” information from brain activity as about manipulating that brain activity.

Supplementary or augmentative BCI (augmented reality glasses). This refers to enhancement supported by a device, whether augmented reality glasses or a sensor-equipped suit that increases the immersive capacity of the BCI system.

Rehabilitative BCI (recovering functionality of limbs or senses). From the outside in, the patient/user receives an output -a command- that causes them to move the limb over which they had lost mental control.

BCI as a research tool (decoding brain activity).

All the BCIs mentioned can, in turn, be classified according to whether the method used is invasive or not (Monasterio Astobiza et al., 2019, p. 32). The EU Council, in its September 2024 report cited above, follows this classification, although it adds an intermediate category:17

Invasive BCIs: Invasive BCIs require the surgical implantation of electrodes directly into brain tissue or on its surface. This allows high-resolution detection of neural signals, which is crucial for applications where precision is essential, such as controlling advanced robotic prostheses or treating severe neurological disorders. Examples include devices that allow people with paralysis to control a mechanical arm or systems that restore vision in blind patients. Because they capture the brain’s electrical signals with high resolution, these interfaces are technically more precise, but they also entail significant health risks, such as infections, immune rejection, and brain damage.

Partially invasive BCIs: These are implants that require minimal surgery or rely on pre-existing medical technologies, such as stents and catheters, to insert electrodes very close to the brain, although without the need for open surgery.

Non-invasive BCIs: In this case, the BCI system uses external sensors to capture electrical signals, although this process also captures a great deal of background noise, resulting in less precise outcomes.

The EU Council report of September 2024 also highlights a crucial factor regarding BCIs: the direction in which the information flows. Xiao-Yu Sun and Ye Bin, in their article published in 2023, warn that the ethical and legal implications of a BCI differ depending on whether it operates by the read-out method –the software reads the brain– or the write-in method –the software writes into the brain– (2023, p. 2/9). For greater conceptual clarity, BCIs can thus be classified along two axes: (a) by mode of operation –read-out (extracting neural data) versus write-in (modifying neural processes)– and (b) by method of implementation –invasive, semi-invasive, or non-invasive. While read-out BCIs primarily threaten privacy and mental integrity, write-in BCIs are designed to manipulate the user’s brain activity in order to stimulate or inhibit specific responses, and therefore directly affect agency and psychological continuity, since they may alter decision-making processes or induce responses without the subject’s full control. Perhaps the most extreme modality within this type of BCI is Deep Brain Stimulation (DBS), used for various neurological conditions and disabilities such as Parkinson’s disease. In this case, the brain is a passive element, while the device connected to it plays an active and manipulative role, which aligns with the purpose of meliorative and augmentative BCIs. This distinction is crucial, as it grounds the argument that write-in BCIs pose unique challenges for autonomy and identity and require tailored ethical and legal safeguards; indeed, according to Xiao-Yu Sun and Ye Bin, the write-in modality poses the greatest ethical and legal threats to personal identity and autonomy of the will –or personal agency– as will be further analysed later (2023, pp. 3–4/9).
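The two-axis classification just outlined can be summarised schematically. The sketch below is purely illustrative: the axis labels and the risk mapping restate the text's own claims, while the class name `BCIProfile` and the two example systems are assumptions introduced for the example.

```python
# Illustrative encoding of the two-axis BCI taxonomy described above:
# axis (a) mode of operation, axis (b) method of implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class BCIProfile:
    name: str
    mode: str            # "read-out" (extracts neural data) or "write-in" (modifies them)
    implementation: str  # "invasive", "semi-invasive" or "non-invasive"

    @property
    def primary_risk(self):
        # The text's core claim: read-out systems mainly threaten privacy
        # and mental integrity; write-in systems threaten agency and
        # psychological continuity.
        return ("privacy / mental integrity" if self.mode == "read-out"
                else "agency / psychological continuity")

eeg_speller = BCIProfile("EEG communication speller", "read-out", "non-invasive")
dbs = BCIProfile("Deep Brain Stimulation (DBS)", "write-in", "invasive")

print(dbs.primary_risk)  # -> agency / psychological continuity
```

Separating the two axes makes the argument precise: invasiveness determines the bodily risk of a device, whereas the read-out/write-in axis determines which fundamental rights are primarily at stake.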

Once the taxonomy of BCIs has been presented, the next step is to examine their current and potential future applications. At present, BCIs are being developed in two very different fields: healthcare, and leisure and entertainment. In the healthcare sector, the applications are particularly promising in areas such as orthopaedics, where the goal is to restore motor functionality in dysfunctional limbs. Patients with amputations, for instance, can control robotic or bionic prostheses solely through brain signals. A notable case is the EU-funded NEBIAS project, in which a patient not only autonomously controlled a bionic hand but also regained the sense of touch and was able to distinguish textures while blindfolded. This was a replacement BCI of an invasive nature.18

BCIs are also proving effective in post-stroke rehabilitation. Through functional electrical stimulation (FES), patients can activate paralyzed muscles and gradually recover mobility. The RecoveriX project19 is a prominent example, demonstrating how rehabilitative BCIs can restore lost functions that may affect half of the body.

In addition, Augmentative and Alternative Communication (AAC) systems are being developed for patients with severe paralysis. For example, individuals with amyotrophic lateral sclerosis (ALS) can “write” with their thoughts using BCIs designed by companies such as Neuralink20 or Tobii Dynavox.21 Both aim at a therapeutic, primarily rehabilitative, purpose, but they differ significantly: Neuralink’s approach is invasive, whereas Tobii’s is non-invasive. They also diverge in scientific grounding, as Neuralink has received wide media coverage but still offers limited peer-reviewed evidence.

Finally, invasive rehabilitative BCIs are being applied to patients with Parkinson’s disease. The company InBrain Neuroelectronics, for instance, is developing graphene-based implants to modulate brain activity and improve symptoms, illustrating how advanced neurotechnology is moving from experimental trials to practical therapeutic applications.22

Another area of application for BCIs is mental health, specifically as a non-invasive and drug-free alternative treatment for depression and anxiety. Some BCIs, combined with transcranial stimulation, have shown effectiveness in emotion regulation, as in the case of MeRT Therapy –or Magnetic e-Resonance Therapy– which was approved by the U.S. FDA in 2008 specifically for treating depression and anxiety. MeRT is a neuromodulation treatment that combines repetitive transcranial magnetic stimulation (rTMS) with an in-depth analysis of each patient’s brainwaves. The process begins with an electroencephalogram to identify any irregularity or dysfunction in the brainwave pattern. Experts then transform these data into a report used to formulate a treatment tailored to each patient’s brain pattern, which is as unique as their fingerprint.23

The León Declaration warmly welcomes the development of non-invasive and non-medical interfaces aimed at transforming education, well-being, or entertainment through neurostimulation or brain modulation, among other techniques. These applications, the Declaration states, could allow companies to innovate and to offer more effective and comprehensive education as well as a complete and immersive entertainment experience. In fact, there are already several examples of companies in the gaming sector using BCIs to enhance the immersive experience of players through augmented reality glasses, body sensors, or haptic suits that seek to create a virtual reality increasingly indistinguishable from analogue reality. One example is i-BrainTech, an Israeli company that has developed a mind-controlled football video game used by professional athletes to improve their mental and physical performance; the game uses neurofeedback to train motor functions through brain activity. The commercial potential of BCIs in the gaming sector is undeniable, as it is an industry that generated $187.7 billion in 2024.24

As BCIs continue to improve, a market for interfaces will simultaneously develop, aimed not only at entertainment but also at cognitive enhancement. This is precisely one of the theses underpinning technological transhumanism, which advocates the development of a free market of neurotechnologies whose purpose is cognitive enhancement through the brain stimulation provided by BCIs. While most BCIs clearly have a benevolent purpose –whether therapeutic or enhancement-related– it seems evident that the uses and potential abuses of certain BCIs require a profound and measured ethical and legal analysis of their potential impact on autonomy of will, understood as a driving and constitutive element of personal identity. In this regard, Antonio Diéguez very aptly warns how entrenched in the collective imagination the thesis of the axiological neutrality of any technical or technological process is. In fact, the instrumental thesis of technology, which conceives it as a mere “means” or “instrument” at the service of humans, has been strongly consolidated with the aim of justifying practically any scientific and technological development. However, Diéguez argues, this thesis is not only naïve but also misleading, since by focusing the analysis solely on the artifacts or devices that make up a given technology, it forgets that technology is also a social, economic, political, and cultural network that lies behind it and even makes its very existence possible (Diéguez, 2024, p. 43). This idea draws attention to the unavoidable need for every scientific and technological breakthrough to be accompanied by a robust ethical debate and solid legal regulation, whose primary aim must always be to protect individuals’ fundamental rights (Klein & Nam, 2016, p. 123).

2. Ethical and Legal Framework Applicable to BCI Systems

BCIs fall within a very broad regulatory spectrum, since their practical application is directly or indirectly related to AI, data protection, and the safety of medical devices. It has already been noted that interfaces are experiencing significant and promising development in the healthcare field, especially in relation to disability and certain psychiatric and neurodegenerative diseases. The ethical healthcare framework in which BCI systems operate is mainly defined by the Belmont Report, which establishes the three basic ethical principles of healthcare practice. The first of these –the principle of autonomy or “respect for persons”– encompasses at least two ethical convictions: first, that all individuals must be treated as autonomous agents; and second, that all persons whose autonomy is diminished have the right to be protected. The practical implementation of these postulates is embodied in the figure of informed consent, which consists of an expression of will that is informed, free, and conscious.25 The written manifestation of informed consent, therefore, is the legitimizing basis for any use or application of BCI systems in the healthcare field.

This requirement is especially demanding in settings marked by structural dependency or constraint (e.g., schools, prisons, long-term care), where voluntariness may be compromised. In such contexts, additional safeguards are warranted to ensure that consent remains genuinely free and informed. Refusal must not entail any detriment for the individual.

On the other hand, the medical application of interfaces finds additional axiological support in the principle of beneficence, as they have a clearly therapeutic purpose in virtually all clinical cases in which BCI systems have been used. Likewise, in any healthcare system with universal access and coverage, the third principle of the Report -the principle of justice- would also be fulfilled, provided that the distribution of BCI systems is carried out according to the criterion of equity.26

The International Bioethics Committee of UNESCO published a report on the ethics of neurotechnologies in December 2021, in which it defines BCI systems and outlines their applications in current life. Likewise, it warns that abuse or illicit or spurious use of these techniques could seriously harm the individual subjected to them. The UNESCO Bioethics Committee thus aligns itself with the regulatory policy set out in the OECD Recommendations on Responsible Innovation in Neurotechnology, published on December 11, 2019, which call for harm avoidance and respect for human rights, particularly privacy, cognitive liberty, and individual autonomy.27

In addition, reference must still be made to the Nuremberg Code (1947) and the Declaration of Helsinki (WMA, 1964, with its subsequent revisions), which remain cornerstones of biomedical ethics. The aforementioned OECD Recommendation of 2019, for its part, also reaffirms the principles of integrity, transparency, and accountability in this field. Taken together, these instruments provide the ethical background against which binding European regulation must be interpreted.

2.1. Legal Regulation of BCIs in the European Union

Although the EU does not currently have specific regulations for BCI technology, this does not mean that the technology exists in a legal vacuum. This regulatory gap is not the result of a deliberate omission by the European legislator; rather, interfaces are regulated on a sectoral basis and not in a comprehensive or monographic way. BCI systems are subject to a general legal framework that encompasses aspects such as AI, data protection, and the safety of medical devices. For analytical clarity, this framework can be systematised into four main dimensions: (1) the safety and performance of medical devices, (2) the protection of neurodata under data protection law, (3) the classification of related software under the AI Act, and (4) the obligations imposed on digital platforms under the DSA.

Insofar as BCI systems are understood as medical devices, they fall under the scope of Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, whose purpose is to govern devices with medical applications, including those related to BCIs designed for rehabilitation or diagnosis. This Regulation imposes strict requirements for safety, clinical evaluation, and certification to ensure patient safety.

Article 2.1 of the Regulation defines “medical device” as “any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease or disability.”

Article 2.5 defines “implantable device” as: “any device, including those that are partially or wholly absorbed, which is intended: to be totally introduced into the human body, or to replace an epithelial surface or the surface of the eye, by clinical intervention, and which is intended to remain in place after the procedure. Any device intended to be partially introduced into the human body through clinical intervention and to remain in place after the intervention for at least thirty days shall also be deemed to be an implantable device.”

Finally, Article 2.6 defines “invasive device” as: “any device which, in whole or in part, penetrates inside the body, either through a body orifice or through the surface of the body.”

Therefore, BCI systems—regardless of whether they are invasive, non-invasive, or semi-invasive—fall within the definition of “medical device” and must comply with all the requirements and obligations established in Regulation (EU) 2017/745.

Secondly, BCI systems extract information from brain activity that is transformed into personal data, which is why another regulation applicable to BCI systems is Regulation (EU) 2016/679 of 27 April 2016, the General Data Protection Regulation (GDPR). The data extracted through BCI technology have been referred to as neurodata. These are defined as the information collected from the brain and/or the nervous system, although one must also consider as neurodata the inferences drawn from that original data—that is, “metaneurodata.” Neurodata and metaneurodata are always personal and sensitive data, since brainwaves and other forms of neurodata—including metadata—make us uniquely and singularly identifiable.28

The use of BCI systems may give rise to two types of data: on the one hand, biometric data29 related to both the structure or morphology of the brain and its functionality—this would be the material information of the brain;30 and, on the other hand, emergent or derived data resulting from brain activity, both conscious and unconscious—this would correspond to the “mental” or psychological dimension of brain information.31 Regardless of how the data extracted from brain activity captured by a BCI system are classified, they fall into the so-called “special category of data.”32 The processing of this data is, in principle, prohibited unless there is the voluntary, free, specific, unambiguous, and informed consent of the individual subject to the interface. However, in contexts of special subjection or clear power imbalance, consent may not be considered freely given. Recital 43 GDPR expressly warns that consent is unlikely to be valid when the controller is a public authority or where there is a clear imbalance between the data subject and the controller (e.g., schools, prisons). In such cases, controllers should rely on alternative lawful bases under Article 6.1—notably performance of a task carried out in the public interest (Art. 6.1 e) or compliance with a legal obligation (Art. 6.1 c)—and, for special-category neurodata, on a condition of Article 9.2, such as substantial public interest (Art. 9.2 g), healthcare (Art. 9.2 h), public health (Art. 9.2 i), or scientific research with appropriate safeguards (Art. 9.2 j, in conjunction with Art. 89). Any reliance on these bases remains subject to necessity and proportionality, strict purpose limitation and robust safeguards (DPIA, DPO involvement, access controls, data minimisation).
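The consent-versus-alternative-basis reasoning just described can be made explicit as pseudo-logic. The following is a minimal illustrative sketch only—the field names and purpose labels are our own shorthand, not terms defined in the GDPR, and nothing here constitutes legal analysis of a concrete case:

```python
# Illustrative sketch of the lawful-basis reasoning for neurodata described
# in the text. Categories and labels are simplified assumptions, not GDPR terms.
from dataclasses import dataclass

@dataclass
class ProcessingContext:
    consent_given: bool     # explicit consent obtained (Art. 9.2 a)
    power_imbalance: bool   # special subjection, e.g. school or prison (Recital 43)
    purpose: str            # "healthcare", "public_health", "research", ...

def lawful_basis_for_neurodata(ctx: ProcessingContext) -> str:
    """Return a candidate lawful basis, mirroring the reasoning in the text."""
    if ctx.consent_given and not ctx.power_imbalance:
        return "Art. 9.2 a): explicit, freely given consent"
    # In contexts of special subjection, consent is presumed not freely given,
    # so an alternative Art. 9.2 condition must be identified.
    alternatives = {
        "healthcare": "Art. 9.2 h) with Art. 6.1 e)",
        "public_health": "Art. 9.2 i) with Art. 6.1 e)",
        "research": "Art. 9.2 j) in conjunction with Art. 89",
        "substantial_public_interest": "Art. 9.2 g) with Art. 6.1 e)",
    }
    return alternatives.get(ctx.purpose,
                            "no lawful basis identified: processing prohibited")
```

Any basis the sketch returns would still be subject to the necessity, proportionality, and safeguard requirements noted above.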

The data extracted, in any case, must be the “minimum” necessary to fulfil the specific purpose—principle of data minimization.33 However, complying with this requirement in the application of BCI systems is quite complex, as a single intervention can yield data of diverse nature and may even result in “unexpected findings” for which the data subject had not given consent—hence the need for the information sheet to anticipate this eventuality. Lastly, the processing of this type of data requires the controller to exercise proactive responsibility (accountability) and to adopt specific and qualified security measures—security by design and by default—including the necessary impact assessment (DPIA).34

Thirdly, BCI systems that rely on AI (ABT) could fall within the regulatory scope of the AI Act (Regulation (EU) 2024/1689 of 13 June 2024, the European AI Act). Considering that in most of their applications BCI or ABT systems serve a therapeutic purpose, they could be considered high-risk systems insofar as they affect human health. In such cases, the ABT system would first be subject to a conformity assessment before the AI systems used can be placed on the market or put into service, and AI providers must issue a declaration of conformity with EU law (Art. 47 AI Act). Secondly, a risk management system for known and foreseeable risks must be implemented and documented, especially when the health and fundamental rights of individuals are potentially at stake; a sound data governance system must also be documented, particularly when dealing with especially sensitive data (Art. 10 AI Act). Thirdly, ABT systems will be subject to human oversight; that is, they will be designed in such a way that they can be effectively monitored by natural persons during the period they are in use, which includes equipping them with appropriate human-machine interface tools. Furthermore, Article 14.2 of the AI Act states that “the objective of human oversight shall be to prevent or minimize the risks to health, safety, or fundamental rights that may arise when a high-risk AI system is used according to its intended purpose or when it is reasonably foreseeable that it may be misused, particularly when such risks persist despite the implementation of other requirements laid down in this section.”

Finally, the Digital Services Act (DSA, EU 2022/2065) does not directly regulate BCIs but may become relevant when they are integrated into online platforms such as gaming, social media or neuroadaptive advertising. In those cases, its provisions on transparency, the prohibition of manipulative design, and the prevention of addictive behaviours could indirectly apply to BCIs. If such integration is not present, however, the DSA should not be invoked to avoid conceptual ambiguity.

Taken together, these instruments—MDR, GDPR, AI Act and, where appropriate, the DSA—define the normative framework in which BCIs must be developed and used within the EU. This framework must also be interpreted in the light of ethical instruments such as the Nuremberg Code (1947), the Declaration of Helsinki (WMA, 1964, with subsequent revisions), and the OECD Recommendation on Responsible Innovation in Neurotechnology (2019).

2.2. Theory of Neurorights: Is Their Recognition Necessary?

The Report of the International Bioethics Committee of UNESCO on the ethics of neurotechnology, published in December 2021 -which has already been mentioned- warns that certain techniques for understanding and manipulating the brain, such as BCI systems, could affect or alter personal autonomy, the free development of personality, and identity. In the face of this technological challenge, the UNESCO Bioethics Committee raises the question of whether the current legal framework that guarantees the protection of human rights is and will remain sufficient. That is, it asks whether the legal interests intended to be safeguarded by human rights will indeed be protected in this new scenario of technological development. The question is rhetorical, because the answer had already been provided four years earlier by neurologist Rafael Yuste -head of the Brain Project- and his research team at Columbia University through their proposal of neurorights. In 2017, Rafael Yuste, Sara Goering, and other researchers published an article in the scientific journal Nature aimed at disseminating the proposal of neurorights (Yuste et al., 2017, p. 153 et seq.). This proposal was adopted almost immediately by both the United Nations and the Chilean Parliament, which initiated a process of recognition and guarantee of neurorights.35 The catalogue of neurorights consists of five specific rights: the rights to mental identity, mental liberty, mental privacy, equality of access to cognitive enhancement, and non-discrimination—that is, protection against algorithmic bias.

The proposal of neurorights has also found support in the academic field, specifically in the work of Marcelo Ienca and Roberto Andorno, although in their case it involves four rights rather than five: the right to cognitive liberty; the right to mental privacy; the right to mental integrity; and, finally, the right to psychological continuity (Andorno & Ienca, 2017, p. 10/27).

However, several authors have criticized the proposal of neurorights. At the international level, the most authoritative voice on the matter is that of Jan Christoph Bublitz (2022, p. 3 et seq./15), while at the national level the prominent figure is Rafael de Asís (2022, p. 60 et seq.). For reasons of space, it is not possible to analyse in detail the arguments that have been presented against the theory of neurorights. Here, three basic arguments are upheld: the legal interests that these new rights aim to protect are already safeguarded, both at the international and national levels, through constitutional charters and implementing legislation; moreover, the issue is not so much the recognition of new rights as the effectiveness of the rights that already exist; finally, through the creation of fundamental rights, an attempt is made to limit the individual freedom of the autonomous and conscious subject in favour of a supposedly justified paternalism. On many occasions, excessive protection -or outright prohibition- of brain interventions is based on what could be called a “suspicious will” presumed to be vitiated. There is suspicion toward the free will of the person who wishes to “enhance” themselves, because it is assumed that the motives behind it are neither rational nor truly free, but emotional and coerced by a consumerist and perfectionist ideology. There is distrust regarding the lawfulness of informed consent in the context of enhancement, because it is assumed that the affected subject lacks basic capacity and consents on the basis of misinformation. Precisely for this reason, the following section focuses on analysing the paradoxical relationship between the free expression of consent and personal identity in the context of interfaces, whether therapeutic or enhancing.

3. Ethical and Legal Challenges of BCI. Ulysses on the Ship of Theseus

From an ethical-legal perspective, BCI systems could be classified according to three criteria: the first is the teleological criterion, which considers the purposes pursued by BCIs; the second is the axiological criterion, which focuses on the ethical principles governing their therapeutic or non-therapeutic application; and finally, the subjective criterion, which considers the type of patient or user of the BCI.

According to the first of these criteria, BCIs can have either a therapeutic or a non-therapeutic purpose, and within the latter category, a further distinction can be made between those used for recreational or playful purposes and those aimed at enhancement. Therefore, BCIs can be used for three distinct objectives: as a therapeutic tool; as mere entertainment; and, finally, for cognitively enhancing a person. Depending on how the BCI is used -that is, whether it is a read-out or a write-in BCI, and whether it is invasive or not- different ethical criteria will apply. This indicates that both the teleological and axiological criteria are closely related. In fact, there are authors who advocate for a specific ethical and legal regulation for each type of BCI, as each one poses different ethical challenges.36

The third criterion for classifying BCIs is the subjective one, which considers the type of user of the device; that is, it examines whether the user is an adult with full legal capacity; an adult with a disability who needs support -guardianship or curatorship-; or, finally, a minor. According to the Convention on the Rights of Persons with Disabilities (CRPD), signed in New York in 2006, all signatory States must take the necessary measures to ensure that persons with disabilities have access to technologies and can use them to improve their health status (Articles 9 and 25 CRPD). Therefore, a person with intellectual disabilities has the same legal capacity as anyone else and merely requires appropriate and effective support, with the aim of safeguarding their fundamental rights and preventing abuse (Article 12 CRPD).

Finally, in the case where the user of a BCI is a minor, the Convention on the Rights of the Child of 1989 establishes in Article 12 that children must be able to participate in the decision-making processes that affect them. In fact, the Declaration of Helsinki of the World Medical Association (revised in 2013) affirms that when a minor reaches the age of 12, they may “assent” together with their legal representatives, who must give consent for the use of a BCI. Moreover, the child’s best interests shall prevail over the interests of science and society.

These regulatory requirements determine that for the use of a BCI to be legitimate, it must necessarily be preceded by the expression of the user’s free, informed, unequivocal, and current will. Of course, attention must be paid to the type of BCI to be used, since the implications for the user are not the same if it is a write-in or a read-out BCI, and whether it is invasive or not. In any case, the ethical foundation for the use of a BCI is determined by individual freedom or personal agency, as could only be expected in the context of liberal democracies.

However, the problem that arises is that the user of a write-in BCI -i.e., the insertion of commands that “rewrite” the brain- may fall into a paradox that we have here termed the paradox of “Ulysses tied to the mast of the Ship of Theseus.” Homer tells us in Book XII of the Odyssey that Ulysses -Odysseus- after being warned by Circe, tries to avoid a potential situation of loss of self-control or akrasia that could lead him to death. Aware that he might lose dominion over himself, Ulysses asks his rowers to plug their ears with wax and to tie him to the mast of the ship when passing by the island of the sirens. In this way, he can enjoy their song while avoiding irreversible harm.37

The paradox lies in the fact that “our Ulysses” -the BCI user- may be tying himself to the mast of the Ship of Theseus, which has changed so much in appearance -in its identity- that it is nearly impossible to recognize. At what moment did our Ulysses -the BCI user- cease to be Ulysses and become someone else? If Ulysses has not lost his “agency,” then which expression of will should be respected: that of the first Ulysses or that of the second?

Perhaps this is the greatest challenge faced by the ethical and legal analysis of BCI use. While the explicit -and written- expression of the subject’s free will is the key that opens or enables any use of a BCI, this very exercise of freedom may lead the user into a situation of total alienation or loss of self-control. That is, the free and voluntary use of a BCI, in the name of the free development of one’s personality, could entail a complete loss of self-mastery and subject the user to the “slavery of the BCI,” which constitutes an emptying of freedom itself.

Is individual freedom an unlimited fundamental right? Can an individual with full legal capacity subject themselves to a regime of slavery and lose their dignity through a free decision? Should personal self-destruction be legally restricted? These are the questions we will attempt to answer in the next two sections.

3.1. The Interface Market: Cognitive Enhancement, Autonomy of Will, and Personal Identity

Whether the use of a write-in BCI has a therapeutic purpose or not -such as applications of BCIs in the entertainment sector or for cognitive enhancement- the same ethical principles apply, namely: the principle of respect for the person or principle of autonomy, the principle of beneficence -which includes the principle of non-maleficence- and, finally, the principle of justice. As stated in the Belmont Report, informed consent -the free, explicit (and written) manifestation of unequivocal will- is the enabling tool for the lawful application of a BCI in the context of medical treatment or a clinical trial.38

Informed consent is nothing more than the practical expression of the principle of autonomy of the will, or respect for the individual’s freedom, which is why it serves as the key to enable the use of a BCI in any context, provided that it is expressed by a person with full legal capacity.

From a human rights perspective, freedom of thought must be analysed in its dual dimension: the forum internum, referring to the inner sphere of beliefs and thoughts, and the forum externum, referring to their outward manifestation. This distinction, recognised both in Article 9 of the European Convention on Human Rights and Article 10 of the Charter of Fundamental Rights of the European Union, is crucial for assessing the ethical and legal risks of BCIs. The forum internum enjoys absolute protection and cannot be restricted under any circumstances, whereas the forum externum may be subject to limitations when it collides with other rights or public interests. Write-in BCIs, by directly intervening in neural processes and potentially altering decision-making and thought formation, threaten the absolute dimension of the forum internum. This explains why their regulation must be particularly rigorous: they risk undermining the very freedom of thought that underpins personal identity and autonomy in democratic societies.

As previously noted, certain uses of BCIs are intended to deliberately modify some traits of the user’s personality or identity, which poses no problem if they are applied in the medical field and meet ethical and legal requirements.39 However, what happens if a market emerges for behavioural or cognitive enhancement, where the user can purchase a write-in BCI whose purpose is to alter or modify certain traits of their personality—in other words, of their personal identity? In this case, the BCI would serve as a “driver” of the free development of personality, since identity is not static but rather a construct in continuous development and adaptation.

Many authors -Descartes, Kant, and Locke, among others- have ventured to define such a complex concept as personal identity; due to space constraints, however, we will focus here on a contemporary author who has become a true reference on the subject: Derek Parfit. This author understands that personal identity comprises two dimensions: a quantitative or material one, which corresponds to the physical or biological part of identity; and a qualitative or formal dimension, which corresponds to the psychological or biographical aspects of the individual (Parfit, 2004, p. 375).

Theories formulated throughout history on personal identity have leaned toward one of these two dimensions, giving rise to two types of theories. The first is a theory of identity based solely on the bodily and cerebral dimension and its temporal continuity. That is, it is a theory focused only on the material part of identity, and therefore on biological identity. The second theory is based on psychological continuity or biographical identity. This theory, in turn, holds that a person is the same at two different moments in time if they recognize or remember themselves in a past event. According to this thesis, people who experience episodes of amnesia, senility, or temporary insanity somehow lose their identity.

Finally, the reductionist theory -advocated by Parfit himself- conceives identity as a complex phenomenon that must be explained in a piecemeal manner, that is, through simpler parts. Parfit argues that personal identity is not what truly matters, but rather the continuity of our memories, values, and, to some extent, our character. It is a matter of identifying a set of qualities that do not form something univocal, but which make us unique and unrepeatable.

Personal identity, therefore, would be a polyhedral and dynamic issue, as it is something unfinished and in constant construction, though it retains a unique essentiality. Parfit’s thesis, which views identity as an incomplete project, should be complemented by the theory of mind proposed by Andy Clark and David Chalmers. Both authors claim that the brain does not construct the mind -our immaterial identity- in an autarkic manner, but rather through multiple external connections such as vision, experiences, and various technical and material supports. According to the “active externalism” thesis, the cognitive process results from a coupling between the brain and all external elements, which actively shape the individual’s conscious entity (2011, p. 65 et seq.).

Our mind is an “extended mind,” which is not limited or confined to the narrow material boundaries of our brain, among other reasons because the brain is in constant interaction with its environment. Moreover, if we accept, as Javier Echeverría argues, that human beings now develop their mental life in a third environment -that is, the digital environment- then we must agree that our brain constructs reality and, consequently, our identity through the multiple digital devices we use -mobile phones, tablets, computers, etc.-. In this context, therefore, we should speak of “technopersonality” rather than personal identity.

However, this new environment of “technopersonalities” will not be qualitatively affected by the proliferation of interfaces, but only quantitatively, since BCIs have significantly enhanced the immersive capacity of the third environment or digital environment.40 The free development of personality now finds external supports that strengthen these personality changes, which add even more dynamism to personal identity.

Autonomy of will and personal identity converge in the guiding principle of any liberal democracy, which is none other than the free development of personality. However, with the potential use of a BCI for cognitive enhancement, the question arises regarding the limits of one’s own will: Is autonomy of will unlimited? Could a person voluntarily bring about the end of their own freedom and, consequently, of their human dignity? This question arises at the very boundary between the State as guarantor of fundamental rights and the individual freedom of a BCI user. The governance of BCIs must strike a balance between, on the one hand, respect for fundamental rights -especially autonomy, identity, and privacy- and, on the other, the user’s free development of personality, so as to avoid unjustified paternalism. Precisely to prevent state interference of a paternalistic nature, this article advocates a governance of prevention rather than direct state intervention through prohibitions. This prevention could be structured around two normative domains: one belonging to the realm of rules and the other to that of principles, following Ronald Dworkin’s classification.

Considering that any adult with full legal capacity can make “freedom-limiting” decisions -that is, they can bring about the end of their own individual freedom- a possible solution to avoid this paradox could be the consolidation of a “good practice of use”, which would in any case be optional for the user but prescriptive for the provider -a rule, in Dworkin’s sense. In the medical field, this clinical instrument has been referred to as the “Ulysses Contract,” which would be a specific and concrete modality of an “advance directive.” Ulysses Contracts aim to anticipate a possible situation of akrasia or lack of personal mastery or self-control (Connor et al., 2022, p. 691).

Diachronic consent and identity change. To address foreseeable identity drift or akrasia induced by write-in BCIs, advance directives (Ulysses contracts) can specify: (i) objective triggers (usage thresholds, clinical signs, third-party observations), (ii) authorised enforcers (named relatives/clinicians), (iii) permitted actions (temporary suspension, parameter limits, lockouts, voluntary admission), and (iv) review and revocation conditions. While enforceability is jurisdiction-dependent, the normative function of such clauses is to preserve the person’s authentic prior will under foreseeable states of diminished self-control, in line with autonomy, beneficence and non-maleficence.
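The four clause types (i)-(iv) above lend themselves to a structured representation. The following is a minimal sketch under our own assumptions—the field names, metrics, and thresholds are hypothetical illustrations, not elements of any actual clinical or legal instrument:

```python
# Illustrative sketch of the Ulysses-contract clause structure (i)-(iv)
# described in the text. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class UlyssesContract:
    triggers: dict[str, float]    # (i) objective thresholds, e.g. daily hours of use
    enforcers: list[str]          # (ii) named relatives/clinicians
    permitted_actions: list[str]  # (iii) suspension, lockouts, voluntary admission
    review_after_days: int = 90   # (iv) periodic review and revocation window

    def triggered(self, observations: dict[str, float]) -> bool:
        """True if any observed metric crosses its pre-agreed threshold."""
        return any(observations.get(k, 0.0) >= v
                   for k, v in self.triggers.items())

# Hypothetical example: a contract that allows the treating clinician to
# suspend the device once daily use exceeds six hours.
contract = UlyssesContract(
    triggers={"daily_use_hours": 6.0},
    enforcers=["treating clinician"],
    permitted_actions=["temporary suspension", "parameter limits"],
)
```

The design choice worth noting is that the triggers are objective and agreed in advance: the later, possibly altered Ulysses does not get to renegotiate them at the moment of crisis, which is precisely the point of the contract.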

In the clinical context, this is known as a “Crisis Plan” or “Advance Care Planning.” It involves opening a process of information-sharing, counselling, and dialogue between the patient and specialists, which may result in the so-called “Ulysses Contract,” in which the patient’s preferences regarding future treatment or hospitalization are included (Schwarz et al., 2025, p. 5). Although these contracts differ in their nature and legal force depending on the country, they share the same spirit: allowing the patient to express their wishes and preferences regarding treatment or hospitalization if they lose the capacity to express such deeply personal matters in the future. However, it must be considered that if a patient had previously chosen a particular treatment before losing their agency or capacity for self-determination, and later, after losing that capacity, opposes receiving any treatment, they obviously cannot be forcibly subjected to that treatment. In other words, the Ulysses Contract finds its own limitations within the patient’s autonomy of will (Connor et al., 2022, p. 691). Nevertheless, for our purposes, it may prove to be a very useful tool for a BCI user who, facing the threat of developing an addiction, a personality change, or even a mental health problem as a consequence of repeated use of a write-in BCI, may request that people in their environment disconnect the BCI or, if necessary, arrange for admission to a mental health facility. Ulysses Contracts have been widely developed and well received in both the field of mental health and addiction treatment, and this accumulated clinical experience could be usefully leveraged to more effectively implement this tool in the regular use of write-in BCIs (Connor et al., 2022, p. 704).

The other avenue for protecting the fundamental rights of BCI users lies in the development and effective guarantee of the principles of beneficence and non-maleficence required by the Belmont Report. These principles materialize through a preventive legal regulation that focuses on responsible and benevolent technical design, which, in any case, respects fundamental rights and avoids any harm or injury. It seems evident that the misuse of BCIs could lead to an addiction to the interface comparable to so-called “non-substance addiction,” as is the case with gambling, pornography addiction, or work addiction. The problem with non-substance addictions is that they pose a serious ethical and legal challenge, since, unlike substance addictions, they involve legal behaviour that only becomes problematic for the individual and their environment when its use or consumption becomes excessive.41

As is well known, many online services -games, apps, social media platforms, etc.- deliberately employ techniques for capturing attention and providing immediate rewards, applying current neuroscientific knowledge about the brain’s dopamine-based reward mechanisms. For this reason, the Digital Services Act (EU) 2022/2065 of 19 October 2022 (DSA) requires online service providers to design their platforms responsibly and ethically to prevent addiction and compulsive behaviours, especially in the case of minors, people with disabilities, or those at risk of vulnerability. Article 25 requires that “providers of online platforms shall not design, organize, or operate their online interfaces in a way that deceives or manipulates the recipients of the service, or otherwise materially distorts or impairs the ability of recipients of their service to make free and informed decisions.” To ensure compliance with this requirement, Articles 34 and 35 of the DSA oblige very large online platforms to carry out risk assessments of the systemic effects resulting from the functioning of their services, especially the potential negative effects on mental health, algorithmic manipulation, or the encouragement of addictive behaviours that may result in excessive use of their services, the generation of dependency, or even the loss of self-control—akrasia. This preventive legislation thus aims to prevent the use—and possible abuse—of these systems from eventually affecting two especially protected legal interests: personal identity and autonomy of the will.

Finally, the principle of justice established in the Belmont Report requires a fair distribution of public resources. This principle of distributive justice therefore demands the intervention of the State as guarantor of equitable access to cognitive enhancement through a BCI. Obviously, we are referring to BCIs in the healthcare field; however, one might also consider whether the State should act not only as a direct provider of BCI-based therapeutic services, but also in other areas where the State already operates as an educator or behaviour enhancer. These areas are education and the penitentiary system. The final section will be devoted to analysing the application of BCIs in the public sector.

3.2. The State as a “behaviour enhancer” through BCIs

At present, the BCIs used in the public sector are only applied in the medical field, although it is not unreasonable to imagine other possible uses of cognitive enhancement BCIs in the educational or penitentiary domain, since in these settings the State acts as an “educator.” It should be noted that, both in the field of compulsory formal education and in educational and social reintegration programs implemented in penitentiaries,42 the State has a clear aim of improving the behaviour of the administered or supervised individuals, through a systematic “socializing intervention.” Considering this educational function of the State, would it then be legitimate to use write-in BCIs to increase the State’s effectiveness in these socializing processes? That is, could the State use write-in BCI systems in educational centres and prisons to instil civic values in students and inmates that allow for better social coexistence?

The possibility that the State could “enhance behaviour” through write-in BCIs, especially in institutional contexts such as schools or penitentiaries, raises serious concerns from the perspective of respect for personal autonomy. It is possible—and even desirable for some—to replace the current persuasive model of education with one of direct intervention, even for morally laudable purposes, such as promoting empathy or inhibiting violent behaviour (Schermer, 2009, p. 226). However, in contexts of asymmetric power, such as those that exist in the educational or penitentiary sphere, the objection to direct intervention through BCIs becomes particularly significant, since consent could be seriously compromised because of the imbalance inherent in a relationship of “special subjection.”

Indeed, this is precisely what the 2016 General Data Protection Regulation of the EU warns about when it states that consent given in certain contexts may be “flawed” when provided within the framework of a special subjection relationship with the Public Administration. Jan C. Bublitz has warned that allowing the State to directly intervene in mental processes for corrective or preventive purposes may result in a technologically sophisticated form of mental coercion, violating the right to freedom of thought—even when the individual consents (Bublitz, 2014, p. 12-15/25). The author seeks to avoid the risk of a potential “soft paternalism” based on a moral program of civic improvement (idem, p. 14/25).

The right to freedom of thought defended by Bublitz (2022, p. 13/15) is recognized in Article 9 of the European Convention on Human Rights and in Article 10 of the Charter of Fundamental Rights of the EU, where it is considered absolute in its internal dimension—that is, it admits no restrictions, not even from the State for preventive or educational purposes. Therefore, the use of write-in BCIs by the State in these contexts could represent an unjustified direct intrusion into the most intimate space of human beings, thereby dragging the democratic Rule of Law towards the logic typical of totalitarian States.

Faced with this potential threat, authors such as Marcello Ienca and Roberto Andorno have repeatedly advocated the adoption of an international Declaration, within the framework of the United Nations, that enshrines and guarantees neurorights at the global level. For the time being, however, the only initiative under way is a Recommendation on the Ethics of Neurotechnology promoted by UNESCO, in which Ienca participates as rapporteur. These authors defend the adoption of a set of neurorights to prevent possible interferences by the State and even by the private sector; such neurorights would protect individuals from the use of both write-in and read-out BCIs. Through the recognition of the right to cognitive liberty, the right to mental integrity and the right to psychological continuity, individuals would be shielded against potential public and private intrusions carried out via write-in BCIs. In turn, they demand the recognition of mental privacy in view of the possibility of reading brain activity without the consent of the user of a read-out BCI.43 However, as has already been argued here, approving a catalogue of neurorights is legally redundant, since all the legal interests these new rights aim to enshrine are already guaranteed by the traditional fundamental rights enshrined in any Western Constitution.

In any case, the use of a BCI must be preceded by the express and unequivocal manifestation of informed consent, which is the enabling and legitimizing instrument of any human interaction with technology. Nevertheless, personal agency finds its limit in those spaces where the individual does not decide within a realm of real and full freedom, but within a merely apparent one. Both the educational and the penitentiary contexts are determined by a state prescription—whether a law or a court sentence—that obliges minors to attend school (and, consequently, their parents or guardians to ensure it) and convicts to enter prison. These individuals are involuntarily subjected to a special subjection relationship that results in a relational asymmetry, in which any possibility of making fully free decisions is suppressed, even if the individuals have received the necessary and comprehensible information and all ethical and legal requirements have been fulfilled. In these cases, what is at stake is not only freedom of thought—as defended by Bublitz—but the very possibility of dissent, that is, of refusing even to adopt prevailing social norms. The mind thus emerges as the last inviolable stronghold of the person, as the final personal resistance against possible state intrusions which, under the pretext of “improving” individuals’ behaviour, may ultimately seek to “dominate” their innermost will.

The refusal of any use of write-in or read-out BCIs in these contexts constitutes a clear expression of the idea of freedom as non-domination, characteristic of republicanism.44

Conversely, allowing the State to interfere with our brains through technology would be akin to turning us into transparent citizens, in the purest style of Winston Smith in the novel 1984. Therefore, it is only in a context of equity, information, and full respect for fundamental rights and democracy that the uses—and even the abuses (if they are voluntary)—of BCIs can be considered legitimate.

4. Some Conclusions

The incorporation of Brain-Computer Interfaces (BCIs) into human experience, especially in their write-in modality, compels us to rethink traditional notions of personhood, autonomy, and identity. Ortega y Gasset’s idea of the zoon technikon, along with Javier Echeverría’s concepts of the “techno-person” and the “third environment,” show that technological mediation is no longer external or occasional, but constitutive of contemporary subjectivity. For this reason, ethics and law must abandon purely instrumental views and acknowledge the transformative—and sometimes disruptive—potential of technology in the configuration of the human being.

Among all neurotechnologies, write-in BCIs pose the greatest risk to the autonomy of the will and to the subject’s identity. Their ability to directly alter brain activity raises the question of whether informed consent can continue to function as a sufficient criterion of legitimacy. If a person freely authorizes interventions that may undermine that very freedom, we are faced with a classic dilemma of liberal democracies: can one freely renounce one’s own freedom? This is the paradox of Ulysses on the Ship of Theseus, which asks which of the two wills ought to be legally respected.

Following Derek Parfit, and Clark and Chalmers, identity is not a fixed essence, but a narrative process, extended and in constant construction. However, this flexibility has limits. Write-in BCIs may put at risk the psychological continuity that enables us to recognize ourselves as the same person over time and, consequently, to be held ethically and legally responsible for our actions. Law and ethics must safeguard this narrative coherence, beyond mere physical integrity.

This article defends the importance of anticipatory reflection from a legal-philosophical perspective in the face of the challenges posed by disruptive technologies such as BCIs. The philosophy of fundamental rights defines a model of anticipatory regulation focused on ethical design, risk evaluation, and respect for fundamental rights. In this context, the “Ulysses Contract”—inspired by clinical practice—may prove a useful tool for users to anticipate situations of loss of control, addiction, or personality change. Although not without limitations, it offers a point of balance between user autonomy and the prevention of future harm.

The proposal to create new fundamental rights—neurorights—responds to understandable concerns. However, the interests they aim to protect—such as mental privacy, cognitive liberty, or psychological continuity—are already guaranteed by classical fundamental rights. The problem does not lie in the lack of rights, but in the need to ensure the effectiveness of the rights that already exist. Legal inflation may weaken the normative force of the system and distract from the real challenge: guaranteeing rights in the face of new technological threats.

In contexts of special subjection such as schools or prisons, the use of write-in BCIs to promote “desirable” behaviour directly clashes with freedom of thought, the free development of personality, and personal autonomy. Even when informed consent is obtained, asymmetrical relationships compromise its validity. Allowing the State to intervene directly in mental processes for formative or corrective purposes brings us dangerously close to an authoritarian model incompatible with a democratic rule of law. The mind must remain the last inviolable stronghold of human freedom. In a society driven by optimization, performance, and intellectual and physical perfection, the possibility of dissenting, of not wanting to improve—in other words, of being imperfect—must continue to be a fundamental right. Therefore, any regulation of BCIs must guarantee individual freedom and the privacy of our thoughts, for anything less would amount to ceding the sovereignty of our minds to others.

Bibliography

ANDORNO, R. and IENCA, M. (2017). “Towards new human rights in the age of neuroscience and neurotechnology”. Life Sciences, Society and Policy, 13(1), 5. https://doi.org/10.1186/s40504-017-0050-1

ARISTÓTELES (1986). Política. Madrid: Alianza.

ASÍS, R. (2022). “Sobre la propuesta de los neuroderechos”. Derechos y Libertades, 47(II). https://e-revistas.uc3m.es/index.php/DYL/article/view/6873/5469

BITTLE, J. (2020). “Lie detectors have always been suspect. AI has made the problem worse”. MIT Technology Review. https://www.technologyreview.com/2020/03/13/905323/ai-lie-detectors-polygraph-silent-talker-iborderctrl-converus-neuroid/

BRUNNER, C., BIRBAUMER, N., BLANKERTZ, B., GUGER, C., KÜBLER, A., MATTIA, D., MILLÁN, J., MIRALLES, F., NIJHOLT, A., OPISSO, E., RAMSEY, N., SALOMON, P. and MÜLLER-PUTZ, G. (2015). “BNCI Horizon 2020: towards a roadmap for the BCI community”. Brain-Computer Interfaces, 2(1). https://doi.org/10.1080/2326263X.2015.1008956

BUBLITZ, J. C. (2014). “Freedom of Thought in the Age of Neuroscience”. Archiv für Rechts- und Sozialphilosophie (ARSP), 100(1). https://doi.org/10.25162/arsp-2014-0001

BUBLITZ, J. C. (2022). “Novel Neurorights: From Nonsense to Substance”. Neuroethics, 15. https://doi.org/10.1007/s12152-022-09481-3

COIN, A., MULDER, M. and DUBLJEVIC, V. (2020). “Ethical aspects of BCI technology: what is the state of the art?”. Philosophies, 5(4), 31. https://www.mdpi.com/2409-9287/5/4/31

CONNOR, T. A., CHEN, S., CHO, M., McCOY, L. and DAS, S. (2022). “Steering clear of Akrasia: An integrative review of self‐binding Ulysses Contracts in clinical practice”. Bioethics, 37(7), 690-714. https://doi.org/10.1111/bioe.13197

DIÉGUEZ, A. (2024). Pensar la tecnología. Una guía para comprender filosóficamente el desarrollo tecnológico actual. Shackleton Ed.

ECHEVERRÍA, J. and ALMENDROS, L. (2023). Tecnopersonas. Cómo las tecnologías nos transforman, 2ª ed. (1ª ed. 2020). Gijón: Trea Ensayos.

HARARI, Y. (2017). Homo Deus. Breve historia del mañana. Madrid: Debate.

HOMERO. La Odisea. https://bibliotecadigital.ilce.edu.mx/Colecciones/ObrasClasicas/docs/Odisea.pdf

KLEIN, E. and NAM, C. S. (2016). “Neuroethics and brain-computer interfaces (BCIs)”. Brain-Computer Interfaces, 3(3). https://doi.org/10.1080/2326263X.2016.1210989

KURZWEIL, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking / Penguin Books.

LA VAQUE, T. J. (1999). “The History of EEG: Hans Berger”. Journal of Neurotherapy: Investigations in Neuromodulation, Neurofeedback and Applied Neuroscience, 3(2), 1–9. https://doi.org/10.1300/J184v03n02_01 (published online 20th October 2008).

LOTTE, F., NAM, C. S. and NIJHOLT, A. (2018). Brain-Computer Interfaces Handbook: Technological and Theoretical Advances. Taylor & Francis (CRC Press). https://inria.hal.science/hal-01656743/document

LYRESKOG, D. M., ZOHNY, H., SINGH, I. and SAVULESCU, J. (2023). “The Ethics of Thinking with Machines: Brain-Computer Interfaces in the Era of Artificial Intelligence”. International Journal of Chinese and Comparative Philosophy of Medicine, 21(2). https://doi.org/10.24112/ijccpm.212676

MAISELI, B., ABDALLA, A. T., MASSAWE, L. V., MBISE, M., MKOCHA, K., ALLY, N., MICHAEL, J. and KIMAMBO, S. (2023). “Brain-computer interface: trends, challenges and threats”. Brain Informatics, 10. https://braininformatics.springeropen.com/articles/10.1186/s40708-023-00199-3

MONASTERIO ASTOBIZA, A., AUSÍN, T., TOBOSO, M., MORTE, R., APARICIO, M. and LÓPEZ, D. (2019). “Traducir el pensamiento en acción: Interfaces cerebro-máquina y el problema ético de la agencia”. Revista de Bioética y Derecho, 46. https://scielo.isciii.es/scielo.php?pid=S1886-58872019000200003&script=sci_arttext

ORTEGA Y GASSET, J. (1965). Meditaciones de la técnica. Madrid: Austral (Espasa Calpe).

PARFIT, D. (2004). Razones y Personas. Madrid: Antonio Machado Ed.

PETTIT, P. (1999). Republicanism: A Theory of Freedom and Government. Barcelona: Paidós.

SAHA, S., MAMUN, K. A., AHMED, K., MOSTAFA, R., NAIK, G., DARVISHI, S., KHANDOKER, A. H. and BAUMERT, M. (2021). “Progress in Brain Computer Interface: Challenges and Opportunities”. Frontiers in Systems Neuroscience, 15. https://www.frontiersin.org/journals/systems-neuroscience/articles/10.3389/fnsys.2021.578875/full

SAVULESCU, J. and ZOHNY, H. (2024). “When two become one: Singular Duos and the Neuroethical Frontiers of Brain-to-Brain Interfaces”. Cambridge Quarterly of Healthcare Ethics, 33(4), 494-506. https://doi.org/10.1017/S0963180124000197 https://www.cambridge.org/core/journals/cambridge-quarterly-of-healthcare-ethics/article/when-two-become-one-singular-duos-and-the-neuroethical-frontiers-of-braintobrain-interfaces/4416F124088FF38A262A5BD62F55750A

SCHWARZ, J., MEIER-DIEDRICH, E., SCHOLTEN, M. and BLEASE, C. R. (2025). “Integration of Psychiatric Advance Directives into the Patient-Accessible Electronic Health Record: Exploring the Promise and Limitations”. Journal of Medical Internet Research.

SCHERMER, M. (2009). “The Mind and the Machine: On the Conceptual and Moral Implications of Brain-Machine Interaction”. NanoEthics, 3, 217–230. https://doi.org/10.1007/s11569-009-0076-9

SEARLE, J. (2000). The Mystery of Consciousness. Barcelona: Paidós.

SUN, X. Y. and YE, B. (2023). “The functional differentiation of brain-computer interfaces (BCIs) and its ethical implications”. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-02419-x

YUSTE, R., GOERING, S., AGÜERA Y ARCAS, B., BI, G., CARMENA, J. M., CARTER, A., FINS, J. J., FRIESEN, P., GALLANT, J., HUGGINS, J. E., KELLMEYER, P., KLEIN, E., MARBLESTONE, A., MITCHELL, C., PARENS, E., PHAM, M., RUBEL, A., SADATO, N., SULLIVAN, L., TEICHER, M., WASSERMAN, D., WEXLER, A., WHITTAKER, M. and WOLPAW, J. (2017). “Four ethical priorities for neurotechnologies and AI”. Nature, 551, 159–163. https://www.nature.com/articles/551159a

Legal and Normative References

International Legal Framework

Nuremberg Code (1947).

Declaration of Helsinki (1964).

Belmont Report (1979).

Convention on the Rights of the Child (1989).

Convention on the Rights of People with Disability (2006).

Recommendation on Responsible Innovation in Neurotechnology (OECD, 2019).

Recommendation on the Ethics of Neurotechnology (2021).

EU Legal Framework

Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices.

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation, GDPR).

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence Act, AI Act).

Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 (Digital Services Act, DSA).

European Convention on Human Rights (Article 9: freedom of thought, conscience and religion).

Charter of Fundamental Rights of the European Union (Article 10: freedom of thought, conscience and religion).

Received: 2nd July 2025

Accepted: 8th October 2025

_______________________________

1 Assistant Collaborating Professor, Universidad Pontificia Comillas, ICADE (vmorente@icade.comillas.edu).

2 Julian Savulescu and Hazem Zohny have recently published an article in which they address the ethical issues that may arise not from the hybridization between an individual and a machine or AI, but between brains—what has come to be known as Brain-to-Brain Interfaces (BBIs). Among the ethical questions they raise, the following are particularly noteworthy: What identity emerges from the hybridization of two brains? What consequences could the voluntary connection of two adult brains via interfaces—based on sufficient information—have for “mental privacy”? Would this constitute a “personal unity” or rather two individuals—a duo? Savulescu, J. and Zohny, H., “When Two Become One: Singular Duos and the Neuroethical Frontiers of Brain-to-Brain Interfaces”, Cambridge Quarterly of Healthcare Ethics (2024), 33:4, pp. 494–506. https://www.cambridge.org/core/journals/cambridge-quarterly-of-healthcare-ethics/article/when-two-become-one-singular-duos-and-the-neuroethical-frontiers-of-braintobrain-interfaces/4416F124088FF38A262A5BD62F55750A

3 “The city is one of the natural things, and man is by nature a civic (political) animal. And the enemy of civic society is, by nature and not by accident, either an inferior being or more than a man. (...) And this is what is proper to humans in contrast to the other animals: to possess, uniquely, the sense of good and evil, of just and unjust, and of the other moral distinctions. The communal participation in these is what founds the household and the city. By nature, the city is prior to the household and to each one of us. For the whole is necessarily prior to the part.” Aristóteles (1986), La Política, Alianza, Madrid, p. 62 (1253a).

4 León Declaration on European Neurotechnology: A Human Rights-Based and Person-Centered Approach. Spanish Presidency, Council of the European Union, 23–24 October 2023. https://digital.gob.es/dam/es/portalmtdfp/DigitalizacionIA/declaracion_de_Leon.pdf

5 In the United States, we find the BRAIN Initiative (Brain Research Through Advancing Innovative Neurotechnologies) promoted by the NIH (National Institutes of Health), which is currently developing the MAP Project, aimed at investigating the dynamic interrelations between brain regions that underlie certain pathologies and disabilities.

In China, the China Brain Project focuses on studying the properties of individual nerve cells and how they communicate through synapses to produce cognitive functions such as consciousness, memory, and reasoning, as well as to prevent brain diseases. It also seeks to leverage knowledge about the brain to advance AI technologies. In Europe, the Human Brain Project—funded by the EU—is being developed with the goal of synthetically reproducing human brain capacities. Source: TechDispatch on Neurodata, Spanish Data Protection Agency (AEPD) https://www.aepd.es/guias/neurodatos-aepd-edps.pdf

6 The León Declaration acknowledges the undeniable commercial potential of BCI systems by recognizing that the non-medical applications of any neurotechnology represent a new opportunity to transform education, well-being, and entertainment through neurostimulation and brain modulation. It explicitly refers to the private sector’s interest, especially that of the gaming and entertainment industries, in brain-computer interfaces, which increased significantly in 2022. See the León Declaration, cited above.

7 As will be addressed in another section of this article, some authors argue that a “controlled use” of interfaces, or even of Deep Brain Stimulation (DBS), would be ethically justified if it serves to prevent a potential danger or harm to society (for example, eradicating the sexual drive of a pedophile through brain stimulation). The legitimacy of these “behaviour-enhancing interfaces” is understood to lie in the free and informed consent of the individual undergoing the intervention; otherwise, such practices would constitute a serious violation of personal freedom and integrity. Lyreskog, D. M., Zohny, H., Singh, I. and Savulescu, J. (2023), “The Ethics of Thinking with Machines: Brain Computer Interfaces in the Era of Artificial Intelligence”, International Journal of Chinese and Comparative Philosophy of Medicine, 21(2), pp. 23–24. https://ora.ox.ac.uk/objects/uuid:bc99679d-b4c8-4639-b9a5-eb14359bd75a

8 https://www.consilium.europa.eu/media/fh4fw3fn/art_braincomputerinterfaces_2024_web.pdf

9 Council of the EU (2024), From vision to reality: Promises and risks of Brain-Computer Interfaces. https://www.consilium.europa.eu/media/fh4fw3fn/art_braincomputerinterfaces_2024_web.pdf

10 Most BCI applications and uses focus on motor disabilities. The Roadmap document, part of the Horizon 2020 Project of the European Commission, presents four case studies of individuals who have used each of the four BCI systems—Replace, Restore, Improve, and Enhance—to illustrate the range of use cases for these highly promising techniques, especially for people with severe physical disabilities. https://openlib.tugraz.at/download.php?id=56194931c6b87&location=browse

11 https://inria.hal.science/hal-01656743/document

12 Saha, S., Mamun, K. A., Ahmed, K., Mostafa, R., Naik, G., Darvishi, S., Khandoker, A. H. and Baumert, M. (2021), “Progress in Brain Computer Interface: Challenges and Opportunities”, Frontiers in Systems Neuroscience, 15. https://www.frontiersin.org/journals/systems-neuroscience/articles/10.3389/fnsys.2021.578875/full

13 Although the polygraph technique has been questioned many times due to its lack of technical and scientific reliability, it still enjoys a high degree of credibility in the collective imagination. This has led to the use of algorithms that “detect lies” in settings such as airports or job interviews being perceived as even more legitimate, despite also lacking technical reliability. Bittle, J. (2020), “Lie detectors have always been suspect. AI has made the problem worse,” MIT Technology Review, March 13th. https://www.technologyreview.com/2020/03/13/905323/ai-lie-detectors-polygraph-silent-talker-iborderctrl-converus-neuroid/

14 Some authors refer to it as the “religion of dataism”, which, according to Yuval Harari, understands the universe as a flow of data, whereby each human being is a unit of information that integrates seamlessly into the digital society. Harari, Y. (2017), Homo Deus: A Brief History of Tomorrow, Debate, Madrid, p. 428.

15 It may also be referred to as AI-BCI, although the specialized literature more commonly refers to it as ABT (Artificial Brain Technology). Lyreskog, D. M., Zohny, H., Singh, I. and Savulescu, J. (2023), “The Ethics of Thinking with Machines: Brain-Computer Interfaces in the Era of Artificial Intelligence”, International Journal of Chinese and Comparative Philosophy of Medicine, 21(2), p. 13.

This article explains how this type of interface has been particularly successful in the therapeutic field, especially when used as a tool for people with disabilities. Ibid., p. 14.

16 This definition is taken from the Horizon 2020 Project, which has already been cited. This initiative, led by the European Commission since 2013, aims to establish the state of the art in the European context—it is not a project designed to fund specific research activities—regarding the development of certain technologies, including human brain–machine interface systems. The project is coordinated by Graz University of Technology (Austria), although it includes the participation of 11 other internationally renowned research centres specializing in BCI systems.

17 Council of the EU, “From vision to reality”, p. 4.

18 https://cordis.europa.eu/article/id/150378-nebias-the-worlds-most-advanced-bionic-hand

19 https://recoverix.com/es/stroke-study-results/

20 In August 2020, Elon Musk presented Neuralink. This company develops intracranial interfaces, coin-sized devices designed to be implanted in individuals with tetraplegia or ALS. In 2023, Neuralink received FDA (Food and Drug Administration) approval to begin human trials, and so far, it has implanted its chips in three patients, who, according to reports, are showing promising results.
Neuralink is the paradigmatic example of the growing interest in the private sector in developing neuroscientific applications across various fields, particularly in the educational and—most notably—the entertainment sectors.

21 https://es.tobiidynavox.com/

22 https://inbrain-neuroelectronics.com/

23 https://mertclinics.com/ Biometric data are regulated under Regulation (EU) 2016/679 of the European Parliament and of the Council of 27th April 2016 on the protection of natural persons with regard to the processing of personal data. The Regulation defines biometric data as: “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.”

Biometric data are classified as a category of “special categories of personal data” under the GDPR.

24 https://newzoo.com/resources/blog/global-games-market-revenue-estimates-and-forecasts-in-2024

25 Article 8 of Law 41/2002, 14th November, the basic law regulating patient autonomy and rights and obligations in the field of clinical information and documentation.

Article 9 governs consent by representation and establishes that, in all cases, the decision of the legal representative must always be made in accordance with the greatest benefit to the patient’s life or health.

26 As previously noted, BCI systems may have a purely research-oriented purpose, rather than a therapeutic one, even if therapy is a medium- or long-term goal. In such cases, attention must also be paid to the ethical principles set out in the Nuremberg Code, as its adoption in 1947 was intended to establish the minimum standard of moral legitimacy that must be observed and respected in any research process. https://www.conicyt.cl/fonis/files/2013/03/El-C%C3%B3digo-de-Nuremberg.pdf

27 Recommendation of the Council on Responsible Innovation in Neurotechnology, OECD (Organisation for Economic Co-operation and Development) 11th December 2019. https://legalinstruments.oecd.org/en/instruments/oecd-legal-0457

28 International Bioethics Committee (UNESCO), Ethical Issues of Neurotechnology, December 2021, p. 31 et seq.

29 Article 4(14) of the EU General Data Protection Regulation (GDPR) defines biometric data as: “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.”

30 The European Data Protection Supervisor (EDPS), and by extension the Spanish Data Protection Agency (AEPD), follow a primarily physiological classification of data extracted from neuronal activity. They classify such data into “morphological or structural data,” “functional data,” and, finally, “data relating to the peripheral nervous system.”

Considering this taxonomy, the EDPS and AEPD classify neural data solely based on their material or biometric dimension. However, this initial classification is complemented by three additional categories based on purpose, one of which includes psychological data and data intended for neuroenhancement.

In TechDispatch on Neurodata, by the European Data Protection Supervisor and the Spanish Data Protection Agency, 2024. https://www.aepd.es/guias/neurodatos-aepd-edps.pdf

31 According to the “emergentist” thesis, defended by authors such as John Searle, the mind and consciousness arise—or emerge—as immaterial entities from a material basis, namely the brain. That is, mind and brain are two different dimensions, although they are indissociable. The mind—or consciousness—cannot be understood without the brain; in fact, the brain determines the mind.

Emergentism represents a third line of argument to explain the mind/brain relationship, positioned between dualism and monism. The dualist or “compatibilist” thesis is based on the possible distinction between brain and mind, the latter being understood as self-awareness, or even as the soul of each person.

On the other hand, the monist or “incompatibilist” thesis—supported by many neuroscientists—denies that human beings possess a dual material and immaterial nature. Monism holds that the human being is composed solely of organic matter; that is, the soul is a creation of the human brain. Searle, J. (2000), The Mystery of Consciousness, Paidós, Barcelona, p. 17–30.

32 Article 9 GDPR (General Data Protection Regulation) EU.

33 Article 5.1 c) GDPR (EU).

34 Article 5.2 and 24 GDPR (EU).

35 Chile is a global pioneer in granting legal protection to the so-called neurorights. In 2021, the Chilean Senate approved a constitutional reform to guarantee a new set of rights aimed at protecting the human brain. The reform amends Article 19 of the Chilean Constitution, establishing that scientific and technological development must serve people and respect their physical and psychological integrity. https://courier.unesco.org/es/articles/chile-pionero-en-la-proteccion-de-los-neuroderechos

36 Sun Xiao-Yu and Ye Bin advocate for a specific governance model for each type of BCI, distinguishing between write-in BCIs and read-out BCIs. In the former, the legal interests at risk are autonomy and personal identity, while in the latter, it is privacy that is primarily at stake. https://www.nature.com/articles/s41599-023-02419-x

37 Homero, La Odisea, p. 268-269. https://bibliotecadigital.ilce.edu.mx/Colecciones/ObrasClasicas/_docs/Odisea.pdf

38 https://www.bioeticayderecho.ub.edu/archivos/norm/InformeBelmont.pdf

39 The international ethical and deontological framework on “good practices in medical research” is defined, on the one hand, by the Nuremberg Code, adopted on 20th August 1947, and, on the other hand, by the Belmont Report, adopted on 18th April 1979.

40 According to Maartje Schermer, the digital or technological environment in which we currently develop our personality has not only changed our mind but has also modified our very physical dimension. Schermer argues that, with technological development, it is becoming increasingly difficult to conceptually distinguish between “human” and “machine,” as cyborgs are becoming more common, blurring the boundary between the two. Is a person connected to a BCI a human being or a machine? The author maintains that the answer lies in the notion of personality. If there is a personality capable of making decisions and assuming responsibility, then we are dealing with a “person” in the full moral and legal sense of the term. Schermer, M. (2009), “The Mind and the Machine. On the Conceptual and Moral Implications of Brain-Machine Interaction,” NanoEthics, 3, pp. 221–222.

41 In some countries, such as Spain, France, Italy, or Mexico, it is possible to initiate legal proceedings to declare a person as “prodigal” when they squander or irrationally waste their assets—such as in cases of gambling addiction—thus endangering the financial stability of their own family. These declarations are usually resolved by assigning a temporary guardianship (or legal guardianship) to oversee and protect the person’s financial decisions.

42 The EU adopted Commission Recommendation 2023/681 of 8 December 2022 on the procedural rights of suspected or accused persons subject to pre-trial detention and on the material conditions of detention, which calls on Member States to invest in the social reintegration of prisoners and to adapt to their individual needs.

43 In January 2025, the United Nations Special Rapporteur, Ana Brian Nougrères, published a report on the right to privacy in relation to the regulation of neurotechnologies and the use of neurodata. This report highlights two key aspects that must be considered when regulating read-out BCI systems.

First, it stresses that neurodata are biometric data that allow for personal identification with no margin of error, as they are comparable to a fingerprint.

Second, neurodata provide such deep insight into the individual that they enable predictions to be made about the characteristics or predispositions of the subject from whom the neurodata originate.

44 Philip Pettit is the contemporary reference for republicanism, and in his work of the same name he defines freedom as non-domination, distinguishing it from freedom as non-interference, which corresponds to the negative concept of liberal freedom.

Freedom as non-domination is the freedom of the city; it is the kind of freedom that one enjoys in the presence of others, and by virtue of a social design—understood as republican—none of them exercises domination over the individual. Pettit, P. (1999), Republicanism: A Theory of Freedom and Government, Paidós Ed., Barcelona, pp. 95–96.