MIGUEL ÁNGEL ELIZALDE CARRANZA1
Abstract: This paper examines the potential application of the European Union’s Artificial Intelligence Act (Regulation 2024/1689) to neurotechnologies (NTs), which increasingly rely on artificial intelligence (AI) for functions such as neuroimaging, brain-computer interfaces, and neurostimulation. In the light of growing ethical and human rights concerns, including the potential risks of inferring private thoughts, manipulating behavior, and undermining individual autonomy, the study evaluates how the AI Act’s provisions on prohibited and high-risk AI systems apply to AI-assisted NTs. It analyzes four categories of prohibited practices assessing their potential application to NTs: subliminal, manipulative or deceptive techniques; criminal offence risk assessment and prediction; emotion recognition in workplaces or education; and biocategorisation based on sensitive characteristics. It also considers high-risk classifications under Annexes I and III of the AI Act, with particular attention to medical device regulations and profiling activities. The findings suggest that while the AI Act does not directly apply to NTs, its regulation of AI systems that enable or enhance NT functions exerts a substantial regulatory impact. This framework may mitigate many NT-related risks, indicating that calls for creating new “neurorights” may be unnecessary.
Keywords: Neurotechnology, Artificial Intelligence, AI systems, Prohibited AI Systems, High-risk AI systems, Neurorights.
This paper explores the potential application of the European Union’s (EU) Regulation 2024/1689 on Artificial Intelligence (AI Act) to neurotechnologies (NTs), which are increasingly supported by Artificial Intelligence (AI) (Zhou et al., 2025; Onciul et al., 2025). This is important given the growing international concern about the governance of NTs in the light of their significant ethical and human rights implications (United Nations Human Rights Council [UNHRC], 2024). The main source of concern is that these technologies could be misused to infer private thoughts or to control individuals’ behavior, thereby depriving them of their agency (Hain et al., 2023). In the words of the Advisory Committee of the UNHRC (2024, para. 5):
“Neurotechnologies are unique and socially disruptive because they generally: (a) enable the exposition of cognitive processes; (b) enable the direct alteration of a person’s mental processes and thoughts; (c) bypass the individual’s conscious control or awareness; (d) enable non-consensual external access to thoughts, emotions and mental states”.
Although most of these technologies are “still in their infancy” (IEEE Brain, n.d.), concerns are grounded in scientific developments showing the potential of NTs (Bhidayasiri, 2024). AI is facilitating and accelerating the development of these technologies. As UNESCO’s assistant director-general for social and human sciences, Gabriela Ramos, observes:
“We are on a path to a world in which algorithms will enable us to decode people's mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions” (AFP, 2023).
In addition to the United Nations (UNHRC, 2024) and UNESCO (2025), other international organizations have also been conducting studies to develop normative responses to these risks, such as the Organization for Economic Cooperation and Development (OECD, 2025); the Organization of American States (Inter-American Juridical Committee, 2023); the Council of Europe (CoE & OECD, 2021); and the European Union (European Parliament Research Service [EPRS], 2024). This study aims to contribute to these efforts by analyzing the extent to which the new AI Act provides legal coverage for NT-related concerns.
Moreover, this study also aims to contribute to the debate on whether there is a need to adopt new forms of protection to address NTs’ governance challenges. In this regard, a group of scientists and scholars consider the existing international legal framework insufficient to address NT-related concerns and have called for the adoption of new human rights to protect the human mind, the so-called “neurorights” (Ienca & Andorno, 2017; Yuste et al., 2017). This debate, however, seems to be coming to an end (Bublitz, 2024a). Some of the most authoritative voices have already declared that there is no evident need for such rights (UNHRC, 2024; EPRS, 2024; CoE & OECD, 2021), and even one of their most prominent advocates, the Neurorights Foundation, has stepped back, removing from its website the mission statement that called for the creation of new human rights (Neurorights Foundation, n.d.). The findings of this study could give additional support to the idea that “neurorights” are not needed.
This study does not aim to provide a comprehensive legal analysis of all the provisions of the AI Act potentially applicable to AI/NTs. That is neither feasible, given space limitations, nor necessary to achieve the intended purposes described above. Instead, this study focuses primarily on the prohibited and high-risk AI systems under the AI Act with the greatest potential to apply to the main sources of ethical and human rights concern related to AI/NTs. It should also be clarified from the outset that, unless otherwise indicated, references to articles in the text should be understood as referring to the AI Act.
The implementation schedule of the AI Act underscores the timeliness of this study. While the Act’s general application is scheduled to begin on 2 August 2026, the rules on prohibited AI systems began applying on 2 February 2025. Some of the rules on high-risk systems will take effect alongside the general application of the AI Act in 2026, whereas others are deferred until 2 August 2027.
This study is structured as follows: after the introduction, Section 2 provides the definition of an AI system under the AI Act, as well as an analysis of whether AI/NTs satisfy the requirements of this definition. Section 3 describes the ethical and human rights concerns related to AI/NTs in the light of the AI Act’s purpose, which is based on a human-centric approach. This section also includes a general overview of non-therapeutic applications of AI/NTs. Section 4 analyses certain prohibited AI systems under the AI Act and their potential coverage of AI/NTs. The analysis is limited to prohibited subliminal, manipulative, or deceptive techniques; individual criminal offence risk assessment and prediction; emotion recognition in the workplace or in educational institutions; and biocategorisation based on sensitive characteristics. Section 5 examines whether AI/NTs are covered by practices considered high-risk under the AI Act. Finally, Section 6 contains the conclusions of the study.
There is no generally accepted definition of AI (Julià-Pijoan, 2024; Press, 2017). In fact, there is no single approach to defining AI. Some scholars focus on AI as a field of study, others on its autonomy, some compare computational systems with human intelligence, and still others focus on the technologies or applications of AI (Buiten, 2019). Unsurprisingly, agreeing on a definition of AI systems to determine the scope of application of the AI Act was one of the most controversial tasks for the drafters (European Law Institute, 2024).
The AI Act’s definition is based on a series of distinguishing characteristics of AI systems. First, these systems must be machine-based, meaning that they must be operated by a machine (Recital 12). The term “machine” includes both hardware and software and emphasizes that AI systems must be computationally driven and based on machine operations.
Another characteristic is that AI systems must be designed to operate with autonomy. This refers to their ability to function with some degree of independence from human intervention (Recital 12; OECD, 2024), which can range from a fully human-controlled system to complete independence (ISO/IEC 22989:2022). This requirement is satisfied even by a minimum capacity to function without human involvement.
In addition, AI systems may exhibit “adaptiveness” after deployment. This concept refers to self-learning capabilities, which allow the system to change while in use (Recital 12). For example, with the same input, an AI system could produce different outcomes depending on its stage of evolution. In any case, the use of the conditional “may” in the definition indicates that a lack of adaptive capabilities does not disqualify a system from being considered an AI system.
A key characteristic of AI systems in the AI Act’s definition is their capability to make inferences from the input they receive. This capability refers, on the one hand, to the process of generating outputs, such as predictions, content, recommendations, or decisions, that can influence physical and virtual environments; and, on the other hand, to their ability to derive models or algorithms, or both, from inputs or data. The AI Act mentions two techniques with the capacity to make inferences: machine learning (ML) approaches and logic- and knowledge-based approaches. ML approaches learn from data how to achieve specific objectives, whereas logic- and knowledge-based approaches do not learn from data but instead make inferences from human-encoded knowledge or symbolic representation such as facts, rules, and relationships (Recital 12). The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning, or modelling. Therefore, the AI Act does not apply to systems that automatically perform operations based solely on rules defined by humans, without incorporating learning, reasoning or modelling (Recital 12).
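To make the boundary drawn by Recital 12 concrete, the following minimal Python sketch (a hypothetical illustration, not part of the AI Act or its guidance) contrasts a purely human-defined rule, which on its own would fall outside the definition, with a classifier that derives its decision model from data and therefore exhibits the inference capability described above. All names, thresholds, and data are illustrative assumptions.

```python
# Illustrative contrast (hypothetical example): a fixed, human-coded rule vs. a
# system that derives its decision model from data. Under Recital 12, only the
# latter kind of "inference" brings a system within the AI Act's definition.
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1) Purely rule-based operation: the threshold is fixed by a human and nothing
#    is learned from data -- this alone would not make the software an AI system.
def rule_based_alert(signal_amplitude: float, threshold: float = 0.8) -> bool:
    return signal_amplitude > threshold

# 2) Machine-learning operation: the decision boundary is inferred from examples,
#    so the system derives a model from input data (synthetic data for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 samples, 3 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # labels the model must learn
model = LogisticRegression().fit(X, y)

print(rule_based_alert(0.9))   # output of the human-defined rule
print(model.predict(X[:5]))    # output of the data-derived (inferred) model
```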
Moreover, the AI Act’s definition clarifies that AI systems may be used either on a stand-alone basis or as a component of other products. In the latter case, direct integration of the AI system into the product is not necessary; it is sufficient that it forms part of the product’s functionality. This broad understanding of an AI system is particularly relevant for the potential application of the AI Act to NTs.
To avoid confusion, it must be observed that the AI Act also contains references and regulations for AI models (Recital 97), which are components of AI systems that perform specific tasks (Boquien, 2024).
NTs refer to instruments, devices, or procedures to access, monitor, investigate, assess, manipulate, or simulate the anatomy of the brain and its neural and synaptic processes (OECD, 2025), as well as the peripheral nervous system (Imec, n.d.). There are various types of NTs, and different ways to classify them. One group consists of neuroimaging technologies that detect, monitor, and measure brain signals, generating visual representations of the structure (static vision) and functioning (dynamic vision) of the nervous system. Neuroimaging NTs are widely used for research and the detection of neurological dysfunctions or injuries. These technologies are not new; the electroencephalogram (EEG), which detects the brain’s electrical currents, is more than one hundred years old. Magnetic Resonance Imaging (MRI) is one of the most common NTs used to obtain structural representations of the nervous system, combining magnetic fields and radio frequencies to image brain tissue. Computed Tomography (CT), based on X-rays, is also a structural neuroimaging NT. For their part, NTs with the capacity to generate images of brain functions include, for example, Magnetoencephalography (MEG), Positron Emission Tomography (PET), and Functional MRI (fMRI) (Andorno, 2023).
Another group of NTs concerns technologies with neuromodulation or neurostimulation capacities, meaning that they allow external intervention in the central and peripheral nervous systems, inhibiting or activating neural processes by delivering electrical impulses or other agents directly into specific areas of the brain (International Neuromodulation Society [INS], 2023). These NTs have various applications, such as the treatment of chronic pain; movement disorders, including Parkinson’s disease; epilepsy; and psychiatric conditions like depression and obsessive-compulsive disorder (INS, 2021). In general, NTs that operate in direct contact with the brain, under the scalp, and that often require surgery to be implanted are known as invasive NTs. Non-invasive NTs operate from outside the cranium. Deep Brain Stimulation (DBS) exemplifies invasive techniques, whereas non-invasive methods include Transcranial Magnetic Stimulation (TMS), Transcranial Electric Stimulation (tES), Transcranial Direct Current Stimulation (tDCS), and Transcranial Ultrasound Stimulation (TUS) (Alfihed et al., 2024).
Brain-Computer Interfaces (BCIs) form another group, consisting of computer-based systems that detect and analyze brain signals, which are then translated into commands sent to other devices to generate a specific output (Shih et al., 2012; Alimardani & Hiraki, 2020). BCIs usually detect brain signals through other NTs, such as EEG, MEG, and fMRI. BCIs can be bidirectional, allowing communication to be sent to and received from the brain. These systems are rapidly evolving, allowing real-time interactions between the brain, which can receive continuous feedback, and computers, which can adapt to changing environments, creating closed-loop systems (Belkacem et al., 2023).
AI, for its part, draws on computational sciences and engineering to develop machines that simulate human cognitive functions, such as learning, problem-solving, decision-making, linguistic interaction, or autonomy (UNESCO, n.d.). AI has a close connection with neuroscience, which has inspired the design of many AI systems, for example, ML systems, including deep learning (DL) (Hassabis et al., 2017; Surianarayanan et al., 2023). More important for this study, AI systems, coupled with increased computational power, have been instrumental in the recent exponential growth of neuroscience research capacity.
The brain is an extremely complex organ, composed of roughly one hundred billion neurons, with countless possibilities for synaptic connections. Every action, every thought, all feelings and impulses, conscious or unconscious, are triggered by neural interconnections, which differ in each case. Therefore, understanding brain dynamics requires processing enormous amounts of data. Until relatively recently, there was limited capacity to study these complex neural interactions. AI now enables the processing of brain data obtained with NTs (AI/NTs), identifying patterns in neural processes and extracting inferences from them (Onciul et al., 2025; Haslacher et al., 2024). In BCIs, for example, once brain data is collected through NTs, AI may be used to reduce signal noise, to process and classify the resulting data, and to generate commands for devices such as robotic arms, prostheses, or computers that translate brain data into typed language. AI is also used to conduct simulations of brain activity to understand cognitive functions and the neural triggers of behavioral patterns (Schalk et al., 2024). In addition, AI is increasingly converging with neuroimaging NTs: brain image data analysis is facilitated, and in some cases made possible, by AI. Among other applications, AI speeds up data acquisition, improves the signal-to-noise ratio, reconstructs images, and records neural activity. AI image processing is also used to assist with dose calculations (Surianarayanan et al., 2023). In neurostimulation, ML can process neural signals to determine when to trigger stimulation and to perform automatic, patient-specific adjustments to its intensity, frequency, and patterns (Chandrabhatla et al., 2023; Borda et al., 2023).
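A simplified sketch can illustrate the AI-assisted BCI pipeline just described, from noise reduction to classification and command generation. The filter band, feature choice, classifier, and command labels below are illustrative assumptions for a hypothetical motor-imagery device, not the specification of any real system.

```python
# A minimal, hypothetical sketch of an AI-assisted BCI pipeline: noise reduction,
# feature extraction, classification of the decoded intention, and translation
# into a device command. Signals, band limits and labels are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 250  # sampling rate in Hz (assumed)

def bandpass(eeg: np.ndarray, low: float = 8.0, high: float = 30.0) -> np.ndarray:
    """Reduce signal noise by keeping only the 8-30 Hz band (motor-imagery range)."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def features(eeg: np.ndarray) -> np.ndarray:
    """Very simple feature: log band power per channel."""
    return np.log(np.mean(bandpass(eeg) ** 2, axis=-1))

# Synthetic training data: 100 trials, 8 channels, 2 s of signal each.
rng = np.random.default_rng(1)
trials = rng.normal(size=(100, 8, 2 * FS))
labels = rng.integers(0, 2, size=100)          # 0 = "rest", 1 = "move cursor"
clf = SVC().fit(np.array([features(t) for t in trials]), labels)

def to_command(trial: np.ndarray) -> str:
    """Translate the classified brain signal into a device command."""
    return "MOVE_CURSOR" if clf.predict(features(trial)[None, :])[0] == 1 else "IDLE"

print(to_command(trials[0]))
```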
In this way, AI is becoming inexorably linked to NTs, to the point that some scholars even consider AI an NT itself (Hain et al., 2023). More importantly, AI used to assist NTs may be considered an AI system under the AI Act, as such systems are machine-based and commonly have the capacity to make inferences. It is worth recalling that ML, which is one of the most common types of AI used by NTs (Onciul et al., 2025; Zhou et al., 2025; Surianarayanan et al., 2023), is mentioned as an example of an AI system in the AI Act (Recital 12). Furthermore, whether AI is considered an NT itself or merely an independent system that forms part of an NT’s functionality, in either case it qualifies as an AI system under the AI Act and therefore falls within the scope of application of this regulation. To be clear, even if the AI Act does not directly regulate NTs, it generally applies to the AI systems that assist them (Bublitz et al., 2024). It could be said that, in this indirect way, the AI Act is likely to have a normative impact on AI/NT outputs, which is the most relevant aspect for NT governance purposes, as elaborated in the next section.
Given AI’s increasing importance for the global economy, and the fact that the EU is falling behind the US and China in this sector, the EU needs to promote the development and use of AI systems. At the same time, the EU is concerned with the ethical and human rights risks posed by AI (European Commission [EC], 2020). Therefore, the purpose of the AI Act is, on the one hand, to establish a uniform legal framework that promotes human-centric and trustworthy AI systems and, on the other, to ensure that AI systems placed on the EU market do not pose risks to health, safety, or the EU’s values as enshrined in the Charter of Fundamental Rights of the EU (CFREU) (Art. 1).
The reference to a human-centric AI system was developed in the Ethics Guidelines for Trustworthy AI, published in 2019 by the Independent High-Level Expert Group on AI (AI HLEG). These guidelines lack binding legal force, but their principles are embedded in the AI Act (Bird & Bird, 2025). The AI HLEG described human-centric AI as an approach in which “human values” are the gravitational center of the development, deployment, use and monitoring of AI systems. These values, enshrined in the CFREU and other human rights instruments, are anchored in respect for human dignity (AI HLEG, 2019).
In international human rights instruments, human dignity is traditionally the foundation of all other specific rights. This approach is adopted in the International Covenant on Civil and Political Rights, which recognizes in its preamble that the rights it protects “derive from the inherent dignity of the human person”. Another possibility, although not as common, is to include human dignity as an individual right. This is the case of the Basic Law of the Federal Republic of Germany, which includes human dignity as an inviolable right (Art. 1). The CFREU, which is the instrument referred to by the AI Act as enshrining the values of the Union, combines the two models. The commentary to Article 1 in the Explanations relating to the Charter of Fundamental Rights observes that, in addition to being a fundamental right, human dignity is part of the substance of all the rights contained in the Charter (EU, 2007). As a standalone right, the scope of application of human dignity has not been clearly differentiated from that of other fundamental rights in the Charter. The jurisprudence of the German Constitutional Court could shed light on possible interpretative approaches. Human dignity, considered a fundamental right, has been interpreted by the German Constitutional Court as embodying protection against any act that deprives human beings of their inherent value as persons and reduces them to the status of objects (EU Network of Independent Experts on Fundamental Rights, 2006). In the specific context of AI systems, respect for human dignity requires that their development and application “serve and protect humans’ physical and mental integrity, personal and cultural sense of identity, and satisfaction of their essential needs” (AI HLEG, 2019, p. 10).
In addition to human dignity, the Ethics Guidelines for Trustworthy AI list other fundamental rights and values that support the development of trustworthy, human-centric AI systems. These include freedom of the individual, which requires protecting the ability to make decisions independently, free from external interference; respect for democracy, justice and the rule of law; equality, non-discrimination, and solidarity; and citizens’ rights (2019). Rooted in fundamental rights, the AI HLEG identifies complementary ethical imperatives for trustworthy AI: respect for human autonomy, meaning that AI systems should uphold individuals’ self-determination, avoiding subordination, manipulation, deceit, and similar threats; prevention of harm, which involves respecting human dignity and protecting individuals’ physical and mental integrity; fairness, requiring an equal distribution of the costs and benefits of AI and ensuring non-discriminatory access to technology, education, and services; and explicability, which refers to the transparency of AI systems in terms of both their capabilities and intended purposes (AI HLEG, 2019).
Ethics and human rights therefore play an important role in the implementation of the AI Act. Consequently, it is essential to assess whether developments in AI and NTs may contradict any of these fundamental rights or ethical imperatives.
There is no doubt that NTs represent a highly positive development, particularly in their potential to treat significant neurological diseases. However, there are potential ethical and human rights implications arising from the misuse or negligent application of NTs, including: 1) the potential decoding of neural activity to extract private, sensitive information such as memories, opinions, and emotions; and 2) methods of stimulating the brain to influence its processes, including the resulting behavior (UNHRC, 2024). Some scholars refer to the “reading” of the brain, for potential decoding, and the “writing” of the brain for the use of NTs to feed information directly into it (Kunz et al., 2025; Tang et al., 2023; Roelfsema et al., 2018). Other ethical concerns, which will not be addressed here, include potential unequal access to neurological solutions for populations with neurological conditions and a possible race for neurotechnological enhancement in healthy populations (Yuste, 2017).
At their current stage of development, AI/NTs do not provide unrestricted access to the subjective content of the human mind, such as thoughts, perception, or decision-making. Moreover, studies in this field are often conducted with limited samples or have not been replicated (Julià-Pijoan, 2020). Still, significant progress is being made in recording and decoding neural activity and in identifying some correlations between mental states and brain processes, commonly with ML methods (Tortora et al., 2020). In this way, fragmented or selective information about mental states is inferred. For instance, fMRI has been used to infer words that a person was imagining (Blitz, 2017) and to reconstruct visual images from brain activity (Miyawaki et al., 2008). Neuroscientists have also tried to reconstruct illusions and dreams (Kamitani & Tong, 2005). Brain signal decoding can also infer landmark places, moods, or objects a person has seen (Rainey et al., 2020). Some BCIs employ invasive techniques to decode the intention to make a movement (Roelfsema et al., 2018). Convolutional Neural Networks, a subset of ML, can process brain signals captured from EEG and identify emotions, at least to a limited extent (Zhou et al., 2025). Moreover, AI/BCIs can convert brain signals into computer-generated text or sound, with reported accuracy rates as high as 97% (UC Davis Health, 2024). This can be done in real time, and the most advanced systems may decode inner speech from a 125,000-word vocabulary read directly from neural activity (Kunz et al., 2025). This suggests an emerging capacity to access the semantic content of the mind. As Roelfsema et al. observe, “[a]dvances in recording and decoding of neural activity may allow future researchers to read the human mind and reveal detailed percepts, thoughts, intentions, preferences, and emotions” (2018, p. 600). These developments are major scientific achievements and represent immense hope for people with disabilities, for example, those affected by locked-in syndrome and unable to speak. However, they also raise potential privacy risks (Bublitz, 2024a). For instance, if a BCI is implanted to help a person who cannot speak, and adequate privacy safeguards are not adopted, inner speech that would normally be kept private could be decoded (Kunz et al., 2025).
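As an illustration of the convolutional approaches mentioned above for identifying emotions from EEG, the following sketch defines a minimal, hypothetical network; the architecture, channel count, window length, and class labels are assumptions for demonstration and are far simpler than research-grade decoders.

```python
# A minimal, hypothetical sketch of a convolutional network for classifying
# emotional states from multi-channel EEG. Architecture, channel count and the
# three example classes are illustrative assumptions only.
import torch
import torch.nn as nn

class EEGEmotionCNN(nn.Module):
    def __init__(self, n_channels: int = 32, n_classes: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filters
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)  # e.g. "calm", "stressed", "excited"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(x).squeeze(-1))

# One synthetic batch: 4 windows of 32-channel EEG, 512 samples each.
model = EEGEmotionCNN()
logits = model(torch.randn(4, 32, 512))
print(logits.argmax(dim=1))  # predicted emotion class per window
```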
Neuromodulation also raises ethical and legal concerns (Farina & Lavazza, 2002). Through invasive and non-invasive NTs, it is increasingly possible to intervene in neuronal circuits with effects on brain processes and behaviors, including “psychiatric symptoms (e.g. anorexia, psychosis), neurological rehabilitation, motor control, visual perception, self-regulation, and social cognition” (Lucchiari et al., 2019, p. 1). Some experiments with animals show the potential risks of neuromodulation. By employing non-invasive ultrasonic waves, scientists have been able to control the choice behavior of macaque monkeys. By changing the polarity of the applied ultrasound waves, scientists could influence macaque decisions between two possible choices, effectively directing them to pick one target or the other. These scientists consider that “[t]here are, therefore, tantalizing opportunities to apply ultrasonic neuromodulation to noninvasively modulate choice behavior in humans” (Kubanek et al., 2020, p. 6). Another invasive BCI experiment involved recording the brain signals of mice while they were eating and then activating the neuronal circuits previously recorded, leading the mice to eat even when they were not hungry (Yuste et al., 2017). Other experiments have succeeded in inserting false images into mice’s brains which they could not differentiate from reality (Yuste, 2022). The mice were trained to drink whenever a specific visual stimulus appeared in real life. Using neuromodulation techniques, researchers artificially activated the specific brain patterns that were normally triggered when the mice saw that stimulus. As a result, the mice drank as if they were seeing the image. In the words of the lead scientist, “[w]e manipulated it like a puppet,” adding that “[w]hat we can do today in mice could be done tomorrow in humans” (Ansede, 2025, np). Although neuromodulation could help in treating health problems, such as addictive behaviors (Kubanek et al., 2020), it has the potential to grant third parties control over domains previously reserved for each individual. If these technologies reach this level, there could be major implications for fundamental rights (UNHRC, 2024).
Moreover, direct-to-consumer AI/NT devices and services are being developed or are already in the market for non-therapeutic purposes targeting healthy individuals. Here, the ethical and human rights concerns are more significant than in the public health sector, even if only because the number of potential users is much larger than those requiring treatment for neurological conditions (Nuffield Council on Bioethics, 2013). Though the number of devices of this kind already on the market is limited, it is likely to experience growth in the short to medium term, boosted by significant public and private investment (Grillner et al., 2016).
Self-health is one sector that is taking advantage of NT development to offer direct-to-consumer products such as watches, headbands, earbuds, and glasses, which allow real-time measurement and monitoring of physiological and neural processes and offer feedback to the user. Some advanced devices are affordable, wearable (Hain et al., 2023), wireless EEG-based BCIs that can store data recordings and transmit data via Bluetooth to mobile phones. Some devices employ functional Near-Infrared Spectroscopy (fNIRS), either alone or in combination with EEG, and can also be used for neuromodulation, inter alia, through tDCS (Cannard et al., 2020). For example, the Muse headband, a bidirectional BCI by Interaxon, uses an AI-driven Foundational Brain Model that transforms EEG and fNIRS data into actionable insights to improve mental fitness (enhance mental performance, optimize sleep, improve focus, and find calm) (Muse, 2025; Voll, 2025).
AI/NTs are being applied to intervene in brain processes to enhance human cognitive capacities. “Cognition” refers to the mental processes responsible for the organization of information, from “acquiring information (perception), selecting (attention), representing (understanding) and retaining (memory)” (Bostrom & Sandberg, 2009, p. 312). For Bostrom & Sandberg (2009), the term “enhancement” indicates, on the one hand, the amplification or extension of these capacities and, on the other, that the improvement targets healthy populations. Neuralink is a company working on the development of BCIs that includes among its objectives “to unlock human potential” (Neuralink, n.d.). Recently, the company received authorization to begin human trials of a wireless BCI, which is surgically implanted in the brain. The first human with this implant was able to move a computer mouse with his mind (Duffy, 2024). NextMind has also developed a BCI that utilizes machine learning to convert neural activity into direct digital commands, enabling the use of electronic devices (EU-Startups, n.d.).
Monitoring the performance of professionals and operators in workplace environments is another area in which AI/NTs are being applied (Farahany, 2023). Thousands of workers are already being monitored using AI/NT systems (Beltran, 2023). Mental fatigue and lack of vigilance, for example, represent a serious risk in aviation, transportation, mining, and industrial activities (Alimardani & Hiraki, 2020). Some devices have adaptive capabilities, meaning that they can adjust performance based on the input received and selectively trigger programmed response actions (Krol & Zander, 2017). For example, if a lack of attention is detected in the brain signals of a commercial driver, an algorithm can turn on the in-car stereo and even select the most appropriate type of music (Liu et al., 2013). Vibre, a BCI corporation, advertises that its neuroframe obtains “objective information about the mental state of at-risk workers, preventing dangerous situations, errors, and potentially fatal accidents” (Vibre, n.d.). In addition to working environments, NT-based performance monitoring has applications in the educational sector, where the ability to concentrate is essential for learning (Ko et al., 2017). BrainCo, Inc., a BCI enterprise, offers an EEG-based headband that measures students’ concentration and can inform educators about their attention levels (Johnson, 2017).
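The adaptive monitoring loop described above can be sketched schematically as follows; the attention-scoring function, threshold, and triggered response are purely hypothetical stand-ins for the proprietary models used by the devices mentioned.

```python
# A hedged sketch of the closed-loop monitoring idea described above: an assumed
# "attention score" decoded from brain signals triggers a pre-programmed response
# when it drops below a threshold. The scoring function and actions are hypothetical.
import random
import time

ATTENTION_THRESHOLD = 0.4  # assumed cut-off for "lack of attention"

def decode_attention_score() -> float:
    """Stand-in for an AI model decoding attention from EEG; returns a value in [0, 1]."""
    return random.random()

def trigger_response(score: float) -> None:
    """Pre-programmed adaptive response, e.g. switching on the in-car stereo."""
    if score < ATTENTION_THRESHOLD:
        print(f"Low attention ({score:.2f}): turning on stereo, selecting alert playlist")
    else:
        print(f"Attention OK ({score:.2f})")

for _ in range(3):          # monitoring loop (three cycles for illustration)
    trigger_response(decode_attention_score())
    time.sleep(0.1)
```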
AI/NTs are also being used to develop brain-controlled video games (Cockrell School of Engineering, 2024; Bardhan, 2023). BCIs combine NTs with AI, such as EEG with deep learning neural networks, to read brain activity, decode a player’s intention to perform an action, and send the instruction to the game (More et al., 2023). Some wearable EEG devices available on the market, such as Nautili from GTec and EPOC X from Emotiv, have been used in conjunction with AI models to create video games that respond directly to decoded brain signals (More et al., 2023; Pelley, 2024). Although these technologies could be beneficial for people with severe motor disabilities, they could also pose a risk to privacy, particularly, although not exclusively, for young and vulnerable sectors of the population.
Furthermore, in some countries significant funding is being channeled into the development of AI/NT military applications (Kosal & Putney, 2023). Although an important part of this effort is directed towards the development of therapeutic solutions, AI/NTs are also being used to increase the defense and warfighting capabilities of military personnel (Munyon, 2018). Suppression of emotions such as fear, enhanced attentiveness for faster detection of threats and targets, and implantable BCIs that allow hands-off control of weapons (e.g. jet fighters) are examples of non-therapeutic AI/NT military applications (Chamberlain III, 2023). The complete fusion of AI and BCIs is the goal of some projects, which aim to enable bidirectional communication between soldiers and AI-driven computers for shared operational control (Moreno et al., 2022). One example of BCI soldier enhancement at an advanced stage of development is the so-called Human Assisted Neural Device, which allows a soldier to control a robot remotely using brain signals (Evans, 2012). Although it could be argued that these enhancements are needed to increase survival options on the battlefield, they raise important ethical concerns (Moreno et al., 2022), including the extent to which soldiers are duty-bound to grant consent to invasive ‘super soldier’ enhancements that may deprive them of their agency (Caron, 2018).
NTs have also been used as a means to provide evidence before courts (Julià-Pijoan, 2020; Hafner, 2019; Farisco & Pertini, 2014; De Kogel & Westgeest, 2015; Catley & Claydon, 2015). The potential uses of NTs in courtrooms include brain-based lie detectors; identification of knowledge that only a person who was at a crime scene would know (guilty knowledge test); inferring whether an individual is familiar with a place or an object that could be connected to a crime (concealed information); determining whether an individual is mentally competent to stand trial; uncovering a subject’s preferences or character traits, such as anti-social behaviors or risk-related sexual preferences; and predicting recidivism or future arrest (Ligthart et al., 2021). Neuroimaging technologies, such as EEG and fMRI, are the most widespread NTs in this field (Blitz, 2017). For example, the private firms No Lie MRI and Cephos, both employing fMRI, claim to be able to detect when someone is lying with 88% accuracy (Pearson, 2006). Also, BCIs based on deep learning are currently being used as lie detectors (Khailil et al., 2023) and for risk assessment to predict violent behavior (Tortora et al., 2020). Given the potential of AI/NTs both to predict recidivism (Kiehl et al., 2018; Delfin et al., 2019) and to modify behavior through brain stimulation, there is a risk that they will be ordered by courts as a preventive or rehabilitation measure (Douglas, 2014).
Neuromarketing is also attracting the attention of investors and the private sector. Lucid Systems, a US company, advertises itself as being able to tell companies what consumers really think about their products, not what they say about them (Abi-Rached, 2008). Some industries are also interested in emotion recognition as a means to improve customer satisfaction (Geetha et al., 2024).
Given the ethical and human rights concerns relating to the potential of AI/NTs, it is relevant to determine which of these risks may be curbed by the AI Act.
The AI Act employs a risk-based approach to classify and regulate AI systems. There are four levels of risk: unacceptable, high-risk, transparency risk, and minimal to no risk. The AI Act prohibits AI systems whose level of risk is deemed unacceptable, although some exceptions exist. Rules on prohibited AI systems began to apply on 2 February 2025 (Art. 113(a)). AI systems considered high-risk are permitted, subject to strict conditions to ensure that they are safe and trustworthy. AI systems posing transparency risks are subject to disclosure obligations, whereas low-risk or de minimis AI systems are not covered by the AI Act (Almada & Petit, 2025; Razquin, 2024).
Certain AI practices are prohibited because they inherently contradict fundamental rights and EU values. Specifically, what is prohibited is the introduction of an AI system on the internal market for the first time (placing on the market), the supply of an AI system for first use by third parties or its development and deployment for one’s own use in the Union (putting into service), and its subsequent use or deployment in any form after the AI system has been placed on the market or put into service (use) (Art. 5(1); EC, 2025). These prohibitions mainly affect providers and deployers involved in placing an AI system on the market or putting it into service, as well as importers, distributors, and product manufacturers located in the Union or whose activities, or output related to AI systems, have an impact on the Union (Art. 2(1)).
As the EU lacks competence in matters of military, defense, and national security, the AI Act does not apply to AI systems used for these purposes. Therefore, all military uses of AI/NTs are excluded. However, if the AI system is used for both exempted and non-exempted purposes, it will still fall under the scope of the AI Act. Moreover, research and development of AI systems prior to their placement on the market or being put into service are not covered by the AI Act (Art. 2(6) and 2(8)). Personal use of AI systems, when it is non-profit and non-professional in nature, is also excluded (Art. 2(10)).
Significantly, the AI Act does not preclude the application of other relevant EU legal instruments, such as those concerning the protection of fundamental rights, personal data, employment and workers’ rights, consumer protection, and product safety (Arts. 2(7) and 2(9); Recital 9).
The first category of prohibited AI systems is described in Article 5(1)(a) of the AI Act. An AI system would fall under this prohibition if the following three conditions are met simultaneously: 1) the system deploys subliminal, purposely manipulative or deceptive techniques; 2) it has the object or effect of materially distorting the behavior of a person or a group, appreciably impairing their ability to make an informed decision, resulting in a decision that the person or group would not have otherwise made; 3) it causes, or is reasonably likely to cause, significant harm.
Subliminal techniques are those that operate beyond consciousness (Art. 5(1)(a); Recital 29). The initial yardstick, therefore, is whether an AI system employs techniques or stimuli, e.g. images, video, or sound, that are beyond conscious awareness. These AI systems are problematic because they may bypass rational control of the information before it is incorporated through neural processes. For example, in traditional modes of receiving information, such as listening or reading, an individual can rationally analyze verbal or visual input and decide whether it is sufficiently convincing to incorporate it as learned knowledge or to discard it if it is not. However, if information is perceived by the central nervous system below the level of conscious awareness, it may still be assimilated by the brain, effectively bypassing the individual’s ability to exercise rational scrutiny or protective judgement (Zohny et al., 2023). Even AI systems that are within reach of conscious perception may be prohibited if they employ purposefully manipulative or deceptive techniques (Art. 5(1)(a)). The expression “purposefully manipulative” refers to AI systems that influence, alter or control a person’s behavior in a manner that negatively affects self-governing capacities; an example would be AI-generated text specifically tailored to manipulate a target individual. For their part, AI “deceptive techniques” are those that present misleading or false information with the intention or the effect of deceiving and altering the behavior of the receiver. In manipulative and deceptive systems, what is relevant is not that a given AI-generated stimulus, e.g. manipulated text or a false image, is perceived by the affected individual, but the fact that he or she is unable to control or resist the AI influence, or can still be deceived, which affects their autonomy to make decisions (EC, 2025). Furthermore, intention by the deployer of the AI system to manipulate or deceive is not a requirement, given that some AI systems can develop such manipulative or deceptive behavior on their own. What matters is the actual manipulative or deceptive effect (Recital 29).
In addition, AI systems must have the objective or the effect of materially distorting the behavior of a person or a group. In this context, material distortion implies a degree of coercion, manipulation or deceit that goes beyond lawful persuasion (EC, 2025). Moreover, that effect ought to result from “appreciably” affecting their capacity to make informed and autonomous decisions (Art. 5(1)(a)). The use of the word “appreciably” in this context points to the need for a causal link between the technique employed and the impact on the individual’s ability to decide freely. Besides, the impact of AI systems on the ability to make informed and free decisions should be substantial, leaving minor influences outside the scope of this provision (EC, 2025).
The AI Act also requires that the impact on the ability to make informed decisions leads the person or group to take a decision that, absent exposure to the technique, they would not have taken. It is not necessary to demonstrate that an individual’s decisions were actually materially altered, as requiring such proof would effectively amount to demanding proof of a negative. The evidentiary threshold is instead met by demonstrating that such systems are capable of affecting informed decision-making, undermining individual autonomy (EC, 2025).
Lastly, the AI-related altered behavior must be of such a nature that it causes, or is reasonably likely to cause, significant harm to a person or a group of people (Art. 5(1)(a)). In other words, there must be a causal link between the altered behavior and the harm caused or likely to be caused, which may be physical, psychological, financial, economic, or societal (Recital 5). Moreover, even though the AI Act aims to provide a high level of protection (Recital 8), falling within this prohibition requires a significant, actual or likely, adverse impact involving one or a combination of the aforementioned types of harm. The determination of the significance of the harm will require a case-by-case assessment, considering factors such as severity, intensity, extension, duration, reversibility, cumulative effects, context, and the vulnerability of the affected individuals. Here, the intention to cause significant harm is not a condition. In addition, the assessment of the likelihood of the occurrence of harm will take into consideration whether it could not have been reasonably foreseen or resulted from external factors beyond the control of the provider or deployer (EC, 2025).
It is important to mention that, in addition to the general exceptions mentioned above, the prohibition of manipulative practices “should not affect lawful practices in the context of medical treatment”. Although legitimate medical treatments, particularly those based on informed consent, are not covered by this prohibition, it remains unclear whether coercive treatments are also excepted (Bublitz et al., 2024). Likewise, other legitimate commercial practices, such as advertising, are not to be considered in themselves manipulative or harmful. However, if there is evidence of harmful manipulation, they will be prohibited (Recital 29).
Applying these conditions to AI/NTs is not a straightforward exercise, as they do not strictly fall into what is usually considered subliminal, manipulative or deceptive. A common interpretation of “subliminal” refers to techniques “perceived by or affecting someone’s mind without being aware of it” (Stevenson, 2015). In other words, subliminal techniques present a stimulus below the threshold of conscious awareness, yet one that the brain can still perceive (Bermúdez et al., 2023). Therefore, perception and conscious awareness are not inexorably linked. There can be perception without conscious awareness (Merikle, 2001). Examples of traditional subliminal techniques include hiding an image in a larger composition of visual stimuli, projecting an image for such a short period of time that viewers do not consciously notice it, or “subaudible” messages (Moore, 1982). In this context, the brain functions as it normally would, with neural firing, synaptic activity, and chemical reactions resulting in feelings, thoughts, or behaviors. The only difference is that the input is perceived without the subject being consciously aware of it. AI/NTs share with traditional subliminal techniques the capacity to operate below the level of conscious awareness. However, they differ in that AI/NTs can influence brain activity without involving perception at all. AI/NTs, in particular brain stimulation techniques, can directly activate or inhibit brain processes at a neuronal level, altering cognitive states or behavior while entirely bypassing perception. Manipulation, on the other hand, often refers to persuading someone, in a clever or unscrupulous way, to think or do something, which could be ethically questionable or for the benefit of the manipulator. The difference between manipulation and deceit is that in the latter persuasion includes concealing or misrepresenting the truth. Importantly, in both manipulation and deceit the goal is to convince or trick the target subject’s internal decision-making processes, resulting in thoughts, choices or behaviors.
In principle, AI/NTs could fall into the category of activities prohibited by Art. 5(1)(a) of the AI Act (Bublitz et al., 2025). Both the AI Act and the EC have expressly recognized this possibility. Recital 29 mentions that BCIs based on machine learning may use subliminal techniques to evade rational control. In addition to BCIs, in its Guidelines on prohibited artificial intelligence practices, the EC mentions NTs in general as entailing “the risk of sophisticated subliminal manipulation” (2025, para. 66). Even if AI/NTs could be considered both subliminal and manipulative, it seems that they are better covered by subliminal techniques understood as those operating below the conscious level, as AI/NTs do not usually employ the persuasion methods that appear to be inherent to manipulation or, for that matter, deceit.
AI/NTs potentially satisfy the requirement of having behavior-altering capabilities, as recognized in Recital 29 and by the EC (2025, para. 66). This prohibition applies to neurostimulation devices with the capacity to alter behavior. Neuroimaging, by contrast, falls outside this prohibition because it merely reads and represents brain signals and does not directly alter behavior (Bublitz et al., 2025).
The EC added some confusion to the “changing behavior” requirement in one of its examples of potentially prohibited subliminal techniques:
“For example, a game can leverage AI-enabled neuro technologies and machine-brain interfaces that permit users to control (parts of) a game with headgear that detects brain activity. AI may be used to train the user’s brain surreptitiously and without their awareness to reveal or infer from the neural data information that can be very intrusive and sensitive (e.g. personal bank information, intimate information, etc.) in a manner that can cause them significant harm” (2025, para. 66; emphasis added).
This example confirms the potential application of Article 5(1)(a)’s prohibition to AI/NTs, such as BCIs. However, it seems to illustrate AI/NTs’ reading capabilities (to reveal or infer from neural data), and not their ‘writing’ capabilities, which are usually connected to the behavioral changes required for the prohibition of subliminal, manipulative or deceptive practices.
AI/NTs can also impair a person’s ability to make an informed decision, as they can bypass rational control and alter behavior (Bublitz et al., 2025; Zohny et al., 2023). For example, some experiments have applied neuromodulation techniques to the posterior medial frontal cortex, which mediates adjustments in adherence to political and religious beliefs; interference with this region could have numerous consequences for human social life and behavior (Holbrook et al., 2016). As mentioned before, merely possessing the capacity to impair an individual’s ability to make an informed decision is sufficient to meet this condition of the AI Act. Moreover, the experiment referred to in previous sections, in which neuroscientists claimed to have turned mice into puppets with NTs that could be used in humans in the future, shows to what extent the ability to make informed decisions could be affected.
For its part, a case-by-case assessment is needed to determine the existence of harm caused by AI/NTs, or the likelihood that they would cause harm. In the context of AI/NT use, the most likely type of harm is psychological, which covers mental health and emotional well-being. However, it could be difficult to assess the required threshold of significance, given that negative psychological impacts might appear only after a period of time (EC, 2025).
Article 5(1)(d) prohibits placing on the market, putting into service, or the use of an AI system for making risk assessments of natural persons to assess or predict the risk of the commission of a criminal offence. It is a requirement that this assessment or prediction is based solely on the profiling of a natural person or on assessing their personality traits and characteristics. The prohibition only applies if all the above-mentioned conditions are cumulatively met. The rationale behind this provision is that absent objective criminal actions, no one should be subject to criminal penalties based on predictions of their future behavior (EC, 2025).
The first specific requirement of this provision concerns assessments or predictions of a criminal offence by a natural person. A prediction means advancing future events on the basis of the information available in the present. Here, a prediction is basically an estimation of how likely it is that a natural person will commit a crime, not necessarily the anticipation of specific details about a particular crime or the date on which the predicted crime will occur (Julià-Pijoan, 2020; Skeem & Monahan, 2011). Given that the prohibition relates to criminal offences, it basically applies to law enforcement authorities. In general, profiling or assessments by private parties are prohibited if carried out on behalf of law enforcement authorities, under their instructions or control. Importantly, even though the AI Act does not apply to national security matters because the EU lacks competence in that field, the organization or administration of justice is not covered by this exception.
Some AI/NTs, in particular neuroimaging techniques such as EEG, fMRI, and sMRI, coupled with ML, have shown potential for forensic purposes (Ligthart et al., 2021; Poldrack et al., 2018; Farah et al., 2014). The interest in neuroimaging to predict criminal behaviors is based on the assumption that biological variables, such as brain structure, composition or processes, provide more reliable information than traditional demographic or psychological variables (Van Dongen et al., 2025). For example, impulsiveness, or a lack of restraint disregarding potential consequences, is one of the main factors studied to predict criminal behaviors which, in turn, may inform criminal justice decisions, such as bail determination, jail sentences, probation, parole and reintegration treatments (Glenn & Raine, 2014). Different areas of the brain are involved in impulse control, such as the basal ganglia and the dorsolateral prefrontal cortex, but the anterior cingulate cortex (ACC) seems to be the dominant one. A model based on measuring ACC activity with fMRI has been tested to predict the probability of rearrest (Aharoni et al., 2013). Age is also a factor that scientists have found to be important in assessing the risk of reincarceration. MRI and ML have been used to develop a brain-based model of age to predict antisocial behavior (Kiehl et al., 2018). Even if these technologies are not yet fully developed, and most studies are based on limited samples or have not been replicated (Julià-Pijoan, 2020), their use as a prediction tool in the context of criminal justice remains the goal (Van Dongen et al., 2025).
The second requirement is that these predictions must be based solely on profiling a natural person or on an assessment of their personality traits or characteristics. Profiling means the automated processing of personal data to analyse or predict aspects concerning that natural person (Art. 4(4) General Data Protection Regulation [GDPR]). In the present case, the purpose is to assess or predict the risk of future criminal behavior. Profiling often includes the assessment of personality traits or characteristics. For its part, the notion “solely” means that the prediction is made exclusively on the basis of an AI-automated profile or, if there is some degree of human participation, that AI is the only meaningful basis of the prediction. To exclude human predictions from this prohibition, they must be based on objective and verifiable evidence (Art. 5(1)(d); EC, 2025).
AI, and ML in particular, has increasingly been used for the automated processing of data obtained through NTs to predict antisocial behavior (Poldrack et al., 2018). Some models combine sociodemographic and clinical data with sMRI and/or fMRI and ML (Gou et al., 2021; Yu et al., 2022). Combined AI/NT structural neuroimaging and traditional methods seem to provide more accurate predictions than risk assessments conducted separately (Van Dongen et al., 2025). One technical challenge with AI/NT prediction models is that most studies tend to produce group estimations which might not reflect the reality of a given individual (Poldrack et al., 2018). This is relevant for law enforcement because, as some scholars observe, ‘[w]hile science attempts to discover the universals hiding among the particulars, trial courts attempt to discover the particulars hiding among the universals’ (Buckhotz & Faigman, 2014, R864, quoting Faigman et al., 2014). It is also relevant for the potential application of this prohibition to AI/NTs, which is limited to predictions about individualized natural persons. However, the so-called ‘group-to-individual’ problem may affect the reliability of the technique or the accuracy of the AI/NT risk estimation, but it does not change the fact that the intention is to create an individual prediction, which is sufficient for the application of Art. 5(1)(d).
Another category of prohibited AI systems potentially applicable to AI/NTs is described in Article 5(1)(f) of the AI Act. An AI system falls under this prohibition if it is placed on the market, put into service, or used to infer the emotions of a natural person in the workplace or in educational institutions. These practices are prohibited due to the risks of discrimination and privacy intrusion, which are exacerbated by occurring in the context of asymmetric relationships. This prohibition does not apply when the system is used for medical or safety reasons, exceptions which are to be interpreted strictly.
The AI Act defines an “emotion recognition system” as an AI system “for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data” (Art. 3(39)). This definition introduces various elements not mentioned in Art. 5(1)(f). First, “identifying” is covered by a broad understanding of inferring. In this context, identifying implies data processing to detect the presence of an emotion previously programmed into the emotion recognition system, while “inferring” stricto sensu implies analysis or other processing of data by the system from which information can be deduced (EC, 2025). Second, in addition to emotions, inferring or identifying intentions is also prohibited. Emotions are not defined; only some examples are mentioned: happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement (Recital 18). This list is not exhaustive, and the term “emotions” should receive a wide interpretation. However, it does not cover mere expressions, gestures or movements, such as smiling or jumping, as these are not emotions in themselves (EC, 2025). Unsurprisingly, the term “intentions” is not defined either, but its ordinary meaning refers to something a person wants and plans to do (Cambridge Dictionary, 2025), which coincides with its definition in the field of psychology as “a prior conscious decision to perform a behaviour” (American Psychological Association, 2018). Therefore, even if thoughts as such are not mentioned, they will be protected from inferences if they are inexorably linked to intentions. Importantly, emotions and intentions should be differentiated from physical states, such as pain or fatigue, because the latter are not covered by Art. 5(1)(f). Although this exclusion was introduced to prevent accidents, allowing the monitoring of professional drivers and pilots (Recital 18), the distinction is not entirely clear as “[a]ll emotions are physical in that they have some bodily basis and also compromise a mental and often experimental side” (Bublitz, 2024b, 445). Third, the information used to infer or identify the emotions or intentions of a natural person should be based on biometric data, as defined in the AI Act. This requirement is added to Art. 5(1)(f) to avoid inconsistency with other provisions of the AI Act, i.e. Annex III(1)(c) and Art. 50(3).
In general, biometrics refers to any form of measurement of the body (Biometrics Institute, n.d.). For the purposes of the AI Act, the notion of “biometric data” is to be understood as “personal data resulting from specific technical processing relating to the physical, physiological or behavioral characteristics of a natural person, such as facial images or dactyloscopic data” (Art. 3(34)). According to Recital 14, this definition “should be interpreted in the light of the notion of biometric data” in Art. 4(14) of the GDPR 2016/679, among others. However, the GDPR definition should not limit that of the AI Act, which does not require, as the GDPR does, that biometric data “allow or confirm the unique identification” of a person (EC, 2025). This requirement was not included because, in the context of the AI Act, the use of biometric data is not centered on the authentication or identification of a natural person, as it is in the GDPR. In the words of the EC, in the AI Act “biometric data is used for emotion recognition, biometric categorization or other purposes” (2025, para. 252).
Biometric data may be obtained from the physical, physiological or behavioral characteristics of a natural person. Physiological data derive from physical, structural attributes of a person’s body which in normal conditions remain unaltered, for example fingerprints, facial anatomy or iris patterns. Even minuscule biological or chemical structures, such as DNA or odor, may serve as biometric data of this category if they can be measured and identified. Behavioral data, for their part, derive from personally distinctive characteristics of movements, gestures and motor skills, such as handwriting style, walking movements, or the rhythm and force of contact with a keyboard (Biometrics Institute, n.d.).
This prohibition only applies if emotion recognition occurs in workplaces or educational institutions. These two spaces are to be interpreted broadly. The workplace is not limited to the physical or virtual locations where hired employees perform their assigned tasks and responsibilities; the prohibition on reading emotions also covers recruitment, training and probationary phases. Similarly, educational institutions cover both public and private entities offering certificates upon course completion, and the protection goes beyond registered students to encompass admission processes (EC, 2025). Medical and safety reasons are accepted exceptions to this prohibition, but given the relevance of the fundamental rights at stake, these justifications are subject to strict interpretation. In particular, emotion reading will be accepted as an exception only where there is an explicit need to address a legitimate medical or safety interest and the measures taken are not disproportionate. If authorized, the data obtained may not be used for other purposes (EC, 2025).
Brain signals that can be measured, identified or obtained through AI/NTs may qualify as biometric data, both in the strict sense of the GDPR and in the broad sense of Art. 3(34) of the AI Act (Rainey et al., 2020a; Ienca & Malgieri, 2022; Jwa & Poldrack, 2022; Klonovs et al., 2013; Aloui et al., 2018). In any case, satisfying the strict definition of biometric data in the GDPR is not a condition for the application of the prohibition of emotion recognition in the AI Act, where the notion is to be interpreted broadly. As mentioned before, in this context, data will be accepted as biometric data if it results from specific technical processing relating to the physical, physiological or behavioral characteristics of a natural person (Art. 3(34)) and it is used for emotion recognition. Processing may entail the collection of raw neural data (such as electrical signals, structural imaging, or dynamic activity), filtering to reduce noise, storage, AI-based decoding of neural data, and drawing inferences from it. Neural data processing may pursue different objectives, such as the diagnosis of neurological disorders, enabling the control of external devices (e.g. a robotic limb), or enabling stimulation of the subject’s brain (neurofeedback), among others (AEPD & EDPS, 2024). Neural data processing will fall within the scope of the prohibition in Article 5(1)(f) of the AI Act when its purpose is emotion recognition. In this respect, it is highly illustrative that, in its commentary on the prohibition of emotion recognition, the EC included EEG among the examples of behavioral biometrics, as “repeated motions and associated rhythmic timings/pressures of body features” (2025, para. 251). Moreover, it also made an express reference to neural data or brain data, in general, in the framework of this prohibition (2025, footnote 158).
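To make the processing chain just described more concrete, the following minimal sketch strings together acquisition, noise filtering, feature extraction and AI-based decoding. All of it rests on assumptions introduced purely for illustration (synthetic single-channel signals, an 8-30 Hz band-pass filter, a generic classifier); it does not describe the pipeline of any actual device.

```python
# Minimal sketch of a neural-data pipeline: acquisition -> filtering -> feature extraction -> inference.
# Synthetic data and a generic classifier are used purely for illustration.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

fs = 256  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)

def bandpass(signal, low, high, fs, order=4):
    """Reduce noise by keeping only the low-high Hz band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

# 1. Collection: 100 synthetic one-second 'EEG' epochs with binary labels.
raw = rng.normal(size=(100, fs))
labels = rng.integers(0, 2, size=100)

# 2. Filtering: restrict each epoch to the 8-30 Hz range (alpha/beta bands).
filtered = np.array([bandpass(epoch, 8, 30, fs) for epoch in raw])

# 3. Feature extraction: mean band power per epoch.
features = (filtered ** 2).mean(axis=1, keepdims=True)

# 4. AI-based decoding: fit a classifier and draw an inference for a new epoch.
clf = LogisticRegression().fit(features, labels)
print(clf.predict(features[:1]))
```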
Various neuroimaging techniques, such as EEG, fMRI and magnetoencephalography (MEG), coupled with deep learning AI systems, are being used for emotion detection. Even if AI/NTs cannot read the mind as if it were an open book, significant progress is being made in the capacity to infer emotions (Bublitz, 2024b; Halkiopoulos et al., 2025). Importantly, given that emotions result from multiple complex factors, ML models are playing a critical role in making it possible to process these multimodal data (Ahmed et al., 2023; Geetha et al., 2024). Moreover, as described before, AI/NTs are being used to monitor the cognitive states and performance of workers (Farahany, 2023; Beltran, 2023) and in educational settings (Ko et al., 2017; Johnson, 2017).
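A rough sketch of the multimodal approach mentioned above is given below. The modalities, feature names and network size are assumptions for illustration only and do not reflect the systems discussed in the cited studies; the point is simply that features from different sources (here, hypothetical EEG band power and facial-expression descriptors) are fused before a learning model infers an emotion label.

```python
# Illustrative multimodal emotion-recognition sketch with synthetic features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

eeg_features = rng.normal(size=(300, 16))     # e.g. band power per channel (synthetic)
facial_features = rng.normal(size=(300, 8))   # e.g. facial action units (synthetic)
emotions = rng.integers(0, 3, size=300)       # 0=happiness, 1=sadness, 2=anger (toy labels)

# Early fusion: concatenate modalities and train a small feed-forward network.
X = np.hstack([eeg_features, facial_features])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, emotions)

print(clf.predict(X[:5]))  # inferred emotion categories for five samples
```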
Extraction, deduction or inference of sensitive information from biometric personal data could result in unfair or discriminatory treatment and in violations of fundamental rights, such as privacy and the principle of non-discrimination (EC, 2025). Therefore, Art. 5(1)(g) of the AI Act prohibits: 1) biometric categorisation systems; 2) that individually categorise natural persons based on their biometric data; 3) to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation. In addition to the general exceptions mentioned above, this prohibition does not apply to the labelling or filtering of lawfully acquired datasets, such as images, based on biometric data, or to biometric categorisation in the area of law enforcement.
The AI Act defines “biocategorisation systems” as AI systems “for the purpose of assigning natural persons to specific categories on the basis of biometric data” (Art. 3(40)). To make sense of this definition, the specific categories to which a person could be assigned must be defined taking into consideration the sensitive information that cannot be deduced or inferred. That is, the prohibition targets biocategorisation systems that assign persons to specific categories defined on the basis of race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation (Recital 16). A biocategorisation system is therefore not centered on identifying individuals or verifying the identity of a natural person; its purpose is to process biometric data to deduce or infer whether a natural person has the features or characteristics required to be assigned to a specific category or group (EC, 2025). Systems that process biometric data in a manner ancillary to another commercial service are left outside the definition of biocategorisation system, provided this is strictly necessary for objective technical reasons (Art. 3(40)). “Ancillary” means that the system is intrinsically linked to another legitimate service, i.e. it cannot be used outside the context of the commercial service to which it is linked (Recital 16). An example is an online sunglasses shop offering customers the opportunity to preview on screen how the product fits: the system can only process biometric data for the purpose of product preview and is therefore subordinated and inexorably linked to the commercial service (EC, 2025). However, the “ancillary” exclusion should not be used to circumvent the prohibition in Art. 5(1)(g) (Recital 16). Thus, if an AI system deduces the political inclinations of social media users based on the material they upload, and then uses this information to send targeted political messages, it will be prohibited even if the ancillary test is satisfied (EC, 2025).
Importantly, there is only one definition of “biometric data” in the AI Act (Art. 3(34)). Therefore, the explanation of that definition provided above is also applicable in the context of the biometric categorisation of sensitive characteristics. Thus, data generated by AI/NTs, particularly through neuroimaging techniques such as EEG, fMRI and MEG, seem to qualify as biometric data.
The list of sensitive characteristics in this prohibition is exclusively limited to race, political opinions, trade union membership, religious or philosophical beliefs, and sexual orientation. AI/NTs have the potential to infer or deduce these sensitive characteristics. For example, human beliefs, including those of a philosophical, religious, or political nature, are mind states (Cristofori & Grafman, 2017) which have a brain representation, often made up of complex causal and neural interactions (Churchland & Churchland, 2012). Neuroimaging NTs have been applied to study these representations in the brain (Seitz, 2017). It is important to emphasize that NTs do not read specific beliefs directly; they identify patterns of mental activation, establishing correlations between these neural processes and the mental states that drive them (Abi-Rached, 2008). The data obtained can be used to train AI systems to make inferences, deductions or predictions about beliefs. The political attitudes of individuals have been studied based on their neural responses, read by MRI, to images of presidential candidates (Kaplan et al., 2007). The neural interactions involved in maintaining beliefs in the face of counterevidence have also been investigated (Kaplan et al., 2016). Attempts have been made to read, from a person’s brain activity (activation of the amygdala and the insula), rejection of a given political party and to use this information to make inferences about their future political behavior (Iacoboni et al., 2007). In this context, a new field of study has emerged, termed neuropolitics, which combines neuroscience and political science to uncover the neural basis of political decisions (Abi-Rached, 2008), with the potential to infer political affiliations and predict behavior (Qvartrup, 2024).
For its part, trade union membership could be inferred through BCI decoding of inner speech (Kunz et al., 2025) or by reading P300 signals2 with the help of neuroimaging techniques. P300 signals allow inferences to be drawn about a subject’s familiarity or unfamiliarity with a given stimulus (Bublitz et al., 2025). Following this approach, some studies have used NTs to deduce private information, such as a person’s bank or area of residence (Frank et al., 2017). It seems that trade union membership or sexual orientation could also be inferred if the proper stimulus is applied.
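The familiarity inference described above can be pictured with a toy sketch: EEG epochs recorded after a stimulus are averaged, and a positive deflection around 300 ms after stimulus onset is taken as a familiarity marker. The synthetic data, the time window and the amplitude threshold are all assumptions for illustration; real P300 paradigms are considerably more involved.

```python
# Toy P300-style familiarity check on synthetic EEG epochs (illustration only).
import numpy as np

fs = 256                         # sampling rate in Hz (assumed)
t = np.arange(fs) / fs           # one-second epochs
rng = np.random.default_rng(3)

# Synthetic data: 40 epochs recorded after showing a stimulus; a 'familiar'
# stimulus adds a positive bump peaking ~300 ms after onset.
familiar_bump = 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs = rng.normal(size=(40, fs)) + familiar_bump

# Average across epochs to suppress noise, then inspect the 250-400 ms window.
erp = epochs.mean(axis=0)
window = erp[int(0.25 * fs):int(0.40 * fs)]

# Arbitrary threshold for the sketch: a clear positive peak suggests familiarity.
print("familiar stimulus?", window.max() > 2.0)
```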
However, in addition to the general exceptions, some forms of biocategorisation are not covered by this prohibition. In this regard, the labelling or filtering of lawfully acquired biometric datasets, such as images, is not covered (Art. 5(1)(g)). The legality of the acquisition of biometric datasets is determined according to Union and national law (Recital 30). Even if the notion “lawfully” appears to qualify only the method of acquiring biometric datasets, it seems logical to interpret that it also applies to the intended use. For example, compliance with the conditions for the valid acquisition of biometric datasets would not justify filtering or labelling biometric data to unfairly favor a specific racial group in a given selection process. However, it would be justified if the purpose is to identify and correct algorithms that have been trained, or have learned, to discriminate based on age, race, gender, etc. (EC, 2025; European Union Agency for Fundamental Rights, 2018). In connection with the intended purpose, the AI Act expressly excludes from this prohibition biocategorisation in the context of law enforcement, for example to identify victims of a crime (Recital 30).
Some AI systems are classified as high-risk under the AI Act. These systems are not prohibited; however, to be introduced into the EU market, put into service, or used, they must comply with both general and specific mandatory requirements designed to avoid unacceptable risks to the Union’s basic interests, including safety, health, and human rights (Recitals 46 and 47). It is important to note that classification as a high-risk AI system does not imply that all of its uses are lawful: other EU legislation continues to apply and, in conjunction with national law, may be relevant to making that determination.
The AI Act establishes two routes for identifying high-risk AI systems. On the one hand, an AI system is considered high-risk if it is intended to be used as a safety component of a product, or is itself a product, and is required to undergo a third-party conformity assessment before being placed on the market or put into service according to the harmonization legislation listed in Annex I (Art. 6(1)). Therefore, not all products or safety components regulated by the legislation listed in Annex I are classified as high-risk; this category applies exclusively to those that are required by Annex I legislation to pass a third-party conformity assessment before they can be introduced into or used in the EU market. It encompasses AI systems that are products as well as AI systems that serve as a safety component of another product without necessarily being integrated into it (Art. 6(1)). The rules on high-risk AI systems under Annex I will apply from 2 August 2027 (Art. 113(c)). On the other hand, an AI system is categorized as high-risk if it is listed in Annex III, without further requirements (Art. 6(2)). In any case, AI systems listed in Annex III will not be considered high-risk if they do not pose a significant risk to health, safety, or fundamental rights (Art. 6(3)). The rules on high-risk AI systems under Annex III will apply from 2 August 2026 (Art. 113).
It is important to note that providers self-assess their AI systems to determine whether they are high-risk. If market surveillance authorities consider that a provider has misclassified an AI system as non-high-risk, they may conduct their own evaluation to determine whether the proper classification is high-risk (Art. 80). Deliberate misclassification by a provider is punishable by an administrative fine (Art. 90(4-7)).
Annex I, Section A, contains a list of Union harmonization legislation, the most relevant of which for the present study is the EU’s secondary legislation on medical devices, because it covers the primary field of application of AI/NTs.
Medical devices are governed by Regulation (EU) 2017/745 of 5 April 2017 (MDR), which has been applicable since 26 May 2021. The MDR contains rules on the placing on the market, making available on the market, or putting into service of all medical devices and their accessories (Art. 1(1); EC, 2021). A “medical device” is defined by its nature (a device) and its intended purpose (medical). Regarding its nature, the notion includes any instrument, apparatus, appliance, software, reagent, material or other article that can be used alone or in combination. The device must be intended by the manufacturer to be used for human beings for one of various listed specific medical purposes. These purposes include the diagnosis, monitoring, prediction, treatment, alleviation or compensation of disease, injury or disability. They also cover the investigation and modification of the anatomy or of a physiological or pathological process or state (Art. 2(1)).
AI/NTs are devices in the sense of the MDR definition. For example, EEG, fMRI and BCIs are instruments, apparatus or appliances. These technologies are now commonly used in combination with AI, which is a type of highly specialized and complex software (California Learning Resource Network, 2025). Moreover, AI/NTs may serve various medical purposes. They are used to conduct research on brain structure, functions and neural processes; to interact with the central nervous system, which is a form of modification of a physiological process (Bublitz & Ligthart, 2024); to alleviate or compensate for injuries or disabilities, for example by allowing brain control of prostheses; to predict the evolution of intracranial tumors; and for the treatment of Alzheimer’s disease, among other uses (Zhou et al., 2025).
All medical devices must comply with the General Safety and Performance Requirements (GSPR) set out in Annex I of the MDR in order to be marketed in the EU. To demonstrate compliance with these requirements, conformity assessment procedures have been established that vary according to the devices’ level of risk and their intended purpose (Annexes IX-XI). From lowest to highest, device risks are graded as Class I, IIa, IIb, and III. As Bublitz & Ligthart (2024) observe, NTs fall into several of these risk categories. Deep brain stimulation is classified as Class III because it implies direct contact with the brain; the same applies to closed-loop devices, such as BCIs, and to implantable NTs whose operation depends on electricity, defined in the MDR as active devices. Most noninvasive brain stimulation NTs are Class IIb, given that they are non-implantable, administer or exchange energy with the body, and are potentially hazardous because they apply energy to the brain. However, noninvasive brain stimulation that is not deemed potentially hazardous is considered Class IIa. MRI also falls into Class IIa, as an active device for diagnosis and monitoring which supplies energy that can be absorbed by the body. EEG is likewise Class IIa, as it serves to diagnose and monitor cerebral functions, which are likely to be considered vital physiological processes.
The MDR risk classification determines the extent of third-party oversight and participation in conformity assessment procedures (Recital 15). Only manufacturers of certain categories of Class I medical devices may opt for self-certification, meaning that third-party involvement is not required. Other Class I medical devices, as well as Class II and Class III devices, require certification by a Notified Body, i.e. an organization designated by an EU Member State to assess devices’ conformity with the MDR before they are placed on the market. Class III implantable medical devices may also require the involvement of Member States’ Competent Authorities or expert panels (Paul-Zieger, 2024).
Furthermore, given that certain technologies may be used for both medical and non-medical purposes while giving rise to the same risks, the MDR also applies to some devices without an intended medical purpose (MDR Art. 1(2); Recital 12). Annex XVI of the MDR lists these non-medical devices, which include brain stimulation equipment that applies electrical currents or magnetic or electromagnetic fields through the cranium to inhibit or activate neural processes in the brain (point 6). On 1 December 2022, the EC adopted Implementing Regulation (EU) 2022/2346 laying down common specifications for these non-medical devices, including specific provisions for noninvasive brain stimulation NTs (Regulation 2022/2346, Annex VII). Invasive brain stimulation NTs are not covered, because they are not currently being marketed for non-medical purposes (Regulation 2022/2346, Recital 4). However, the EC may add new devices to the list in MDR Annex XVI (MDR, Art. 4). In any case, Implementing Regulation (EU) 2022/2347 classified noninvasive brain stimulation as Class III, which requires a third-party conformity assessment. The scientific basis for this risk classification was heavily criticized, however, and the Class III status of noninvasive brain stimulation for non-medical purposes is now under review (EU Ombudsman, 2024; Bublitz & Ligthart, 2024).
Thus, under the MDR, most AI/NTs are considered medical devices, the large majority of which require a third-party conformity assessment to be placed on the market. Likewise, noninvasive brain stimulation NTs for non-medical purposes fall within the scope of the MDR and are subject to third-party conformity procedures. In the category of non-medical devices, in addition to invasive brain stimulation, the other notable omission from the MDR’s scope of application concerns noninvasive neuroimaging NTs (Bublitz & Ligthart, 2024). In any case, all AI/NTs that require a third-party conformity assessment under the MDR are considered high-risk AI systems under Annex I of the AI Act.
As mentioned earlier, Annex III contains a predefined list of areas and AI systems operating within them that are considered high-risk, based on the severity of potential harm and the likelihood of its occurrence (Art. 6(2); Recital 52). However, if an AI system referred to in Annex III does not pose a significant risk to health, safety, or fundamental rights, it will not be considered high-risk. This would be the case if the AI system does not materially influence the outcome of decision-making, for example, if it only performs a narrow procedural task or its use is limited to the improvement of a task previously completed by a human. This exception does not apply when an AI system performs profiling of a natural person, in which case it will always be high-risk (Art. 6(3)). Given that AI/NTs regularly entail automated processing of various forms of brain signals, either alone or in combination with other personal data, they will be considered high-risk if used to:
“[E]valuate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements” (GDPR Art. 4(4)).
This categorisation covers many potential uses of AI/NTs, for example those related to neuromarketing and self-care devices. In addition to this general categorization of profiling as high-risk, the AI Act also expressly identifies certain specific forms of profiling as high-risk, some of which are discussed below.
Also worth noting is that the Commission may, through delegated acts, add new AI systems to Annex III or remove specific AI systems from it. New systems may be added to Annex III if they are intended to operate within one of the predefined areas and their level of risk is at least equal to that of the AI systems already included (Art. 7(1)). An AI system may be removed from the list if it no longer poses a significant risk and its removal does not reduce the overall level of protection of health, safety and human rights (Art. 7(3)).
Some of the AI systems referred to in Annex III are closely connected to the AI/NT practices described above as prohibited, but the high-risk categories are designed to cover additional dimensions that pose a significant risk to health, safety, or fundamental rights. This section provides a general, though not comprehensive, overview of the high-risk areas with the greatest potential to apply to AI/NTs, paying special attention to those aspects that distinguish them from the prohibited practices to which they may be related.
Biometrics, when not prohibited by Union or national law, is one of the areas of high-risk AI systems in which AI/NTs may be used (Annex III(1)). As mentioned before, AI/NTs are prohibited when they are used to infer or deduce a closed list of sensitive characteristics of a natural person (race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation) (Art. 5(1)(g)). The use of brain data or neural data by AI/NTs, either alone or in combination with other personal data, will be considered high-risk if its purpose is to infer or deduce personal characteristics or attributes other than those already prohibited (Annex III(1)(b); Bublitz et al., 2024), such as cognitive capacities, personality type, hobbies, or consumer preferences.
Moreover, emotion recognition by AI systems, which includes the recognition of intentions, is prohibited by the AI Act (Art. 5(1)(f)) when it occurs in the workplace or in education institutions. AI/NTs will be considered high-risk whenever they are used for emotion or intention recognition in any other setting (Annex III(1)(c)). This could encompass the use of neuroimaging NTs for neuromarketing and recreational devices (Bublitz et al., 2024).
Another area of potential application to AI/NTs relates to employment and workers’ management. In this area, AI systems are classified as high-risk if they are used to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, the allocation of tasks based on individual behavior, personal traits or characteristics, or the monitoring and evaluation of employees’ performance and behavior (Annex III(4)(b)). This category of high-risk AI systems covers inferences or deductions other than those relating to emotions and intentions. For example, using AI/NTs to deduce or infer the cognitive capacities of workers in order to determine who is promoted, hired, or dismissed would be considered high-risk.
Law enforcement is also an area of high-risk AI systems that may cover certain applications of AI/NTs, provided their use is not prohibited under relevant Union or national law. As this category concerns law enforcement, it applies to AI systems used by national or Union public authorities, or by third parties acting on their behalf or under their instructions. Within this framework, AI systems are classified as high-risk when used to assess the risk of a natural person becoming the victim of criminal offences (Annex III(6)(a)). The same applies to AI systems intended to be used as polygraphs or similar tools (Annex III(6)(b)). These uses are also high-risk in the context of migration and border controls (Annex III(7)(a)). However, under Art. 5(1)(d) of the AI Act, the prediction of future criminal behavior is prohibited when the risk assessment is based solely on the profiling, characteristics, or personality traits of a natural person. High-risk AI systems are those used to assess the risk of a natural person offending or re-offending when the assessment is not based solely on the profiling of natural persons, as well as systems used to assess the personality traits and characteristics or past criminal behavior of natural persons or groups (Annex III(6)(d)).
Chapter III, Section 2, of the AI Act contains the general requirements for the introduction, distribution, or use of high-risk AI systems in the EU market. Providers of high-risk AI systems bear the main responsibility for ensuring that these requirements are complied with (Art. 16(2)). Importers and distributors are obliged to ensure that an AI system conforms to these obligations before it is placed on the market (Art. 23(1)) or made available on the market (Art. 24(1)); to that end, they must verify that the provider has fulfilled its responsibilities.
The first requirement is that a risk management system be established, implemented, documented and maintained (Art. 9(1)), so that risks are supervised, controlled, mitigated, reviewed and updated during the entire life cycle of the AI system (Art. 9(2)). Specific steps must be followed in this respect: a) identification and analysis of the known and reasonably foreseeable risks to health, safety and human rights when the AI system is used according to its intended purpose, with special attention to the effects on minors and vulnerable people; b) estimation and evaluation of the risks that may emerge when the system is properly used and of those reasonably foreseeable in cases of misuse; c) evaluation of risks identified through post-market supervision; d) adoption of appropriate and specific measures to reduce these risks to an acceptable level. The focus is solely on risks that may reasonably be mitigated or eliminated through the development or design of the AI system, or by providing relevant technical information (Art. 9(3)). Prior to entry into the market or, if appropriate, during the development phase, these systems must be tested to identify the most adequate and specific risk measures and to ensure that they perform consistently with their intended purpose (Art. 9(6-7)).
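By way of illustration only (the AI Act prescribes no particular format or tooling), the lifecycle logic of Art. 9 can be pictured as a living risk register in which each identified risk carries an estimate, mitigation measures and a review history. All field names and the residual-risk formula below are assumptions made for the sketch.

```python
# Illustrative sketch of a lifecycle risk register in the spirit of Art. 9 (not an official template).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str          # known or reasonably foreseeable risk (step a)
    affected_interest: str    # health, safety or fundamental rights
    likelihood: float         # estimated probability (step b)
    severity: float           # estimated impact (step b)
    mitigations: list[str] = field(default_factory=list)   # measures adopted (step d)
    reviews: list[date] = field(default_factory=list)      # post-market re-evaluations (step c)

    def residual_score(self) -> float:
        """Crude residual-risk estimate after mitigation (assumed formula, for illustration)."""
        return self.likelihood * self.severity / (1 + len(self.mitigations))

register = [
    Risk("misclassification of neural signals in minors", "health", 0.2, 0.8,
         mitigations=["age-specific validation dataset"], reviews=[date(2026, 8, 2)]),
]
# Review the register with the highest residual risks first.
print(sorted(register, key=Risk.residual_score, reverse=True)[0].description)
```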
The second requirement concerns the establishment of data governance and management practices to ensure data quality and accessibility for the training, validation and testing of AI models. The purpose is to ensure the good and safe performance of AI systems and to avoid discrimination that might arise from poor-quality training data. Data governance and management practices should therefore ensure that data are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Furthermore, compliance with data protection law should be an integral part of these practices. Special attention must be paid to possible biases in datasets that could have a negative impact on health, safety and human rights. To this end, exceptionally, providers are authorized to process special categories of personal data for bias detection and correction, provided that adequate safeguards for fundamental rights and freedoms are adopted (Art. 10; Recital 67).
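A toy representativeness check gives a flavour of the kind of dataset inspection this requirement points to. The column names, the demographic attribute and the 20% threshold are assumptions for the sketch, not criteria drawn from the AI Act.

```python
# Toy check of dataset balance across a demographic attribute (illustration only).
import pandas as pd

# Hypothetical training-set metadata for a neural-decoding model.
df = pd.DataFrame({
    "age_group": ["18-30", "18-30", "31-50", "31-50", "51+", "18-30"],
    "label":     [1, 0, 1, 0, 1, 1],
})

shares = df["age_group"].value_counts(normalize=True)
positive_rate = df.groupby("age_group")["label"].mean()

# Flag groups that are under-represented or whose outcome rates diverge strongly.
print(shares[shares < 0.2])          # under-represented groups (assumed 20% threshold)
print(positive_rate)                 # outcome rate per group, to inspect for skew
```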
The third requirement is the elaboration of clear and comprehensive technical documentation to demonstrate compliance with the Section 2 requirements. This documentation, which is to be prepared before the AI system is placed on the market or put into service and must be kept up to date afterwards, is used by national authorities and notified bodies to perform their duties and to conduct conformity assessments. Annex IV describes the minimum elements to be included in the documentation (Art. 11(1)). For AI systems covered by the harmonized legislation listed in Annex I, Section A, a single set of technical documentation is sufficient, provided it includes all the information required by both the AI Act and the harmonized legislation (Art. 11(2)).
The fourth requirement is that AI systems must permit traceability through the automatic recording of events (logs) concerning their functioning over their lifetime. This record-keeping must make it possible to identify substantial modifications in the functioning of an AI system, or situations in which it presents a risk of negative effects on health, safety and human rights beyond reasonable or acceptable levels, considering its intended purpose (Art. 12(2)(a)). The event-recording capability must also be designed to facilitate the monitoring of operation and post-market monitoring (Art. 12(2)(b)-(c)). Remote biometric identification systems are subject to additional and specific logging requirements (Art. 12(3)).
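A minimal sketch of such automatic event recording is shown below. The field names and event types are assumptions; the AI Act requires logging capabilities but does not prescribe a log format.

```python
# Illustrative automatic event logging for traceability (no format is prescribed by the AI Act).
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO, format="%(message)s")

def log_event(event_type: str, details: dict) -> None:
    """Append a timestamped, machine-readable record of a functioning event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,          # e.g. "inference", "model_update", "risk_flag"
        "details": details,
    }
    logging.info(json.dumps(record))

# Example: record an inference and a substantial modification of the system.
log_event("inference", {"input_id": "epoch-42", "output": "low_engagement", "confidence": 0.71})
log_event("model_update", {"version": "1.3.0", "reason": "retraining on new cohort"})
```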
Given that deployers need to be able to assess the performance of AI systems in order to carry out their obligations under the AI Act, transparency and the provision of information are the focus of the fifth requirement. Providers must supply deployers with information about the AI system, including the instructions for use; its characteristics, capabilities and limitations of performance; precluded uses; any known or foreseeable circumstances that may affect or alter how it works or may create a risk; pre-determined measures allowing human oversight; technical details to facilitate the interpretation of outputs; a description of the mechanisms allowing the collection, storage and interpretation of recorded events; and maintenance and care measures. All information should be meaningful, accessible, concise but complete, correct, clear, understandable, and made available in a language that can be understood by the target deployers, with appropriate examples included whenever possible (Art. 13; Recital 72).
The sixth requirement is to incorporate into AI systems the technical elements necessary to allow human oversight while they are in use, such as a human-machine interface. The aim is to prevent or minimize the risks of both intended uses and misuses of AI systems. Human oversight tools may be built in by the provider at the design phase, before the AI system is placed on the market, and/or be readily available for implementation by the deployer once it has entered the market. The information about the characteristics, functionalities and limitations of AI systems mentioned above is particularly important for enabling the deployer to instruct the person in charge of human oversight. That person must be enabled to understand how the AI system works in order to detect and address anomalies, dysfunctions and unexpected performance, and to interpret its outputs so as to choose the safest course of action in any given circumstance. To facilitate this task, and in particular to allow AI systems to be halted safely, they should permit human intervention while in use through a “stop” button or an equivalent mechanism. Remote biometric identification systems are subject to additional requirements, except when they are used for law enforcement, migration, border control or asylum, and in cases where Union or national law deems this requirement disproportionate (Art. 14).
The seventh and final general requirement addresses the accuracy, robustness and cybersecurity of high-risk AI systems over their lifetime. Accuracy refers to the degree of precision of the outputs (TÜV AI-Lab, 2024); robustness expresses the extent to which the system remains stable and its outputs reliable in the face of unexpected changes, errors, faults and inconsistencies (Art. 15(4); Recital 75); and cybersecurity indicates the level of vulnerability to malicious attempts by third parties to alter the system’s use or performance or to gain access to information (Art. 15(5); Recital 76). The Commission, in cooperation with other stakeholders, is responsible for developing benchmarks and methodologies to measure the accuracy and robustness of AI systems. Moreover, the instructions for use shall indicate what is considered an adequate level of accuracy and provide the pertinent accuracy metrics. Technical solutions should be adopted to ensure robustness, which could include backup or fail-safe plans (Art. 15(4)), and an appropriate level of cybersecurity (Art. 15(5)).
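The difference between measuring accuracy and checking robustness can be sketched as follows, using synthetic data, a generic classifier and an arbitrary noise level chosen only for illustration (none of this corresponds to the benchmarks the Commission is to develop).

```python
# Illustrative accuracy vs. robustness check on a toy classifier (not a benchmark from the AI Act).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

# Accuracy: agreement between outputs and reference labels on clean inputs.
accuracy = clf.score(X, y)

# Robustness: how much predictions change when inputs are perturbed (assumed noise level 0.3).
perturbed = X + rng.normal(scale=0.3, size=X.shape)
stability = (clf.predict(X) == clf.predict(perturbed)).mean()

print(f"accuracy on clean data: {accuracy:.2f}")
print(f"prediction stability under perturbation: {stability:.2f}")
```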
In addition, there is a specific obligation to conduct a fundamental rights impact assessment (FRIA) for deployers of Annex III high-risk AI systems that are bodies governed by public law or private entities providing public services, among others (Art. 27). The FRIA consists of a series of detailed descriptions required from the deployer, including: the processes in which the system will be used in line with its intended purpose; the period of time and frequency with which it is intended to be used; the specific risks likely to have a negative impact on certain categories of natural persons; the implementation of human oversight measures; and the measures to be taken if those risks materialize, including the arrangements for internal governance and complaint mechanisms (Art. 27(1)). This assessment must be submitted to the market surveillance authorities before the first use of the AI system (Art. 27(2)) and is likely to be required, at the very least, for AI/NTs destined for healthcare services (Recitals 58 and 96; Bublitz et al., 2024).
The AI Act does not directly apply to NTs. Nevertheless, many of the potential uses of NTs that raise human rights concerns could be curbed by the normative effect of the AI Act on AI systems used to support NTs. However, this is not a straightforward exercise, as many conditions must be met cumulatively.
To be covered by the AI Act, the AI supporting NTs must have at least a minimum level of autonomy to operate without human intervention and must be capable of making inferences from the input it receives, generating outputs such as predictions, content, recommendations or decisions. The AI Act cites ML as an example of AI with sufficient autonomy and inference capacity, and ML is widely used to assist NTs. In principle, logic-based and knowledge-based AI approaches could also meet the autonomy and inference-capacity conditions. However, AI systems of this type used to support NTs will not be covered if they merely perform operations based solely on human-defined rules, without incorporating learning, reasoning or modelling. Moreover, in this context, AI may be used as a standalone system or as a component of other products. Whether AI is embedded in an NT or used independently to support it therefore does not exclude it from the AI Act’s scope.
The AI Act does not apply to spheres in which the EU does not have competence. Consequently, AI/NTs applications in the military, defense, and national security domains are excluded, as are research activities and personal uses.
Navigating the above-mentioned conditions and exclusions will require a case-by-case assessment. In general, however, the AI Act contributes to addressing the core governance concerns regarding NTs’ potential to infer (“read”) the inner, private contents of the mind, and to influence or control individuals’ decisions and behavior (“writing the mind”) by circumventing rational control. Notably, the AI Act is designed to address the health, safety, and fundamental rights risks associated with AI systems before they enter the market. This preventive approach is precisely what the Advisory Committee of the UNHRC (2024) has called for, given that the human rights concerns related to NTs are extremely serious but have not yet materialized.
Under the AI Act, the potential use of AI/NTs to “read the mind” by inferring highly sensitive information is prohibited, and their use to infer other personal information is classified as high-risk. More specifically, within the framework of “biometric categorisation systems”, the potential use of AI/NTs to infer from biometric data an individual’s race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation is prohibited. An exception exists where biometric categorisation is based on lawfully acquired biometric datasets and is carried out for lawful purposes, including law enforcement activities to identify victims of crime. It is worth noting that the EC has recognised brain signals and neural data as biometric data. AI/NTs’ potential use to infer personal information, attributes or characteristics other than those expressly prohibited is classified as high-risk. This includes, for example, biometric categorisation relating to cognitive capacities, personality type, hobbies, or consumer preferences.
In addition, AI/NTs used to infer personal preferences could be considered a profiling activity. Such profiling is classified as high-risk if it is not already subject to a specific prohibition. This could apply, for example, to neuromarketing.
The potential use of AI/NTs to predict future criminal conduct is also prohibited if the prediction is based solely on automated AI processes without significant human involvement. The national security exception does not cover the organization of justice; therefore, law enforcement authorities are subject to this prohibition. If the prediction is not based solely on the profiling of a natural person, it will be considered high-risk, as will the potential use of AI/NTs as lie detectors in the context of law enforcement.
AI/NTs’ potential use to infer emotions and intentions in workplace and educational settings is also prohibited. The reading of thoughts will be prohibited where they are inexorably linked to intentions, even though thoughts are not expressly prohibited as an independent category. Medical and safety reasons, which are subject to strict interpretation, could justify emotion recognition. Emotion and intention recognition in any other setting is classified as high-risk. Moreover, if AI/NTs are used to make decisions affecting the terms of employment, such use will be considered high-risk; reading workers’ cognitive capacities to determine who is promoted, hired or dismissed falls into this category.
AI/NTs’ potential use to “write the mind” is also prohibited under the AI Act where they operate as subliminal, manipulative, or deceptive AI systems. The logic of this prohibition is to protect individuals’ right to decide freely by treating any AI system with the capacity to alter behavior in ways that evade rational control as an unacceptable-risk AI system. AI/NT neurostimulation devices have the capacity to alter human behavior through direct intervention at the neural level. The EC and the AI Act itself recognize that some AI/NTs, in this case BCIs, may alter behavior with subliminal manipulative techniques that operate beyond conscious awareness. Moreover, neurostimulation, both invasive and noninvasive, is considered high-risk, as such devices must undergo a third-party conformity assessment under the MDR. This also applies to noninvasive neuromodulation devices without a medical purpose.
It should be noted that other EU legislation applies alongside the AI Act. Any potential governance vacuum for NTs could be addressed, for example, through existing human rights rules. It seems, therefore, that the proposal to create new “neurorights” is not justified, at least in the EU context.
ABI-RACHED, J. M. (2008). ‘The implications of the new brain sciences’. European Molecular Biology Organization Reports, 9(12), 1158-1162.
AFP (2023). AI-supercharged neurotech threatens mental privacy: UNESCO, France 24, 13 July 2023. (Accessed: 14 August 2025). Available at: https://www.france24.com/en/live-news/20230713-ai-supercharged-neurotech-threatens-mental-privacy-unesco
AHARONI, E., et al. (2013). ‘Neuroprediction of future rearrest’. Proceedings of the National Academy of Sciences, 110, 6223–6228. https://doi.org/10.1073/pnas.1219302110
AHMED, N., et al. (2023). ‘A systematic survey on multimodal emotion recognition using learning algorithms’. Intelligent Systems with Applications, 17, 200171, 1-19. https://doi.org/10.1016/j.iswa.2022.200171
ALFIHED, S., et al. (2024). “Non-Invasive Brain Sensing Technologies for Modulation of Neurological Disorders”, Biosensors, 14(7), 1-23. https://doi.org/10.3390/bios14070335
ALIMARDANI, M. and HIRAKI, K. (2020). ‘Passive Brain-Computer Interfaces for Enhanced Human-Robot Interaction’. Frontiers in Robotics and AI, 7, Article 125, 1-12. https://doi.org/10.3389/frobt.2020.00125
ALMADA, M. and PETIT, N. (2025). ‘The EU AI Act: Between the Rock of Product Safety and the Hard Place of Fundamental Rights’. Common Market Law Review, 62, 85-120.
ALOUI, K., et al. (2018). “Using brain prints as new biometric feature for human recognition”, Pattern Recognition Letters, 113, 38-45. https://doi.org/10.1016/j.patrec.2017.10.001
AMERICAN PSYCHOLOGICAL ASSOCIATION (2018). APA Dictionary of Psychology. (Accessed: 2 November 2025). Available at: https://dictionary.apa.org/
ANDORNO, R. (2023). Neurotecnologías y derechos humanos en América Latina y el Caribe: Desafíos y propuestas de política pública. UNESCO Office for Latin America and the Caribbean. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000387079 (Accessed: 7 August 2025).
ANSEDE, M. (2025). ‘Rafael Yuste, neuroscientist: ‘We have to avoid a fracture in humanity between people who have cognitive augmentation and those who do not’. EL PAÍS. (Accessed 14 May 2025). Available at: https://english.elpais.com/science-tech/2025-01-18/rafael-yuste-neuroscientist-we-have-to-avoid-a-fracture-in-humanity-between-people-who-have-cognitive-augmentation-and-those-who-do-not.html
BARDHAN, A. (2023). ‘Mind-Control Gaming Isn’t Sci-Fi, It’s Just Science’. Kotaku. (Accessed: 22 May 2025). Available at: https://kotaku.com/virtual-reality-oculus-headset-meta-vr-video-game-ui-1850938750
BELKACEM, A. N., et al. (2023). ‘On Closed-Loop Brain Stimulation Systems for Improving the Quality of Life of Patients with Neurological Disorders’. Frontiers in Human Neuroscience, 17, 1085173. https://doi.org/10.3389/fnhum.2023.1085173
BELTRAN DE HEREDIA RUIZ, I. (2023). Inteligencia artificial y neuroderechos: la protección del yo inconsciente de la persona, Aranzadi: Navarra.
BERMÚDEZ, J. P., et al. (2023). ‘What Is a Subliminal Technique? An Ethical Perspective on AI-Driven Influence’. Conference: 2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), 1-10. https://doi.org/10.1109/ETHICS57328.2023.10155039
BHIDAYASIRI, R. (2024). ‘The Grand Challenge at the Frontiers of Neurotechnology and its Emerging Clinical Applications’. Frontiers in Neurology, 15:1314477, 17 January. https://doi.org/10.3389/fneur.2024.1314477
BIOMETRICS INSTITUTE (n.d.). Physiological and Behavioural Biometrics. (Accessed: 11 July 2025). Available at: https://www.biometricsinstitute.org/physiological-and-behavioural-biometrics/
BIRD & BIRD (2025). European Union Artificial Intelligence Act: a guide. 7 April.
BLITZ, M. J. (2017). Searching Minds by Scanning Brains. Palgrave Studies in Law, Neuroscience, and Human Behavior. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-50004-1_3
BOQUIEN, M. (2024). ‘Difference Between an AI System and an AI Model’. Dastra Blog, 17 July. (Accessed: 1 August 2025). Available at: https://www.dastra.eu/en/article/difference-between-an-ai-system-and-an-ai-model/57721
BORDA, L., et al. (2023). ‘Automated calibration of somatosensory stimulation using reinforcement learning’. Journal of NeuroEngineering Rehabilitation, 20, 131. https://doi.org/10.1186/s12984-023-01246-0
BOSTROM, N. and SANDBERG, A. (2009). ‘Cognitive Enhancement: Methods, Ethics, Regulatory Challenges’. Sci Eng Ethics, 15, 311–341. https://doi.org/10.1007/s11948-009-9142-5
BUBLITZ, J. CH. (2024a). ‘Neurotechnologies and Human Rights: Restating and Reaffirming the Multi-Layered Protection of The Person’. The International Journal of Human Rights, 28(5), 782-807. https://doi.org/10.1080/13642987.2024.2310830
BUBLITZ, J. CH. (2024b). ‘Banning Biometric Mind Reading: The Case for Criminalising Mind Probing’. Law, Innovation and Technology, 16(2), 432-462. https://doi.org/10.1080/17579961.2024.2392934
BUBLITZ, J. CH., et al. (2024). “Implications of the novel EU AI Act for Neurotechnologies”, Neuron, 112, 3013-3016; https://doi.org/10.1016/j.neuron.2024.08.011
BUBLITZ, J. CH. and LIGTHART, S. (2024). ‘The new regulation of non-medical neurotechnologies in the European Union: overview and reflection’. Journal of Law and the Biosciences, 11(2), July-December, lsae021, 1-15. https://doi.org/10.1093/jlb/lsae021
BUBLITZ, J. CH., et al. (2025). ‘Brain Stimulation May Be a Subliminal Technique Under the European Union's Artificial Intelligence Act’, European Journal of Neuroscience, 61(8), e70115, 1-4. https://doi.org/10.1111/ejn.70115
BUCKHOLTZ, J.W. and FAIGMAN, D.L. (2014). ‘Promises, promises for neuroscience and law’. Current Biology, 24(18), R861–R867
BUITEN, M. C. (2019). ‘Towards Intelligent Regulation of Artificial Intelligence’. European Journal of Risk Regulation, 10(1), 41-59. https://doi.org/10.1017/err.2019.8
CALIFORNIA LEARNING RESOURCE NETWORK (2025) Is AI a Software? CLRN, 21 April. (Accessed: 29 July 2025). Available at: https://www.clrn.org/is-ai-a-software/
CAMBRIDGE DICTIONARY (2025). Intention. (Accessed: 9 July 2025). Available at: https://dictionary.cambridge.org/dictionary/english/intention
CANNARD, C., et al. (2020). ‘Chapter 16 - Self-health monitoring and wearable neurotechnologies’, in RAMSEY, N. F.; MILLÁN, J. DEL R. (eds.). Handbook of Clinical Neurology: Brain-Computer Interfaces, 168. Amsterdam: Elsevier, 207–232. https://doi.org/10.1016/b978-0-444-63934-9.00016-0
CARON, J. F. (2018). A Theory of the Super Soldier: The Morality of Capacity-Increasing Technologies in the Military. Manchester University Press.
CATLEY, P. and CLAYDON, L. (2015). ‘The use of neuroscientific evidence in the courtroom by those accused of criminal offenses in England and Wales’. Journal of Law and the Biosciences, 2, 510–549. 10.1093/jlb/lsv025.
CHAMBERLAIN III, V. D. (2023). ‘A Neurotechnology Framework to Analyze Soldier Enhancement Using Invasive Neurotechnology’. U.S. Naval War College, Newport, RI. (Accessed: 9 July 2025). Available at: https://apps.dtic.mil/sti/trecms/pdf/AD1209159.pdf
CHANDRABHATLA, A. S., et al. (2023). ‘Landscape and Future Directions of Machine Learning Applications in Closed-Loop Brain Stimulation’. NPJ Digital Medicine, 6(79), 1-13. https://doi.org/10.1038/s41746-023-00779-x
CHURCHLAND, P. S. and CHURCHLAND P. M. (2012). ‘What are beliefs?’ in KRUEGER, F., & GRAFMAN, J. The Neural Basis of Human Belief Systems (1st ed.). Taylor and Francis. Retrieved from https://www.perlego.com/book/1685250/the-neural-basis-of-human-belief-systems-pdf
COCKRELL SCHOOL OF ENGINEERING (2024). Universal Brain-Computer Interface Lets People Play Games with Just Their Thoughts. (Accessed: 22 May 2025). Available at: https://cockrell.utexas.edu/news/archive/9841-universal-brain-computer-interface-lets-people-play-games-with-just-their-thoughts
COUNCIL OF EUROPE [CoE] & OECD (2021). Neurotechnologies and Human Rights Framework: Do We Need New Rights? – Rapporteurs’ Report of the Round Table. Strasbourg: CoE & OECD. (Accessed: 9 August 2025). Available at: https://rm.coe.int/round-table-report-en/1680a969ed
CRISTOFORI, I. and GRAFMAN, J. (2017). ‘Neural Underpinnings of the Human Belief System’ in ANGEL, H-F., et al. (eds.). Processes of Believing: The Acquisition, Maintenance, and Change in Creditions, New Approaches to the Scientific Study of Religion 1, 111-123. https://doi.org/10.1007/978-3-319-50924-2_8
DE KOGEL, C.H. and WESTGEEST, E. J. M. C. (2015). ‘Neuroscientific and behavioral genetic information in criminal cases in the Netherlands’. Journal of Law and the Biosciences, 2(3), 580–605. https://doi.org/10.1093/jlb/lsv024
DELFIN, C., et al. (2019). ‘Prediction of recidivism in a long-term follow-up of forensic psychiatric patients: incremental effects of neuroimaging data’. PLoS One, 14(5), 1-21. https://doi.org/10.1371/journal.pone.0217127
DOUGLAS, T. (2014). ‘Criminal Rehabilitation Through Medical Intervention: Moral Liability and the Right to Bodily Integrity’. Journal of Ethics, 18, 101–122. https://doi.org/10.1007/s10892-014-9161-6
DUFFY, C. (2024). ‘First Neuralink human trial subject can control a computer mouse with brain implant, Elon Musk says’. CNN. Available at: https://edition.cnn.com/2024/02/20/tech/first-neuralink-human-subject-computer-mouse-elon-musk/index.html (Accessed: 15 May 2025).
EU NETWORK OF INDEPENDENT EXPERTS ON FUNDAMENTAL RIGHTS (EU NIEFR) (2006). Commentary of the Charter of Fundamental Rights of The European Union, June 2006. (Accessed: 20 July 2025). Available at: https://sites.uclouvain.be/cridho/documents/Download.Rep/NetworkCommentaryFinal.pdf
EU OMBUDSMAN (2024). Case 157/2023/VB, opened on 15 March 2023; Decision on 25 April. (Accessed: 15 August 2025). Available at: https://www.ombudsman.europa.eu/en/case/en/63216
EUROPEAN DATA PROTECTION BOARD & EUROPEAN DATA PROTECTION SUPERVISOR [AEPD & EDPS] (2024). TechDispatch on Neurodata. Luxembourg: Publications Office of the European Union. (Accessed: 01 November 2025). Available at: https://www.aepd.es/guides/neurodata-aepd-edps.pdf
EUROPEAN COMMISSION (2020). White Paper: On Artificial Intelligence – A European Approach to excellence and trust. Brussels. EU Doc. COM(2020) 65 final. (Accessed: 20 July 2025). Available at: https://commission.europa.eu/documents_en?prefLang=es&f%5B0%5D=document_title%3Awhite%20paper%20artificial%20intelligence
EUROPEAN COMMISSION (2021). Questions and Answers: Application of Regulation on Medical Devices – EU rules to ensure safety of medical devices, Press Corner Q&A 21/2619, Brussels, 26 May. Available at: https://ec.europa.eu/commission/presscorner/api/files/document/print/en/qanda_21_2619/QANDA_21_2619_EN.pdf (Accessed: 29 July 2025)
EUROPEAN COMMISSION (2022). Implementing Regulation (EU) 2022/2347 of 1 December 2022 laying down rules for the application of Regulation (EU) 2017/745 of the European Parliament and of the Council as regards reclassification of groups of certain active products without an intended medical purpose. Official Journal of the European Union L 311/94, 2.12.2022.
EUROPEAN LAW INSTITUTE (2024). The concept of ‘AI system’ under the new AI Act: Arguing for a Three-Factor Approach. Vienna.
EUROPEAN PARLIAMENT AND COUNCIL (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88.
EUROPEAN PARLIAMENT AND COUNCIL (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance).
EUROPEAN PARLIAMENTARY RESEARCH SERVICE [EPRS] (2024). The protection of mental privacy in the area of neuroscience. Brussels: European Parliament. (Accessed: 14 August 2025). Available at: https://www.europarl.europa.eu/RegData/etudes/STUD/2024/757807/EPRS_STU(2024)757807_EN.pdf
EUROPEAN UNION (2007). Explanations relating to the Charter of Fundamental Rights. Official Journal of the European Union, C 303, 17–35. (Accessed: 14 August 2025). Available at: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2007:303:0017:0035:EN:PDF
EUROPEAN UNION AGENCY FOR FUNDAMENTAL RIGHTS (2018). FRA 2018 focus – Big data and fundamental rights. (Accessed: 19 July 2025). Available at: https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf
EU-STARTUPS (n.d.) ‘NextMind’, EU-Startups. (Accessed: 15 May 2025). Available at: https://www.eu-startups.com/directory/nextmind/
EVANS, N. (2012). ‘Emerging Military Technologies: A Case Study in Neurowarfare’ in TRIPODI, P.; WOLFENDALE, J (Eds.). New Wars and New Soldiers: Military Ethics in the Contemporary World. England: Ashgate Publishing/Routledge, 105-116.
FAIGMAN, D.L., et al. (2014). Group to Individual (G2i) Inference in Scientific Expert Testimony. University Chicago Law Rev, 81(2), 417-480.
FARAH, M. J., et al. (2014). ‘Functional MRI-based Lie Detection: Scientific and Societal Challenges’. Nature Reviews Neuroscience, 15, 123-131. https://doi.org/10.1038/nrn3665
FARAHANY, N. A. (2023). “Neurotech at Work”, Harvard Business Review, March-April. (Accessed: 16 May 2025). Available at: https://hbr.org/2023/03/neurotech-at-work
FARINA, M. and LAVAZZA, A. (2022). “Memory Modulation Via Non-invasive Brain Stimulation: Status, Perspectives, and Ethical Issues”, Frontiers in Human Neuroscience, 16, 1-6. https://doi.org/10.3389/fnhum.2022.826862
FARISCO, M. and PETRINI, C. (2014). ‘On the stand. Another episode of neuroscience and law discussion from Italy’. Neuroethics. 7, 243–245. https://doi.org/10.1007/s12152-013-9187-7
FRANK, M., et al. (2017). ‘Using EEG-Based BCI Devices to Subliminally Probe for Private Information’. WPES '17: Proceedings of the 2017 on Workshop on Privacy in the Electronic Society, 133-136. https://doi.org/10.1145/3139550.3139559
GEETHA, A.V., et al. (2024). ‘Multimodal Emotion Recognition with Deep Learning: Advancements, Challenges, and Future Directions’. Information Fusion, 105, 102218, 1-38. https://doi.org/10.1016/j.inffus.2023.102218
GLENN, A. L. and RAINE, A. (2014). ‘Neurocriminology: Implications for the punishment, prediction and prevention of criminal behaviour’, Nature Reviews Neuroscience, 15, 54–63. https://doi.org/10.1038/nrn3640
GOU, N., et al. (2021). ‘Identification of violent patients with schizophrenia using a hybrid machine learning approach at the individual level’, Psychiatry Research, 306, 114294, 1-9. https://doi.org/10.1016/j.psychres.2021.114294
GRILLNER, S., et al. (2016). ‘Worldwide Initiatives to Advance Brain Research’. Nature neuroscience, 19(9), 1118–1122. https://doi.org/10.1038/nn.4371
HAIN, D., et al. (2023). Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends. Paris: UNESCO. https://doi.org/10.54678/OCBM4164
HAFNER, M. (2019). ‘Judging homicide defendants by their brains: An empirical study on the use of neuroscience in homicide trials in Slovenia’. Journal of Law and the Biosciences, 6(1), 226–254. https://doi.org/10.1093/jlb/lsz006
HALKIOPOULOS, C., et al. (2025). ‘Advances in Neuroimaging and Deep Learning for Emotion Detection: A Systematic Review of Cognitive Neuroscience and Algorithmic Innovations’. Diagnostics, 15(456), 1-85. https://doi.org/10.3390/diagnostics15040456
HASLACHER, D., et al. (2024). ‘AI for brain-computer interfaces’ in IENCA, M.; STARKE, G. (Eds.). Developments in Neuroethics and Bioethics. Academic Press, 3-28. https://doi.org/10.1016/bs.dnb.2024.02.003
HASSABIS, D., et al. (2017). ‘Neuroscience-Inspired Artificial Intelligence’. Neuron, 95(2), 245-258. (Accessed: 01 November 2025). Available at: https://www.cell.com/neuron/fulltext/S0896-6273(17)30509-3
HOLBROOK, C., et al. (2016). ‘Neuromodulation of Group Prejudice and Religious Beliefs’. Social Cognitive and Affective Neuroscience, 11(3), 387-394. https://doi.org/10.1093/scan/nsv107
IACOBONI, M., et al. (2007). ‘This is your brain in politics’. New York Times, November 11.
IEEE Brain (n.d.). Neurotechnologies: The Next Technology Frontier, IEEE Brain. (Accessed: 14 August 2025). Available at: https://brain.ieee.org/topics/neurotechnologies-the-next-technology-frontier/
IENCA, M. and ANDORNO, R. (2017). ‘Towards new human rights in the age of neuroscience and neurotechnology’, Life Sciences, Society and Policy, 13(5), 1-27. https://doi.org/10.1186/s40504-017-0050-1
IENCA, M. and MALGIERI, G. (2022). ‘Mental data protection and the GDPR’, Journal of Law and the Biosciences, 9(1), lsac006, 1-19. https://doi.org/10.1093/jlb/lsac006
INDEPENDENT HIGH-LEVEL EXPERT GROUP ON AI SET UP BY THE EUROPEAN COMMISSION (AI HLEG) (2019). Ethics Guidelines for Trustworthy AI. European Commission: Brussels.
INTER-AMERICAN JURIDICAL COMMITTEE, ORGANIZATION OF AMERICAN STATES (OAS) (2023). Inter-American Declaration of Principles on Neuroscience, Neurotechnologies, and Human Rights, CJI-RES. 281 (CII-O/23) corr.1. (Accessed: 14 August 2025). Available at: https://www.oas.org/en/sla/iajc/docs/CJI-RES_281_CII-O-23_corr1_ENG.pdf
INTERNATIONAL NEUROMODULATION SOCIETY (2021). Conditions That May Be Treated with Neuromodulation. (Accessed: 7 August 2025). Available at: https://www.neuromodulation.com/conditions
INTERNATIONAL NEUROMODULATION SOCIETY (2023). About Neuromodulation. (Accessed: 7 August 2025). Available at: https://www.neuromodulation.com/about-neuromodulation
ISO/IEC (2022). Information technology — Artificial intelligence — Artificial intelligence concepts and terminology (ISO/IEC 22989:2022). First edition, July 2022. Geneva: International Organization for Standardization.
JOHNSON, S. (2017). ‘This Company Wants to Gather Student Brainwave Data to Measure Engagement’, EdSurge, 26 October. (Accessed: 16 May 2025). Available at: https://www.edsurge.com/news/2017-10-26-this-company-wants-to-gather-student-brainwave-data-to-measure-engagement
JULIÀ-PIJOAN, M. (2020). Proceso penal y (neuro) ciencia: una interacción desorientada. Una reflexión acerca de la neuropredicción. Madrid: Marcial Pons.
JULIÀ-PIJOAN, M. (2024). La computarización del derecho, a partir del proceso y de los procedimientos judiciales. Madrid: Dykinson, S. L.
JWA, A. S. and POLDRACK, R. A. (2022). ‘Addressing privacy risk in neuroscience data: from data protection to harm prevention’, Journal of Law and the Biosciences, 9(2). https://doi.org/10.1093/jlb/lsac025
KAMITANI, Y. and TONG, F. (2005). ‘Decoding the visual and subjective contents of the human brain’. Nature Neuroscience, 8, 679–685. https://doi.org/10.1038/nn1444
KAPLAN, J. T., et al. (2007). ‘Us versus Them: Political Attitudes and Party Affiliation Influence Neural Response to Faces of Presidential Candidates’. Neuropsychologia, 45(1), 55-64. https://doi.org/10.1016/j.neuropsychologia.2006.04.024
KAPLAN, J. T., et al. (2016). ‘Neural Correlates of Maintaining One’s Political Beliefs in the Face of Counterevidence’. Sci Rep, 6, 39589, 1-11. https://doi.org/10.1038/srep39589
KHAILI, M. A., et al. (2023). ‘Deep Learning Applications in Brain Computer Interface Based Lie Detection’. IEEE 13th Annual Computing and Communication Workshop and Conference (CCWC). https://doi.org/10.1109/ccwc57344.2023.10099109
KIEHL, K.A., et al. (2018). ‘Age of Gray Matters: Neuroprediction of Recidivism’, NeuroImage: Clinical, 19, 813–823. https://doi.org/10.1016/j.nicl.2018.05.036
KLONOVS, J., et al. (2013). ‘ID Proof on the Go: Development of a Mobile EEG-Based Biometric Authentication System’. IEEE Vehicular Technology Magazine, 8(1), 81–89. https://doi.org/10.1109/mvt.2012.2234056
KO, L. W., et al. (2017). ‘Sustained attention in real classroom settings: An EEG study’. Frontiers in Human Neuroscience, 11(388), 1-10. https://doi.org/10.3389/fnhum.2017.00388
KOSAL, M. and PUTNEY, J. (2023). ‘Neurotechnology and International Security: Predicting commercial and military adoption of brain-computer interfaces (BCIs) in the United States and China’. Politics and the Life Sciences, 41(1), 81-103. https://doi.org/10.1017/pls.2022.2
KROL, L. R. and ZANDER, T. O. (2017). ‘Passive BCI-based neuroadaptive systems,’ in Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 (Graz: GBCIC). https://doi.org/10.3217/978-3-85125-533-1-46
KUBANEK, J., et al. (2020). ‘Remote, Brain Region–Specific Control of Choice Behavior with Ultrasonic Waves’. Science Advances, 6(21), 1-9.
KUNZ, E. M., et al. (2025). ‘Inner Speech in Motor Cortex and Implications for Speech Neuroprostheses’. Cell, 188, 1-16. https://doi.org/10.1016/j.cell.2025.06.015
LIGTHART, S., et al. (2021). ‘Forensic Brain-Reading and Mental Privacy in European Human Rights Law: Foundations and Challenges’, Neuroethics, 14, 191-203. https://doi.org/10.1007/s12152-020-09438-4
LIU, N. H., et al. (2013). ‘Improving Driver Alertness Through Music Selection Using a Mobile EEG to Detect Brainwaves’. Sensors, 13, 8199–8221. https://doi.org/10.3390/s130708199
LUCCHIARI, C., et al. (2019). ‘Editorial: Brain Stimulation and Behavioral Change’. Frontiers in Behavioral Neuroscience, 13(20), 1-3. https://doi.org/10.3389/fnbeh.2019.00020
MERIKLE, P. M., et al. (2001). ‘Perception without awareness: Perspectives from Cognitive Psychology’. Cognition, 79(1-2): 115-134. https://doi.org/10.1016/s0010-0277(00)00126-8
MIYAWAKI, Y., et al. (2008). ‘Visual image reconstruction from human brain activity using a combination of multiscale local image decoders’. Neuron, 60(5), 915-929. https://doi.org/10.1016/j.neuron.2008.11.004
MOORE, T. E. (1982). ‘Subliminal Advertising: What You See Is What You Get’. Journal of Marketing, 46(2), 38-47. https://doi.org/10.2307/3203339
MORE, V., et al. (2023). ‘Using Motor Imagery and Deep Learning for Brain-Computer Interface in Video Games’, 2023 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 0711-0716. https://doi.org/10.1109/AIIoT58121.2023.10174453
MORENO, J., et al. (2022). ‘The Ethics of AI-Assisted Warfighter Enhancement Research and Experimentation: Historical Perspectives and Ethical Challenges’. Frontiers in Big Data, 5(978734), 1-13. https://doi.org/10.3389/fdata.2022.978734
MUNYON, CH. (2018). ‘Neuroethics of Non-primary Brain Computer Interface: Focus on Potential Military Applications’. Frontiers in Neuroscience, 12(696), 1-4. https://doi.org/10.3389/fnins.2018.00696
MUSE (2025). Muse™ EEG-Powered Meditation & Sleep Headband. (Accessed: 15 May 2025). Available at: https://choosemuse.com
NEURALINK (n.d.). Neuralink. (Accessed: 15 May 2025). Available at: https://neuralink.com/
NUFFIELD COUNCIL ON BIOETHICS (2013). Novel Neurotechnologies: Intervening in the Brain. London: Nuffield Council on Bioethics. Available at: https://cdn.nuffieldbioethics.org/wp-content/uploads/Novel-neurotechnologies-report.pdf
OECD (2024). Explanatory Memorandum on the Updated OECD Definition of an AI System. OECD Artificial Intelligence Papers. March, No. 8.
OECD (2025). Recommendation of the Council on Responsible Innovation in Neurotechnology. OECD/LEGAL/0457. (Accessed: 30 July 2025). Available at: https://legalinstruments.oecd.org/en/instruments/oecd-legal-0457
ONCIUL, R., et al. (2025). ‘Artificial Intelligence and Neuroscience: Transformative Synergies in Brain Research and Clinical Applications’, Journal of Clinical Medicine, 14, Article 550. (Accessed: 01 November 2025). Available at: https://doi.org/10.3390/jcm14020550
PATEL, S. H. and AZZAM, P. N. (2005). ‘Characterization of N200 and P300: Selected Studies of the Event-Related Potential’. International Journal of Medical Sciences, 2(4), 147-154. https://doi.org/10.7150/ijms.2.147
PAUL-ZIEGER, R. (2024). EU MDR Conformity Assessment Options for Medical Devices: Determining the proper path to CE marking for your products. EmergobyUL.com White Paper, May. (Accessed: 30 July 2025). Available at: https://www.emergobyul.com/sites/default/files/2024-12/EU-MDR-Conformity-Assessment-Whitepaper.pdf
PEARSON, H. (2006). ‘Lure of Lie Detectors Spooks Ethicists’. Nature, 441, 918-919.
PELLEY, R. (2024). ‘‘Use the Force, Rich!’ Can You Really Play Video Games with Your Mind?’. The Guardian. (Accessed: 22 May 2025). Available at: https://www.theguardian.com/games/article/2024/aug/09/can-you-really-play-video-games-with-your-mind
POLDRACK, R. A., et al. (2018). ‘Predicting Violent Behavior: What Can Neuroscience Add?’. Trends in Cognitive Sciences, 22, 111-123. https://doi.org/10.1016/j.tics.2017.11.003
PRESS, G. (2017). ‘Artificial Intelligence (AI) Defined’. Forbes, August 27. (Accessed: 12 July 2025). Available at: https://www.forbes.com/sites/gilpress/2017/08/27/artificial-intelligence-ai-defined/
QVORTRUP, M. (2024). The Political Brain: The Emergence of Neuropolitics. CEU Press Perspectives.
RAINEY, S., et al. (2020). ‘Brain Recording, Mind‑Reading, and Neurotechnology: Ethical Issues from Consumer Devices to Brain‑Based Speech Decoding’. Science and Engineering Ethics, 26, 2295–2311. https://doi.org/10.1007/s11948-020-00218-0
RAINEY, S., et al. (2020a). ‘Is the European Data Protection Regulation sufficient to deal with emerging data concerns relating to neurotechnology?’, Journal of Law and the Biosciences, 7(1), January-June, 1-19. https://doi.org/10.1093/jlb/lsaa051
RAZQUIN, M. M. (2024). ‘Sistemas de IA prohibidos, de alto riesgo, de limitado riesgo, o de bajo o nulo riesgo’. Revista de Privacidad y Derecho Digital, 34, 172-235.
ROELFSEMA, P. R., et al. (2018). ‘Mind Reading and Writing: The Future of Neurotechnology’. Trends in Cognitive Sciences, 22(7), 598-610. https://doi.org/10.1016/j.tics.2018.04.001
SCHALK, G., et al. (2024). ‘Translation of neurotechnologies’. Nature Reviews Bioengineering, 2, 637-652. https://doi.org/10.1038/s44222-024-00185-2
SEITZ, R. J. (2017). ‘Beliefs and Believing as Possible Targets for Neuroscientific Research’ in ANGEL, H-F., et al. (Eds.). Processes of Believing: The Acquisition, Maintenance, and Change in Creditions, 1. Springer: New Approaches to the Scientific Study of Religion, 69-81. https://doi.org/10.1007/978-3-319-50924-2_8
SHIH, J. J., et al. (2012). ‘Brain-Computer Interfaces in Medicine’. Mayo Clinic Proceedings, 87(3), 268-279. https://doi.org/10.1016/j.mayocp.2011.12.008
SKEEM, J. L. and MONAHAN, J. (2011). ‘Current Directions in Violence Risk Assessment’. Current Directions in Psychological Science, 20(1), 38-42. https://doi.org/10.1177/0963721410397271
STEVENSON, A. (ed.) (2015). Oxford Dictionary of English. 3rd edn. Oxford: Oxford University Press. (Accessed: 5 June 2025). Available at: https://www.oxfordreference.com
SURIANARAYANAN, CH., et al. (2023). ‘Convergence of Artificial Intelligence and Neuroscience towards the Diagnosis of Neurological Disorders – A Scoping Review’. Sensors, 23(6), 3062, 1-29. https://doi.org/10.3390/s23063062
TANG, J., et al. (2023). ‘Semantic Reconstruction of Continuous Language from Non-Invasive Brain Recordings’, Nature Neuroscience, 26, 858-866. https://doi.org/10.1038/s41593-023-01304-9
TORTORA, L., et al. (2020). ‘Neuroprediction and A.I. in Forensic Psychiatry and Criminal Justice: A Neurolaw Perspective’. Frontiers in Psychology, 11(220). https://doi.org/10.3389/fpsyg.2020.00220
TÜV AI-LAB (2024). Technical Assessment of High-Risk AI Systems: State of Play and Challenges. TÜV AI-Lab. (Accessed: 1 August 2025). Available at: https://www.tuev-lab.ai/fileadmin/user_upload/AI_Lab/TUEV_AI_Lab_Whitepaper_Technical_Assessment_of_AI_Systems.pdf
UC DAVIS HEALTH (2024). New Brain-Computer Interface Allows a Man with ALS to ‘Speak’ Again. (Accessed: 1 August 2025). Available at: https://health.ucdavis.edu/news/headlines/new-brain-computer-interface-allows-man-with-als-to-speak-again/2024/08
UNESCO (n.d.). Artificial intelligence. UNESCO. (Accessed: 7 May 2025). Available at: https://www.unesco.org/en/artificial-intelligence
UNESCO (2025). Draft Recommendation on the Ethics of Neurotechnology. Paris: United Nations Educational, Scientific and Cultural Organization. (Accessed: 9 August 2025). Available at: https://unesdoc.unesco.org/ark:/48223/pf0000394866
UNITED NATIONS HUMAN RIGHTS COUNCIL [UNHRC] (2024). Impact, opportunities and challenges of neurotechnology with regard to the promotion and protection of all human rights. United Nations. Report A/HRC/57/61. (Accessed: 9 August 2025). Available at: https://docs.un.org/en/A/HRC/57/61
VAN DONGEN, J. D. M., et al. (2025). ‘Neuroprediction of Violence and Criminal Behaviour Using Neuro-Imaging Data: From Innovation to Considerations for Future Directions’. Aggression and Violent Behaviour, 80(102008), 1-14. https://doi.org/10.1016/j.avb.2024.102008
VIBRE (n.d.). Analyzing Brain Data to Reduce Accidents In High-Risk Industries. (Accessed: 16 May 2025). Available at: https://vibre.io/en/
VOLL, C. (2025). ‘The Science of EEG + fNIRS: Why Combining These Technologies Enhances Mental Fitness’. Muse Blog, 18 March. (Accessed: 15 May 2025). Available at: https://choosemuse.com/blogs/news/the-science-of-eeg-fnirs-why-combining-these-technologies-enhances-mental-fitness
YU, T., et al. (2022). ‘Prediction of Violence in Male Schizophrenia Using sMRI, Based on Machine Learning Algorithms’, BMC Psychiatry, 22(676), 1-7. https://doi.org/10.1186/s12888-022-04331-1
YUSTE, R. (2022). ‘Rafael Yuste: Let’s Act Before It’s Too Late’. The UNESCO Courier. (Accessed: 14 May 2025). Available at: https://courier.unesco.org/en/articles/rafael-yuste-lets-act-its-too-late
YUSTE, R., et al. (2017). ‘It’s Time for Neuro-Rights: New Human Rights for the Age of Neurotechnology’. Horizons, 18, 154-164.
ZHOU, M. H., et al. (2025). ‘Bird’s Eye View of Artificial Intelligence in Neuroscience’. AI in Neuroscience, 1(1), 16-41. https://doi.org/10.1089/ains.2024.0001
ZOHNY, H., et al. (2023). ‘The Mystery of Mental Integrity: Clarifying Its Relevance to Neurotechnologies’. Neuroethics, 16(20), 1-20. https://doi.org/10.1007/s12152-023-09525-2
Received: 16th August 2025
Accepted: 10th November 2025

_______________________________
1 Co-PI of the Research Group on International Relations and International Law (GERD-UOC), within the Center for Research in Digital Transformation and Governance (UOC-DIGIT). melizaldec@uoc.edu
2 P300 signals are a specific type of event-related potential (ERP): a variation in the brain’s electrical activity in response to a given stimulus, such as a sound or an image, that can be measured using electroencephalography (EEG). The P300 is a positive deflection appearing approximately 300 to 400 milliseconds after the presentation of a stimulus and is typically linked to selective attention and information processing, including recognition and memory updating (Patel and Azzam, 2005).
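Purely to illustrate this definition, the following minimal Python sketch shows how a P300-like positive deflection could be quantified by averaging EEG amplitude in the 300–400 ms post-stimulus window. The data, sampling rate, and window boundaries are assumptions for demonstration only and are not drawn from the cited study or from any NT device discussed in this paper.

```python
# Illustrative sketch only: quantifying a P300-like deflection in a synthetic EEG epoch.
# All parameters (sampling rate, window, noise level) are assumptions for demonstration.
import numpy as np

FS = 250                  # assumed sampling rate in Hz
WINDOW = (0.300, 0.400)   # P300 window: roughly 300-400 ms after stimulus onset

def p300_mean_amplitude(epoch: np.ndarray, fs: int = FS) -> float:
    """Return the mean amplitude of one post-stimulus epoch within the 300-400 ms window."""
    start, stop = int(WINDOW[0] * fs), int(WINDOW[1] * fs)
    return float(epoch[start:stop].mean())

# Synthetic 1-second epoch: background noise plus a small positive bump around 350 ms.
t = np.arange(0, 1.0, 1 / FS)
epoch = 0.5 * np.random.randn(t.size) + 5.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.02 ** 2))
print(f"Mean 300-400 ms amplitude: {p300_mean_amplitude(epoch):.2f} (arbitrary units)")
```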