The EU Commission publishes its guidelines on the definition of AI systems

EU Commission publication of February 6, 2025

A few days after the first provisions of Regulation 2024/1689 on Artificial Intelligence (hereinafter, the “AI Act”) became applicable, the European Commission published, on February 6, 2025, its guidelines on the definition of “AI system” within the meaning of this text. The purpose of this document is to clarify the application of Article 3(1) of the AI Act, which lays down a structuring definition for determining which systems fall within the scope of the Regulation. The Commission’s task was not an easy one, given the technical nature of the definition used by the AI Act and the fact that the expression “artificial intelligence” is now applied ever more freely to an impressive number of products and services to which, until recently, no particular intelligence was attributed.

A definition structured around seven criteria

Article 3(1) of the AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

This definition is based on seven criteria, addressed one by one by the European Commission in its guidelines:

  1. Machine-based system: This criterion excludes purely human or mechanical processes without computer intervention. An AI system must be based on integrated hardware and software components that enable computational operations to be carried out. The Commission insists on the wide variety of computerized systems covered, citing as examples quantum computers as well as biological or organic systems, “so long as they provide computational capacity.”

  2. Variable level of autonomy: An AI system must operate with a certain degree of autonomy, which is assessed in particular with regard to its inference capabilities (criterion 5). In practice, Recital 12 of the AI Act specifies that this means “some degree of independence of actions from human involvement and . . . capabilities to operate without human intervention.” The degree of autonomy can therefore vary widely in practice. The Commission confirms, however, that systems designed to operate solely with human involvement and intervention, whether direct (e.g. manual controls) or indirect (e.g. automated controls that allow humans to delegate or supervise system operations), are excluded from the definition (and therefore from the scope of the AI Act). The Commission also notes that the level of autonomy may trigger specific obligations within the AI Act in terms of risk management and human oversight.

  3. Post-deployment adaptiveness: This is the only criterion that is not mandatory, since Article 3(1) states that AI systems “may” exhibit adaptiveness. Adaptiveness refers to AI systems designed to evolve after deployment, through self-learning from newly collected data.

  4. Explicit or implicit objectives: AI systems must be developed to achieve explicitly or implicitly defined objectives. According to the Commission, explicit objectives are those clearly stated and directly encoded by the developer in the system, while implicit objectives can be deduced from the system’s behavior or underlying assumptions. The Commission notes that objectives are internal to the system (for example, answering questions about a set of documents with precision) and should not be confused with the system’s intended purpose, which is defined externally, in a specific context of use (for example, helping a department of the company deploying the AI system to be more productive).

  5. Inference capability: Inference capability is the most important criterion. Recital 12 of the AI Act explains that “[the] capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data.” According to the Commission, output generation mainly concerns the use phase of AI systems, while the derivation of models or algorithms concerns the development phase.

The Commission also points out that Article 3(1) refers more specifically to the ability to infer “how to generate” outputs. For the European body, this precision is very important: it refers to the use, during the development phase, of specific “AI techniques” enabling such inference. These techniques include (i) various machine learning methods (supervised, unsupervised, reinforcement learning, deep learning, etc.) and (ii) logic- and knowledge-based approaches (models that do not learn from raw data but “reason” from rules, facts and relationships between different elements encoded by the developers), as well as hybrid combinations of the two. A simplified contrast between the first two families is sketched after this list.

  6. Output generation: AI systems must be able to generate outputs belonging to one of the four categories listed in Article 3(1):

  • Predictions: This is the most common category of AI system outputs, corresponding to the ability to estimate an unknown value from the input data provided (known values). The Commission recalls that standard software has long been used to make predictions, but that what distinguishes AI systems is, in particular, their capacity to generate accurate predictions in highly dynamic and complex environments (e.g. AI systems in autonomous cars).

  • Recommendations: These include personalized suggestions based on user preferences and behavior (e.g. movie recommendations on a streaming platform). Here too, what distinguishes AI systems from standard software is their ability to exploit large-scale data, adapt in real time and provide highly personalized recommendations.

  • Content: Creation of texts, images or videos based on generative algorithms (e.g. conversational chatbots, image-generation models).

  • Automated decisions: Execution of specific actions in response to certain conditions (e.g. validation of bank loans, real-time fraud detection). It may be recalled here that, where such decisions are based solely on automated processing and produce legal or similarly significant effects for a natural person, they are subject to the specific provisions of Article 22 GDPR.

  7. Interaction with the environment: The Commission does not go into much detail on this last criterion, merely noting that AI systems are not “passive” and that the influence of an AI system can be exerted both on tangible objects (e.g. a robot’s arm) and on virtual environments. However, this does not settle all the questions that will arise concerning this criterion, notably as to what distinguishes AI systems from standard software on this point.
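To make the contrast between the two main families of inference techniques more concrete, the following sketch (in Python, using an entirely hypothetical scenario, data and thresholds) places side by side a machine learning model, whose decision rule is derived from training data during the development phase, and a rule encoded directly by a developer, in the spirit of the logic- and knowledge-based approaches mentioned above. It is offered as an illustration of the distinction drawn by the guidelines, not as a test of legal qualification.

```python
# Illustrative sketch only: hypothetical data, no legal significance.
from sklearn.linear_model import LogisticRegression

# Hypothetical examples: [operating hours, error count] -> needs maintenance?
X = [[10, 1], [200, 8], [15, 0], [300, 12], [50, 2], [250, 9]]
y = [0, 1, 0, 1, 0, 1]

# Machine learning: the decision rule (the model's parameters) is derived
# from the data during the development/training phase.
model = LogisticRegression().fit(X, y)
print(model.predict([[180, 7]]))  # output inferred from a learned model

# Developer-encoded rule, in the spirit of logic- and knowledge-based
# approaches: the knowledge is written down by humans, not learned from data.
def needs_maintenance(hours: float, errors: int) -> bool:
    return hours > 150 and errors >= 5

print(needs_maintenance(180, 7))  # output derived from encoded rules
```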

Differentiation from conventional software

The guidelines emphasize the distinction between AI systems and conventional software, providing several examples. These illustrations are very important because, on the basis of the above criteria alone, it can still be difficult to draw the line between high-performance conventional software and an “AI system” – only the latter being subject to the provisions of the European Regulation.

In principle, traditional software executes statically defined instructions, often without the ability to learn from or modify incoming data. For example, a spreadsheet performs predefined calculations according to fixed formulas, a search engine returns results according to predefined indexing criteria, and a sorting algorithm classifies data according to fixed rules. These systems are fundamentally deterministic and lack the intrinsic capabilities of the AI systems described above.
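The deterministic behavior just described can be reduced to a few lines of code. In the sketch below (hypothetical formulas and data), every output follows from instructions fixed in advance by the developer; no rule or parameter is derived from the data being processed, which is precisely what separates such software from the learning-based systems discussed above.

```python
# Minimal sketch of deterministic, fixed-rule software (hypothetical example).

def spreadsheet_cell(price: float, vat_rate: float = 0.20) -> float:
    """Fixed formula, like a spreadsheet cell: price * (1 + vat_rate)."""
    return price * (1 + vat_rate)

def sort_invoices(amounts: list[float]) -> list[float]:
    """Fixed sorting rule: ascending numeric order, always."""
    return sorted(amounts)

print(spreadsheet_cell(100.0))            # 120.0, for any user, any dataset
print(sort_invoices([30.0, 10.0, 20.0]))  # [10.0, 20.0, 30.0], always
```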

The Commission notes that some conventional software may have an inference capability but does not fall within the scope of the definition of AI systems because of its limited capacity to analyze patterns and adjust its output autonomously. The Commission gives several detailed examples to illustrate its point:

  • Systems for improving mathematical optimization: In particular, the Commission is targeting systems that use learning techniques to improve computational performance, optimize resource management, or improve the efficiency of pre-existing algorithms. Although these systems may incorporate “automatic adjustment” functionalities, the Commission considers that they do not constitute an “AI system” if they merely improve the performance of pre-existing models, rather than, for example, enabling these models to be adapted “intelligently.”

  • Basic data processing: These include, for example, database management systems for sorting or filtering data according to specific criteria, or data visualization software. These systems make it possible to perform various operations on pre-existing data, without actually learning and reasoning from this data, which they merely present in an informative way.

  • Systems based on classical heuristics: The Commission notes that an important difference from modern machine learning models is that classical heuristic systems apply predefined rules or algorithms to derive solutions, without adjusting their models according to input-output relationships.

  • Simple prediction systems: The Commission gives several examples of automated systems making predictions on the basis of basic statistical learning rules. According to the guidelines, these “simple” systems do not fall under the definition of an AI system because of their lower “performance,” a distinction illustrated in the sketch below.
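The last two bullets lend themselves to a short illustration. In the sketch below (hypothetical figures), a basic statistical rule estimates the next value as the historical average, while a regression model adjusts its parameters from input-output relationships; only the second mechanism resembles the learning that the guidelines associate with AI systems, and even then the qualification would require a case-by-case analysis.

```python
# Illustrative sketch only: hypothetical figures, no legal significance.
import numpy as np
from sklearn.linear_model import LinearRegression

past_prices = np.array([100.0, 102.0, 98.0, 101.0, 99.0])

# "Simple prediction system": estimate tomorrow's price as the historical
# average, a fixed statistical rule that the guidelines place outside the
# definition of an AI system.
naive_forecast = past_prices.mean()

# Machine learning regression: the coefficients are learned from
# input-output pairs and would change if the data changed.
X = np.arange(len(past_prices)).reshape(-1, 1)  # day index as the input
model = LinearRegression().fit(X, past_prices)
ml_forecast = model.predict([[len(past_prices)]])[0]

print(round(naive_forecast, 2), round(ml_forecast, 2))
```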

These examples can serve as an initial basis for analysis. However, it is apparent from the guidelines, and in particular from the examples provided by the Commission, that the line between conventional software and AI systems is far from clear. The highly technical nature of the subject, combined with the sometimes imprecise criteria put forward by the Commission (notably the reference to the absence of “intelligent” adaptation of models in the first example above, or to the lower “performance” of simple prediction systems), unfortunately does not provide a truly simple and reliable analysis grid. In-depth case-by-case analyses will remain essential in many situations, and they could lead to differences of opinion and therefore to diverging conclusions on the application of the AI Act to certain systems.