Artificial intelligence law in Italy: everything you need to know

The rapid rise of artificial intelligence (AI) presents enormous opportunities: advanced automation, more accurate predictions, and improvements in health, finance, and public services. At the same time, it brings very concrete challenges: algorithm transparency, hidden biases, automated decisions without human oversight, and risks to personal data privacy.

In this context, Europe has established itself as a global regulatory laboratory. And now, with the passage of the artificial intelligence law in Italy, one member state has moved ahead of the rest, establishing a comprehensive national framework that explicitly links AI and privacy.

This article analyzes what this law contains, what it means for data protection, how it compares to other frameworks, and what impact it will have on various sectors.

What does the new artificial intelligence law in Italy establish?

The artificial intelligence law in Italy (Law No. 132/2025) was approved on September 23, 2025, published in the Official Gazette on September 25, and entered into force on October 10, 2025. Although it has already been enacted, several of its provisions require implementing decrees that will be issued shortly. Therefore, both companies and public entities must adapt their systems and compliance policies.

This is the first national framework in the European Union to comprehensively regulate AI, aligning itself with the EU Artificial Intelligence Act (AI Act).

Among its main new features are:

  • Protection of minors: children under the age of 14 need parental consent to use certain AI services; the main objective is to protect young users from the potential risks associated with AI technologies.
  • Health and data reuse: the law allows health data (once anonymized or pseudonymized) to be reused to develop clinical AI systems, always under conditions that protect privacy.
  • Funding and innovation: up to €1 billion is allocated for investment in AI, cybersecurity, and telecommunications.

Which bodies oversee compliance with the law?

Italy’s artificial intelligence law establishes a complex institutional framework to ensure its effective implementation, with different bodies responsible for oversight and enforcement:

  • AgID (Italian Digital Agency): accredits and supervises the bodies that assess the compliance of AI systems.
  • ACN (National Cybersecurity Agency): acts as the main supervisory authority, with the power to impose sanctions and regulate the use of AI in cybersecurity.
  • Sectoral supervision: the Bank of Italy, CONSOB, and IVASS supervise the credit, finance, and insurance sectors. For its part, the Italian Data Protection Authority maintains its role in privacy and automated decisions, as does AGCOM in digital services.

In addition, a Coordination Committee has been set up to implement the national AI strategy and align efforts between government, industry, and research.

Although the law strengthens institutional control, some voices have questioned the lack of independence of the main supervisors, an aspect highlighted by the European Commission and the Italian data protection authority.

Implications for privacy and data protection

The approval of the artificial intelligence law in Italy is a milestone in terms of privacy, for several reasons:

Alignment with the GDPR

Italy already had a robust European framework in place thanks to the GDPR. This new law complements it by specifying how data should be processed in AI systems, especially when it comes to large volumes of data, profiling, automated decisions, or the processing of sensitive data. For example, in healthcare, data must be anonymized or pseudonymized before it can be reused.
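
To make the healthcare example concrete, here is a minimal sketch of how patient records might be pseudonymized before reuse. It assumes a keyed hash over the direct identifier and hypothetical field names; it illustrates the idea, not a format prescribed by the law or by any specific tool.

```python
import hashlib
import hmac

# Secret key stored separately from the data (e.g., in a key vault), so the
# pseudonym cannot be reversed from the dataset alone. Placeholder value only.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize_id(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym)."""
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_record_for_reuse(record: dict) -> dict:
    """Drop direct identifiers and keep only what the clinical model needs."""
    return {
        "patient_ref": pseudonymize_id(record["patient_id"]),  # pseudonym, not the real ID
        "age_band": record["age"] // 10 * 10,                   # coarsened quasi-identifier
        "diagnosis_code": record["diagnosis_code"],
        "lab_results": record["lab_results"],
        # name, address, tax code and similar identifiers are deliberately dropped
    }

sample = {
    "patient_id": "IT-000123",
    "age": 47,
    "diagnosis_code": "E11",
    "lab_results": {"hba1c": 7.2},
    "name": "Mario Rossi",
}
print(prepare_record_for_reuse(sample))
```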

Transparency and human oversight

The law requires that decisions made by AI systems be traceable (that is, it must be possible to identify how and why a result was reached) and that human oversight be in place in sensitive sectors. This strengthens the protection of citizens’ rights against opaque algorithms. For organizations, it means documenting each stage of the process, demonstrating regulatory compliance, and implementing robust data governance. Solutions such as Nymiz facilitate this compliance by incorporating manual oversight into anonymization, resulting in more accurate, auditable, and secure results.
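
As an illustration of what traceability can look like in practice, the sketch below logs each automated decision together with the model version, the data it was based on, and whether a human reviewed it. The field names and the JSON-lines format are assumptions made for the example, not requirements set by the law.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    """One traceable record per automated decision (illustrative fields only)."""
    system_name: str        # which AI system produced the output
    model_version: str      # exact version used, so the result can be reproduced
    input_summary: dict     # the (minimized) data the decision was based on
    output: str             # the result returned by the system
    human_reviewed: bool    # was a person involved before the decision took effect?
    reviewer: str | None = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_audit_log(entry: DecisionLogEntry, path: str = "decision_log.jsonl") -> None:
    """Append the entry as one JSON line, building an auditable trail over time."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

entry = DecisionLogEntry(
    system_name="triage-assistant",
    model_version="2025.10.1",
    input_summary={"symptom_codes": ["R07.4"], "age_band": 40},
    output="refer to cardiology",
    human_reviewed=True,
    reviewer="dr.bianchi",
)
append_to_audit_log(entry)
```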

Processing of sensitive data and individual rights

Italy’s artificial intelligence law establishes clear measures to protect personal data, especially in sensitive sectors such as healthcare. Data reuse is only permitted if it has been properly anonymized or pseudonymized, thus ensuring privacy.

The rights of the data subject—access, rectification, erasure—remain fully in force, but a “privacy by design” approach and human oversight of automated decisions are required. In addition, the law provides for severe penalties for the misuse of personal data through AI systems, reinforcing the commitment to ethical and transparent AI.

Exemplary penalties and real cases

Italy’s artificial intelligence law provides for prison sentences of one to five years for unlawful uses of AI that cause harm, such as the dissemination of deepfakes. Although standard administrative fines have not yet been set in the implementing regulations, existing regulatory precedents already set the tone.

These precedents show that Italy not only legislates but also actively enforces its data protection principles in the field of AI.

Impact on risk minimization

One of the pillars of Italy’s artificial intelligence law is proactive risk management. For AI systems that may affect fundamental rights, such as in employment, justice, or healthcare, a data protection impact assessment (DPIA) is required.

This involves integrating privacy by design, applying techniques such as anonymization, ensuring the traceability of algorithms, and establishing human oversight mechanisms. Thus, the law promotes a culture of responsibility in the use of AI from its development to its practical application.
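
Purely as an illustration, a first-pass screening of whether a system is likely to need a DPIA can be encoded as below. The criteria are assumptions drawn from the sectors mentioned above; the actual assessment remains a legal judgement under the GDPR and the new law.

```python
# Illustrative first-pass screening only; the real DPIA obligation is a legal
# determination, not a code check.
HIGH_IMPACT_DOMAINS = {"employment", "justice", "healthcare", "credit"}

def dpia_recommended(system: dict) -> bool:
    """Return True if the described AI system likely warrants a DPIA."""
    return (
        system.get("domain") in HIGH_IMPACT_DOMAINS
        or system.get("processes_sensitive_data", False)
        or system.get("fully_automated_decisions", False)
        or system.get("large_scale_profiling", False)
    )

cv_screening = {
    "name": "CV ranking model",
    "domain": "employment",
    "processes_sensitive_data": False,
    "fully_automated_decisions": True,
}
print(dpia_recommended(cv_screening))  # True: employment domain plus automated decisions
```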

Comparison with other legislation

Italy’s artificial intelligence law is aligned with the European regulatory framework, but introduces distinctive elements that make it a benchmark within the EU.

Similarities with the European AI Act

  • It applies the same risk-based classification of AI systems, from unacceptable to minimal risk.
  • It reinforces the principles of transparency, traceability, and respect for fundamental rights.
  • It establishes strict penalties for misuse or dangerous use of AI.

Differences and advances

  • It is the first national law in the EU to comprehensively regulate AI, ahead of the EU AI Act becoming fully applicable.
  • It introduces specific measures to protect sensitive data (such as health data) and vulnerable groups (such as minors).
  • It requires employees to be informed when AI is used in the workplace.
  • It commits to a mixed model of regulation and innovation, with public investment of up to €1 billion in AI and cybersecurity.

This combination of regulatory foresight and practical approach makes Italy’s artificial intelligence law an emerging model for other European legislation.

Complement or foresight?

This law can be interpreted as complementing the AI Act and, at the same time, serving as a laboratory for AI regulation in Europe: Italy is taking the lead, setting standards that other countries are likely to follow.

Impact by sector

The artificial intelligence law in Italy will have specific effects on different sectors. Let’s look at the most relevant ones:

Public sector

Automation of citizen services: AI systems in public administration must comply with requirements for traceability, transparency, and respect for privacy. One of the most practical and revealing examples is in healthcare services, where AI can be used for diagnosis, provided that the data is anonymized and doctors retain the final decision.

Healthcare sector

Reuse of clinical data: Thanks to the law, research institutes can use anonymized/pseudonymized data to train AI, helping to accelerate medical innovation. However, it also entails strict data protection and governance obligations.

Legal and compliance sector

In a sector where artificial intelligence is already used for case law analysis, contract automation, and litigation prediction, Italy’s artificial intelligence law requires a new level of responsibility.

Law firms, compliance departments, and corporate legal services will need to review and adapt their procedures to ensure that:

  • the algorithms used are documented and audited
  • the impact on employees and customers is assessed
  • clear reporting is provided when AI systems are used in automated decisions that may affect rights.

These obligations reinforce the role of legal compliance as a strategic axis for implementing AI that is ethical, legal, and aligned with fundamental rights.

Labor sector

Employers who use AI in work processes (selection, evaluation, task assignment) must inform workers that an AI system is being used. This not only reinforces the right to explanation and transparency, but also ensures regulatory compliance.

What does this mean for companies using AI in Italy?

The arrival of the artificial intelligence law in Italy requires companies to act proactively to avoid penalties and take advantage of opportunities. Among the key points are:

  • Make an inventory of the AI systems they have deployed or plan to deploy in Italy, or that operate with data from Italian residents (a minimal sketch of such a register follows this list).
  • Verify compliance with principles such as traceability, human oversight, transparency, and data protection.
  • Adapt data governance processes, ensuring anonymization/pseudonymization where appropriate, and conducting data protection impact assessments (DPIAs).
  • Prepare the required documentation: system description, data used, training criteria, risk mitigations.
  • Train staff and raise awareness about data protection and ethics in AI.
  • Monitor regulatory developments: although the law has already been passed, there are still implementing decrees to be issued that will define operational details (e.g., requirements for algorithms).
  • Seize the opportunity: the allocation of up to €1 billion could be a way to finance innovation in AI, making regulation a lever for competitiveness.
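
For the inventory mentioned in the first point of this list, a simple machine-readable register of deployed AI systems can be kept alongside the compliance documentation. The fields below are an assumed, minimal structure for illustration, not a format defined by the law.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AISystemRecord:
    """Minimal inventory entry for one AI system (illustrative fields only)."""
    name: str
    purpose: str
    provider: str                  # built in-house or supplied by a vendor
    personal_data_categories: str  # e.g. "health", "HR", "none"
    affects_italian_residents: bool
    human_oversight: bool
    dpia_completed: bool

def export_inventory(records: list[AISystemRecord], path: str = "ai_inventory.csv") -> None:
    """Write the register to CSV so it can be shared with legal and audit teams."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

inventory = [
    AISystemRecord(
        name="chatbot-support",
        purpose="customer service triage",
        provider="vendor",
        personal_data_categories="contact data",
        affects_italian_residents=True,
        human_oversight=True,
        dpia_completed=False,
    ),
]
export_inventory(inventory)
```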

Nymiz, your strategic ally for regulatory compliance

Adapting to this new regulation does not have to be complex. At Nymiz, we help public and private organizations comply with legal data processing requirements through solutions such as:

  • Advanced anonymization and pseudonymization, which not only protect personal identity but also preserve the value of the data, unlike less secure solutions such as black boxes or manual redaction.
  • Privacy by design compliance, integrating privacy into the development of AI systems from the outset.
  • Robust data governance, aligned with the traceability, transparency, and oversight requirements demanded by law.

Our technology facilitates compliance with Italy’s artificial intelligence law without slowing down innovation, allowing companies to use AI responsibly, ethically, and legally.

Conclusion

The enactment of the artificial intelligence law in Italy represents a decisive step toward articulating a regulatory framework in Europe that combines the protection of fundamental rights, privacy, innovation, and competitiveness. In doing so, Italy is positioning itself as a pioneer.

For organizations, the message is clear: it is not enough to develop efficient AI; they must do so in an ethical, responsible, and privacy-friendly manner. This law demonstrates that data governance and transparency will no longer be “best practices” but will become legal obligations.

Ultimately, this new framework can serve as a model for other countries that are still designing their own regulatory paths toward AI that respects human dignity, privacy, and public trust.

The regulatory future of AI has already begun, and Italy’s AI law is a good indicator of the direction Europe is taking.

Don’t get left behind: find out how Nymiz can help you adapt to this new legislation. Book a demo and start protecting your data and strengthening your compliance today.
