MACRAMÉ project partner AcumenIST has published an article about the discussions and conclusions of a workshop that examined how Artificial Intelligence (AI) is increasingly influencing chemical risk assessment, enabling faster, more comprehensive, and potentially more ethical assessments.
Organised and hosted by ECETOC in October 2024, the workshop brought together experts from academia, industry, and regulatory bodies to reflect on the historical challenges of integrating multidimensional omics technologies into chemical regulation and to explore the current capabilities and future potential of AI in toxicology and regulatory science.
In chemical risk assessment, AI refers to both generative and predictive algorithms, including machine learning, that analyse complex chemical, biological, and environmental data to provide insights into the potential for adverse effects on humans and ecosystems. AI systems support the prediction of chemical hazards, exposure levels, and adverse effects by learning from experimental results, mechanistic models, and regulatory datasets, thereby enhancing the efficiency of safety evaluations.
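As an illustration of the predictive side of this idea, the following is a minimal sketch of a model that learns a hazard flag from a few molecular descriptors. The descriptors, synthetic data, and model choice are assumptions made for illustration only, not methods described in the workshop paper.

```python
# Minimal, hypothetical sketch: a classifier that learns a hazard flag
# from simple molecular descriptors. All data here are synthetic;
# a real assessment would use curated experimental and regulatory datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(seed=0)

# Illustrative descriptors: molecular weight, logP, polar surface area.
n_chemicals = 500
X = rng.normal(loc=[300.0, 2.0, 60.0], scale=[80.0, 1.5, 25.0],
               size=(n_chemicals, 3))

# Toy ground truth: higher logP loosely raises the chance of a hazard flag.
hazard_probability = 1 / (1 + np.exp(-(X[:, 1] - 2.0)))
y = rng.random(n_chemicals) < hazard_probability

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("Balanced accuracy:", balanced_accuracy_score(y_test, model.predict(X_test)))
```

A regulatory-grade workflow would of course add curated descriptors, validation against reference chemicals, and documented applicability domains; the sketch only shows the learn-from-data pattern the paragraph describes.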
This ECETOC workshop explored the current reality of AI in chemical risk assessment and identified several areas of potential application (Figure 1).

The resulting publication, entitled ‘Building trust in the integration of artificial intelligence into chemical risk assessment: findings from the 2024 ECETOC workshop’, highlights that ‘Artificial Intelligence (AI) can mimic human cognitive processes by analysing data to derive new knowledge, learn from patterns, and predict or generate outputs. Unlike biological intelligence (BI), which incorporates emotional responses and evolved instincts, AI operates via machines through logical, data-driven processing.’
FAIR principles: a prerequisite for responsible AI in regulatory science
The workshop found that generative AI introduces new complexities in data transformation and comparison. Generative AI demands reproducibility not only of outcomes but also of the methods used to create synthetic data. Entire datasets generated by AI may resemble test data, yet individual data points often differ. The ability to produce vast quantities of synthetic data raises questions around storage, sharing, and equity, and whether AI-generated data should be treated the same as experimentally derived data.
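To make the reproducibility point concrete, here is a minimal sketch of how synthetic data might be generated from a fixed seed and stored alongside provenance metadata, so that the method itself, not just the output, can be reproduced and the data are always labelled as AI-generated. The generator, metadata fields, and fingerprinting approach are illustrative assumptions, not a standard proposed at the workshop.

```python
# Minimal, hypothetical sketch: reproducible synthetic data with provenance.
import hashlib
import json
import numpy as np

GENERATOR_SEED = 42
rng = np.random.default_rng(GENERATOR_SEED)

# Synthetic "dose-response" readings; with the same seed and method, the
# whole dataset can be regenerated exactly, even though no single value
# corresponds to a measured data point.
doses = np.logspace(-2, 2, num=50)
responses = 1 / (1 + (1.0 / doses) ** 1.2) + rng.normal(0.0, 0.02, size=doses.size)

# Fingerprint the dataset and record how it was made, so AI-generated data
# stays distinguishable from experimentally derived data when shared.
provenance = {
    "origin": "synthetic",  # never presented as test data
    "method": "logistic dose-response curve + Gaussian noise",
    "seed": GENERATOR_SEED,
    "sha256": hashlib.sha256(responses.tobytes()).hexdigest(),
}
print(json.dumps(provenance, indent=2))
```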
Applying FAIR principles rigorously in AI will lead to more transparent, reproducible, and high-quality outcomes (Figure 2). In the long term, the benefits of FAIR compliance far outweigh the initial effort required to meet these standards.

To ensure FAIR AI, it is essential, for instance, to take the following steps (illustrated in the sketch after this list):
- Label data transparently
- Link AI models to their data sources
- Extend metadata standards
- Harmonise data models
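By way of illustration, a machine-readable provenance record along these lines could implement the first two points, labelling a dataset's origin and linking a model to its training data. All identifiers and field names below are hypothetical assumptions, not an established metadata standard.

```python
# Minimal, hypothetical sketch of a FAIR-style provenance record.
import json

fair_record = {
    "dataset": {
        "id": "doi:10.0000/example-dataset",  # hypothetical identifier
        "origin": "ai-generated",             # transparent labelling
        "licence": "CC-BY-4.0",
    },
    "model": {
        "id": "example-hazard-model-v1",      # hypothetical model name
        "training_data": ["doi:10.0000/example-dataset"],  # model-to-data link
    },
    # Extending metadata standards means fields like these become part of
    # an agreed, harmonised schema rather than ad hoc annotations.
    "metadata_standard": "extended to describe generative methods",
}
print(json.dumps(fair_record, indent=2))
```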
Conclusions
The authors find that ‘[t]he clear message from this workshop is that AI has already become mainstream and will be an integral part of the future of chemical risk assessment. […] AI represents a major advancement on existing tools, in that it is usable across a spectrum of toxicological science activities, from literature reviews to chemical selection, hazard identification, exposure assessment and report writing.’
‘The use of AI in regulatory toxicology is not a matter of choice, it will happen regardless of what we decide. The objective should be to make AI work well in regulatory toxicology, not only because of the benefits this technology offers, but also because its implementation is already advancing at a tremendous pace. Application and trust building will not be trivial issues, and overcoming the challenges highlighted in this workshop will require significant efforts from all sectors.’
Follow this link to read the full paper.