
AI Ethics & Regulation: Insights from DiliTrust’s Head of Legal & DPO 

AI is rapidly emerging as a catalyst for innovation, reshaping industries and redefining the way we work and interact with technology. However, with innovation comes responsibility, and the integration of AI into our lives raises questions about ethics, privacy and social impact. 

In this interview we delve into the perspectives of Marie-Claire Jacob, Head of Legal and Data Protection Officer at DiliTrust. With a wealth of experience navigating the intricate legal and data protection landscape, Marie-Claire brings a unique perspective to the conversation. Join us for a dialogue that explores the intersection of technology, ethics and society.

[Illustration: the scales of justice against a backdrop of technology vectors, representing the question of AI ethics and regulation.]

QUESTION: What are your current thoughts on the role of artificial intelligence? 

Artificial intelligence currently sparks many debates, both for and against, particularly in the legal context. Personally, I often use artificial intelligence in my everyday work, and I have recently discovered new generative AI tools that positively impact the way I work. I’m impressed by the extent to which these tools can improve my writing; even though I’m bilingual, I have learned new words thanks to them.

I believe there is real added value in using such tools, and this bears witness to the transformative power they will have on our society and our work habits in numerous fields. That said, there are certain limits and risks, especially with generative AI tools.

This could also interest you: Ensuring Sustainability in the Age of AI 

QUESTION: Talking about risk, what are your concerns exactly? 

There are legal and ethical risks that need to be studied, and guidelines for use need to be set. While I appreciate the enthusiasm surrounding artificial intelligence, I believe that implementing regulations is crucial to mitigating its potential adverse effects on our societies. I’d like to draw a comparison with the cloud: when it first emerged, it raised concerns, yet it has since become indispensable.

I believe we need to approach the issue in a way that allows us to understand how it works. AI functions by simulating human intelligence, acting as a computational brain that learns from and analyzes copious amounts of data. But the collection, quantity and origin of this data raise privacy, consent and liability issues. The use of personal data raises concerns under regulations like the GDPR, as models trained on web data may inadvertently include sensitive information. This is why the surrounding legal frameworks require careful consideration, as this kind of innovation can clash with fundamental privacy principles.

QUESTION: Could you give us an insight as to how artificial intelligence is currently regulated, or will be regulated in the future? 

The regulation of AI is an ongoing topic with various initiatives aiming to address its ethical and legal implications. There’s a division within the tech community regarding regulatory approaches, with academia emphasizing ethical considerations and industry players outlining principles of responsible AI. 

International efforts, like the 2018 Montreal Declaration and initiatives by organizations such as UNESCO, reflect a global consensus on these needs. The EU’s AI Act, set for application in 2025, is a pioneering step in regulating AI, as it will classify AI applications based on their risk levels and ensure compliance with fundamental values.

While some criticize these regulations, they help set a precedent and elevate standards, as previously seen with the GDPR. Transparency and accountability in AI models do pose challenges regarding trade secrets and competitive advantage. However, viewing legal compliance as a business advantage can foster trust among clients and set new global standards.


AI Act

The EU AI Act, recently approved by the European Parliament on 14 March, aims to ban specific AI applications that threaten fundamental rights and establishes rules for high-risk AI systems, which can include biometric recognition and categorization. It places specific limitations on the use of AI in law enforcement and stresses the need for transparency, requiring developers to explain how their AI works and what data it uses. Overall, the Act marks a significant step in governing AI responsibly and transparently in Europe, while supporting innovation.


QUESTION: What are, according to you, the key considerations when working with AI? 

When working with AI, it’s crucial to prioritize trust and compliance. This involves defining clear use cases, ensuring privacy by design, and conducting thorough data protection assessments. Furthermore, addressing biases through fine-tuning and maintaining the ability to adapt to changing regulatory landscapes are essential.

Collaborating with legal and data protection experts from the project’s outset adds value, ensuring compliance and minimizing risks. Ultimately, viewing legal compliance as a business partnership enhances project success and fosters a competitive edge in the evolving AI landscape.

Read also: Understanding the Digital Operational Resilience Act 


At DiliTrust we understand that regulation and ethical considerations surrounding artificial intelligence are crucial. Marie-Claire’s insights highlight the importance of trust, compliance and collaboration in this rapidly evolving landscape. As we continue to forge ahead, DiliTrust remains committed to championing responsible AI governance through our innovative suite of solutions. By embracing initiatives like the EU AI Act, we pave the way towards a future where AI operates transparently, ethically and responsibly, empowering us to harness its transformative potential while upholding fundamental principles and values.

Streamline legal and corporate activities with DiliTrust Governance Suite’s integrated modules. Contact us for a free demo and discover how to boost productivity!

Disclaimer: The views and opinions expressed in this interview are those of the individual interviewee, Marie-Claire Jacob, and do not necessarily reflect the official stance or viewpoints of DiliTrust. While Marie-Claire Jacob holds the position of Head of Legal and Data Protection Officer within our organization, the perspectives shared in this interview represent her personal viewpoints and professional expertise. DiliTrust does not endorse or validate the opinions expressed herein, and readers are encouraged to interpret the content within the context of individual perspectives.