25 March 2025 - 7 min reading time
The increased use of artificial intelligence (AI) is driving change in many sectors, including finance. Laurence Van Meerhaeghe, Senior Legal Counsel and Data Expert at Febelfin, discusses what this means for banks in terms of new developments, challenges and opportunities.
I am a jurist and have a passion for law. Before joining the legal department of a bank in 2015, I worked as a lawyer at the Brussels Bar for more than eight years. Since September 2020, I have been working at Febelfin, where I monitor national and European regulations, analyse their potential impact and prepare the sector for their implementation. I specialise in legal issues related to data, from the protection of personal data to new technologies. In addition, I completed an executive master's degree in law and artificial intelligence to stay up to date.
We often think of ChatGPT when we talk about AI, but it is much broader than that. Artificial intelligence (AI) is an umbrella term for a range of technologies, from basic algorithms to complex deep learning systems and generative AI. AI covers different types of systems, depending on their goals (prediction, decision-making, etc.) and on how they are programmed and trained. Although AI technologies have existed for decades, their adoption has recently accelerated thanks to the growing processing power of computers, the growth of generated data (social networks, connected objects) and advances in research and development that have improved algorithms.
In back offices, AI applications can automate repetitive tasks such as document collection and verification, making risk management faster and more reliable (fewer human errors). In customer relations and communication, chatbots and virtual assistants give customers quick answers to simple questions, providing immediate reassurance. In the area of fraud, applications can help detect fraudulent or suspicious transactions.
As with all discoveries (fire, electricity, etc.), AI offers significant benefits, but also poses dangers. The main challenges relate to security, privacy protection, transparency of systems and ethics (bias). Belgian banks address these challenges in different ways. First, they apply strict regulations such as the AI Act (the European artificial intelligence regulation) and the General Data Protection Regulation (GDPR), which protects personal data and applies to AI systems that process such data. In addition, many other Belgian and European laws apply to the sector and cover these elements in one way or another (AML, PSD, etc.).
Furthermore, banks have internal departments that monitor AI developments in their respective areas, such as legal, compliance, IT, risk management and audit. Some institutions have even created committees or think tanks that focus specifically on AI, for instance to discuss ethical issues. Finally, banks train their staff in the proper use of AI systems, which is essential to ensure that the technologies are understood and used correctly and securely.
A more general challenge concerns raising public awareness. It is crucial to educate and inform the public about the use of new technologies, so that people can better understand the benefits and guard against the potential dangers. This requires a collective effort.
Through the AI Act, Europe wanted to harmonise the rules for the production, marketing and use of AI within the European Union. The aim is to ensure trust and respect for fundamental rights, health and safety of European citizens. The law classifies AI systems into three categories.
Prohibited systems refer to specific “practices” that use AI systems and are considered so risky that they are banned outright. These practices include, for example, behavioural manipulation, exploiting the vulnerabilities of individuals, and real-time remote biometric identification in public places by authorities.
Next are “high-risk” systems, whose impact is considered potentially high in sensitive areas. This is the case for health, justice, education, work, certain services (e.g. credit scoring), law enforcement and critical infrastructure. These systems are subject to strict risk management and human oversight requirements.
Finally, there are low-risk systems, whose impact is considered limited and which are subject to less stringent requirements (mainly transparency obligations). Some provisions of the AI Act came into force in February 2025, in particular those relating to prohibited systems and the definition of AI systems.
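The three tiers described above can be summarised in a short sketch. This is purely illustrative, not a legal classification tool: the example use cases and tier names are a simplified assumption based on the categories listed in this article, not on the text of the AI Act itself.

```python
# Illustrative sketch only: the AI Act's three risk tiers, as described in
# this article, expressed as a simple lookup. The mapping below is a
# hypothetical simplification, not legal advice.
RISK_TIERS = {
    # Prohibited practices (banned outright)
    "behavioural manipulation": "prohibited",
    "real-time remote biometric identification": "prohibited",
    # High-risk systems (strict risk management and human oversight)
    "credit scoring": "high",
    "recruitment screening": "high",
    # Low-risk systems (mainly transparency obligations)
    "customer service chatbot": "low",
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a use case, or 'unclassified' if unknown."""
    return RISK_TIERS.get(use_case, "unclassified")

print(risk_tier("credit scoring"))  # high
print(risk_tier("weather forecasting"))  # unclassified
```

In practice, classification under the AI Act depends on detailed legal criteria and context, which is why the lookup above should be read only as a memory aid for the three categories.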
AI offers many benefits, but also raises concerns, and it is legitimate to have certain fears. In this context, it is crucial to be wary of disinformation and to consult reliable sources. With the increasing digitalisation of society, consumers need to stay informed and proactive so as not to lose control, and to use new technologies safely. It is also important to ensure access for those who lack sufficient resources. Raising awareness among everyone is crucial to avoid misunderstandings and unfounded fears.
Detecting fraudulent transactions is already being done with AI applications. Other applications could follow, but always within the legal framework applicable to the sector. This necessary compliance with various Belgian and European regulations can sometimes be a challenge for the adoption of innovative solutions.
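To give a flavour of how such detection can work, here is a minimal sketch of one common underlying idea: flagging a transaction whose amount deviates strongly from a customer's historical pattern. This is a toy example under simplified assumptions; real bank systems use far richer features and models, and all function names and thresholds here are hypothetical.

```python
# Illustrative sketch only: a toy anomaly detector that flags a transaction
# whose amount is a statistical outlier relative to the customer's history.
# Real fraud detection uses many more signals (merchant, location, timing,
# device, etc.) and trained models; this z-score rule is a simplification.
from statistics import mean, stdev

def flag_suspicious(history: list[float], new_amount: float,
                    z_threshold: float = 3.0) -> bool:
    """Return True if new_amount deviates strongly from past amounts."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu  # any deviation from a constant pattern
    z = abs(new_amount - mu) / sigma
    return z > z_threshold

# Example: routine purchases vs. one very large transfer
history = [25.0, 40.0, 32.0, 28.0, 35.0, 30.0]
print(flag_suspicious(history, 33.0))    # False: within normal range
print(flag_suspicious(history, 5000.0))  # True: far outside normal range
```

Even this toy rule illustrates why the legal framework matters: the system processes personal transaction data, and a false positive can block a legitimate customer, which is exactly the kind of impact the regulations mentioned above are meant to govern.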
AI has the potential to change the accessibility of banks in several ways. By making certain services more agile and flexible, banks can personalise their offering further and respond better to the specific needs of particular customer groups.