AI is transforming the cybersecurity landscape

Published on 26/08/2025 in Expert talks

Artificial intelligence has a significant impact on cybersecurity. With cybercriminals intensifying and automating their attacks, organizations are increasingly integrating AI into their defensive ecosystems.

AI is increasing the risk of cybersecurity threats. No longer limited by human constraints, cybercriminals are intensifying and automating their attacks, exploiting vulnerabilities more effectively and targeting phishing more precisely. In response, organizations must integrate AI into their entire cybersecurity ecosystem in order to deploy appropriate countermeasures. This is fundamentally changing the role and responsibilities of CISOs: there is a clear shift from reactive to proactive security, driven in particular by threat intelligence and automated log analysis.

“Phishing attacks, for example, are carried out in several languages, using sophisticated language,” Fabrice Clément, CISO at the Proximus Group, explains. “By relying not only on rules, but also on behavior, AI enables companies to detect changes in patterns and automate actions, including managing alerts and redirecting potential victims to warning pages.”


The effectiveness of AI is a major issue, and the challenge for cybersecurity is to prove ourselves more effective than our adversaries.

Fabrice Clément, Chief Information Security Officer (CISO) at the Proximus Group


A question of effectiveness

Fabrice is convinced that the effectiveness of artificial intelligence is the primary challenge facing CISOs in terms of threats. “Thanks to AI, the human adversaries we face are becoming increasingly effective. They are able to exploit large amounts of data, particularly through social engineering techniques or organizational reconnaissance, and that makes them better at targeting and guiding their attacks. The challenge of cybersecurity is to be more effective than our adversaries.”


The threat is no longer solely from cybercriminal groups, but also from hostile nations capable of investing enormous resources in technology.

Benoît Hespel, Head of AI at Proximus ADA


The profile of cybercriminals is changing

In the past, hackers needed specialized skills, but that has changed. “A few years ago, all you had to do was ask ChatGPT how to create a keylogger and it would give you the code,” Benoît Hespel, Head of AI at Proximus ADA, recalls. “Since then, protection systems have been put in place in consumer LLMs to prevent this type of misuse. However, the threat continues to grow on the dark web, where criminals create, use, and share other large language models to create malware.”

Today, anyone with malicious intent can find a tool to carry out an attack. The result? More attackers, more automation, and more entities targeted at once by specialized AI agents.

Offensive tactics enhanced by AI

In addition to boosting social engineering, AI also personalizes phishing and refines deepfakes. “Whether it's text, video, sound, images, or QR codes, it's becoming increasingly difficult to distinguish between what's real and what's fake. Bots can even log into Teams to impersonate someone and collect information,” Benoît points out. “Some malware can also become adaptive and learn from the environment in which it is deployed, causing more damage and making it more difficult to detect.”

Adversarial attacks

In companies, AI projects are often managed by people other than the IT team. For Benoît, this constitutes a new channel for threats: “AI models are subject to far fewer controls and guidelines than conventional IT projects, and someone with access to the model can use it against its creator, by injecting prompts into large language models or by poisoning the data so that the model no longer detects a certain type of fraud, activity, or correlation.” This is known as an adversarial AI attack.
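
To make the data-poisoning risk concrete, here is a toy sketch, not from the article and with all data invented, showing how flipping a fraction of "fraud" labels in a training set degrades a simple classifier's ability to detect fraud:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)          # 1 = synthetic "fraud"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An attacker with write access relabels 60% of the fraud in the training data.
poisoned = y_tr.copy()
fraud_idx = np.where(poisoned == 1)[0]
flipped = rng.choice(fraud_idx, size=int(0.6 * len(fraud_idx)), replace=False)
poisoned[flipped] = 0

for name, labels in [("clean", y_tr), ("poisoned", poisoned)]:
    model = LogisticRegression().fit(X_tr, labels)
    # Accuracy on the fraud-only slice of the test set = fraud recall.
    fraud_recall = model.score(X_te[y_te == 1], y_te[y_te == 1])
    print(f"{name:8s} labels -> share of fraud still detected: {fraud_recall:.2f}")
```

The same model, trained on silently corrupted labels, simply stops seeing the fraud pattern, which is exactly the outcome Benoît describes.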

Sometimes developers ask ChatGPT which open-source library to use to develop a program. ChatGPT gives them three names, one of which is completely made up. This is the hallucination phenomenon inherent in LLMs. But a hacker can publish that fictitious library with a backdoor or malware inside it. All it takes is for a developer to install the library for the wolf to enter the sheepfold.
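
As a simple mitigation, a developer can vet any LLM-suggested dependency before installing it. A minimal sketch using PyPI's public JSON API; the checks shown here are illustrative, not an exhaustive defense:

```python
import sys
import requests

def vet_package(name: str) -> None:
    """Check whether an LLM-suggested dependency actually exists on PyPI,
    and when it first appeared (brand-new packages deserve extra scrutiny)."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"{name}: NOT on PyPI -- possible hallucination or squatting bait")
        return
    resp.raise_for_status()
    releases = resp.json()["releases"]
    uploads = [f["upload_time"] for files in releases.values() for f in files]
    first_seen = min(uploads) if uploads else "no uploaded files"
    print(f"{name}: exists on PyPI, first upload: {first_seen}")

if __name__ == "__main__":
    for pkg in sys.argv[1:]:   # e.g. python vet.py requests some-suggested-lib
        vet_package(pkg)
```

A package that appeared on the index only days ago, under exactly the name an LLM tends to invent, is a strong signal to stop and investigate.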


Keep the people-process-technology triangle in mind. The human and process aspects are just as important as the technological aspect.

Fabrice Clément, Chief Information Security Officer (CISO) at the Proximus Group


Basic defensive hygiene

“To protect against attacks generated or enhanced by AI, cybersecurity fundamentals such as access management and two-factor authentication remain essential,” says Fabrice. “But it is important to keep the people-process-technology triangle in mind. The human and process aspects are just as important as the technological aspect. With this in mind, security by design, vulnerability management, and backups are essential.”

Fabrice is convinced that preparing for attacks is essential: having a team and systems in place to detect and respond to incidents, increasing automation, and relying on AI. Managing threat information from different sources is another key defensive measure, one greatly facilitated by artificial intelligence, particularly generative AI, which makes it possible to act on that information. Finally, offensive defense, which involves attacking one's own infrastructure to check it for vulnerabilities, completes the picture.

AI-enhanced defenses

Artificial intelligence enhances defenses in many ways. In terms of identification, for example, it helps detect vulnerabilities in specific contexts and develop scenarios in penetration tests.

In protection, or in incident detection and response, AI can act as a coach for professionals: a chatbot answers their highly specific questions, taking into account all the available documentation. Proximus ADA is currently developing this AI coach in-house, initially for security analysts and then for developers. The coach will ensure that their implementations, designs, and solutions do not pose any major security risks in relation to the various guidelines in force.
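
The article does not describe the coach's internals. One plausible building block is retrieval over the documentation: pick the guideline snippets most relevant to the question and hand them to the model along with it. A minimal sketch, where the guideline texts and the question are invented placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-ins for the real security guidelines such a coach would index.
GUIDELINES = [
    "All external APIs must enforce OAuth 2.0 with short-lived tokens.",
    "Secrets must never be committed to source control; use the vault.",
    "User input must be validated server-side before any database access.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k guideline snippets most similar to the question."""
    vec = TfidfVectorizer().fit(GUIDELINES + [question])
    scores = cosine_similarity(vec.transform([question]),
                               vec.transform(GUIDELINES)).ravel()
    return [GUIDELINES[i] for i in scores.argsort()[::-1][:k]]

question = "How should I store the API key for the new integration?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only these guidelines:\n{context}\n\nQuestion: {question}"
print(prompt)   # this prompt would then go to whichever LLM backs the coach
```

Grounding the answer in retrieved documentation is what lets the coach stay specific to "the various guidelines in force" rather than answering from the model's general training data.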

Finally, artificial intelligence is used in the field of governance, where it can help ensure compliance with the legal framework.


In cybersecurity, AI is neither good nor bad; it is the intention behind the use of this powerful tool that is good or bad.

Benoît Hespel, Head of AI at Proximus ADA


Proximus sets the tone

The Proximus Group is at the forefront of AI-based cybersecurity. “We use AI agents to detect phishing in emails, smishing in text messages, and fraud in phone calls,” Fabrice explains.

“In terms of CLI spoofing, we go beyond whitelists and blacklists. An AI model based on fairly recent history analyzes the behavior of these numbers and assigns each CLI (calling line identification) a confidence score, thus improving fraud management between operators,” Benoît adds.
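
The article does not detail the model. As a rough illustration of the idea only, here is a sketch that scores CLIs from invented behavioral features with an off-the-shelf anomaly detector:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per CLI over a recent window -- all numbers invented:
# [calls per hour, distinct destinations, mean duration (s), callback rate]
history = np.array([
    [4.0,   3.0,   95.0, 0.40],   # ordinary subscribers
    [6.0,   5.0,  120.0, 0.35],
    [5.0,   4.0,   80.0, 0.50],
    [900.0, 850.0,  6.0, 0.001],  # burst of short calls that nobody returns
])

# Fit on behavior considered normal, then score every CLI.
model = IsolationForest(random_state=0).fit(history[:3])
raw = model.score_samples(history)          # higher = more normal
confidence = (raw - raw.min()) / (raw.max() - raw.min())

for row, score in zip(history, confidence):
    print(f"CLI with {row[0]:.0f} calls/h -> confidence {score:.2f}")
```

The point is the shape of the approach: behavior observed over recent history is condensed into a score, rather than a binary list membership.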

Proximus also goes to great lengths in automated log analysis. Data from different systems is reconciled to obtain more correlations. Using graph-based algorithms, this data is then represented as a network in order to identify interactions and detect deviations from normal behavior, thereby potentially revealing suspicious activity.
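
A toy sketch of the graph idea, with invented accounts and hosts: log events become edges, and any interaction absent from a baseline graph of known-normal activity is flagged.

```python
import networkx as nx

# Toy log events as (account, host) pairs -- all names invented.
events = [
    ("alice", "mailserver"), ("alice", "crm"),
    ("bob", "mailserver"), ("bob", "build01"),
    ("svc-backup", "fileserver"),
    ("alice", "domain-controller"),   # alice has never touched the DC before
]

# Baseline graph built from a period of known-normal activity.
baseline = nx.Graph()
baseline.add_edges_from(events[:-1])

# Flag any interaction that has no edge in the baseline graph.
for src, dst in events:
    if not baseline.has_edge(src, dst):
        print(f"anomalous interaction: {src} -> {dst}")
```

Real deployments use far richer graph algorithms than a simple edge lookup, but the principle is the same: the network structure makes unusual interactions stand out.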

Staff awareness

In the inventory of defensive measures, training and raising employee awareness of cybersecurity are crucial, and AI allows for a high degree of personalization in these programs. Benoît: “At Proximus, we already generate internal phishing campaigns to train and raise awareness among staff. When someone is phished, they now receive an email explaining in detail the clues that could have made them realize it was phishing.”

These highly personalized campaigns with concrete feedback are more effective than standardized campaigns and allow us to stay ahead of the game. They also raise awareness about deepfakes, encouraging users to think critically. Because the training is gamified and interactive, the message gets across better and employees feel more engaged.

Agentic AI: a trend for the future

AI agents are LLMs like ChatGPT, but capable of using tools, for example by interacting with an Outlook calendar, sending emails, or accessing an API. Certain tasks can be automated simply by talking to an AI agent. “These agents have a bright future ahead of them,” Benoît says, “particularly for analyzing correlations or responding to certain incidents. Automating monitoring will allow us to focus on more complex cases that will still require human expertise.”
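
To illustrate what "using tools" means in practice, here is a minimal sketch of one tool-dispatch step. The tools, alert ID, and JSON convention are invented for the example; real agent frameworks wrap this same pattern:

```python
import json

# Invented stand-in tools an incident-response agent might be allowed to call.
def lookup_alert(alert_id: str) -> str:
    return f"alert {alert_id}: 14 failed logins from one IP within 2 minutes"

def block_ip(ip: str) -> str:
    return f"firewall rule submitted for {ip}"

TOOLS = {"lookup_alert": lookup_alert, "block_ip": block_ip}

def run_agent_step(model_output: str) -> str:
    """Dispatch one tool call the LLM emitted as JSON: {"tool": ..., "args": {...}}."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](**call["args"])

# In a real loop the JSON comes from the model; here we hand-craft one step.
print(run_agent_step('{"tool": "lookup_alert", "args": {"alert_id": "A-1042"}}'))
```

Keeping the tool set explicit, as in the TOOLS table above, is also what makes such agents governable: the model can only act through calls the operator has deliberately exposed.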


The ubiquity of AI

“In the future, AI will permeate every aspect of cybersecurity, both on the cybercriminal side and on the organizational side,” Fabrice tells us. “Automation will become increasingly sophisticated, while assistance and control will become more efficient. AI will also be more autonomous in performing tasks and interacting. Employees will be able to focus on value-added tasks where human intelligence makes all the difference. Given that AI will be ubiquitous on the adversary side too, there will be a race for efficiency.”

The AI market in cybersecurity is expected to grow strongly in the coming years, from over $30 billion in 2024 to around $134 billion in 2030.

Source: Statista, 02/2025

“That's why Proximus ADA combines AI and cybersecurity,” Benoît adds. “In cybersecurity, artificial intelligence is a necessity to stay in the race, while securing AI applications cannot be taken lightly. As it becomes more widespread, AI is becoming comparable to any IT application and must be secured appropriately.”


Surround yourself with the right cyber partners who can advise you on how to integrate AI into your cybersecurity ecosystem.

Fabrice Clément, Chief Information Security Officer (CISO) at the Proximus Group


Step-by-step plan and governance

Fabrice recommends that companies wishing to integrate AI into their cybersecurity strategy do so in stages. “I advise them to start by defining a clear vision of their needs and a roadmap. I then encourage them to integrate AI capabilities into their processes and IT architecture, surrounding themselves with the right cyber partners. Finally, training teams and supporting change, underpinned by governance that integrates ethical aspects, is essential.”

“As AI becomes more widespread, anyone can easily create their own small, specific AI models. This form of shadow AI can pose a risk to cybersecurity and open doors to external access. The risk is even greater with the emergence of agentic AI. That makes governance crucial,” Benoît concludes.

Ready to strengthen your cybersecurity with artificial intelligence?

Call on the expertise of Proximus NXT to integrate AI into your cybersecurity.

Talk to an expert

Receive 3 interesting insights on data integration and AI in your mailbox.

We would love to share inspiring insights from our experts to help you optimally design your own AI journey. Register now and receive three articles featuring our experts in your mailbox.

  1. Why your AI project is also a data project
  2. AI and data integration: an essential symbiosis
  3. 7 practical lessons from over 150 AI projects

Register now

Fabrice Clément is Chief Information Security Officer (CISO) of the Proximus Group and responsible for the group's overall cybersecurity strategy. He sets and implements IT security policies, oversees information risk management, and ensures data protection compliance.

Benoît Hespel is Head of AI at Proximus ADA, Proximus' center of expertise in AI and cybersecurity. Benoît manages and coordinates the development and deployment of AI-based applications and tools, mainly for internal teams, with the aim of optimizing operational processes or improving the customer experience.