ARMOR
The ARMOR project will develop an adversarial testing framework to evaluate the security and robustness of ML models deployed in 6G O-RAN. The experiment will simulate a variety of adversarial attack scenarios targeting AI-based decision-making models in order to understand their vulnerabilities and improve their resilience.
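To make the attack scenarios concrete, the sketch below shows one classical attack that a framework of this kind could simulate: the Fast Gradient Sign Method (FGSM). This is an illustrative assumption, not the project's specified method; the PyTorch model, input batch, and perturbation budget epsilon are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.05) -> torch.Tensor:
    """Craft FGSM adversarial examples for an input batch x with labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each input feature in the gradient-sign direction that
    # maximizes the loss, i.e. the direction most likely to flip the decision.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def decision_flip_rate(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    """Fraction of model decisions flipped by the attack (a robustness metric)."""
    x_adv = fgsm_attack(model, x, y)
    with torch.no_grad():
        clean = model(x).argmax(dim=1)
        attacked = model(x_adv).argmax(dim=1)
    return (clean != attacked).float().mean().item()
```

A robustness evaluation of this kind compares the model's decisions on clean and perturbed inputs; a high flip rate at a small epsilon indicates a vulnerable decision-making model.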
OBJECTIVES
The objectives of ARMOR are to develop an adversarial testing framework for ML models deployed in 6G O-RAN, to simulate adversarial attack scenarios against the AI-based decision-making models used there, and to translate the findings into concrete improvements in model resilience.
NOVELTY
ARMOR addresses a critical and underexplored issue in 6G security: adversarial attacks targeting AI models. The experiment breaks new ground by focusing on the specific challenges of 6G O-RAN environments and by providing a comprehensive approach to mitigating adversarial threats.
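The mitigation techniques themselves are not specified here; one standard resilience measure that an experiment of this kind could evaluate is adversarial training, i.e. retraining the model on attack-perturbed inputs. The sketch below is a minimal illustration that reuses the hypothetical fgsm_attack from the earlier example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_training_step(model: nn.Module,
                              optimizer: torch.optim.Optimizer,
                              x: torch.Tensor, y: torch.Tensor,
                              epsilon: float = 0.05) -> float:
    """One optimizer step on FGSM-perturbed inputs (adversarial training)."""
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)  # attack from the sketch above
    optimizer.zero_grad()                      # discard gradients left by the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```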
ACTIVITY PLAN
MILESTONES AND TIMELINE
REPORTING AND DELIVERABLES
M2 (Internal): Detailed Experimentation Plan.
M6 (Internal): Experimentation Implementation Report and Results.
M6 (Public): Demonstration of the Implementation.