Discover the latest breakthroughs from the NetsLab research group. Explore videos showcasing our innovations in 5G, 6G, AI-driven security, and advanced networking technologies. Dive into our work and see how we’re shaping the future of connectivity.

xAI-based Data Poisoning Attacks Defence for Federated Learning

As part of our contributions to the ROBUST-6G project, this demo tackles the critical challenge of securing AI systems against poisoning attacks—where malicious clients attempt to degrade model performance through corrupted updates. Our solution integrates SHAP-based feature attribution into a federated learning framework to detect and mitigate such threats while maintaining system performance and privacy.

Key Highlights:

  • Advanced Security Architecture – Strengthens the AI service layer within the robust 6G architecture by detecting and defending against malicious node behavior.
  • Federated Learning Implementation – Utilizes Federated Averaging for secure and efficient model aggregation with support for dynamic node management and real-time connection visualization.
  • Privacy-Preserving Framework – Ensures client privacy through integrated countermeasures and secure data handling in distributed environments.

Performance Features:

  • Intelligent Detection System – Automatically identifies poisoned models using SHAP-based attribution, with real-time visualization of client clusters.
  • Flexible Deployment – Allows customizable client configurations, dynamic addition/removal of nodes, and interactive topology control.
  • Robust Training Protocol – Supports automated training rounds and clearly distinguishes between benign and malicious clients for in-depth model analysis and threat detection.
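The detection-then-aggregation loop above can be sketched in miniature: flag client updates whose vectors are outliers, then aggregate the rest with Federated Averaging. This is a hypothetical illustration, not the demo's implementation; the demo attributes client behaviour with SHAP values, whereas here the raw update vectors stand in as the attribution signal, and `flag_poisoned`, `fed_avg`, and the 2x-median-distance threshold are illustrative names and choices.

```python
# Hypothetical sketch: the real demo uses SHAP-based attribution and
# client clustering; raw update vectors stand in here.

def l2(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def flag_poisoned(client_updates, threshold=2.0):
    """Flag clients whose update lies far from the coordinate-wise median."""
    n = len(client_updates)
    median = [sorted(u[i] for u in client_updates)[n // 2]
              for i in range(len(client_updates[0]))]
    dists = [l2(u, median) for u in client_updates]
    med_dist = sorted(dists)[n // 2]
    return [i for i, d in enumerate(dists)
            if med_dist > 0 and d > threshold * med_dist]

def fed_avg(client_updates, flagged):
    """Federated Averaging over the clients that were not flagged."""
    kept = [u for i, u in enumerate(client_updates) if i not in flagged]
    return [sum(col) / len(kept) for col in zip(*kept)]
```

In this toy run, a single grossly deviating update would be dropped before averaging; in the demo, SHAP-based attribution and real-time cluster visualization play that role.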

DisLLM: Distributed LLMs for Privacy Assurance in Resource-Constrained Environments

Deploying Large Language Models (LLMs) in privacy-sensitive, resource-constrained environments such as healthcare and finance has always been a challenge. DisLLM changes the game by integrating SplitFed Learning (SFL) with Low-Rank Adaptation (LoRA) and Local Differential Privacy (LDP) to enable efficient, privacy-preserving LLM training without compromising performance.

Key Highlights:

  • Privacy-Preserving Fine-Tuning – No raw data leaves client devices!
  • Efficient Resource Utilization – Distributed LLMs work seamlessly on low-end devices.
  • Enhanced Security – LDP ensures differential privacy protection.
  • Scalable & Robust – Handles large-scale multi-class classification with ease.

Performance Results:

  • Comparable Accuracy to Centralized Models
  • Up to 19.8% Lower GPU Consumption than SplitFed Learning
  • Seamless AI Adaptation for Privacy-Sensitive Domains
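One of the ingredients above can be shown concretely: under Local Differential Privacy, each client perturbs its intermediate activations (the "smashed data" in SplitFed) with Laplace noise before anything leaves the device. This is a minimal stand-alone sketch under assumed parameters, not DisLLM's actual code; `ldp_perturb`, the default sensitivity, and the sampling trick are illustrative.

```python
import random

def ldp_perturb(activations, epsilon, sensitivity=1.0, seed=None):
    """Add Laplace(0, sensitivity/epsilon) noise to each activation
    before it leaves the client. The difference of two iid
    Exponential(1/scale) draws is Laplace(0, scale), so the standard
    library suffices; no special sampler is needed."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon  # larger epsilon => less noise
    return [a + rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
            for a in activations]
```

The privacy budget epsilon trades accuracy for protection: small epsilon drowns the activations in noise, large epsilon leaves them nearly intact.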

Read the full paper: https://lnkd.in/gGxqFP_n

Power Line Monitoring-based Consensus Algorithm for Smart Grid 2.0

Revolutionizing Smart Grids with Energy-Efficient Blockchain! 

Traditional blockchain consensus methods like PoW and PoS often fall short in energy efficiency and scalability—two critical needs for modern energy systems. PLMC changes the game by leveraging smart meter data for real-time power line monitoring, slashing energy consumption and block creation time by up to 60%!

As energy markets evolve, sustainability and security are non-negotiable. PLMC offers a scalable, eco-friendly alternative, bringing blockchain closer to powering the smart grids of tomorrow.

Key Highlights:

  • Energy-Efficient Consensus – No complex mining, just smart meter readings.
  • Real-Time Grid Monitoring – Boosts transparency & grid reliability.
  • Decentralized & Scalable – Secure P2P energy trading without central control.

Performance Results:

  • 60% Faster block creation than traditional PoW
  • Minimal Energy Consumption
  • Enhanced Trust in P2P energy markets
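The consensus idea can be illustrated with a toy leader-election rule: instead of solving a hash puzzle, the node whose reported power-line reading best agrees with the rest of the network is chosen to create the next block. This is a deliberately simplified sketch of monitoring-based consensus, not the PLMC protocol itself; the median-agreement rule and `pick_block_adder` are assumptions made for illustration.

```python
def pick_block_adder(readings):
    """Choose the next block creator: the node whose smart-meter reading
    is closest to the network-wide median. No hash puzzle is solved,
    which is where the energy and block-time savings come from.
    Ties break deterministically on node id."""
    values = sorted(readings.values())
    median = values[len(values) // 2]
    return min(readings, key=lambda node: (abs(readings[node] - median), node))
```

Because the winner is determined by data every node already reports, the election costs essentially nothing beyond normal metering.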

Read the full paper: https://lnkd.in/dn7TWs2n

SHIELD - Secure Aggregation against Poisoning in Hierarchical Federated Learning

Federated Learning plays a vital role in AI-driven Beyond 5G (B5G) and 6G networks and applications. Our latest demo, based on the research presented in SHIELD – Secure Aggregation against Poisoning in Hierarchical Federated Learning, showcases a hierarchical secure aggregation scheme that protects Federated Learning systems against poisoning attacks.

Our Approach to Securing Hierarchical Federated Learning:

  • We simulate a hierarchical FL training scenario, highlighting how intermediate aggregation at multiple layers, including edge and core, improves bandwidth utilization and exploits the computing power of intermediate-layer nodes.
  • Poisoning attacks are a real threat to FL systems because the central server never sees the clients' data. Detecting them is vital to ensure the FL model trains properly, free of malicious interference.
  • Enhanced Robustness against Poisoning – Our approach keeps the FL system robust under varying numbers of poisoners and non-IID data distributions among the FL users.
  • Secure aggregation makes Federated Learning models resilient against attacks. Dive into the video to see how our research secures hierarchical FL systems in Beyond 5G (B5G) and 6G!
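The intermediate-aggregation idea can be sketched as a two-tier FedAvg: each edge node averages its own clients first, then the core averages the edge models weighted by client count. This is a minimal sketch of hierarchical aggregation in general, not SHIELD's secure aggregation, which adds poisoning-robust filtering and protection at each tier (omitted here); `hierarchical_aggregate` is an illustrative name.

```python
def avg(vectors):
    """Plain coordinate-wise average of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def hierarchical_aggregate(edge_groups):
    """Two-tier FedAvg: edge nodes average their own clients, then the
    core averages the edge models weighted by client count, so only one
    model per edge (not one per client) crosses the core link."""
    edge_models = [(avg(clients), len(clients)) for clients in edge_groups]
    total = sum(n for _, n in edge_models)
    dim = len(edge_models[0][0])
    return [sum(model[i] * n for model, n in edge_models) / total
            for i in range(dim)]
```

Weighting by client count makes the two-tier result coincide with the flat average over all clients, while cutting core-link traffic from one update per client to one per edge.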

Read the full paper: https://lnkd.in/eUJsrEzH

Explainable AI-based Data Poisoning Attacks Defence for Federated Learning

Next-generation networks like Beyond 5G (B5G)/6G are envisioned to be fully AI-driven. They can therefore be expected to rely heavily on up-to-date ML models, continuously trained through a distributed, privacy-preserving architecture like Federated Learning (FL). This demo presents an FL-based system robust against data poisoning attacks, utilizing Explainable AI (XAI) for attack detection. In the demo, we first simulate an FL training scenario. The simulation highlights how FL improves data privacy and reduces the risk of client data exposure. Next, we demonstrate the applicability of XAI for poisoning attack detection within this FL framework. We show that while detecting poisoning attacks is more challenging in a distributed AI environment, XAI techniques significantly aid in identifying and understanding these attacks. Furthermore, our approach increases the flexibility of clients, thereby improving the overall robustness and security of the learning process. Through this demo, we illustrate how FL combined with XAI can be a powerful solution for maintaining the integrity and reliability of ML models in next-generation networks.

Read the full paper: https://tinyurl.com/sherpa-fl

Radio Spectrum Data Collection with Distributed-Proof-of-Sense Blockchain Network

In this video, we demonstrate a groundbreaking approach to radio spectrum management using the Distributed-Proof-of-Sense (DPoS) blockchain consensus mechanism, specifically designed for Dynamic Spectrum Access (DSA) systems. DPoS incentivizes nodes to conduct spectrum sensing, enhancing energy efficiency and security in wireless networks.

Watch as we walk through the process of setting up a prototype network that uses HackRF One Software Defined Radio (SDR) as a spectrum sensor and a Raspberry Pi 4 to run the blockchain client. Our custom-built blockchain network utilizes elliptic curve cryptography-based zero-knowledge proofs (ZKPs) for verifying blocks, replacing energy-intensive methods like Proof-of-Work.

The demo showcases how nodes collect and share spectrum data in real time, addressing key issues like spectrum misuse and unauthorized access. We also illustrate how the DPoS algorithm efficiently determines the next block adder, improving the overall performance of the DSA system.
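As a rough illustration of sensing-based leader election, the sketch below picks the next block adder as the node with the most verified sensing reports in the current epoch. It is an assumption-laden toy: the real DPoS design verifies contributions with elliptic-curve ZKPs, while here a plain SHA-256 commitment stands in, and `next_block_adder` and the report format are invented for the example.

```python
import hashlib

def next_block_adder(sensing_reports):
    """Toy stand-in for DPoS leader election: the node that submitted the
    most verifiable spectrum-sensing reports this epoch earns the right
    to add the next block. A report counts only if its payload matches
    its SHA-256 commitment (the real scheme uses elliptic-curve ZKPs).
    Ties break deterministically on node id."""
    def n_verified(reports):
        return sum(1 for payload, commitment in reports
                   if hashlib.sha256(payload.encode()).hexdigest() == commitment)
    counts = {node: n_verified(reports)
              for node, reports in sensing_reports.items()}
    return max(sorted(counts), key=counts.get)
```

Because leadership is earned by sensing work the network wants done anyway, the incentive and the consensus mechanism are one and the same.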

Key Benefits:

  • Energy-efficient consensus algorithm
  • Enhanced spectrum management and security
  • Scalable for future token-based spectrum marketplaces

Join us for this deep dive into the future of spectrum access and blockchain technology!

Check out our papers:
https://ieeexplore.ieee.org/document/9762480
https://ieeexplore.ieee.org/document/10171207