
Securing Data-Driven Cognitive V2G Charging: Edge Intelligence and Cybersecurity for Trusted EV Energy Exchange

Authors: Maria Makrynioti, George Lazaridis, Georgios Spanos, Georgios Stavropoulos, Periklis Chatzimisios, Silvia Canale, Esther Stallone, Konstantinos Votis, Dimitrios Tzovaras

The rapid evolution and uptake of Electric Vehicles (EVs) and bidirectional Vehicle-to-Grid (V2G) technologies are reshaping the energy ecosystem, creating new opportunities for data-driven optimization while exposing charging infrastructures to evolving cybersecurity risks. This paper presents a conceptual framework for Securing Data-Driven Cognitive V2G Charging that leverages edge intelligence, distributed machine learning, and 5G-enabled IoT microservices to enable trusted EV energy exchange. Building on knowledge gained through European Union (EU)-funded projects, the proposed approach addresses two complementary scenarios: (i) cognitive edge optimization of power flows for intelligent and cost-efficient EV charging under volatile renewable generation, and (ii) cybersecurity and trust enhancement in V2G data exchange through continuous monitoring, vulnerability detection, and secure, auditable data workflows. By integrating cognitive decision-making with big data analytics, the framework enables measurable cost savings across the EV charging value chain, while simultaneously ensuring grid stability, power quality, and resilience against cyber threats. Furthermore, the paper discusses open challenges and research directions for building secure, scalable, and trustworthy V2G charging infrastructures, highlighting the role of big data–driven cognitive intelligence in bridging energy efficiency with cybersecurity.
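The first scenario, cognitive edge optimization of charging cost, can be sketched in miniature. The function below is our own illustration, not taken from the paper (all names are hypothetical): a greedy scheduler that fills an EV's energy demand in the cheapest hours of a day-ahead price forecast, the kind of decision an edge node could take locally under volatile renewable generation.

```python
# Minimal sketch (our illustration, not the paper's method): allocate an
# EV's energy demand to the cheapest 1-hour slots of a price forecast.

def schedule_charging(prices_eur_kwh, demand_kwh, max_rate_kw):
    """Greedily fill the cheapest hourly slots up to the charger's rate."""
    schedule = [0.0] * len(prices_eur_kwh)
    # Visit hours from cheapest to most expensive.
    for hour in sorted(range(len(prices_eur_kwh)), key=prices_eur_kwh.__getitem__):
        if demand_kwh <= 0:
            break
        energy = min(max_rate_kw, demand_kwh)  # kWh delivered in a 1-hour slot
        schedule[hour] = energy
        demand_kwh -= energy
    cost = sum(e * p for e, p in zip(schedule, prices_eur_kwh))
    return schedule, cost

prices = [0.30, 0.12, 0.10, 0.25, 0.40, 0.15]  # hypothetical EUR/kWh forecast
plan, cost = schedule_charging(prices, demand_kwh=20, max_rate_kw=11)
```

A real cognitive controller would also respect grid-stability and power-quality constraints; this sketch captures only the cost dimension.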

Relevance of the Paper to the O-CEI Project:

This research paper directly addresses the core objectives of the O-CEI project, specifically contributing to the implementation of Pilot 7: Trustworthy and Secure EV Charging upon Reliable 5G Networks. By proposing a conceptual framework that integrates 5G-enabled IoT microservices with cognitive edge intelligence, this work provides the architectural basis necessary to secure the critical data flows required by Pilot 7. The paper’s focus on trust enhancement and auditable data workflows aligns closely with the pilot’s requirement for verifiable and resilient V2G exchanges, ensuring that the risks introduced by the expanded attack surface of 5G-connected charging infrastructure are effectively mitigated. Furthermore, the proposed cognitive optimization of power flows supports O-CEI’s broader ambition to demonstrate how the Cloud-Edge-IoT continuum can drive energy efficiency and grid stability in the face of volatile renewable generation.


DriftMoE: A Mixture of Experts Approach to Handle Concept Drifts 

Publication: arXiv:2507.18464 

Authors: Miguel Aspis, Sebastián A. Cajas Ordónez, Andrés L. Suárez-Cetrulo, Ricardo Simón Carbajo

 

Learning from non-stationary data streams subject to concept drift requires models that can adapt on-the-fly while remaining resource-efficient. Existing adaptive ensemble methods often rely on coarse-grained adaptation mechanisms or simple voting schemes that fail to optimally leverage specialized knowledge. This paper introduces DriftMoE, an online Mixture-of-Experts (MoE) architecture that addresses these limitations through a novel co-training framework. DriftMoE features a compact neural router that is co-trained alongside a pool of incremental Hoeffding tree experts. The key innovation lies in a symbiotic learning loop that enables expert specialization: the router selects the most suitable expert for prediction, the relevant experts update incrementally with the true label, and the router refines its parameters using a multi-hot correctness mask that reinforces every accurate expert. This feedback loop provides the router with a clear training signal while accelerating expert specialization. We evaluate DriftMoE’s performance across nine state-of-the-art data stream learning benchmarks spanning abrupt, gradual, and real-world drifts, testing two distinct configurations: one where experts specialize on data regimes (multi-class variant), and another where they focus on single-class specialization (task-based variant). Our results demonstrate that DriftMoE achieves competitive results with state-of-the-art stream learning adaptive ensembles, offering a principled and efficient approach to concept drift adaptation. All code, data pipelines, and reproducibility scripts are available in our public GitHub repository: https://github.com/miguel-ceadar/drift-moe.
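As a rough illustration of the symbiotic loop described above, the sketch below is our own simplification, not the paper's implementation: the neural router is replaced by plain per-expert accuracy estimates, and the Hoeffding tree experts by toy majority-class learners, but the control flow (the router picks an expert, the chosen expert learns, and a multi-hot correctness mask reinforces every accurate expert) follows the abstract.

```python
class MajorityExpert:
    """Toy incremental expert (stand-in for a Hoeffding tree):
    predicts the majority label among the examples it has seen."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else 0
    def learn(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

class DriftMoESketch:
    """Simplified symbiotic loop: per-expert accuracy scores stand in
    for the neural router; a multi-hot correctness mask reinforces
    every expert that predicted the true label."""
    def __init__(self, n_experts, lr=0.1):
        self.experts = [MajorityExpert() for _ in range(n_experts)]
        self.scores = [0.5] * n_experts  # router's belief per expert
        self.lr = lr
    def predict(self, x):
        best = max(range(len(self.experts)), key=self.scores.__getitem__)
        return self.experts[best].predict(x)
    def learn(self, x, y):
        # Multi-hot correctness mask over all experts.
        mask = [int(e.predict(x) == y) for e in self.experts]
        # Router-selected expert updates incrementally with the true label.
        best = max(range(len(self.experts)), key=self.scores.__getitem__)
        self.experts[best].learn(x, y)
        # Router refines its scores toward the mask (EMA update).
        for i, m in enumerate(mask):
            self.scores[i] += self.lr * (m - self.scores[i])

model = DriftMoESketch(n_experts=2)
for _ in range(20):          # a trivial stream: the label is always 1
    model.learn(None, 1)
```

In the real system the router is a trained neural network and the experts are incremental Hoeffding trees; this toy version only preserves the shape of the feedback loop.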

Relevance of the Paper to the O-CEI Project:

The methodology outlined in DriftMoE holds strong potential for adaptation within O-CEI Task 3.5 (Implementation of intra- and cross-domain data management, observability, and AI orchestration mechanisms), where CeADAR plays a leading role. The intent is to use router networks to recommend relevant models from O-CEI’s marketplace. Furthermore, CeADAR is investigating ways to compress a MoE, which would make it even more efficient for recommending resource-intensive models like LLMs, a key part of our future research efforts. The paper acknowledges O-CEI.


Intelligent Edge Computing and Machine Learning: A Survey of Optimization and Applications

Publication: Future Internet, 17(9), 417.

Authors: Sebastián A. Cajas, Jaydeep Samanta, Andrés L. Suárez-Cetrulo, Ricardo Simón Carbajo

Intelligent edge machine learning has emerged as a paradigm for deploying smart applications across resource-constrained devices in next-generation network infrastructures. This survey addresses the critical challenges of implementing machine learning models on edge devices within distributed network environments, including computational limitations, memory constraints, and energy-efficiency requirements for real-time intelligent inference. We provide a comprehensive analysis of soft computing optimization strategies essential for intelligent edge deployment, systematically examining model compression techniques, including pruning, quantization methods, knowledge distillation, and low-rank decomposition approaches. The survey explores intelligent MLOps frameworks tailored for network edge environments, addressing continuous model adaptation, monitoring under data drift, and federated learning for distributed intelligence while preserving privacy in next-generation networks. Our work covers practical applications across intelligent smart agriculture, energy management, healthcare, and industrial monitoring within network infrastructures, highlighting domain-specific challenges and emerging solutions. We analyse specialized hardware architectures, cloud offloading strategies, and distributed learning approaches that enable intelligent edge computing in heterogeneous network environments. The survey identifies critical research gaps in multimodal model deployment, streaming learning under concept drift, and integration of soft computing techniques with intelligent edge orchestration frameworks for network applications. 
These gaps directly manifest as open challenges in balancing computational efficiency with model robustness due to limited multimodal optimization techniques, developing sustainable intelligent edge AI systems arising from inadequate streaming learning adaptation, and creating adaptive network applications for dynamic environments resulting from insufficient soft computing integration. This comprehensive roadmap synthesizes current intelligent edge machine learning solutions with emerging soft computing approaches, providing researchers and practitioners with insights for developing next-generation intelligent edge computing systems that leverage machine learning capabilities in distributed network infrastructures.
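To make one of the surveyed compression techniques concrete, here is a minimal sketch (our own illustration, not tied to any framework the survey discusses) of post-training symmetric uniform 8-bit quantization of a weight tensor, which trades a small accuracy loss for a roughly 4x reduction in storage versus 32-bit floats.

```python
# Toy post-training quantization: map float weights to int8 codes plus
# a single per-tensor scale factor (symmetric scheme).

def quantize_int8(weights):
    """Return int8 codes and the scale that reconstructs the floats."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original weights."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 0.8]          # hypothetical weight values
q, s = quantize_int8(w)             # q = [50, -127, 0, 80]
w_hat = dequantize(q, s)
```

Production systems typically quantize per-channel, calibrate on real activations, and may fine-tune afterwards; the round-trip error here is bounded by half the scale step.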

Relevance of the Paper to the O-CEI Project:

To achieve the O-CEI project’s goal of accelerating the uptake of innovative Cloud-Edge-IoT solutions and strengthening Europe’s strategic autonomy, a thorough understanding of the current AI landscape is paramount. This review paper provides a necessary up-to-date survey of the state-of-the-art in AI for edge and cloud applications. The knowledge within is foundational for developing some of the project’s core components, including its AIOps solutions and AI-driven marketplace, thereby ensuring that O-CEI is at the forefront of technological advancement.


Offloading Artificial Intelligence Workloads across the Computing Continuum by means of Active Storage Systems

Publication: Future Generation Computer Systems, Article 108271, December 2025.

Authors: Alex Barceló, Sebastián A. Cajas Ordoñez, Jaydeep Samanta, Andrés L. Suárez-Cetrulo, Romila Ghosh, Ricardo Simón Carbajo, Anna Queralt

The increasing demand for artificial intelligence (AI) workloads across diverse computing environments has driven the need for more efficient data management strategies. Traditional cloud-based architectures struggle to handle the sheer volume and velocity of AI-driven data, leading to inefficiencies in storage, computation, and data movement. This paper explores the integration of active storage systems within the computing continuum to optimize AI workload distribution.

By embedding computation directly into storage architectures, active storage reduces data transfer overhead, enhancing performance and improving resource utilization. Other existing frameworks and architectures offer mechanisms to distribute certain AI processes across distributed environments; however, they lack the flexibility and adaptability that the continuum requires, both in terms of device heterogeneity and of the rapidly changing algorithms and models used by domain experts and researchers.

This article proposes a software architecture aimed at seamlessly distributing AI workloads across the computing continuum, and presents its implementation using mainstream Python libraries and dataClay, an active storage platform. The evaluation shows the benefits and trade-offs regarding memory consumption, storage requirements, training times, and execution efficiency across different devices. Experimental results demonstrate that the process of offloading workloads through active storage significantly improves memory efficiency and training speeds while maintaining accuracy. Our findings highlight the potential of active storage to revolutionize AI workload management, making distributed AI deployments more scalable and resource-efficient with a very low entry barrier for domain experts and application developers.
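The active-storage idea can be sketched in a few lines. The classes below are our own toy illustration, not dataClay's actual API: the computation is shipped to the node that holds the data, executed next to it, and only a small result crosses the network.

```python
# Toy sketch of near-data execution (not dataClay's API): the dataset
# never leaves the storage node; the client ships a function instead.

class ActiveStorageNode:
    """Storage node that can execute functions next to its data."""
    def __init__(self, dataset):
        self._dataset = dataset          # stays on the node
    def execute(self, fn, *args):
        return fn(self._dataset, *args)  # near-data execution

def partial_fit_stats(data, feature_index):
    """Example offloaded task: a per-feature mean, e.g. for normalisation
    before training. Only two numbers travel back to the client."""
    values = [row[feature_index] for row in data]
    return sum(values) / len(values), len(values)

node = ActiveStorageNode([[1.0, 10.0], [3.0, 30.0], [5.0, 20.0]])
mean, n = node.execute(partial_fit_stats, 0)   # returns (3.0, 3)
```

In the paper this role is played by dataClay's active objects, which additionally handle persistence, distribution, and remote method execution transparently.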

Relevance of the Paper to the O-CEI Project:

This paper reports experiments that offload AI workloads closer to the data sources used for training, exploiting near-data locality within the data federation and the computing capabilities available alongside it, in the context of MetaOS projects. This is relevant to O-CEI because it uses similar AI tooling, and the experiments take place in the edge-to-cloud continuum, accessing external data sources and computing facilities.