Results

Defining the research framework and research context in relation to the project domain – Informing sessions

A series of good practices applied by the teaching staff of the Computer Department of the Faculty of Automatic Control and Computers was presented. The objective of the presentation was to build a collaborative relationship with students through forms of mentoring that lead to diploma theses and, subsequently, to dissertations with added value, and to discuss how students can be supported in disseminating their results through scientific contributions that prepare them for the next stage of training, the doctoral one. On this occasion, a set of community values formulated within the Didactic Hub of the Bucharest Polytechnic was also disseminated.

During the presentation, the notion of the computing continuum was introduced, together with its functional structure across three levels: Cloud (the center), Fog/Edge (the link), and Deep Edge/IoT (the periphery). In the second part, several environments, tools, and libraries applicable to such architectures were presented, such as JupyterHub, Docker/Singularity/Apptainer, Kubernetes, Dask, and Apache Airflow, showing how they can be used, respectively, for designing cloud applications, containerization and container resource management, distributed and parallel computing in the cloud, and defining data flows, and analyzing how they address problems of security, scalability, transparency, fault tolerance, and resource optimization.
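
The scheduling idea behind tools such as Dask, namely a directed graph of tasks executed on a pool of workers as soon as their inputs become available, can be sketched with the standard library alone. This is an illustrative model, not the Dask API; the task graph and its contents are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy task graph: key -> (function, dependency keys). This mirrors the
# core idea behind Dask: a DAG of tasks scheduled onto a worker pool
# once their inputs are ready.
graph = {
    "load_a": (lambda: [1, 2, 3], []),
    "load_b": (lambda: [4, 5, 6], []),
    "sum_a": (lambda xs: sum(xs), ["load_a"]),
    "sum_b": (lambda xs: sum(xs), ["load_b"]),
    "total": (lambda a, b: a + b, ["sum_a", "sum_b"]),
}

def run_graph(graph, pool):
    """Execute the graph layer by layer, running independent tasks in parallel."""
    results, pending = {}, dict(graph)
    while pending:
        # Tasks whose dependencies are all resolved form the next layer.
        ready = [k for k, (_, deps) in pending.items()
                 if all(d in results for d in deps)]
        # Submit the whole layer to the pool so siblings run concurrently.
        futures = {k: pool.submit(pending[k][0],
                                  *(results[d] for d in pending[k][1]))
                   for k in ready}
        for k, fut in futures.items():
            results[k] = fut.result()
            del pending[k]
    return results

with ThreadPoolExecutor(max_workers=4) as pool:
    total = run_graph(graph, pool)["total"]
print(total)  # 21
```

Here "load_a" and "load_b" run concurrently, as do "sum_a" and "sum_b"; real schedulers such as Dask add data locality, work stealing, and distributed workers on top of this basic pattern.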

The basic concepts of resource management were introduced, with an emphasis on the problems they solve (adaptability, latency reduction, real-time processing, scalability, etc.), but also on the difficulties inherent to any advanced scheduling and orchestration method. Federated Learning, the central subject of the presentation, was proposed as an optimization solution for scheduling, and its components were separated: the central server, the client devices, the client-server communication methods, and the client-server data aggregation. It was explained and illustrated how learning is carried out in three steps: training models at the client-device level on locally obtained data; transmitting the new parameters to the cloud/central server, where they are aggregated; and transmitting the updated model back to the client devices. It was also argued how Federated Learning can bring optimizations in terms of security, efficiency, and performance. In conclusion, the presenter pointed out some potential weaknesses, or obstacles, of these models and suggested research topics through which some of these obstacles might be overcome in the future.
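
The three learning steps above can be sketched as a minimal federated averaging loop. This is an illustrative model with an invented one-parameter linear task, not a production Federated Learning implementation:

```python
import random

random.seed(0)

def local_step(weights, data, lr=0.1):
    """Step 1: one epoch of gradient descent for y = w*x on local data only."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Step 2: the server aggregates client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Each client holds private samples of the ground truth y = 3*x; the raw
# data never leaves the device, only the model parameters do.
clients = [[(x, 3 * x) for x in (random.uniform(0, 1) for _ in range(20))]
           for _ in range(4)]

global_w = 0.0
for _ in range(30):
    # Step 1: clients train locally on private data.
    local_ws = [local_step(global_w, data) for data in clients]
    # Step 2: only parameters travel to the server and are averaged.
    global_w = fed_avg(local_ws, [len(d) for d in clients])
    # Step 3: the updated global model is sent back (next loop iteration).

print(round(global_w, 2))  # converges to the true slope, 3.0
```

The privacy argument made in the presentation corresponds to the fact that `fed_avg` sees only `local_ws`, never the `clients` datasets themselves.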

This presentation proposed the use of simulation in the design of systems in the Computing Continuum and described several simulation tools and methods. The advantages of simulation in the design of computational solutions in this field were highlighted: it replaces the costs and risks associated with the physical implementation of complex architectures with a much more economical and secure alternative, which allows for a priori optimization of system parameters. A taxonomy of distributed systems simulation tools was presented, starting from hardware component simulation (Verilog), through hardware prototyping (Arduino) and Digital Twins, and up to the simulation of distributed systems communicating over opportunistic networks. The use of simulation in the development of the Drop Computing paradigm was explained in detail, where we simulated mobility models over opportunistic networks using the MobEmu simulator.
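
The kind of mobility simulation mentioned above can be illustrated with a minimal random-walk model. This is a sketch of the general idea only; it does not use the MobEmu API, and all parameters are invented:

```python
import random

random.seed(42)

# Minimal random-walk mobility model in the spirit of opportunistic-network
# simulators: nodes wander over a 2-D area, and a "contact" is logged
# whenever two nodes come within communication range, which is the moment
# an opportunistic message exchange could take place.
NODES, STEPS, AREA, RANGE = 10, 200, 100.0, 10.0

pos = [[random.uniform(0, AREA), random.uniform(0, AREA)] for _ in range(NODES)]
contacts = set()

for _ in range(STEPS):
    # Each node takes a small random step, clamped to the area.
    for p in pos:
        p[0] = min(AREA, max(0.0, p[0] + random.uniform(-5, 5)))
        p[1] = min(AREA, max(0.0, p[1] + random.uniform(-5, 5)))
    # Record every pair currently within range of each other.
    for i in range(NODES):
        for j in range(i + 1, NODES):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if dx * dx + dy * dy <= RANGE * RANGE:
                contacts.add((i, j))

print(f"{len(contacts)} of {NODES * (NODES - 1) // 2} node pairs met")
```

Simulators like MobEmu replace the random walk with realistic or trace-based mobility models and layer routing algorithms on top of the resulting contact events.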

The topic of Open Science was approached in the context of the interoperability/connectivity component of the computing continuum, through the lens of how data obtained from LoRaWAN sensors can be shared. The presentation covered the context of the use case and the design and implementation of a solution based on open technologies that allows data storage and analysis, both in real time and in batches. The demos at the end exemplified the use of the proposed solution for Living Labs (laboratories with real-time data) and the integration of sensor data into research articles.
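
The dual real-time/batch access pattern described above can be sketched in a few lines. The `SensorStore` class, device names, and payload fields below are hypothetical illustrations, not part of the actual solution:

```python
import time

class SensorStore:
    """Toy store for sensor readings serving both access paths: real-time
    subscribers are notified as each reading arrives, while the full
    history remains queryable in batch (e.g. for a research article)."""

    def __init__(self):
        self._readings = []
        self._subscribers = []

    def subscribe(self, callback):
        # Real-time path: the callback fires for every new reading.
        self._subscribers.append(callback)

    def ingest(self, device_id, metric, value, ts=None):
        reading = {"device": device_id, "metric": metric,
                   "value": value, "ts": ts or time.time()}
        self._readings.append(reading)
        for cb in self._subscribers:
            cb(reading)

    def batch(self, metric):
        # Batch path: query the accumulated history.
        return [r["value"] for r in self._readings if r["metric"] == metric]

store = SensorStore()
alerts = []
# Real-time consumer: flag unusually high temperatures as they arrive.
store.subscribe(lambda r: alerts.append(r) if r["value"] > 30 else None)

# Hypothetical payloads from two LoRaWAN temperature sensors.
store.ingest("lora-01", "temperature", 21.5)
store.ingest("lora-02", "temperature", 34.2)
store.ingest("lora-01", "temperature", 22.0)

temps = store.batch("temperature")
print(len(alerts), sum(temps) / len(temps))  # one alert; the mean temperature
```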

A cluster architecture for processing data flows was presented, with an emphasis on demonstrating fault tolerance in different work scenarios. The functioning of the Kafka solution and its key components were presented. The solution uses microservices-based technologies (Docker Swarm). A brief experimental evaluation was performed, and the results were presented in the form of graphs.
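
The fault-tolerance property demonstrated in the evaluation, namely that acknowledged messages survive the loss of a broker, can be modeled in a few lines. This is a toy model of replication and leader failover, not the actual Kafka protocol:

```python
# Toy model: a partition is replicated across brokers; if the leader
# fails, a surviving replica is promoted and no acknowledged message
# is lost, because every acknowledged write reached all live replicas.

class Broker:
    def __init__(self, name):
        self.name, self.log, self.alive = name, [], True

class Partition:
    def __init__(self, brokers):
        self.brokers = brokers  # the first live broker acts as leader

    def leader(self):
        return next(b for b in self.brokers if b.alive)

    def produce(self, msg):
        # A write is acknowledged only after every live replica stores it.
        for b in self.brokers:
            if b.alive:
                b.log.append(msg)

partition = Partition([Broker("b1"), Broker("b2"), Broker("b3")])
for i in range(5):
    partition.produce(f"msg-{i}")

partition.brokers[0].alive = False   # leader b1 crashes
new_leader = partition.leader()      # b2 is promoted
partition.produce("msg-after-failover")

print(new_leader.name, len(new_leader.log))  # b2 holds all 6 messages
```

Kafka itself generalizes this with in-sync replica sets, configurable acknowledgment levels, and controller-driven leader election; the invariant illustrated here is the one the cluster experiments exercised.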