We will coordinate the definition of a theoretical framework for designing and evaluating federated learning solutions in the e-health domain. The framework will support the formalization of design choices, from the identification of data sources to the specification of federated learning software, the execution platform, and the evaluation strategy.
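As an illustration of the kind of formalization envisaged, the sketch below shows one possible way to capture these design choices as typed configuration objects. All class and field names are hypothetical and serve only to make the intended structure concrete; this is not project code.

```python
# Illustrative sketch (not project code): one possible formalization of the
# framework's design choices as typed configuration objects.
# All class and field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str            # e.g. a hospital registry or a wearable-device feed
    modality: str        # e.g. "EHR", "imaging", "sensor"
    location: str        # site holding the data (never centralized)

@dataclass
class DesignSpec:
    sources: list[DataSource] = field(default_factory=list)
    model: str = "fedavg"          # federated learning algorithm
    platform: str = "simulation"   # execution platform: real, virtualized, simulated
    evaluation: str = "cross-site" # evaluation strategy

spec = DesignSpec(sources=[DataSource("site-A", "EHR", "hospital-A")])
print(spec)
```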
D1.2 Federated learning models for e-health, Report (M18, L: Unibari)
We will select existing federated learning models, possibly extending them through original developments, and adapt and evaluate them for the e-health domain in general and for the specific use case requirements defined.
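As a baseline reference, the sketch below shows federated averaging (FedAvg, McMahan et al. 2017), a canonical model family from which such a selection could start. The NumPy implementation and the synthetic data are purely illustrative; real experiments would rely on a full federated learning framework.

```python
# Minimal illustrative sketch of one federated averaging (FedAvg) round:
# clients train locally on their own data, and the server averages the
# resulting models weighted by local sample counts.
import numpy as np

def local_update(weights, data, lr=0.1):
    """One illustrative local step: gradient of a least-squares loss."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(weights, client_data):
    """Clients update locally; the server averages by sample count."""
    updates = [local_update(weights.copy(), d) for d in client_data]
    sizes = np.array([len(d[1]) for d in client_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```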
We will define and test deployment techniques for the automatic deployment of federated learning models, based on the architectural patterns defined. The task will include the identification of novel patterns, the optimization of code execution on target platforms, and the setup of real platforms, virtualization technologies, and simulation tools. Both existing software and original code will be used in the experimental activities planned.
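The sketch below illustrates, under assumed tooling, how an architectural pattern could be expanded automatically into per-node launch commands. The pattern schema and the fl-server and fl-client command-line interfaces are hypothetical placeholders, not the project's actual deployment stack.

```python
# Hypothetical sketch of an automatic-deployment step: expanding one
# architectural pattern into per-node launch commands. The fl-server and
# fl-client CLIs below are assumed placeholders, not real tools.
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    server_host: str
    client_hosts: list[str]
    rounds: int

def render_commands(p: Pattern) -> list[str]:
    cmds = [f"ssh {p.server_host} fl-server --rounds {p.rounds}"]
    cmds += [f"ssh {h} fl-client --server {p.server_host}" for h in p.client_hosts]
    return cmds

star = Pattern("star-topology", "node0", ["node1", "node2", "node3"], rounds=50)
for cmd in render_commands(star):
    print(cmd)  # in practice these would be executed, not printed
```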
D2.2 KPI definitions and metrics, Report (M16, L: CNR)
We will coordinate the definition of Key Performance Indicators (KPIs) to assess the impact of the proposed solutions. General cross-application KPIs will be defined, complemented by specific KPIs tailored to each use case, along with metrics and methods for their measurement. This task will also establish the evaluation process and the evaluation template.
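One possible encoding of KPIs together with their measurement methods, so that general and use-case-specific indicators share a single evaluation template, is sketched below. KPI names and formulas are illustrative placeholders.

```python
# Sketch of one way to encode KPIs with their measurement methods;
# the KPI names and formulas are illustrative placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class KPI:
    name: str
    unit: str
    measure: Callable[[dict], float]  # maps raw experiment logs to a value

GENERAL_KPIS = [
    KPI("global_accuracy", "%", lambda log: 100 * log["correct"] / log["total"]),
    KPI("round_latency", "s", lambda log: log["wall_time"] / log["rounds"]),
]

def evaluate(kpis, log):
    """Fill the evaluation template: one row per KPI."""
    return {k.name: (k.measure(log), k.unit) for k in kpis}

print(evaluate(GENERAL_KPIS, {"correct": 87, "total": 100,
                              "wall_time": 600.0, "rounds": 50}))
```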
D3.1 Analysis of Requirements, Report (M12, L: Unibari)
We will contribute to the requirement analysis of the three use cases, focusing on the selection and organization of data sources for experimental and evaluation activities. KPIs for assessing the impact of the federated learning solutions in each specific use case will be defined. Additionally, the datasets to be used for each use case will be identified.
D3.2 Evaluation Report, Report (M24, L: CNR)
We will design and conduct experimental activities to evaluate the defined methodology, testing different deployment configurations of the models on a configured platform.
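An illustrative shape for such an experimental sweep is sketched below. Parameter names and ranges are assumptions made for illustration, and run_experiment stands for a hypothetical hook into the configured platform.

```python
# Illustrative sketch of the evaluation loop: sweeping deployment
# configurations on a configured platform. Parameter names and ranges
# are assumptions for illustration only.
from itertools import product

clients = [5, 10, 20]
local_epochs = [1, 5]
topologies = ["star", "hierarchical"]

configs = []
for n, e, topo in product(clients, local_epochs, topologies):
    config = {"clients": n, "local_epochs": e, "topology": topo}
    # outcome = run_experiment(config)  # hypothetical hook into the platform
    configs.append(config)

print(f"{len(configs)} configurations to evaluate")
```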
D3.3 Data archive and software repository (M24, L: Unicampania)
We will carry out evaluation activities focusing on KPIs that describe the quality and performance of the computing infrastructure. Special attention will be given to KPIs that characterize the impact of the proposed solutions on the three case studies.
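A minimal sketch of how such infrastructure-level KPIs, for example wall time and peak memory, could be collected around a federated round is given below. It uses only the Python standard library (the resource module is Unix-only); a real campaign would also sample network and GPU counters.

```python
# Minimal sketch: collecting infrastructure-level KPIs (wall time, peak
# memory) around a workload, with the standard library only (Unix-only).
import time, resource, platform

def measure(fn, *args):
    t0 = time.perf_counter()
    out = fn(*args)
    wall = time.perf_counter() - t0
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # KiB on Linux
    return out, {"wall_time_s": wall, "peak_rss_kib": rss,
                 "node": platform.node()}

_, kpis = measure(sum, range(10_000_000))  # stand-in for a federated round
print(kpis)
```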