
Summary

D’Antoni, F., Petrosino, L., Sgarro, F., Pagano, A., Vollero, L., Piemonte, V., & Merone, M. (2022). Prediction of Glucose Concentration in Children with Type 1 Diabetes Using Neural Networks: An Edge Computing Application. Bioengineering, 9(5), 183. https://doi.org/10.3390/bioengineering9050183


D’Antoni, F., Petrosino, L., Velieri, A., Sasso, D., d’Angelis, O., Boscarino, T., Vollero, L., Merone, M., & Piemonte, V. (2022). Identification of Optimal Training for Prediction of Glucose Levels in Type-1-Diabetes Using Edge Computing. In Proc. of the International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), 16-18 November 2022, Maldives. doi:10.1109/ICECCME55909.2022.9988026


Del Giorno, S., D’Antoni, F., Piemonte, V., & Merone, M. (2023). A New Glycemic closed-loop control based on Dyna-Q for Type-1-Diabetes. Biomedical Signal Processing and Control, 81, 104492. https://doi.org/10.1016/j.bspc.2022.104492


D’Antoni, F., Petrosino, L., Marchetti, A., Bacco, L., Pieralice, S., Vollero, L., Pozzilli, P., Piemonte, V. & Merone, M. (2023). Layered meta-learning Algorithm for Predicting Adverse Events in Type 1 Diabetes. IEEE Access. https://ieeexplore.ieee.org/document/10019274


Panunzi, S., Borri, A., D’Orsi, L., & De Gaetano, A. (2023). Order estimation for a fractional Brownian motion model of glucose control. Communications in Nonlinear Science and Numerical Simulation, 127, 107554. https://doi.org/10.1016/j.cnsns.2023.107554


D’Antoni, F., Giaccone, P., Petrosino, L., Boscarino, T., Sabatini, A., Vollero, L., Piemonte, V. & Merone, M. (2023). Investigating Learning Methodologies on Edge Devices for Blood Glucose Level Forecasting in Type 1 Diabetes Patients Using CGM Sensor Data. European Chemical Bulletin, 12, 21159. https://www.iris.unicampus.it/handle/20.500.12610/76324

Extended Version



D’Antoni, F., Petrosino, L., Sgarro, F., Pagano, A., Vollero, L., Piemonte, V., & Merone, M. (2022). Prediction of Glucose Concentration in Children with Type 1 Diabetes Using Neural Networks: An Edge Computing Application. Bioengineering, 9(5), 183. https://doi.org/10.3390/bioengineering9050183

Abstract

Background: Type 1 Diabetes Mellitus (T1D) is an autoimmune disease whose serious complications can be avoided by preventing the glycemic levels from exceeding the physiological range. Accordingly, many data-driven models have been developed to forecast future glycemic levels and to allow patients to avoid adverse events. Most models are tuned on data from adult patients, whereas the prediction of glycemic levels of pediatric patients has rarely been investigated, as they represent the most challenging T1D population. Methods: A Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) Recurrent Neural Network were optimized on glucose, insulin, and meal data of 10 virtual pediatric patients. The trained models were then implemented on two edge-computing boards to evaluate the feasibility of an edge system for glucose forecasting in terms of prediction accuracy and inference time. Results: The LSTM model achieved the best numeric and clinical accuracy when tested in the .tflite format, whereas the CNN achieved the best clinical accuracy in uint8. The inference time for each prediction was well below the limit set by the sampling period. Conclusion: Both models effectively predict glucose in pediatric patients in terms of numerical and clinical accuracy. The edge implementation did not show a significant performance decrease, and the inference time was largely adequate for a real-time application.
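
For readers who want to map the abstract onto a concrete workflow, the sketch below is a minimal illustration of an edge-oriented LSTM forecaster: it builds a small network on windows of CGM, insulin, and meal data and converts it to the .tflite format used on the edge boards. The window length, layer sizes, sampling period, and placeholder data are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf

WINDOW = 24        # assumed: 2 h of history at a 5-min CGM sampling period
N_FEATURES = 3     # assumed inputs: CGM, insulin, meal (carbohydrate) signals

# Small LSTM forecaster (illustrative sizes, not the paper's architecture)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # glucose value at the prediction horizon
])
model.compile(optimizer="adam", loss="mse")

# Placeholder training data; in the paper these windows would be built from
# the glucose, insulin, and meal records of the 10 virtual pediatric patients.
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, verbose=0)

# Convert to TensorFlow Lite for deployment on the edge boards; the uint8
# variant would additionally require post-training quantization with a
# representative dataset.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("lstm_forecaster.tflite", "wb") as f:
    f.write(converter.convert())
```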

Figure 1: Schematic representation of the experimental setup during the test phase with edge systems.

Figure 2: Clarke Error Grids resulting from the best and worst predictions of the CNN (left) and LSTM (right) using different edge devices.

Figure 3: Graphical examples of the best and worst predictions performed by the CNN (left) and LSTM (right) using different edge devices.



D’Antoni, F., Petrosino, L., Velieri, A., Sasso, D., d’Angelis, O., Boscarino, T., Vollero, L., Merone, M., & Piemonte, V. (2022). Identification of Optimal Training for Prediction of Glucose Levels in Type-1-Diabetes Using Edge Computing. In Proc. of the International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), 16-18 November 2022, Maldives. doi:10.1109/ICECCME55909.2022.9988026

Abstract

Type 1 Diabetes Mellitus (T1DM) is an autoimmune disease, and can cause serious complications that can be avoided by preventing the glycemic levels from exceeding the physiological range. At present, many data-driven models have been developed to forecast future glycemic levels in order to allow patients to avoid adverse events. However, most studies have investigated only the predictive capability of such models, whereas few studies have focused on the application on an edge device. When applying a data-driven model on an edge-computing system, and in particular when the model is expected to be continuously updated with new incoming data, a compromise must be reached between the predictive model performance and the limited computational capability of the edge device. This usually results in reducing the number of model parameters, or the amount of training data that will be used to train the model on the edge device, while maintaining the predictive performance acceptable. For the reasons above, the goal of this study is to identify an optimal training strategy to ensure sufficient reliability of a model for glucose levels prediction when used for long periods on an edge device. Different configurations of Long Short-Term Memory Recurrent Neural Networks are optimized on continuous glucose monitoring, insulin, and meal data of 6 virtual T1DM patients. The models are first trained in Cloud, and then implemented on an edge device in order to detect the optimal training approach. The obtained results are compared in terms of prediction accuracy and overall elapsed time, calculated as the sum between the time spent for: dataframe acquisition, preprocessing, training, and model .tflite transformation. The results show that training the network over 60 days of data leads to an improvement in numerical performance, whereas increasing the training set further may result in a negligible performance improvement, especially when compared to the burden of collecting data over such a long time window.
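
The training-strategy comparison described above can be summarized as a simple loop: for each candidate training-set size, record the total elapsed time (acquisition, preprocessing, training, and .tflite conversion) alongside the resulting accuracy. The sketch below illustrates this with a hypothetical model and placeholder data; the window sizes, architecture, and epoch count are assumptions.

```python
import time
import numpy as np
import tensorflow as tf

SAMPLES_PER_DAY = 288          # assumed 5-min CGM sampling period
WINDOW, N_FEATURES = 24, 3     # assumed input shape (CGM, insulin, meals)

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])

for days in (15, 30, 60, 90):                     # candidate training-set sizes
    n_windows = days * SAMPLES_PER_DAY
    X = np.random.rand(n_windows, WINDOW, N_FEATURES).astype("float32")  # placeholder
    y = np.random.rand(n_windows, 1).astype("float32")                   # placeholder

    t0 = time.perf_counter()
    # dataframe acquisition and preprocessing would also be timed here
    model = build_model()
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=1, batch_size=64, verbose=0)            # on-device training
    tf.lite.TFLiteConverter.from_keras_model(model).convert()      # .tflite conversion
    elapsed = time.perf_counter() - t0
    print(f"{days} days of data -> total elapsed time {elapsed:.1f} s")
```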

Figure 4: Average test RMSE for different sizes of the training set.

Figure 5: Comparison of the average results of the proposed approaches




Del Giorno, S., D’Antoni, F., Piemonte, V., & Merone, M. (2023). A New Glycemic closed-loop control based on Dyna-Q for Type-1-Diabetes. Biomedical Signal Processing and Control, 81, 104492. https://doi.org/10.1016/j.bspc.2022.104492

Abstract

Objective: Type 1 Diabetes Mellitus is an autoimmune disease which requires constant care from patients. Continuous Glucose Monitoring (CGM) devices allow patients to keep track of the glycemic trend 24 h a day. Control algorithms are necessary to automate the therapy and to develop an artificial pancreas system. The objective of this study is to develop a fully-automated glycemic control system based on a Dyna-Q Reinforcement Learning algorithm that automatically decides the insulin infusion without requiring carbohydrate information from the patient. Methods: A Dyna-Q Reinforcement Learning architecture is proposed to automate glycemic control using only past CGM and insulin data, and is validated on data from 10 in silico patients. Results: The proposed glycemic predictor achieves an average RMSE and MARD of 13.2 mg/dl and 6.9% on 10 virtual adults, and of 15.0 mg/dl and 7.2% on 12 real patients, while over 98.8% of the forecasts fall within the safe zones of the Clarke Error Grid. The controller is able to maintain the glucose levels of the virtual subjects in the target range for 60.7% of the simulation time in a 24 h scenario, without causing hypoglycemic events in 8 out of 10 patients. Conclusion: The proposed architecture achieves good performance without exploiting information on carbohydrates and using a much smaller amount of training data than models in the literature. Significance: We proved that model-based Reinforcement Learning could be a valid approach for a human-safe, fully-automated artificial pancreas.
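
As a rough illustration of the control approach, the sketch below implements a generic tabular Dyna-Q loop: each real interaction updates the Q-table and a learned one-step model, and the model is then replayed for a fixed number of planning updates. The state/action discretization, the reward shaping, and the toy environment are placeholders and do not reproduce the paper's simulator or reward function.

```python
import random
from collections import defaultdict

N_STATES, N_ACTIONS = 20, 5                    # e.g. binned CGM level, binned insulin dose
ALPHA, GAMMA, EPSILON, PLANNING_STEPS = 0.1, 0.95, 0.1, 10

Q = defaultdict(float)                         # Q[(state, action)] action-value table
world_model = {}                               # learned model: (s, a) -> (reward, next_state)

def epsilon_greedy(state):
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[(state, a)])

def dyna_q_step(state, env_step):
    """One Dyna-Q iteration: real experience, model update, then planning."""
    action = epsilon_greedy(state)
    reward, next_state = env_step(state, action)                 # real interaction
    best_next = max(Q[(next_state, a)] for a in range(N_ACTIONS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    world_model[(state, action)] = (reward, next_state)          # update the learned model
    for _ in range(PLANNING_STEPS):                              # simulated (planning) updates
        s, a = random.choice(list(world_model))
        r, s2 = world_model[(s, a)]
        best = max(Q[(s2, b)] for b in range(N_ACTIONS))
        Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])
    return next_state

def toy_env(state, action):
    """Hypothetical stand-in for the in silico patient simulator."""
    drift = random.choice([-1, 0, 1]) - (action - N_ACTIONS // 2)
    next_state = min(max(state + drift, 0), N_STATES - 1)
    reward = -abs(next_state - N_STATES // 2)   # penalize distance from the target range
    return reward, next_state

state = N_STATES // 2
for _ in range(200):
    state = dyna_q_step(state, toy_env)
```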

Figure 6: General Dyna architecture

Figure 7: Architecture of the Reinforcement Learning model for closed-loop glycemic control. The green text shows how blocks of the proposed model and their respective connections to each other reflect the general Dyna-Q algorithm

Figure 8: The reward function chosen to train the agent, which penalizes hypoglycemic and hyperglycemic events severely in order to minimize the percentage of time spent far from the center of the euglycemic range.

Figure 9: Predictions for the best and the worst patient.

Figure 10: CGM trends and corresponding insulin boluses obtained by simulating the 24 h, 3-meal scenario using the UVA/Padova software. The specific controllers of Adult #001 and Adult #003 are reported in (a) and (b), respectively, in order to present one of the best and one of the worst in silico patients.






D’Antoni, F., Petrosino, L., Marchetti, A., Bacco, L., Pieralice, S., Vollero, L., Pozzilli, P., Piemonte, V. & Merone, M. (2023). Layered meta-learning Algorithm for Predicting Adverse Events in Type 1 Diabetes. IEEE Access. https://ieeexplore.ieee.org/document/10019274

Abstract

Type 1 diabetes mellitus (T1D) is a chronic disease that, if not treated properly, can lead to serious complications. We propose a layered meta-learning approach based on multi-expert systems to predict adverse events in T1D. The base learner is composed of three deep neural networks and exploits only continuous glucose monitoring data as an input feature. Each network specializes in predicting whether the patient is about to experience hypoglycemia, hyperglycemia, or euglycemia. The output of the experts is passed to a meta-learner to provide the final model classification. In addition, we formally introduce a novel parameter, α, to evaluate the advance by which a prediction is performed. We evaluate the proposed approach on both a public and a private dataset and implement it on an edge device to test its feasibility in real life. On average, on the Ohio T1DM dataset, our system was able to predict hypoglycemia events with a time gain of 22.8 minutes and hyperglycemia events with an advance of 24.0 minutes. Our model not only outperforms models presented in the literature in terms of events predicted with sufficient advance, but also with regard to the number of false positives, achieving on average 0.45 and 0.46 hypo- and hyperglycemic false alarms per day, respectively. Furthermore, the meta-learning approach effectively improves performance in a new cohort of patients by training only the meta-learner with a limited amount of data. We believe our approach would be an essential ally for patients to control glycemic fluctuations and adjust their insulin therapy and dietary intakes, enabling them to speed up decision-making and improve personal self-management, resulting in a reduced risk of acute and chronic complications. As our last contribution, we assessed the validity of the approach by exploiting blood glucose variations alone as well as in combination with information on insulin boluses, skin temperature, and galvanic skin response. In general, we observed that providing information other than CGM leads to slightly lower performance with respect to considering CGM alone.
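
A minimal sketch of the layered (stacked) idea follows: three expert networks each produce a score for their own class from a CGM window, and a small meta-learner maps the three scores to the final hypo/eu/hyperglycemia decision. Layer sizes, the softmax meta-learner, and the omission of the training procedure (experts first, then the meta-learner on the experts' outputs) are simplifying assumptions.

```python
import numpy as np
import tensorflow as tf

WINDOW = 24  # assumed CGM history length
CLASSES = ("hypoglycemia", "euglycemia", "hyperglycemia")

def build_expert():
    # One LSTM expert producing a single score for "its" class (illustrative sizes).
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

experts = [build_expert() for _ in CLASSES]       # hypo / eu / hyper experts

meta_learner = tf.keras.Sequential([              # maps the 3 expert scores to the decision
    tf.keras.layers.Input(shape=(len(CLASSES),)),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])

# Training is omitted here: in the layered scheme the experts are trained first,
# and the meta-learner is then trained on the experts' outputs.

def predict_event(cgm_window):
    """cgm_window: array of shape (WINDOW,); returns the predicted class name."""
    x = np.asarray(cgm_window, dtype="float32").reshape(1, WINDOW, 1)
    scores = np.concatenate([e.predict(x, verbose=0) for e in experts], axis=1)
    return CLASSES[int(np.argmax(meta_learner.predict(scores, verbose=0)))]

print(predict_event(np.random.rand(WINDOW)))
```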

Figure 11: Meta learning algorithm for the prediction of adverse events for patients with Type-1-Diabetes. The base learner consists of 3 LSTM experts, each specialized in predicting one of the three classes: hypoglycemia, euglycemia, hyperglycemia. The meta learner takes as input the predictions of the three experts and provides the final decision. The patient receives alarms on future hypoglycemic and hyperglycemic events

Figure 12: Schematic representation of the expert architectures. Left: the architecture based on the LSTM network. Right: the architecture based on the CNN network.

Figure 13: Schematic representation of the meta-learning algorithm and the single experts’ architecture.

Figure 14: Examples of predictions and the corresponding classification with the proposed and the standard approach.

Figure 15: Schematic representations of the experimental tests.





Panunzi, S., Borri, A., D’Orsi, L., & De Gaetano, A. (2023). Order estimation for a fractional Brownian motion model of glucose control. Communications in Nonlinear Science and Numerical Simulation, 127, 107554. https://doi.org/10.1016/j.cnsns.2023.107554

Abstract

When a subject is at rest and meals have not been eaten for a relatively long time (e.g. during the night), presumably near-constant, zero-order glucose production occurs in the liver. Glucose elimination from the bloodstream may be proportional to glycemia, with an apparently first-order, linear elimination rate. Besides glycemia itself, unobserved factors (insulinemia, other hormones) may exert second and higher order effects. Random events (sleep pattern variations, hormonal cycles) may also affect glycemia. The time-course of transcutaneously, continuously measured glycemia (CGM) thus reflects the superposition of different orders of control, together with random system error. The problem may be formalized as a fractional random walk, or fractional Brownian motion. In the present work, the order of this fractional stochastic process is estimated on night-time CGM data from one subject.
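
The "order" referred to in the abstract corresponds to the Hurst exponent H of the fractional Brownian motion, whose increments satisfy Var[X(t+k) − X(t)] ∝ k^(2H). As a back-of-the-envelope illustration (not the authors' estimation procedure, which fits the autocorrelation function of the model to the data), the sketch below estimates H from a log-log regression of increment variance against lag.

```python
import numpy as np

def hurst_from_increments(x, max_lag=20):
    """Estimate H from the scaling of increment variances of the series x."""
    lags = np.arange(1, max_lag + 1)
    variances = np.array([np.var(x[lag:] - x[:-lag]) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
    return slope / 2.0          # Var[X(t+k) - X(t)] ~ k^(2H)  =>  slope = 2H

# Sanity check on ordinary Brownian motion, where H should come out near 0.5;
# on night-time CGM data, x would instead be the measured glucose series.
rng = np.random.default_rng(0)
brownian = np.cumsum(rng.normal(size=5000))
print(f"estimated H = {hurst_from_increments(brownian):.2f}")
```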

Figure 16: 95% of realizations from the fitting procedure. Panel A shows the observed autocorrelation function (red circles), the fitted autocorrelation functions (dashed black lines) for the realizations whose loss-function values fell below the 95th percentile, and their average (blue line). The shaded area represents the 95% envelope of all the fitted autocorrelation functions. Panel B shows the observed interstitial glucose concentration (red circles), the predicted realizations (dashed black lines) whose loss-function values fell below the 95th percentile, and the 95% envelope of all the predictions.





D’Antoni, F., Giaccone, P., Petrosino, L., Boscarino, T., Sabatini, A., Vollero, L., Piemonte, V. & Merone, M. (2023). Investigating Learning Methodologies on Edge Devices for Blood Glucose Level Forecasting in Type 1 Diabetes Patients Using CGM Sensor Data. European Chemical Bulletin, 12, 21159. https://www.iris.unicampus.it/handle/20.500.12610/76324

Abstract

Type 1 Diabetes mellitus (T1D) is a widespread disease characterized by a persistent condition of hyperglycemia. Continuous Glucose Monitoring (CGM) devices allow people with T1D to keep track of their glycemic level for 24 hours a day. Artificial intelligence models can aid people with T1D in adjusting and optimizing their insulin therapy by providing a prediction of the future glycemic level based on CGM data; nonetheless, most of them are large models that run in the cloud, and few studies have focused on deployment on an edge device. Applying a data-driven model that must be continuously updated on an edge-computing system requires a compromise between the predictive model performance and the limited computational capability of the edge device. In this study, we investigate different training approaches of a well-established Long Short-Term Memory neural network for blood glucose level forecasting in people with T1D based on CGM and insulin data. The best performance is achieved when the model is pre-trained on a large amount of data from 10 virtual patients and fine-tuned on patient-specific data by updating only the parameters of the output layer, while keeping the parameters of the hidden layers unchanged. The numeric results are comparable to those achieved by larger models in the literature. The presented model is characterized by an average training and DRQ time of 67.6 seconds on an edge device, which is largely acceptable in practical cases.
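
The best-performing strategy described in the abstract (pre-train on pooled virtual-patient data, then fine-tune only the output layer on patient-specific data) can be illustrated with the following sketch; the model shape, layer names, and hyperparameters are assumptions rather than the paper's configuration.

```python
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 24, 2     # assumed CGM + insulin input windows

# Model pre-trained in the cloud on pooled data from the 10 virtual patients
# (the pre-training step itself is omitted here).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64, name="hidden_lstm"),
    tf.keras.layers.Dense(1, name="output"),
])
model.compile(optimizer="adam", loss="mse")

# Fine-tuning on the edge device: freeze the hidden layers and update only the
# parameters of the output layer.
for layer in model.layers:
    layer.trainable = (layer.name == "output")
model.compile(optimizer="adam", loss="mse")   # re-compile after changing trainability

X_patient = np.random.rand(64, WINDOW, N_FEATURES).astype("float32")  # placeholder patient data
y_patient = np.random.rand(64, 1).astype("float32")
model.fit(X_patient, y_patient, epochs=3, verbose=0)
```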

Figure 17: Results of the different configurations ranging from 1 to 4 days of training data.