<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Personalised Glucose Prediction via Deep Multitask Networks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">John</forename><surname>Daniels</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Pau</forename><surname>Herrero</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Pantelis</forename><surname>Georgiou</surname></persName>
						</author>
						<title level="a" type="main">Personalised Glucose Prediction via Deep Multitask Networks</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">53CFC153272E740447488C554121BBCD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T00:23+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Glucose control is an essential requirement of primary therapy in diabetes management. Digital approaches to maintaining tight glycaemic control, such as clinical decision support systems and artificial pancreas systems, rely on continuous glucose monitoring devices and self-reported data, and typically benefit from glucose forecasting. In this work, we develop a multitask approach using convolutional recurrent neural networks (MTCRNN) to provide short-term forecasts on the OhioT1DM dataset, which comprises 12 participants. We obtain the following results at 30 min: 19.79±0.06 mg/dL (RMSE); 13.62±0.05 mg/dL (MAE), and at 60 min: 33.73±0.24 mg/dL (RMSE); 24.54±0.15 mg/dL (MAE). Multitask learning allows learning from the data of all available subjects, thereby overcoming the common challenge of insufficient individual datasets while still learning an appropriate individual model for each participant.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>In recent years, the proliferation of biosensors and wearable devices has made continuous monitoring of physiological signals practical. In diabetes management, this has come with the increasing use of continuous glucose monitoring (CGM) devices to assist glucose control. The current literature on the clinical impact of CGM devices shows that continuously monitoring blood glucose concentration levels helps maintain tight glycaemic control <ref type="bibr" target="#b8">[5,</ref><ref type="bibr" target="#b5">2]</ref>. As a next step, glucose prediction offers an opportunity to further improve glucose control by enabling actions that avert adverse glycaemic events, such as suspending insulin delivery in closed-loop systems to avert hypoglycaemia.</p><p>Work in this area has typically involved collecting data covering physiological variables such as glucose concentration levels and heart rate, alongside self-reported data covering exercise, sleep, stress, illness, insulin, and meals. However, public datasets covering ambulatory monitoring of the T1DM population are not widely available.</p><p>Deep learning <ref type="bibr" target="#b9">[6]</ref> facilitates learning the optimal features and has been shown to outperform methods based on the hand-crafted features that have been employed in recent times for predicting glucose concentration levels. However, these models typically require relatively large amounts of data to converge to an appropriate model.</p><p>In this work, we employ a multitask learning <ref type="bibr" target="#b4">[1]</ref> approach to improve the performance of glucose forecasting with a neural network, where each individual is viewed as a task and shared layers enable learning from other individuals.</p><p>1 Imperial College London, United Kingdom, email: jsd111@imperial.ac.uk, p.herrero-vinias@imperial.ac.uk, pantelis@imperial.ac.uk</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATED WORK</head><p>Glucose prediction has been a long-standing area of focus in the diabetes community. As a result, many approaches have been proposed to provide near-time glucose concentration level forecasts. Early work in this area focused on physiological models and traditional machine learning methods for predicting glucose concentration levels <ref type="bibr" target="#b15">[12,</ref><ref type="bibr" target="#b6">3]</ref>. Recent work, as seen in the 2018 Blood Glucose Level Prediction Challenge, has moved towards deep learning methods with more impressive results <ref type="bibr" target="#b14">[11,</ref><ref type="bibr" target="#b12">9,</ref><ref type="bibr" target="#b17">14,</ref><ref type="bibr" target="#b11">8]</ref>. These approaches use convolutional architectures, recurrent architectures, or a combination of both to model the task of glucose prediction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">DATASET AND DATA PREPROCESSING</head><p>In this section, we detail the transformations that are performed on the data prior to training and testing the model for each T1DM participant.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">OhioT1DM Dataset 2020</head><p>The OhioT1DM dataset 2020 <ref type="bibr" target="#b13">[10]</ref> comprises data from 12 unique participants, each covering eight weeks of daily living. Participants are referred to by ID as the data is anonymised. The data comprises physiological data gathered using a continuous glucose monitor (blood glucose concentration levels) and a wristband device (heart rate, skin conductance, skin temperature), activity data (acceleration, step count), and self-reported data (meal intake, insulin, exercise, work, sleep, and stressors).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Dealing with Missing Values</head><p>A non-trivial aspect of the datasets used for developing glucose prediction models is missingness. This is evident in the OhioT1DM dataset, with missingness present in both physiological variables and self-reported data <ref type="bibr" target="#b7">[4]</ref>.</p><p>Linear Interpolation: The blood glucose values missing in this dataset are typically missing at random. This could be attributed to issues around replacing glucose sensors and/or transmitters, or to faulty communication. As a result, we employ linear interpolation in the training set to impute missing blood glucose concentration levels over gaps of up to one hour. Samples in which more than an hour of CGM data is missing are discarded from the training set. This is illustrated with an example sequence in (C) of Fig. <ref type="figure" target="#fig_0">1</ref>. For features comprising self-reported data, on the other hand, the assumption is made that any missing value represents an absence of the corresponding event. Therefore all missing values in insulin, meal intake and reported exercise are imputed with zero.</p><p>Missingness in the self-reported features of the testing set is tackled in the same way as in the training set. However, this is not the case for blood glucose concentration levels, as interpolating when the current value at a given timestep is missing would lead to an inaccurate evaluation of model performance.</p><p>Extrapolation: To accurately evaluate the performance of the model, we cannot always rely on interpolation at test time, as this may require, in a real-time setting, an unknown future value. Consequently, we rely on other methods of extrapolation to impute the missing glucose concentration levels. 
In scenario (A), for gaps of data shorter than 30 minutes, we impute missing values with predicted values from the trained model. For missing recent values spanning longer than 30 minutes, as in (B), we pad the remaining values with the last computed value. In cases where a gap larger than 30 minutes is evident in the historical data and a current value is present at the given timestep, linear interpolation is employed instead to provide a more accurate imputation.</p></div>
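The training-set rule above (linearly interpolate gaps of up to one hour, discard samples with longer gaps) can be sketched as follows. This is a minimal sketch, not the authors' code: the function name, the 5-minute CGM sampling assumption, and the use of NaN to encode missingness are all illustrative.

```python
import numpy as np

def impute_cgm_train(values, sample_min=5, max_gap_min=60):
    """Linearly interpolate missing CGM values (encoded as NaN) over gaps
    of at most one hour; return None to signal that the sample should be
    discarded from the training set when a longer gap is present."""
    x = np.asarray(values, dtype=float)
    missing = np.isnan(x)
    if not missing.any():
        return x
    # Length (in samples) of the longest run of consecutive missing values.
    run, longest = 0, 0
    for m in missing:
        run = run + 1 if m else 0
        longest = max(longest, run)
    if longest * sample_min > max_gap_min:
        return None  # more than an hour missing: discard this sample
    idx = np.arange(len(x))
    x[missing] = np.interp(idx[missing], idx[~missing], x[~missing])
    return x
```

For example, two missing 5-minute samples between readings of 100 and 130 mg/dL are filled by the straight line through the neighbouring values, while a 65-minute gap causes the sample to be dropped.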
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Standardisation</head><p>To enable effective training of the proposed model, we transform the relevant input features (blood glucose concentration, insulin bolus, meal (carbohydrate) intake, and reported exercise) so that they lie in a comparable range. The blood glucose concentration levels are scaled down by a factor of 120. Similarly, the insulin bolus values are scaled by 100 and the meal intake values by 200. The exercise values are transformed from the recorded exercise intensity, on a range from 1 to 10, to a simple binary representation of the presence or absence of exercise.</p></div>
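The stated scalings amount to a simple feature-wise transform; a sketch is given below, where the function name and array interface are assumptions rather than the paper's implementation.

```python
import numpy as np

def standardise(glucose, bolus, carbs, exercise_intensity):
    """Scale each input feature into a comparable range, as stated in the
    text: glucose / 120, insulin bolus / 100, carbohydrate intake / 200,
    and exercise intensity (1-10) reduced to a presence/absence flag."""
    return (
        np.asarray(glucose, float) / 120.0,
        np.asarray(bolus, float) / 100.0,
        np.asarray(carbs, float) / 200.0,
        (np.asarray(exercise_intensity, float) > 0).astype(float),
    )
```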
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">METHODS</head><p>In this section we detail the machine learning technique used to learn personalised models from the entire dataset. We describe the approach to developing the deep multitask network for personalisation, and provide a summary of the hyperparameters used in training as well as the setup of the input for personalised multitask learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Multitask Learning</head><p>Multitask learning is an approach in machine learning that can be broadly described as a method of learning multiple tasks simultaneously with the aim of improving generalisation <ref type="bibr" target="#b4">[1]</ref>.</p><p>Multitask learning for personalisation has been used mainly in affective computing <ref type="bibr" target="#b16">[13]</ref>, with early work in diabetes management focusing on using multitask learning to develop prediction models for clustered groups of Type 1, Type 2, and non-diabetic participants <ref type="bibr" target="#b10">[7]</ref> rather than leveraging similarities within groups, such as gender, for personalised glucose predictions.</p><p>As seen in Figure <ref type="figure">2</ref>, the output from the shared layers is fed into the individual(task)-specific fully connected layers of each user.</p><p>In a multitask setting of this kind, a multiplicative gating approach ensures that each input trains only the individual-specific layers of the user it belongs to. In that sense, at each iteration a batch consisting of data from a particular individual trains the shared layers and the layers specific to that individual.</p></div>
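The multiplicative gating idea can be sketched in a few lines of numpy: a one-hot gate over the task (user) index zeros out every user-specific head except the one matching the current batch, so only that head and the shared layers receive gradient. The head sizes, random weights, and function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, d_shared = 12, 32

# One hypothetical user-specific dense head (W, b) per task (user).
heads = [(rng.normal(size=(d_shared, 1)), np.zeros(1)) for _ in range(n_users)]

def personalised_output(shared_features, user_id):
    """Multiplicative gating: evaluate every head, then multiply by a
    one-hot gate so only the head of the batch's user contributes."""
    gate = np.zeros(n_users)
    gate[user_id] = 1.0
    outs = np.stack([f @ W + b for (W, b) in [(W, b) for (W, b) in heads]
                     for f in [shared_features]], axis=0)  # (users, batch, 1)
    return (gate[:, None, None] * outs).sum(axis=0)        # (batch, 1)
```

Calling `personalised_output(x, 3)` yields exactly the output of head 3 applied to `x`, with every other head masked out.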
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">CRNN Model</head><p>The deep learning model trained in the multitask learning setting is a convolutional recurrent neural network (CRNN) proposed by Li et al. <ref type="bibr" target="#b11">[8]</ref> to perform short-term glucose prediction. This forms the basis of the single-task (STL) model. The convolutional recurrent model consists initially of 3 temporal convolutional layers that perform a 1-D convolution with a Gaussian kernel over the input sequence to extract features with various rates of appearance, each convolution operation followed by a max pooling layer. The input is a 4-dimensional sequence covering a 2-hour window of historical data.</p><p>The convolutional layers perform feature extraction, and their output feeds into a recurrent long short-term memory (LSTM) layer that better models the temporal nature of the task.</p><p>The output from the shared layers feeds into the fully connected layers of each user to provide the change in glucose value over the prediction horizon. This is then added to the current glucose value to provide the forecast glucose concentration level.</p></div>
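The shape flow through the convolutional front end can be sketched as follows, assuming 5-minute CGM sampling (so a 2-hour window is 24 steps of 4 features) and the channel sizes listed in Table 1. Random weights stand in for the trained Gaussian kernels; this is a shape walkthrough, not the trained model.

```python
import numpy as np

def conv1d_same(x, kernel_len=4, out_ch=8, seed=0):
    """1-D temporal convolution with end-padding: (steps, in_ch) ->
    (steps, out_ch). Weights are random placeholders."""
    rng = np.random.default_rng(seed)
    steps, in_ch = x.shape
    W = rng.normal(size=(kernel_len, in_ch, out_ch))
    pad = np.vstack([x, np.zeros((kernel_len - 1, in_ch))])
    return np.stack([np.tensordot(pad[t:t + kernel_len], W,
                                  axes=([0, 1], [0, 1]))
                     for t in range(steps)])

def max_pool(x, size=2):
    # Halve the temporal dimension, keeping the max of each pair of steps.
    return x.reshape(x.shape[0] // size, size, x.shape[1]).max(axis=1)

# 2-hour window at 5-minute sampling = 24 steps, 4 input features.
x = np.random.default_rng(1).normal(size=(24, 4))
h = max_pool(conv1d_same(x, out_ch=8))            # (24, 8)  -> (12, 8)
h = max_pool(conv1d_same(h, out_ch=16, seed=2))   # (12, 16) -> (6, 16)
h = max_pool(conv1d_same(h, out_ch=32, seed=3))   # (6, 32)  -> (3, 32)
```

The final `(3, 32)` feature map matches the dimensions feeding the shared LSTM layer in Table 1.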
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Loss Function</head><p>The loss function used for converging to the appropriate model for glucose forecasting is the mean absolute error. This is expressed below as:</p><formula xml:id="formula_0">L(y, ŷ) = (1/N_batch) Σ_{k=1}^{N_batch} |y_k − ŷ_k| <label>(1)</label></formula><p>Figure <ref type="figure">2</ref>. A detailed look at the formulation of convolutional recurrent networks in a multitask setting. In this setting, each user is represented as a task. In addition, the initial layers (convolutional and recurrent layers) are shared between all users, the next two (dense) layers are shared based on gender, and the last (dense) layer is specific to each user.</p><p>where ŷ_k denotes the predicted result given the historical data, y_k denotes the reference change in glucose concentration over the relevant prediction horizon, and N_batch refers to the batch size.</p></div>
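Equation (1) is simply the batch mean of absolute errors between the predicted and reference glucose changes; a direct sketch:

```python
import numpy as np

def mae_loss(y_true, y_pred):
    """Equation (1): mean absolute error over a batch of predicted
    changes in glucose concentration."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.abs(y_true - y_pred).mean())
```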
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Hyperparameters</head><p>Table 1 provides the details of the hyperparameters used for the model architecture at each layer. The optimiser used for this work is Adam, with a learning rate of 0.0053 obtained through grid search optimisation. The model is trained for 200 epochs.</p><p>The model is developed on Keras 2.2.2, with a TensorFlow 1.5 backend. The training is performed on an NVIDIA GTX 1050 GPU.</p><p>The repository for the code accompanying the paper can be found at: https://github.com/jsmdaniels/ecai-bglp-challenge</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">RESULTS</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Evaluation Metrics</head><p>The model is tested on data from six participant IDs: 540, 544, 552, 567, 584, 596.</p><p>The evaluation of the model is based on two metrics: the root mean square error (RMSE) and the mean absolute error (MAE). The extrapolated points are not considered in calculating these metrics. The formulation of these metrics is provided below:</p><formula xml:id="formula_2">RMSE = √((1/N) Σ_{k=1}^{N} (y_k − ŷ_k)²),<label>(2)</label></formula><formula xml:id="formula_3">MAE = (1/N) Σ_{k=1}^{N} |y_k − ŷ_k|.<label>(3)</label></formula><p>where ŷ_k denotes the predicted result given the historical data, y_k denotes the reference glucose measurement, and N refers to the data size.</p><p>In order to undertake a comprehensive evaluation of the model performance, the following assessment criteria are used:</p><p>• Performance evaluation over 30-minute and 60-minute prediction horizons (PH): The RMSE and MAE for each participant are analysed over the same length of values for both prediction horizons.</p></div>
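Equations (2) and (3) can be computed directly from paired reference and predicted values; a minimal sketch:

```python
import numpy as np

def rmse(y, y_hat):
    """Equation (2): root mean square error over N paired measurements."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mae(y, y_hat):
    """Equation (3): mean absolute error over N paired measurements."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs(y - y_hat)))
```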
<div xmlns="http://www.tei-c.org/ns/1.0"><head>• Comparison of training setting:</head><p>The performance of the multitask learning (MTL) approach is evaluated by comparison with a single-task learning (STL) approach, which uses only patient-specific data.</p><p>• Multiple runs for each participant ID: The multitask CRNN (MTCRNN) model uses randomly initialised weights at the start of training. Given the variability of this training procedure, the results reported are the average of 5 model runs.</p><p>The unit for the results reported below is mg/dL. The best performance is in bold. Figures <ref type="figure">3 and 4</ref> exhibit the differences in performance seen in the specific window for participant 596. The increased lag and reduced predictive performance can also be attributed to the higher chance of external activities (insulin, meals, exercise) that influence the blood glucose trajectory occurring over the prediction horizon.</p><p>The best predictive performances were achieved for IDs 544, 552 and 596, whereas IDs 540, 567, and 584 exhibited worse performances over both 30 and 60 minute prediction horizons. An investigation of the glycaemic variability of the training sets, using the coefficient of variation (CV) <ref type="bibr" target="#b5">[2]</ref>, shows that the former group of participants is stable (CV≤36%) whereas the latter group is labile (CV&gt;36%). The multitask learning approach clearly outperforms the single-task approach over a 30-minute prediction horizon. However, the performance improvement of the MTL approach over a 60-minute prediction horizon is not consistent across every participant and metric.</p><p>One potential issue with multitask learning is negative transfer. 
This can be described as a scenario in which one or more of the tasks (individuals) or sampled batches during training are not strongly correlated, degrading the learning in the shared layers, and subsequently the performance at test time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">CONCLUSION</head><p>In this work, we have presented a multitask convolutional recurrent neural network that is capable of performing short-term personalised predictions: 19.79±0.06 mg/dL (RMSE) and 13.62±0.05 mg/dL (MAE) at 30 minutes, as well as 33.73±0.24 mg/dL (RMSE) and 24.54±0.15 mg/dL (MAE) at 60 minutes. This approach leverages population data while still learning a personalised model for each participant. In the future, we hope to address further challenges, such as negative transfer during learning, that could improve the accuracy of the individual models. This would enable more accurate models to be deployed in the face of limited personal data.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. A visualisation of the imputation methods employed in this work. In (A) the input sequence has at least 30 minutes of recent values missing (e.g. linear extrapolation). (B) shows the imputation scheme during testing when longer than 30 minutes of recent values are missing (zero-order hold). Finally, (C) shows the imputation scheme when the missing values of the input sequence are located between real values (linear interpolation).</figDesc><graphic coords="2,54.10,71.12,481.89,113.38" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figures</head><label></label><figDesc>Differences in prediction performance in the specific window for participant 596.</figDesc><graphic coords="4,42.11,489.58,251.99,149.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="3,71.10,71.12,447.87,244.73" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>A table detailing the size and dimensions of layers in the multitask CRNN model (MTCRNN)</figDesc><table><row><cell>Layer Description</cell><cell>Output Dimensions</cell><cell>No. of</cell></row><row><cell>(layer)</cell><cell cols="2">Parameters</cell></row><row><cell cols="3">Shared Convolutional Layers (Batch×Steps×Channels)</cell></row><row><cell>(1) 1×4 conv</cell><cell>128(1) × 24 × 8</cell><cell>104</cell></row><row><cell>max pooling, size 2</cell><cell>128(1) × 12 × 8</cell><cell>−</cell></row><row><cell>(2) 1×4 conv</cell><cell>128(1) × 12 × 16</cell><cell>528</cell></row><row><cell>max pooling, size 2</cell><cell>128(1) × 6 × 16</cell><cell>−</cell></row><row><cell>(3) 1×4 conv</cell><cell>128(1) × 6 × 32</cell><cell>2080</cell></row><row><cell>max pooling</cell><cell>128(1) × 3 × 32</cell><cell>−</cell></row><row><cell cols="2">Shared Recurrent Layer (Batch×Cells)</cell><cell></cell></row><row><cell>(4) lstm</cell><cell>128(1) × 64</cell><cell>24832</cell></row><row><cell cols="3">Sub-cluster Dense Layers (Batch×Units)</cell></row><row><cell>(5) dense</cell><cell>128(1) × 256</cell><cell>16640</cell></row><row><cell>(6) dense</cell><cell>128(1) × 32</cell><cell>8224</cell></row><row><cell cols="3">Individual-Specific Dense Layers (Batch×Units)</cell></row><row><cell>(7) dense</cell><cell>128(1) × 1</cell><cell>33</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 .</head><label>2</label><figDesc>A table showing the prediction performance for 30 minutes: RMSE and MAE results of the six participants over 5 runs</figDesc><table /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGEMENTS</head><p>This work is supported by the ARISES project (EP/P00993X/1), funded by the Engineering and Physical Sciences Research Council.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">DISCUSSION</head><p>As seen in Table <ref type="table">3</ref>, the results provide a comprehensive evaluation of the model's predictive performance. Evidently, the model performance at PH = 30 minutes is better than at PH = 60 minutes, given that prediction at 60 minutes is a more complex task than prediction at 30 minutes.</p></div>			</div>
			<div type="references">

				<listBibl>



<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Table 3. A table showing the prediction performance for 60 minutes RMSE and MAE results of the six participants over 5 runs (CRNN) MTL STL ID RMSE MAE</title>
		<idno>19±0.79 28.77±0.13 40.09±0.64 27.71±0.13 584 37.82±0.78 26.88±0.37 37.22±0.34 26.64±0.41 596 27.74±0.11 20.12±0.14 28.13±0.48 20.30±0.41</idno>
	</analytic>
	<monogr>
		<title level="j">RMSE MAE</title>
		<imprint>
			<biblScope unit="volume">540</biblScope>
			<biblScope unit="page" from="13" to="567" />
			<date>29±0. 97±0</date>
		</imprint>
	</monogr>
</biblStruct>


<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Multitask Learning</title>
		<author>
			<persName><forename type="first">Rich</forename><surname>Caruana</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning</title>
				<imprint>
			<date type="published" when="1997-07">July 1997</date>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="41" to="75" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Glycaemic variability in diabetes: clinical and therapeutic implications</title>
		<author>
			<persName><forename type="first">Antonio</forename><surname>Ceriello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Louis</forename><surname>Monnier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Owens</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Lancet Diabetes &amp; Endocrinology</title>
		<imprint>
			<date type="published" when="2018-08">August 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Multivariate Prediction of Subcutaneous Glucose Concentration in Type 1 Diabetes Patients Based on Support Vector Regression</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">I</forename><surname>Georga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">C</forename><surname>Protopappas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ardigò</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Marina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Zavaroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Polyzos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">I</forename><surname>Fotiadis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Journal of Biomedical and Health Informatics</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="71" to="81" />
			<date type="published" when="2013-01">January 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">Marzyeh</forename><surname>Ghassemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tristan</forename><surname>Naumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Schulam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><forename type="middle">L</forename><surname>Beam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rajesh</forename><surname>Ranganath</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1806.00388</idno>
		<title level="m">Opportunities in Machine Learning for Healthcare</title>
				<imprint>
			<date type="published" when="2018-06">June 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Wearable Continuous Glucose Monitoring Sensors: A Revolution in Diabetes Treatment</title>
		<author>
			<persName><forename type="first">Giacomo</forename><surname>Cappon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giada</forename><surname>Acciaroli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martina</forename><surname>Vettoretti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrea</forename><surname>Facchinetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Sparacino</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">65</biblScope>
			<date type="published" when="2017-09">September 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Deep Learning</title>
		<author>
			<persName><forename type="first">Ian</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoshua</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aaron</forename><surname>Courville</surname></persName>
		</author>
		<ptr target="http://www.deeplearningbook.org" />
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Predicting Blood Glucose Dynamics with Multi-time-series Deep Learning</title>
		<author>
			<persName><forename type="first">Weixi</forename><surname>Gu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zimu</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yuxun</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Miao</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Han</forename><surname>Zou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lin</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems -SenSys &apos;17</title>
				<meeting>the 15th ACM Conference on Embedded Network Sensor Systems -SenSys &apos;17<address><addrLine>Delft, Netherlands</addrLine></address></meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1" to="2" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Convolutional Recurrent Neural Networks for Glucose Prediction</title>
		<author>
			<persName><forename type="first">K</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Daniels</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Herrero-Vinas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Georgiou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Journal of Biomedical and Health Informatics</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">1</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">GluNet: A Deep Learning Framework for Accurate Glucose Forecasting</title>
		<author>
			<persName><forename type="first">Kezhi</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chengyuan</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Taiyu</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pau</forename><surname>Herrero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pantelis</forename><surname>Georgiou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Journal of Biomedical and Health Informatics</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="414" to="423" />
			<date type="published" when="2020-02">February 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The OhioT1DM Dataset for Blood Glucose Level Prediction</title>
		<author>
			<persName><forename type="first">Cindy</forename><surname>Marling</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Razvan</forename><surname>Bunescu</surname></persName>
		</author>
		<ptr target="http://smarthealth.cs.ohio.edu/bglp/OhioT1DM-dataset-paper.pdf" />
	</analytic>
	<monogr>
		<title level="m">The 5th International Workshop on Knowledge Discovery in Healthcare Data</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>CEUR proceeding in press</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Blood Glucose Prediction with Variance Estimation Using Recurrent Neural Networks</title>
		<author>
			<persName><forename type="first">John</forename><surname>Martinsson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Schliep</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Björn</forename><surname>Eliasson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Olof</forename><surname>Mogren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Healthcare Informatics Research</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="18" />
			<date type="published" when="2020-03">March 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Artificial Neural Network Algorithm for Online Glucose Prediction from Continuous Glucose Monitoring</title>
		<author>
			<persName><forename type="first">C</forename><surname>Pérez-Gandía</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Facchinetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sparacino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cobelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">J</forename><surname>Gómez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rigla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>De Leiva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Hernando</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Diabetes Technology &amp; Therapeutics</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="81" to="88" />
			<date type="published" when="2010-01">January 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Personalized Multitask Learning for Predicting Tomorrow&apos;s Mood, Stress, and Health</title>
		<author>
			<persName><forename type="first">Sara</forename><forename type="middle">Ann</forename><surname>Taylor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Natasha</forename><surname>Jaques</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ehimwenma</forename><surname>Nosakhare</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Akane</forename><surname>Sano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rosalind</forename><surname>Picard</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Affective Computing</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">1</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Dilated Recurrent Neural Networks for Glucose Forecasting in Type 1 Diabetes</title>
		<author>
			<persName><forename type="first">Taiyu</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kezhi</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianwei</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pau</forename><surname>Herrero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pantelis</forename><surname>Georgiou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Healthcare Informatics Research</title>
		<imprint>
			<date type="published" when="2020-04">April 2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
