<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards a Deep Learning-based Activity Discovery System</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Eoin</forename><surname>Rogers</surname></persName>
<email>eoin.rogers@student.dit.ie</email>
							<affiliation key="aff0">
								<orgName type="department">Applied Intelligence Research Centre</orgName>
								<orgName type="institution">Dublin Institute of Technology</orgName>
								<address>
									<settlement>Dublin</settlement>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<author>
<persName><forename type="first">John</forename><forename type="middle">D</forename><surname>Kelleher</surname></persName>
							<email>john.d.kelleher@dit.ie</email>
							<affiliation key="aff0">
								<orgName type="department">Applied Intelligence Research Centre</orgName>
								<orgName type="institution">Dublin Institute of Technology</orgName>
								<address>
									<settlement>Dublin</settlement>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Robert</forename><forename type="middle">J</forename><surname>Ross</surname></persName>
							<email>robert.ross@dit.ie</email>
							<affiliation key="aff0">
								<orgName type="department">Applied Intelligence Research Centre</orgName>
								<orgName type="institution">Dublin Institute of Technology</orgName>
								<address>
									<settlement>Dublin</settlement>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards a Deep Learning-based Activity Discovery System</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">5CE05DCC7FE8BF9175A330470389BFBB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T14:14+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Activity discovery is a challenging machine learning problem in which we seek to uncover new or altered behavioural patterns in sensor data. In this paper we motivate and introduce a novel approach to activity discovery based on modern deep learning techniques. We hypothesise that our proposed approach can deal with interleaved datasets in a more intelligent manner than most existing AD methods. We also build upon prior work on building hierarchies of activities that capture the inherent aggregate nature of complex activities, and show how this could plausibly be adapted to work with the deep learning technique we present. Finally, we briefly discuss the challenge of evaluating activity discovery systems fairly and outline our future plans for implementing this model.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Activity discovery (AD) refers to the unsupervised discovery of plausible human activities in unannotated datasets composed of sensor readings of human subjects. AD is itself a sub-field of activity recognition, the supervised recognition of activities from sensor readings. These technologies have potential applications in the automatic labelling of activity recognition datasets and in building profiles of normal and abnormal behaviour.</p><p>In this paper, we propose a novel approach to activity discovery that makes use of deep learning technology. The discovery of activity patterns in sensor-network-type sensor data is complicated significantly by the fact that interleaving is frequently seen in activity data. Interleaving can be thought of as a situation in which multiple activities are occurring in parallel. Our system is designed with the explicit intention of making progress in handling data where activities can be a tangle of sensor events that interleave and overlap. We believe that our newly proposed method has the potential to untangle these events and use them to build a hierarchy of activities, capturing situations where complex activities are composed of (or contain) a multitude of smaller activities.</p><p>The structure of this paper is as follows: section 2 reviews previous work in the field of activity discovery and related fields. Our proposed approach is presented in section 3. Finally, we end with an overview and some comments in section 4.</p></div>
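To make the notion of interleaving concrete, here is a small Python sketch of our own (the activity and event names are invented for illustration): two activities whose events are shuffled together in time still survive as ordered subsequences of the combined sensor stream, which is exactly the structure an AD system must untangle.

```python
# Two activities whose sensor events interleave in time. Reading the
# combined stream, neither activity appears as a contiguous block.
make_coffee = ["kettle_on", "cup_out", "pour_water"]
feed_cat = ["open_cupboard", "open_tin", "fill_bowl"]

# One plausible interleaved observation of both activities.
stream = ["kettle_on", "open_cupboard", "cup_out",
          "open_tin", "pour_water", "fill_bowl"]

def is_subsequence(activity, stream):
    """True if the activity's events occur, in order, within the stream."""
    it = iter(stream)
    return all(event in it for event in activity)

# Both activities survive as ordered subsequences of the tangle.
assert is_subsequence(make_coffee, stream)
assert is_subsequence(feed_cat, stream)
```

Note that a naive search for contiguous patterns would miss both activities here; only a method that tolerates gaps between an activity's events can recover them.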
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Prior Work</head><p>Activity discovery is an active area of research. In this section, we briefly review some of the major contributions that have been made to this field. The field is itself a special case of sequential pattern mining, a field that has existed for many decades (see for example <ref type="bibr" target="#b12">[13]</ref>).</p><p>Almost all approaches to activity discovery make use of the same basic methodology of trying to find patterns in the data. The means by which the patterns are found, however, can differ substantially between systems. For example, <ref type="bibr" target="#b3">[4]</ref> makes use of topic modelling algorithms, an approach which has also been revisited in <ref type="bibr" target="#b7">[8]</ref>. Gu et al. <ref type="bibr" target="#b2">[3]</ref>, meanwhile, have made use of emerging patterns or EPs (patterns which undergo large changes in their support values throughout the dataset, where a support value is the number of times the pattern appears over the length of a subset of the dataset) as the basis of an activity discovery system. This paper is interesting since the authors explicitly focus on the issue of interleaving (circumstances where multiple activities are taking place at the same time), an issue we are also attempting to address. More recently, <ref type="bibr" target="#b1">[2]</ref> introduces an AD system that makes use of low-resolution video cameras and image processing algorithms.</p><p>A concept that is closely related to activity discovery, and may therefore be a useful source of inspiration, is grammar induction. This is the problem of taking a set of example sentences of a language and attempting to deduce the grammar of the language using only those examples. A number of ways of doing this exist. 
ADIOS <ref type="bibr" target="#b10">[11]</ref> is a prominent grammar induction algorithm that works by finding patterns probabilistically in the dataset, after it has been read into a graph structure. Another approach, using expectation maximisation techniques, is presented by <ref type="bibr" target="#b11">[12]</ref>. This involves using annealing techniques, together with the idea of rearranging or deleting words from the dataset, in order to overcome the complex search spaces inherent in grammar induction.</p><p>There are a number of pre-existing datasets that are appropriate for evaluating activity discovery algorithms. These provide real-world data from an activity recognition setup that we can use as input to a system. One of the most prominent datasets in use for activity recognition is the Opportunity dataset <ref type="bibr" target="#b8">[9]</ref>. This has all of the major characteristics that we need from a dataset: it is large, contains real-world data, and the activities within it are complex and interleaved. Likewise, the Center for Advanced Studies in Adaptive Systems (CASAS) at Washington State University has produced a range of datasets that would also be of use, for instance the Kyoto 3 dataset <ref type="bibr" target="#b9">[10]</ref>. These are challenging datasets that can provide a good test for activity discovery systems. All of these datasets can be seen as a sequence of sensor events. For example, if a sensor is fitted to a door, an event will appear whenever someone opens or closes the door.</p><p>One challenge facing researchers attempting to develop activity discovery systems concerns fair evaluation metrics. In the case of pre-annotated datasets, we can use the annotations as a ground truth to compare against, allowing us to calculate raw accuracy or F-measure values. These measures all depend on the accuracy of the annotations, however, and this is not guaranteed, nor are annotated datasets easy to come by. 
For this reason, a number of authors have proposed using the concept of minimum description length (MDL) to measure the performance of AD systems <ref type="bibr">[7][8]</ref>. Under this principle, the score of a machine learning model is the size of that model plus the size of the dataset after the model has been used to compress it. This gives us a simple metric that does not require any sort of prior human annotation, while retaining the property of being mathematically justifiable. Although it is primarily aimed at activity recognition rather than discovery, a method of classifying and enumerating types of errors has been proposed by <ref type="bibr" target="#b13">[14]</ref>. This could potentially also be used for activity discovery if ground truth data is available.</p></div>
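The MDL idea above can be illustrated with a short Python sketch. This is our own toy version, not an implementation from the cited works: the function name `description_length` and the greedy left-to-right compression scheme are assumptions made purely for illustration, with "size" measured crudely in symbols rather than bits.

```python
def description_length(model_patterns, dataset):
    """MDL-style score: the size of the model plus the size of the
    dataset after the model has been used to compress it.

    model_patterns: discovered activity patterns, each a tuple of
    sensor-event symbols. dataset: a list of sensor-event symbols.
    A lower score suggests a better set of discovered activities.
    """
    # Cost of storing the model: one symbol per pattern element.
    model_cost = sum(len(p) for p in model_patterns)

    # Greedily compress the dataset: replace each occurrence of a
    # pattern with a single fresh symbol standing for that activity.
    compressed = list(dataset)
    for idx, pattern in enumerate(model_patterns):
        symbol = "ACT%d" % idx
        out = []
        i = 0
        while len(compressed) > i:
            if tuple(compressed[i:i + len(pattern)]) == pattern:
                out.append(symbol)
                i += len(pattern)
            else:
                out.append(compressed[i])
                i += 1
        compressed = out

    return model_cost + len(compressed)
```

For instance, with the pattern ("A", "B") and the dataset ["A", "B", "X", "A", "B"], the model costs 2 symbols and the compressed stream is 3 symbols, for a score of 5; an empty model leaves the raw dataset size of 5, so the pattern pays for itself exactly when it repeats often enough.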
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Proposed Approach</head><p>We propose here a new approach to activity discovery that we speculate can be used to discover activities automatically even in situations where the activities are heavily interleaved. To make progress on this problem, we turn to deep learning, a now well-developed technique which allows for the efficient construction and training of networks of computational units <ref type="bibr" target="#b5">[6]</ref>. From the field of deep learning we will focus in particular on neural language models, a form of deep neural network proposed by <ref type="bibr" target="#b0">[1]</ref> as a solution to the wider problem of language modelling in natural language processing. If we represent a natural language sentence s as a sequence of words s = w 1 , w 2 , . . . , w l , and i ∈ {1, 2, . . . , l} is an index into s, a language model computes a probability distribution over possible values for w i given w 1:i−1 , where w x:y is shorthand for w x , w x+1 , . . . , w y .</p><p>A neural language model is simply a deep network that is trained to approximate the language model distribution. The input layer takes one-hot encodings of the last n words in the given sentence, i.e. a one-hot encoding for w i−n:i , and the output layer is a softmax distribution over all possible values for w i+1 . Our proposal generalises this basic concept of a language model into a more complex distribution. Given an n-gram window w i−n:i and an output window w i+1:i+m of length m, for each w j ∈ w i+1:i+m we compute:</p><formula xml:id="formula_0">P (w j |w i−n:i )<label>(1)</label></formula><p>In other words, for each w j , we compute the probability that w j would appear anywhere in the m words following the n-gram window of length n. The standard language model presented above, therefore, is just a variant of this model with m = 1. 
This can be implemented using a deep network that is almost identical to the standard model's network, and is shown in Figure <ref type="figure" target="#fig_0">1</ref>. Again, we pass a one-hot encoding of the previous n words into the input layer, and receive a softmax over possible successor words in the output layer. However, the output now models the probability of each possible word appearing as one of the next m words after word w i , rather than just appearing as w i+1 .</p><p>The choice of units for the model is important for achieving the best results. In general, it has been found that training a neural network works best if the average output of each unit is close to zero. This suggests the use of something like a hyperbolic tangent function for the middle layers <ref type="bibr" target="#b4">[5]</ref>, although it may be worthwhile experimenting with logistic layers also. This generalised neural language model would not be of much use for linguistic purposes. However, if we imagine that each w i of our sequence s is in fact a sensor event, rather than a word, we can use this model for activity discovery. An example of how this is done is shown in Figure <ref type="figure" target="#fig_1">2</ref>. In Figure <ref type="figure" target="#fig_1">2</ref>(a), we see the initial setup of the system: events A and B are contained in the sliding window that will constitute our input (so n = 2), and events C to F are in the window of length m that we want to compare to our output layer probabilities. Suppose that our language model predicts that event D will appear within the next m events with high probability. We thus add a link connecting events B and D, as shown in Figure <ref type="figure" target="#fig_1">2(b)</ref>. Note that it is also possible for more links to be added.</p><p>For example, if event E was also predicted with high probability, we would add another link from B to E. 
The exact mechanism by which a probability will be judged high enough to form a link has yet to be fully determined by the authors, although it will probably involve identifying probabilities more than a certain threshold above the average for the dataset (i.e. significantly more probable than background noise). The hypothesis that we are working on is that these links form activities, which are thereby untangled from their original interleaved state. After this, we move our n-gram and m-gram windows on by one event, which allows us to continue the process. We continue moving the windows until we reach the end of the dataset, adding new links along the way. Note that links can also be added to activities discovered in previous iterations, so we may add new events to the activity shown in Figure <ref type="figure" target="#fig_1">2(b)</ref>. As a result, we cannot see the full extent of a discovered activity until the system has run through the entire dataset. When this is done, we should see that multiple interleaved activities have been explicitly identified.</p><p>In many cases, a single activity can be composed of multiple sub-activities. For example, if we had an activity called Making dinner, it would be reasonable to expect it to be composed of a sub-activity called Pour drink. Finding such activities is itself a difficult task called hierarchical activity discovery (HAD). We would like to adapt the activity discovery approach proposed here to integrate it into an HAD system we have built previously <ref type="bibr" target="#b7">[8]</ref>. This system works by running a normal (flat) activity discovery system over the dataset and then subsuming discovered activities into single events. We then repeat the process until we cannot discover any more activities. Naïvely doing this with the new discovery system would not work, since the activities discovered are interleaved and need to be disentangled before we can subsume them into events. 
We now propose a simple way of doing this. The expected outcome of this process is shown in Figure <ref type="figure" target="#fig_1">2(c</ref>). We have removed events B and D, since they are the constituents of the activity we are trying to generalise, and we replace them with a single new activity. However, event C, which occurred in the middle of the new activity but was not part of it, is not included in the new activity, and is instead moved to one side of it (the right in this case). The question of which side of the new activity an interleaved event should be moved to can be decided as follows:</p><p>-If the event is itself part of an activity, move it to the side on which the majority of that activity's events occur. This will allow that activity to be generalised easily in turn. -In all other circumstances, simply move it to the left end of the new activity if it occurs temporally closer to the first event of the new activity, and to the right end otherwise. -For some particularly highly interleaved datasets, we may not want to move the interleaved events at all, but simply have instances of the new event on either side of them.</p><p>Finally, we aim to use the minimum description length principle, as described in section 2, to evaluate the performance of the system. This negates the need for us to compare directly to the ground truth, since achieving a high compression ratio guarantees that we have found one or more frequently repeating patterns in the data, although it may also be worthwhile comparing to human annotations to determine whether the discovered and annotated activities agree. The error analysis method proposed by <ref type="bibr" target="#b13">[14]</ref> may also be used for datasets with a pre-existing ground truth, highlighting the differences between the two systems.</p></div>
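The sliding-window link-formation step described above can be sketched in Python as follows. This is a simplified stand-in of our own devising: raw co-occurrence counts replace the trained neural language model, the function name `form_links` is hypothetical, and the threshold rule (a fixed multiple of each event type's background rate) is one of the possibilities discussed above rather than a settled design.

```python
from collections import defaultdict

def form_links(events, n=2, m=4, threshold=2.0):
    """Slide an n-gram input window and an m-event output window over
    the event sequence, linking the current event to any event in the
    output window whose estimated probability P(w_j | context) exceeds
    'threshold' times the background rate of that event type.

    A frequency-count estimator stands in for the trained network.
    """
    # Background rate of each event type over the whole dataset.
    background = defaultdict(float)
    for e in events:
        background[e] += 1.0 / len(events)

    # Count how often each event type appears within the m events
    # following each n-gram context.
    context_counts = defaultdict(lambda: defaultdict(int))
    context_totals = defaultdict(int)
    for i in range(n, len(events) - m):
        ctx = tuple(events[i - n:i])
        context_totals[ctx] += 1
        for e in set(events[i:i + m]):
            context_counts[ctx][e] += 1

    # Add a link from the last event of the input window to every
    # output-window event whose conditional probability clears the bar.
    links = []
    for i in range(n, len(events) - m):
        ctx = tuple(events[i - n:i])
        for j in range(i, i + m):
            p = context_counts[ctx][events[j]] / context_totals[ctx]
            if p > threshold * background[events[j]]:
                links.append((i - 1, j))
    return links
```

On a perfectly periodic stream such as ["A", "B", "D", "C"] repeated, every window event is predicted with probability 1.0 from its context, so every candidate pair is linked; on noisier data, only event types that reliably co-occur with a context clear the background-rate threshold.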
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusion</head><p>This paper proposed a technique for activity discovery which makes use of recent advances in computational linguistics and neural networks, along with a plausible means for the evaluation of such a system. Unlike many other activity discovery systems, we are explicitly attempting to deal with the problem of interleaving, and the described algorithm has a means to identify and subsequently disentangle interleaved data. Our plan over the coming months is to implement and test this system using the Opportunity dataset <ref type="bibr" target="#b8">[9]</ref>, to allow easy comparison with existing AD systems, and to report the results back to the activity recognition community. Opportunity contains readings from a number of on-body sensors worn by participants as they carried out a range of Activities of Daily Living (ADL), which are high-level activities that would be expected to occur in a residential environment. On-body sensors are becoming increasingly common with the growing prevalence of devices such as smartphones and smartwatches. However, they do raise privacy concerns, so we may also evaluate the system on datasets that use sensors embedded in the environment, such as the Kyoto 3 dataset <ref type="bibr" target="#b9">[10]</ref>.</p><p>One issue that could be worth investigating is the consequences of varying the lengths of the output (m) and input (n) windows. Our expectation is that there is an optimal size for m, and thus a degree of trial-and-error may be needed. If m is too small, the system will fail to find interleaved activities, since it won't be able to add links long enough to bridge the gaps between distant sensor events. 
Likewise, excessively long windows may cause spurious links to be formed, since the probability that any one sensor event will appear in the window approaches one as the length of m increases.</p><p>Longer term, there are a number of aspects of the system that we would like to investigate:</p><p>-What kinds of errors does this system tend to produce? Are they different from the error types produced by other AD systems? -Could weighting links by length, so as to penalise long-range links and encourage shorter links, improve the performance of the system? If not, could penalising short-range links and encouraging longer links do so? -Can we find a way to represent temporal information in the event encoding in a way that can be usefully exploited by the algorithm? Is temporal information even important for activity discovery?</p><p>We see our proposed model as a solid initial step in this research agenda.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. A diagram of our proposed neural language model</figDesc><graphic coords="4,157.68,234.64,300.00,255.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. An overview of our proposed approach</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A Neural Probabilistic Language Model</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ducharme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vincent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Jauvin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1137" to="1155" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Behavior analysis for elderly care using a network of low-resolution visual sensors</title>
		<author>
			<persName><forename type="first">M</forename><surname>Eldib</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Deboeverie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Philips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Aghajan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Electronic Imaging</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">4</biblScope>
<biblScope unit="page" from="041003" to="041003" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A Pattern Mining Approach to Sensor-Based Human Activity Recognition</title>
		<author>
			<persName><forename type="first">T</forename><surname>Gu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="1359" to="4347" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Discovery of Activity Patterns using Topic Models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Huỳnh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fritz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schiele</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Conference on Ubiquitous Computing</title>
				<meeting>the 10th International Conference on Ubiquitous Computing</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="10" to="19" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Efficient backprop</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">A</forename><surname>Lecun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bottou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">B</forename><surname>Orr</surname></persName>
		</author>
		<author>
<persName><forename type="first">K</forename><forename type="middle">R</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Neural networks: Tricks of the trade</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="9" to="48" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Deep learning</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">521</biblScope>
			<biblScope unit="page" from="436" to="444" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Finding motifs in large personal lifelogs</title>
		<author>
			<persName><forename type="first">N</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Crane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gurrin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Ruskin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th Augmented Human International Conference</title>
				<meeting>the 7th Augmented Human International Conference</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page">2016</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Using Topic Modelling Algorithms for Hierarchical Activity Discovery</title>
		<author>
			<persName><forename type="first">E</forename><surname>Rogers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Kelleher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Ross</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Ambient Intelligence-Software and Applications-7th International Symposium on Ambient Intelligence</title>
				<meeting><address><addrLine>ISAmI</addrLine></address></meeting>
		<imprint>
<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="41" to="48" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Benchmarking classification techniques using the Opportunity human activity dataset</title>
		<author>
			<persName><forename type="first">H</forename><surname>Sagha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">T</forename><surname>Digumarti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D R</forename><surname>Millán</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chavarriaga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Calatroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Roggen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tröster</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Systems, Man, and Cybernetics (SMC)</title>
				<imprint>
<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="36" to="40" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
<title level="a" type="main">Tracking activities in complex settings using smart environment technologies</title>
		<author>
			<persName><forename type="first">G</forename><surname>Singla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cook</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schmitter-Edgecombe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Bio-Sciences, Psychiatry and Technology</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="25" to="35" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Unsupervised learning of natural languages</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Solan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Horn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ruppin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Edelman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the National Academy of Sciences of the United States of America</title>
		<imprint>
			<biblScope unit="volume">102</biblScope>
			<biblScope unit="issue">33</biblScope>
			<biblScope unit="page" from="11629" to="11634" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Novel estimation methods for unsupervised discovery of latent structure in natural language text</title>
		<author>
			<persName><forename type="first">N</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Eisner</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Mining sequential patterns: Generalizations and performance improvements</title>
		<author>
			<persName><forename type="first">R</forename><surname>Srikant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Agrawal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Extending Database Technology</title>
				<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1996">1996</date>
			<biblScope unit="page" from="1" to="17" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Evaluating performance in continuous context recognition using event-driven error characterisation</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Ward</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Lukowicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tröster</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Symposium on Location-and Context-Awareness</title>
				<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="239" to="255" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
