<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The Impact of Feature Quantity on Recommendation Algorithm Performance</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Lukas</forename><surname>Wegmeth</surname></persName>
							<email>lukas.wegmeth@uni-siegen.de</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Intelligent Systems Group</orgName>
								<orgName type="institution">University of Siegen</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The Impact of Feature Quantity on Recommendation Algorithm Performance</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">0DB95887D0A5D1905034EF08B817076E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:14+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Feature Engineering</term>
					<term>Recommender Systems</term>
					<term>Automated Machine Learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recent model-based Recommender Systems (RecSys) algorithms emphasize using features, also called side information, in their design, similar to algorithms in Machine Learning (ML). In contrast, some of the most popular and traditional algorithms for RecSys solely focus on a given user-item-rating relation without including side information. An essential category of these is matrix factorization-based algorithms, e.g., Singular Value Decomposition and Alternating Least Squares, which are known to perform well on RecSys datasets. This paper provides a performance comparison and assessment of RecSys and ML algorithms when side information is included. We chose the Movielens-100K dataset for a case study since it is a standard for comparing RecSys algorithms. We compared six feature sets with varying quantities of features, generated from the baseline data, and evaluated them on 19 algorithms: traditional RecSys algorithms, baseline ML algorithms, Automated Machine Learning (AutoML) pipelines, and state-of-the-art RecSys algorithms that incorporate side information. The results show that additional features benefit all algorithms we evaluated. However, the correlation between feature quantity and performance is not monotonic for AutoML and RecSys. In these categories, an analysis of feature importance revealed that the quality of features matters more than their quantity. Throughout our experiments, the average performance on the feature set with the lowest number of features is ∼6% worse than on that with the highest in terms of the Root Mean Squared Error. An interesting observation is that AutoML outperforms matrix factorization-based RecSys algorithms when additional features are used. Almost all algorithms that can include side information perform better when using the highest quantity of features. In the other cases, the performance difference is negligible (&lt;1%). 
The results show a clear positive trend for the effect of feature quantity and the critical effects of feature quality on the evaluated algorithms.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Matrix factorization-based Recommender System (RecSys) algorithms are specialized for predicting missing entries, e.g., ratings, in sparsely filled user-item matrices. Many often-used benchmark datasets exist <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref> that represent such a RecSys task. Some of these datasets include side information, also called features, which the RecSys algorithms mentioned above do not use. Instead, the data is directly reduced to a sparse user-item matrix, ignoring side information. In contrast, Machine Learning (ML) algorithms are broad in their applications and usually profit from the availability of additional, meaningful features <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>. As an extension, Automated Machine Learning (AutoML) techniques further increase ML performance through automated algorithm selection and hyperparameter optimization. Furthermore, the tasks RecSys algorithms intend to solve can generally also be solved by (Auto)ML algorithms. Recent advances in RecSys have led to more sophisticated model-based algorithms, such as Factorization Machines <ref type="bibr" target="#b5">[6]</ref> and especially recent Deep Neural Networks (DNNs), which can incorporate side information and are more similar to their ML relatives <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>. 
However, the performance gap between (Auto)ML and RecSys has not been explicitly researched.</p><p>Feature engineering is a broad topic that is well documented and researched due to its positive influence on ML <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b14">15]</ref>. The term covers many feature-processing techniques whose main aim is to increase prediction performance. Today, such feature engineering techniques are standard in many ML pipelines. Among those techniques is the curation of features by selection and extraction. Additional real-world features and features extracted from existing data often benefit a model's performance. In this context, comparing the impact of feature quantity on the performance of RecSys and ML algorithms provides a meaningful indicator of the effects of feature engineering. However, we could not find previous comparative studies on the effect of feature quantity on RecSys algorithms.</p><p>Due to the aforementioned positive effects of feature engineering techniques on ML algorithms, we hypothesized that the same effects could also be shown for RecSys algorithms. Since a performance comparison between (Auto)ML and RecSys regarding feature quantity is absent from the literature, the following two research questions arise. In a RecSys problem setting, how does feature quantity impact the performance:</p><p>• of ML, AutoML, and RecSys algorithms in general?</p><p>• of RecSys algorithms compared to (Auto)ML algorithms?</p><p>We explore these questions through a case study on the Movielens-100K <ref type="bibr" target="#b15">[16]</ref> dataset. The code that produced the results reported in this paper is available in our GitHub repository<ref type="foot" target="#foot_0">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Method</head><p>We evaluated 19 algorithms from nine libraries (Table <ref type="table">1</ref>) on six feature sets generated from the Movielens-100K <ref type="bibr" target="#b15">[16]</ref> dataset (Table <ref type="table">2</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>An overview of the evaluated algorithms and their categories. We evaluated the algorithms as implemented in their respective libraries. The table also shows whether each algorithm can incorporate side information and how its hyperparameters were tuned. Among the evaluated algorithms is the Bayesian Factorization Machine from MyFM <ref type="bibr" target="#b25">[26]</ref>.</p><p>Movielens-100K <ref type="bibr" target="#b15">[16]</ref> is one of the datasets regularly used in the RecSys community to evaluate algorithm performance on explicit feedback <ref type="bibr" target="#b26">[27,</ref><ref type="bibr" target="#b27">28,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref>. The full dataset consists of a table of user IDs, item IDs, and their observed ratings, where each rating has a timestamp. Additionally, each user ID and item ID is associated with a set of features specific to it. The user features are age, gender, occupation, and (North American) ZIP code. The item features are movie genre, title, release date, and IMDb URL. We solve the prediction of ratings as a regression task and measure and compare the performance of a given algorithm through the Root Mean Squared Error.</p><p>To analyze the impact of the number of features on the algorithms, we cut and/or enriched the features of the original dataset and finally grouped them, resulting in six separate feature sets (Table <ref type="table">2</ref>). The default feature set contains most of the basic features of the original dataset. The idea is to use as many original features as possible that require little to no further processing. For this reason, the item's title and IMDb URL and the user's ZIP code were removed.</p><p>From the observed user-item-rating relation, many statistical features can be engineered, and they can be calculated separately for users and for items. 
We calculated the following nine statistical features for both the users and the items: count, mean, median, mode, minimum, maximum, standard deviation, kurtosis, and skew. This provides a total of 18 additional features. Notably, these features were only calculated on the training set after splitting the data into separate training and test sets, to avoid leaking information from the test set into the engineered features. These engineered features should provide helpful additional information to the algorithms at hand.</p><p>Finally, additional real-world data points can be added to the list of available features. For this, we chose the median and mean household income and the population from a period as close as possible to the release date of the original dataset. A set of these data points grouped by US ZIP codes from 2006 to 2010 <ref type="bibr" target="#b30">[31]</ref> is the earliest publicly available data directly related to the user features.</p><p>From various combinations of the feature sets mentioned above, we created six combinations on which to perform the experiments. An overview of the sets and the names by which we refer to them from here on is given in Table <ref type="table">2</ref>.</p></div>
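The leakage-safe computation of per-user and per-item rating statistics described above can be sketched as follows. This is a minimal pandas example with toy data, not the paper's actual code; column names such as `uMeanRating` follow the feature names in Figure 2, and only a subset of the nine statistics is shown:

```python
import pandas as pd

# Toy stand-in for the Movielens-100K rating table (userId, itemId, rating).
ratings = pd.DataFrame({
    "userId": [1, 1, 2, 2, 2, 3],
    "itemId": [10, 20, 10, 30, 20, 10],
    "rating": [4, 5, 3, 2, 4, 5],
})

# Split first, then compute the statistics on the training portion only,
# so that no information about test ratings leaks into the features.
train, test = ratings.iloc[:4], ratings.iloc[4:]

def rating_stats(df, key, prefix):
    """Aggregate per-key rating statistics (a subset of the nine used here)."""
    return df.groupby(key)["rating"].agg(
        **{f"{prefix}MeanRating": "mean",
           f"{prefix}MedianRating": "median",
           f"{prefix}MinRating": "min",
           f"{prefix}MaxRating": "max",
           f"{prefix}StdRating": "std",
           f"{prefix}CountRating": "count"}
    ).reset_index()

user_stats = rating_stats(train, "userId", "u")
item_stats = rating_stats(train, "itemId", "i")

# Attach the training-set statistics to both splits; keys unseen during
# training simply receive missing values in the test split.
train_feat = train.merge(user_stats, on="userId").merge(item_stats, on="itemId")
test_feat = (test.merge(user_stats, on="userId", how="left")
                 .merge(item_stats, on="itemId", how="left"))
```

Computing the aggregates on `train` alone and then merging them into both splits is what keeps the split boundary intact: the test rows only ever see statistics derived from training ratings.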
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2</head><p>An overview of the evaluated feature sets that we generated from the base Movielens-100K <ref type="bibr" target="#b15">[16]</ref> dataset, either by cutting or by enriching features. The columns denote the features contained in the named feature sets shown in the rows and provide the total number of features.</p><p>We applied additional processing steps to some of the features to make them suitable for the evaluation, sometimes depending on the dataset or automatically as an algorithm requires. Generally, we tried to stay as close to the original features as possible, making changes only where sensible or necessary. Accordingly, we did not treat the user ID and item ID as categorical features where applicable. The movie genre is provided as a categorical feature by default, which we did not change. However, the user's occupation is not provided as a categorical feature, so we transformed it into one. Movie release dates are provided in a date format, and we converted them to a signed UNIX timestamp representation. The user's age and ZIP code are special cases. As provided in the original set, we treated the age as an integer for the 'basic' sets. We removed the ZIP code because it contains some non-numerical entries and because we did not intend to filter ratings in these sets. For the 'feature-expansion' sets, we divided the age by 18 to create five age categories and then treated the age as a categorical feature. We had to keep the ZIP code to add the mean and median household income and population features. Finally, we removed entries with ZIP codes that were not contained in the additional feature sets, which incurs a loss of 7.05% of the observed ratings. The remaining ZIP codes range from '00000' to '99999'. To use them as a feature, we selected only the first digit of each ZIP code and then transformed it into a categorical feature. 
We chose to apply this processing step because the first digit of the ZIP code has a geographical meaning and therefore serves as an estimate of each user's residential area.</p><p>To compare algorithms on an equal footing, we applied some constraints. We performed all experiments on implementations in publicly available libraries to increase accessibility and reproducibility. The Movielens-100K <ref type="bibr" target="#b15">[16]</ref> dataset contains explicit ratings as integers ranging from one to five. Therefore, we only chose algorithms that can take explicit ratings as input and predict ratings in that same format. We evaluated the algorithms using five-fold cross-validation. Since one of the research questions concerns a comparison of algorithm performance, we performed hyperparameter tuning for algorithms that do not default to a tuned parameter setup for the Movielens-100K <ref type="bibr" target="#b15">[16]</ref> dataset. Depending on the algorithm, we tuned the hyperparameters manually, with a random search, or using SMAC3 <ref type="bibr" target="#b17">[18]</ref>, an all-purpose hyperparameter optimization tool. We set the time budget of the AutoML tools to one hour per fold. Table <ref type="table">1</ref> lists all libraries and algorithms and their categories, whether they use side information, and how their hyperparameters were tuned.</p></div>
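The feature-processing steps described above (age buckets via integer division by 18, the first ZIP digit as a categorical feature, release dates as signed UNIX timestamps) can be sketched like this. The frames and column names are illustrative stand-ins, not the actual Movielens-100K schema:

```python
import pandas as pd

# Illustrative user/item frames; the real Movielens-100K columns differ in detail.
users = pd.DataFrame({
    "user_age": [23, 54, 17],
    "user_zip": ["55414", "00231", "94110"],
    "user_occupation": ["student", "engineer", "student"],
})
items = pd.DataFrame({"item_release_date": ["01-Jan-1995", "15-Mar-1996"]})

# Dividing the age by 18 yields coarse age buckets, then treated as categorical.
users["user_age"] = (users["user_age"] // 18).astype("category")

# Only the first ZIP digit carries broad geographic meaning; keep it as a category.
users["user_zip"] = users["user_zip"].str[0].astype("category")
users["user_occupation"] = users["user_occupation"].astype("category")

# Release dates become signed UNIX timestamps (seconds since 1970-01-01).
items["item_release_date"] = (
    pd.to_datetime(items["item_release_date"], format="%d-%b-%Y").astype("int64")
    // 10**9
)
```

Because the timestamp is signed, release dates before 1970 would simply map to negative values, which is why the paper speaks of a signed representation.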
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>Figure <ref type="figure">1</ref> aggregates the results gathered during the experiments and clearly shows that a higher feature quantity generally results in a lower Root Mean Squared Error (RMSE). In particular, when comparing evaluations on the highest quantity of features with those on the lowest, the RMSE is 10% lower for ML, 4% lower for AutoML, 1% lower for model-based RecSys, and 6% lower overall.</p></div>
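The evaluation protocol described in the Method section (five-fold cross-validation scored by RMSE) can be reproduced in outline as follows. The model and synthetic data are placeholders, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic regression data standing in for a feature set with a rating target.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.1, size=100)

# Five-fold cross-validation with RMSE as the comparison metric.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
neg_rmse = cross_val_score(LinearRegression(), X, y,
                           scoring="neg_root_mean_squared_error", cv=cv)
mean_rmse = -neg_rmse.mean()  # scikit-learn reports errors negated
```

Averaging the per-fold RMSE values, as done here, mirrors how the figures below aggregate the cross-validated results per algorithm.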
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1:</head><p>This plot shows the RMSE performance of the algorithm categories as denoted in Table <ref type="table">1</ref> on the feature sets denoted in Table <ref type="table">2</ref>. Each line plots the average RMSE of an algorithm category evaluated on each feature set, represented by its number of features. The vertical lines and labels on the x-axis denote the collected data points. The first three data points do not include statistical features, while the final three do. The plot shows that more features result in higher performance in most cases. The evaluation shows that AutoML outperforms the traditionally strong matrix factorization-based RecSys contenders on most of the evaluated feature sets. When only the 'basic' feature sets are included, the RMSE is 1% lower. Notably, however, there is an increase in RMSE between the feature set with eleven features ('feature-expansion-no-stats') and the one with 20 features ('stripped-with-stats'). In these special cases, feature quantity alone cannot significantly improve an algorithm's performance. A likely reason is the feature importance shown in Figure <ref type="figure" target="#fig_1">2</ref>, which reveals that the feature set with the higher feature quantity is missing essential features such as the rating timestamp or the item genre.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Gini importance in percent</head><p>Figure <ref type="figure" target="#fig_2">3</ref> provides a detailed overview of all experiments. It shows that, for RecSys and AutoML algorithms, the performance differences and rankings across the feature sets vary enormously by algorithm, in contrast to the ML algorithms, where the ranking is mostly the same. Notably, the cross-validation procedure averages the results of multiple evaluations of an algorithm, and the figures show these aggregations. However, the evaluations of an algorithm had only marginal performance differences (&lt;2%). Therefore, the reported averaged results shown here are relatively stable. Furthermore, we observe that the performance on the different feature sets is divided into two groups for AutoML. One of the groups contains the 'stripped' feature sets, for which the search could not find a proper result even when provided with statistical features. In the other group, all other feature sets are tightly packed together, with a slight lead for the most extensive feature set, indicating a preference for a good combination and a higher quantity of real-world and statistical features.</p><p>Regarding the model-based RecSys algorithms, the DNN approaches, Wide &amp; Deep and Deep Interest Network, do not show a clear trend toward more favorable feature sets. However, the Bayesian Factorization Machine is one of the most exciting results. Its performance increases clearly with feature quantity. Though a RecSys algorithm by nature, it is by design also able to solve more general ML problems. It exhibits a mix of the behavior of ML and AutoML algorithms in its results, which can be seen in Figure <ref type="figure" target="#fig_2">3</ref>. 
The most significant difference is that its performance far exceeds that of any other algorithm on every feature set, making it a prime example of the capability of additional features for ML and RecSys.</p><p>As expected, ML algorithms consistently demonstrated improved performance with an increased number of features. This shows that the provided features are of sufficient quality and suitable distribution. The feature sets have an almost identical ranking order across all ML algorithms.</p><p>Overall, the results of our study indicate a clear preference for using more features. The important caveat to this is feature type and quality. As reported, the relation between feature quantity and algorithm performance is not monotonic for AutoML and RecSys due to feature importance. The evaluated model-based RecSys algorithms performed best in this study, but our evaluation shows how even this gap can be closed by introducing additional features.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The ordered Gini feature importance in percent. We evaluated these with a Random Forest Regressor that we fitted on the training data of the largest feature set. The chart shows the impact of each feature on the trained model. These importance values are not necessarily true for the evaluated algorithms but provide a good estimation nonetheless. The feature names denote whether they are an item or user feature. Statistical features are prefixed with 'i' and 'u' to denote this.</figDesc></figure>
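A minimal sketch of the impurity-based ("Gini") feature-importance analysis behind Figure 2, using synthetic data in place of the largest feature set; the informative column standing in for `iMeanRating` is an assumption for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-in: one informative column (think iMeanRating) plus pure noise.
i_mean_rating = rng.uniform(1, 5, size=n)
noise = rng.normal(size=n)
X = np.column_stack([i_mean_rating, noise])
y = i_mean_rating + 0.1 * rng.normal(size=n)  # target driven by the informative column

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# feature_importances_ holds the impurity-based importances, normalized to
# sum to one; multiplied by 100 they give percentages as plotted in Figure 2.
importance_pct = 100 * model.feature_importances_
```

For regression trees the impurity criterion is the reduction in squared error rather than the Gini index proper, but scikit-learn exposes both through the same `feature_importances_` attribute, which is why such charts are commonly labeled Gini importance.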
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: This figure shows the detailed evaluation results that lead to the main conclusions presented in this paper. It shows the RMSE of the algorithms listed in Table 1 on the feature sets listed in Table 2. The bar chart is grouped by algorithm, and each group's bars are ordered by RMSE ascending. A lower RMSE is better. Algorithms that cannot use additional features perform the same on all feature sets. For these only, the 'basic' feature set is plotted. These results are aggregated over algorithm categories in Figure 1.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc>Statistical features are prefixed with 'i' and 'u' to denote this. Features in order of decreasing Gini importance: iMeanRating, uMeanRating, rating_timestamp, item_genre, iStdRating, itemId, iCountRating, iSkewRating, iKurtRating, item_release_date, uStdRating, userId, uCountRating, uSkewRating, uKurtRating, user_occupation, user_population, user_zip, user_mean_income, user_median_income, user_age, iModeRating, user_gender, uModeRating, iMedianRating, uMedianRating, iMinRating, uMinRating, iMaxRating, uMaxRating.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head></head><label></label><figDesc>Legend of Figure 3. Feature sets: basic-no-stats, basic-with-stats, stripped-no-stats, stripped-with-stats, feature-expansion-no-stats, feature-expansion-with-stats. Algorithm categories: RecSys Models, AutoML, Machine Learning, RecSys Matrix Factorization. Algorithms: Bayesian Factorization Machine, Deep Interest Network, Wide &amp; Deep, FLAML, H2O, Auto-Sklearn, XGBoost, Random Forest, Histogram Gradient Boosting, Linear Regression, K Nearest-Neighbors, SVDpp (LibRecommender), SVDpp (Surprise), SVD, K Nearest Neighbors Baseline, Alternating Least Squares Biased, ItemItem, UserUser, Mean Predictor. X-axis: RMSE (lower is better), 0.85 to 1.15.</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://code.isg.beel.org/recsys-feature-quantity</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Discussion</head><p>We conclude that including statistical features is the most straightforward way to increase any algorithm's performance. Figure <ref type="figure">2</ref> shows that the mean rating per user and item is incredibly effective. Additionally, the feature importance analysis shows that some statistical features are significant while others are comparatively insignificant. This indicates the advantages of feature selection techniques, which were outside the scope of this research.</p><p>We obtained only simple additional features in this work. However, they positively impacted the performance of most of the tested algorithms. The results may be significantly better if more care and time are given to feature engineering. In addition to introducing new features, there are numerous considerations regarding refining existing features, including those introduced in this work. One such consideration is that user IDs and item IDs are technically categorical features and should be treated as such, which was not always the case within the experiments due to constraints in the algorithms. There are also different ways to represent such categorical features, which should be explored in this context. For example, we consciously decided to represent age as a categorical feature for some of the feature sets. These decisions have a potentially enormous impact on the performance of any algorithm and should be dealt with carefully.</p><p>The biggest challenge in applying the findings of this paper is likely finding suitable new features. Finding good data that supplements an existing dataset is difficult. However, collecting such feature data from the start should be reasonably easy when recording a new dataset. Given our results, we encourage future dataset collection tasks to include as many features as possible. 
Our work has shown that, for our scenario, if the goal is prediction performance, it is highly likely that a good selection of features combined with dedicated tuning will lead to better results. For future work on algorithm evaluations, we propose that pipelines include feature engineering in terms of feature quantity because it may significantly improve results.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Evaluating recommender systems</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zaier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Godin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Faucher</surname></persName>
		</author>
		<idno type="DOI">10.1109/AXMEDIS.2008.21</idno>
	</analytic>
	<monogr>
		<title level="m">2008 International Conference on Automated Solutions for Cross Media Content and Multi-Channel Distribution</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="211" to="217" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Evaluating collaborative filtering recommender systems</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Herlocker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Konstan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">G</forename><surname>Terveen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Riedl</surname></persName>
		</author>
		<idno type="DOI">10.1145/963770.963772</idno>
		<ptr target="https://doi.org/10.1145/963770.963772" />
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Inf. Syst</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="5" to="53" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Effect of number of features on classification of roller bearing faults using svm and psvm</title>
		<author>
			<persName><forename type="first">V</forename><surname>Sugumaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ramachandran</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2010.09.072</idno>
		<ptr target="https://doi.org/10.1016/j.eswa.2010.09.072" />
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="page" from="4088" to="4096" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Selection of relevant features and examples in machine learning</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Blum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Langley</surname></persName>
		</author>
		<idno type="DOI">10.1016/S0004-3702(97)00063-5</idno>
		<ptr target="https://doi.org/10.1016/S0004-3702(97)00063-5" />
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">97</biblScope>
			<biblScope unit="page" from="245" to="271" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A survey of feature selection and feature extraction techniques in machine learning</title>
		<author>
			<persName><forename type="first">S</forename><surname>Khalid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Khalil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nasreen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2014 science and information conference</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="372" to="378" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Factorization machines</title>
		<author>
			<persName><forename type="first">S</forename><surname>Rendle</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2010 IEEE International conference on data mining</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="995" to="1000" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Wide &amp; deep learning for recommender systems</title>
		<author>
			<persName><forename type="first">H.-T</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Koc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Harmsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Shaked</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Chandra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Aradhye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Corrado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ispir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st workshop on deep learning for recommender systems</title>
				<meeting>the 1st workshop on deep learning for recommender systems</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="7" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Deep interest network for click-through rate prediction</title>
		<author>
			<persName><forename type="first">G</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Gai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining</title>
				<meeting>the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1059" to="1068" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Wide and deep learning for recommender systems</title>
		<author>
			<persName><forename type="first">H.-T</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Koc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Harmsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Shaked</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Chandra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Aradhye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Corrado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ispir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Anil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Haque</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Shah</surname></persName>
		</author>
		<idno type="DOI">10.1145/2988450.2988454</idno>
		<ptr target="https://doi.org/10.1145/2988450.2988454" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, DLRS 2016</title>
				<meeting>the 1st Workshop on Deep Learning for Recommender Systems, DLRS 2016<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="7" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Hybrid deep neural networks for recommender systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gridach</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neucom.2020.06.025</idno>
		<ptr target="https://doi.org/10.1016/j.neucom.2020.06.025" />
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">413</biblScope>
			<biblScope unit="page" from="23" to="30" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Deep neural networks for youtube recommendations</title>
		<author>
			<persName><forename type="first">P</forename><surname>Covington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Adams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sargin</surname></persName>
		</author>
		<idno type="DOI">10.1145/2959100.2959190</idno>
		<ptr target="https://doi.org/10.1145/2959100.2959190" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th ACM Conference on Recommender Systems, RecSys &apos;16</title>
				<meeting>the 10th ACM Conference on Recommender Systems, RecSys &apos;16<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="191" to="198" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Casari</surname></persName>
		</author>
		<title level="m">Feature engineering for machine learning: principles and techniques for data scientists</title>
				<imprint>
			<publisher>O&apos;Reilly Media, Inc</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Feature engineering for machine learning and data analytics</title>
		<author>
			<persName><forename type="first">G</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>CRC Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A survey on feature selection methods</title>
		<author>
			<persName><forename type="first">G</forename><surname>Chandrashekar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sahin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers &amp; Electrical Engineering</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="page" from="16" to="28" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">An investigation of categorical variable encoding techniques in machine learning: binary versus one-hot and feature hashing</title>
		<author>
			<persName><forename type="first">C</forename><surname>Seger</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">The movielens datasets: History and context</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Harper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Konstan</surname></persName>
		</author>
		<idno type="DOI">10.1145/2827872</idno>
		<ptr target="https://doi.org/10.1145/2827872" />
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Interact. Intell. Syst</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Scikit-learn: Machine learning in Python</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pedregosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Varoquaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gramfort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Michel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Thirion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Grisel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Blondel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Prettenhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Weiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dubourg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vanderplas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Passos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cournapeau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brucher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Perrot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Duchesnay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="2825" to="2830" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Lindauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Eggensperger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Feurer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Biedenkapp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Benjamins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ruhkopf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Hutter</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2109.09831</idno>
		<title level="m">Smac3: A versatile bayesian optimization package for hyperparameter optimization</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">XGBoost: A scalable tree boosting system</title>
		<author>
			<persName><forename type="first">T</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
		<idno type="DOI">10.1145/2939672.2939785</idno>
		<ptr target="http://doi.acm.org/10.1145/2939672.2939785" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;16</title>
				<meeting>the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;16<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="785" to="794" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Efficient and robust automated machine learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Feurer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Klein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Eggensperger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Springenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Blum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Hutter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="2962" to="2970" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">H2O AutoML: Scalable automatic machine learning</title>
		<author>
			<persName><forename type="first">E</forename><surname>Ledell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Poirier</surname></persName>
		</author>
		<ptr target="https://www.automl.org/wp-content/uploads/2020/07/AutoML_2020_paper_61.pdf" />
	</analytic>
	<monogr>
		<title level="m">7th ICML Workshop on Automated Machine Learning</title>
				<meeting><address><addrLine>AutoML</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Wu</surname></persName>
		</author>
		<idno>CoRR abs/1911.04706</idno>
		<ptr target="http://arxiv.org/abs/1911.04706" />
		<title level="m">FLO: fast and lightweight hyperparameter optimization for automl</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Surprise: A python library for recommender systems</title>
		<author>
			<persName><forename type="first">N</forename><surname>Hug</surname></persName>
		</author>
		<idno type="DOI">10.21105/joss.02174</idno>
		<ptr target="https://doi.org/10.21105/joss.02174" />
	</analytic>
	<monogr>
		<title level="j">Journal of Open Source Software</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page">2174</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Lenskit for python: Next-generation software for recommender systems experiments</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Ekstrand</surname></persName>
		</author>
		<idno type="DOI">10.1145/3340531.3412778</idno>
		<ptr target="https://doi.org/10.1145/3340531.3412778" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 29th ACM International Conference on Information &amp; Knowledge Management, CIKM &apos;20</title>
				<meeting>the 29th ACM International Conference on Information &amp; Knowledge Management, CIKM &apos;20<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="2999" to="3006" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m">LibRecommender</title>
		<author>
			<persName><surname>Massquantity</surname></persName>
		</author>
		<ptr target="https://github.com/massquantity/LibRecommender" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Ohtsuki</surname></persName>
		</author>
		<ptr target="https://github.com/tohtsky/myFM" />
		<title level="m">myFM</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Presentation of a recommender system with ensemble learning and graph embedding: a case on movielens</title>
		<author>
			<persName><forename type="first">S</forename><surname>Forouzandeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Berahmand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rostami</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="7805" to="7832" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Clustering algorithms in hybrid recommender system on movielens data</title>
		<author>
			<persName><forename type="first">U</forename><surname>Kuzelewska</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Studies in logic, grammar and rhetoric</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="125" to="139" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Re-scale adaboost for attack detection in collaborative filtering recommender systems</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">100</biblScope>
			<biblScope unit="page" from="74" to="88" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">An efficient deep learning approach for collaborative filtering recommender system</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Aljunid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">171</biblScope>
			<biblScope unit="page" from="829" to="836" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">Median household income</title>
		<author>
			<orgName>University of Michigan</orgName>
		</author>
		<ptr target="https://www.psc.isr.umich.edu/dis/census/Features/tract2zip/" />
		<imprint>
			<date type="published" when="2006">2006-2010. 2020</date>
		</imprint>
		<respStmt>
			<orgName>Institute for Social Research</orgName>
		</respStmt>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
