<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>How to use Instagram to Travel the World? An Approach to Discovering Relevant Insights from Tourist Media Content</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Angel Fiallos</string-name>
        </contrib>
        <aff>Universidad Ecotec, Samborondón, Ecuador</aff>
      </contrib-group>
      <fpage>101</fpage>
      <lpage>113</lpage>
      <abstract>
        <p>This work aims to detect content themes, locations, sentiment, and demographic information on Instagram or similar platforms in a way that supports business decision-making and marketing strategies in the tourism or travel industries. For this purpose, we propose an original combination of NLP methodology and computer vision to be applied to the content of posts associated with a specific hashtag. To demonstrate this, we collected and processed 30,122 images and texts of Instagram posts related to the hashtag #traveltheworld, showing the results of the most relevant user interests, places, emotions, and other detected features.</p>
      </abstract>
      <kwd-group>
        <kwd>Natural Language Processing</kwd>
        <kwd>Data Mining</kwd>
        <kwd>Computer Vision</kwd>
        <kwd>Instagram</kwd>
        <kwd>Tourism</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Social media is essential because social networks have made everybody a potential author, so
language is now closer to the user than to any prescribed norm. In this way, users share information
about events, activities, services, opinions, and experiences on social media channels.</p>
      <p>Instagram is a social network that has experienced a rapid increase in users and picture
uploads since it was launched in October 2010. However, few research works have been
developed around it, in contrast to other social networks such as Twitter, where text is analyzed
as the main element of its posts.</p>
      <p>
        Ninety million photos are shared every day through Instagram. Furthermore, users add other
features such as hashtags, locations, and text to photos through the platform. These media
elements communicate the user’s intention behind posting an image but do not necessarily
describe the published image [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Also, concerning hashtags, several researchers suggest that they
carry emotional information which is not directly related to the context in which they appear [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>Hashtags are single words or unbroken strings of words preceded by the # symbol. Businesses
also use them to create searchable content categories and to gain followers by attracting the
attention of public users. Instagram encourages users to make hashtags specific and relevant,
rather than tagging generic words, so that photographs stand out and attract like-minded
Instagram users.</p>
      <p>Obtaining all possible information from these Instagram posts is essential for gaining user
insights, measuring brand reputation, and supporting other digital market research in
industries such as tourism, travel, hospitality, and customer services, among others. It also
helps businesses evaluate campaigns, understand users’ social behavior, and avoid costly
direct surveys.</p>
      <p>The main contribution of this work is a methodology to identify the relevant topics,
locations, sentiments, and features from the combination of text and pictures associated with a
particular hashtag by combining text mining techniques, sentiment analysis, natural language
processing, and computer vision tools. The methodology was applied to a dataset of Instagram
photos associated with the hashtag #traveltheworld. This popular hashtag appears in more
than 15 million posts and is used by travelers to discover new destinations, swap travel tips,
and share their experiences.</p>
      <p>The rest of this work is structured as follows: Section 2 describes the related work, Section 3
presents the proposed methodology, Section 4 describes the results of the case study analysis, and
Section 5 presents the conclusions and future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Few researchers have investigated different ways to detect relevant content topics from
Instagram pictures. Hu et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] analyzed the photos of a random sample of users while also considering
the users’ text. The similarity between pictures was then calculated as the Euclidean
distance between their codebook vectors, and k-means was applied to obtain clusters of photos. This work
identifies eight popular picture categories (friends, food, gadgets, captioned photos, pets, activities,
selfies, fashion) and five distinct types of users in terms of their posted pictures.
      </p>
      <p>
        Jang et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] analyzed the relationship between LDA-based topics and Likes
over test datasets of 20 million users. This work applies a Latent Dirichlet Allocation model
to the description text and hashtags written by users.
As a result, they identified 20 latent topics prevalent among the hashtags added to pictures and
presented the top 5 topics.
      </p>
      <p>
        Amanatidis et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] performed a picture analysis and categorization of users’ personal
experiences before, during, and after the COVID-19 vaccination process. For this purpose, they
used computer vision convolutional neural network models and datasets from ImageNet.
      </p>
      <p>
        Manikonda [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] concluded that informational content is more likely to be found on Twitter, while on
Instagram, the content is more personal and social. To reach this conclusion, the researchers
performed a textual and visual analysis of the media content posted on these two platforms by
the same set of users. Our paper differs from those mentioned because it uses a multidisciplinary
combination of techniques (computer vision and natural language processing) and validates which
one provides better results depending on the objectives to be achieved, in this case focused on
tourist experiences and user demographics.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>The topics, locations, sentiments, and demographic information were detected following
the steps shown in Figure 1.</p>
      <sec id="sec-3-1">
        <title>3.1. Data Collection</title>
        <p>A scraping process developed in Python with the BeautifulSoup and Selenium libraries was
applied to collect a dataset of 50,510 publications related to the hashtag #traveltheworld from
the Instagram platform. These data include the following features per publication: image file,
post id, user id, hashtags, upload date, post text, locations, and likes count.</p>
        <p>Then, a sample of 30,122 photos was selected from user accounts with an average of at least
150 likes and 100 followers to avoid downloading photos that belong to fake accounts. The
hashtags and the text were taken as post descriptions for this work. Figure 2 shows an Instagram
post sample.</p>
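<p>The sampling step above can be sketched as a simple filter over the scraped records; the field names (user_id, likes_count, followers) are hypothetical and stand in for the actual scraper output.</p>
<preformat>
```python
# Minimal sketch of the account-quality filter: keep only posts from
# accounts averaging at least 150 likes and holding at least 100 followers.
# The record layout below is an assumed, simplified scraper output.

def filter_posts(posts, min_avg_likes=150, min_followers=100):
    """Keep posts whose accounts pass the like/follower thresholds."""
    # Group like counts per user to compute each account's average.
    likes_by_user = {}
    for post in posts:
        likes_by_user.setdefault(post["user_id"], []).append(post["likes_count"])
    kept = []
    for post in posts:
        likes = likes_by_user[post["user_id"]]
        avg_likes = sum(likes) / len(likes)
        if avg_likes >= min_avg_likes and post["followers"] >= min_followers:
            kept.append(post)
    return kept

sample = [
    {"user_id": "a", "likes_count": 200, "followers": 500},
    {"user_id": "b", "likes_count": 10, "followers": 50},
]
print(len(filter_posts(sample)))  # the low-engagement account is dropped
```
</preformat>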
        <p>Once the photo collection was obtained, an image recognition process was applied to the
digital files to retrieve the visual descriptions. Using the Microsoft Cognitive Services API1, multiple
executions were run to obtain the visual description of each picture in JSON format.</p>
        <sec id="sec-3-1-1">
          <title>1Microsoft Cognitive Services https://azure.microsoft.com/es-es/services/cognitive-services/</title>
          <p>The API has a collection of SDK applications and machine-learning services developed for the Bing Oxford
Project and Microsoft Research. Figure 3 shows how the Computer Vision API returns information
about the visual content of an image.</p>
          <p>The Microsoft Cognitive Services API also recognizes natural and man-made landmarks
worldwide by comparing them to a library of known places. Figure 4 shows an example of a
response once the recognition process is applied to a landmark photo.</p>
        </sec>
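<p>As an illustration of how a Computer Vision API response (cf. Figure 3) can be flattened into a textual visual description for later text mining, the following sketch assumes a simplified caption/tags layout rather than the exact Azure schema.</p>
<preformat>
```python
# Flatten an API response into one plain-text "visual description" document.
# The response layout here is a simplified assumption of the real JSON.

def visual_description(response):
    """Join the best caption and the tag names into one text document."""
    captions = response.get("description", {}).get("captions", [])
    best = max(captions, key=lambda c: c["confidence"])["text"] if captions else ""
    tags = [t["name"] for t in response.get("tags", [])]
    return " ".join([best] + tags).strip()

sample = {
    "description": {"captions": [{"text": "a group of people on a beach",
                                  "confidence": 0.93}]},
    "tags": [{"name": "outdoor", "confidence": 0.99},
             {"name": "water", "confidence": 0.97}],
}
print(visual_description(sample))
```
</preformat>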
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Terms Detection</title>
        <p>
          Some text-mining processes were applied to the documents to determine the most relevant
topics. First, data preprocessing [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] was executed separately for the post descriptions and visual
description files with the following steps:
• Each document was transformed into words (lexical analysis).
• Empty words (articles, prepositions, marks, conjunctions, numbers, punctuation, and
other words that did not semantically describe the content) were deleted.
• A stemming process was executed in which non-essential parts of terms, such as suffixes and
prefixes, were eliminated to keep the essential part (lemma) of each term.
        </p>
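<p>The three preprocessing steps can be sketched as follows; the tiny stop-word list and the naive suffix-stripping stemmer are illustrative stand-ins for a full NLP library.</p>
<preformat>
```python
import re

# Sketch of the pipeline above: lexical analysis, stop-word removal, stemming.

STOPWORDS = {"the", "a", "an", "and", "of", "in", "on", "to", "is", "at"}

def naive_stem(word):
    # Illustrative suffix stripping; a real system would use a proper stemmer.
    for suffix in ("ing", "ers", "er", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(document):
    tokens = re.findall(r"[a-z]+", document.lower())    # lexical analysis
    tokens = [t for t in tokens if t not in STOPWORDS]  # empty-word removal
    return [naive_stem(t) for t in tokens]              # stemming

print(preprocess("Travelers hiking in the mountains of Ecuador!"))
```
</preformat>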
        <p>
          Second, the TF-IDF (Term Frequency-Inverse Document Frequency) model [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] was applied to
evaluate the key terms in the documents. TF-IDF measures the weight of a term based on its
term frequency (TF) and inverse document frequency (IDF). Then, a document-term matrix was
created with the TF-IDF weights, and the dispersed terms were deleted to conserve the most relevant
terms.
        </p>
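<p>A minimal, self-contained sketch of the TF-IDF weighting, using the classic tf · log(N/df) formulation (library implementations typically add smoothing variants):</p>
<preformat>
```python
import math
from collections import Counter

# Build per-document TF-IDF weights from already-tokenized documents.

def tfidf_matrix(docs):
    """Return one {term: weight} mapping per tokenized document."""
    n_docs = len(docs)
    df = Counter()                 # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    matrix = []
    for doc in docs:
        tf = Counter(doc)          # raw term frequency in this document
        matrix.append({t: tf[t] * math.log(n_docs / df[t]) for t in tf})
    return matrix

docs = [["beach", "sunset", "beach"], ["mountain", "hill"], ["beach", "mountain"]]
weights = tfidf_matrix(docs)
# "beach" appears twice in doc 0 and in 2 of the 3 documents:
print(round(weights[0]["beach"], 3))
```
</preformat>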
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Topic Modeling</title>
        <p>Topic modeling is a text mining technique that employs unsupervised and supervised statistical
machine-learning techniques to identify patterns in a corpus or a large amount of unstructured
text. It can take a vast collection of documents, group their words into clusters, and
identify topics through a similarity process.</p>
        <p>
          We applied Non-Negative Matrix Factorization (NMF) to determine the relevant topics in both
documental corpora. NMF [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] is a linear-algebra optimization algorithm that extracts meaningful information
about topics by decomposing the document-term matrix A into two k-dimensional factors: W
(the document-topic matrix) and H (the topic-term matrix).
        </p>
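<p>The factorization A ≈ WH can be sketched with the classic multiplicative-update rules; this toy example is illustrative and is not the exact solver used in the experiments.</p>
<preformat>
```python
import numpy as np

# Sketch of NMF with Lee-Seung multiplicative updates: the document-term
# matrix A is factored into non-negative W (document-topic) and H (topic-term).

def nmf(A, k, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = A.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update topic-term factor
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update document-topic factor
    return W, H

# Toy corpus: two documents about one theme, two about another (rank 2).
A = np.array([[3, 1, 0, 0],
              [3, 1, 0, 0],
              [0, 0, 2, 4],
              [0, 0, 1, 2]], float)
W, H = nmf(A, k=2)
print(W.shape, H.shape)  # (4, 2) (2, 4); A is reconstructed as W @ H
```
</preformat>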
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Sentiment Analysis</title>
        <p>Sentiment analysis is a technique that uses natural language processing to identify, extract,
quantify, and explore affective states and subjective information in text. Generally, sentiment
analysis uses a text classification approach based on machine learning.</p>
        <p>Single-label text classification assumes that each sample is assigned one and only one label. On
the other hand, multi-label classification assigns each sample a set of target labels that are
not mutually exclusive. However, many text multi-label classification methods ignore
word order, opting for bag-of-words models or TF-IDF weighting to create document vectors.</p>
        <p>
          Convolutional neural networks (CNN) utilize layers with convolving filters that are applied to
local features [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Initially invented for computer vision, CNN models are adequate for NLP and
have achieved excellent results in semantic parsing. Kim and Berger [
          <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
          ] demonstrated that
CNN models using semantic word embeddings such as Word2Vec [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] significantly outperform
the Binary Relevance method with bag-of-words features on a large-scale multi-label classification task.
        </p>
        <p>We designed a simple CNN composed of an input layer with five different n-gram
window sizes and one convolution layer on top of word vectors obtained from the Word2Vec
unsupervised neural language model. These vector representations are essentially feature extractors
that encode words’ semantic features in their dimensions. To conduct the experiment, we first
trained the model on a dataset provided by FigureEight2, which contains approximately 19,000 tweets
labeled with neutral, positive, and negative sentiment categories.</p>
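<p>The forward pass of such a network can be sketched with random (untrained) weights; the dimensions below are illustrative, and in the experiment the filters and dense layer are learned from the labeled tweets.</p>
<preformat>
```python
import numpy as np

# Sketch of the CNN text classifier's forward pass: one convolution filter
# per n-gram window size slides over the word-vector sequence, followed by
# max-over-time pooling and a softmax over the three sentiment classes.
# All weights here are random placeholders, not trained parameters.

rng = np.random.default_rng(0)
emb_dim, n_words, windows, n_classes = 50, 12, (1, 2, 3, 4, 5), 3

sentence = rng.random((n_words, emb_dim))   # Word2Vec-style word vectors
filters = {h: rng.standard_normal((h, emb_dim)) for h in windows}
dense = rng.standard_normal((len(windows), n_classes))

pooled = []
for h, F in filters.items():
    # Convolve: dot product of the filter with every h-gram window.
    feats = [np.sum(F * sentence[i:i + h]) for i in range(n_words - h + 1)]
    pooled.append(max(feats))               # max-over-time pooling

logits = np.array(pooled) @ dense
probs = np.exp(logits) / np.exp(logits).sum()   # softmax
print(probs.shape, round(float(probs.sum()), 6))
```
</preformat>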
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Emotion Recognition</title>
        <p>The Face API allows the detection of human faces together with facial attributes that contain
predictions of facial features based on machine learning. The available facial attributes
include age, emotion, gender, and posture, among others. The API also integrates emotion
recognition and returns a degree of confidence for a set of emotions for each detected face.
The process was applied to the set of Instagram photos whose image recognition output
referred to any of the values "man", "men", "woman", or "women". For each photo, the
emotional response with the highest score is compared to the emotion classified manually
by observers (ground truth). Figure 5 shows a response from the Face API.</p>
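<p>Selecting the highest-scoring emotion from a Face API-style response can be sketched as follows; the field layout is a simplified assumption of the actual JSON (cf. Figure 5).</p>
<preformat>
```python
# Pick the emotion with the highest confidence score for one detected face.
# The response layout is an assumed simplification of the real API output.

def top_emotion(face):
    emotions = face["faceAttributes"]["emotion"]
    return max(emotions, key=emotions.get)

face = {"faceAttributes": {"emotion": {"anger": 0.01, "happiness": 0.93,
                                       "neutral": 0.05, "sadness": 0.01}}}
print(top_emotion(face))  # the emotion compared against the ground truth
```
</preformat>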
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. Relevant Terms and Topics</title>
        <p>Computer Vision API was applied over 30,122 images and detected 1,816 unique terms related to
the images’ visual contents. After the preprocessing routines, 1,801 terms (99.12%) were conserved
for the following analysis.</p>
        <sec id="sec-4-1-1">
          <title>2FigureEight https://www.figure-eight.com/wp-content/uploads/2016/07/text_emotion.csv</title>
          <p>Figure 6 shows a word cloud with the most relevant terms related
to the visual content of the images. Figure 7 shows the most frequent terms with the highest TF-IDF
weights. They are building, groups, people, water, person, mountain, woman, cities, beaches,
and streets, among others. These terms have a TF-IDF weight greater than 12,500 and suggest
that most of the pictures are related to building structures, people, urban cities, sports activities,
and natural tourism attractions.</p>
          <p>Table 1 presents the six terms most associated with the key terms “mountain,” “woman”,
“water”, “building”, “people,” and “city”; for example, the term “mountain” is related to hills,
nature, background, view, and field. The key terms were chosen from the most frequent terms
illustrated in Figure 7. Associated terms have a correlation, a quantitative measure between 0
and 1 of the co-occurrence of words across documents. In this respect, if two terms
always appear together, then the calculated correlation is 1.</p>
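<p>The correlation measure can be sketched as the Pearson correlation of two terms’ occurrence indicators across documents, which equals 1 when the terms always appear together:</p>
<preformat>
```python
import math

# Correlation of two terms' presence/absence patterns across documents.
# Each document is represented here as a set of its terms.

def term_correlation(docs, t1, t2):
    x = [1.0 if t1 in d else 0.0 for d in docs]
    y = [1.0 if t2 in d else 0.0 for d in docs]
    n = len(docs)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)  # undefined if a term is in all or no documents

docs = [{"mountain", "hill"}, {"mountain", "hill"}, {"beach"}, {"city"}]
print(term_correlation(docs, "mountain", "hill"))  # always co-occur: 1.0
```
</preformat>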
          <p>Using NMF, we detected the most relevant topics of the visual descriptions. They are shown in
Table 2 and refer to natural landscapes, people’s actions, cities and buildings, the sea and related
activities, food, and other outdoor photos. The detected topic term groups are:
• city, building, street, front, clock, tower, tall, old, large, sign
• body, water, boat, ocean, beach, lake, dock, river, large, sunset
• person, woman, young, hold, wear, pose, man, front, girl, standing
• mountain, field, hill, grass, green, tree
• table, sit, food, plate, room, close, wooden, white, indoor, cake</p>
          <p>Next, a corpus of 24,719 documents and 21,972 terms was created from the Instagram posts. After
preprocessing, 18,810 terms (85.61%) were conserved for the topic modeling. The relevant topics
extracted from the user descriptions are shown in Figure 8. These topics refer to events, exclamations of
admiration, visits to specific tourist sites, emotions, and engagement. To ensure that the
content is coherent and to eliminate redundancy in topic terms, we removed the hashtags related
to travel. Figure 8 shows the most frequent terms, with TF-IDF weights greater than 800.</p>
          <p>On the other hand, Table 3 shows the topics of the text content written by users. These
topics refer to travelers’ stories, expressions of admiration, and social media engagements. The
average cosine distance between the topics mined from the users’ descriptions and the visual
descriptions was 0.290, which means there is a low similarity between both sets of documents.</p>
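<p>The comparison above relies on cosine distance (one minus cosine similarity) between topic term-weight vectors; a minimal sketch on two toy vectors over a shared vocabulary:</p>
<preformat>
```python
import math

# Cosine distance between two term-weight vectors of equal length.

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1 - dot / (nu * nv)

user_topic = [1.0, 0.0, 2.0, 0.0]     # toy weights over a shared vocabulary
visual_topic = [0.0, 1.0, 1.0, 2.0]
print(round(cosine_distance(user_topic, visual_topic), 3))
```
</preformat>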
          <p>Thus, the user descriptions do not allow us to identify the features and elements of the images
in a specific way, because they refer to narrations of events, situations, or opinions
related to the photos.</p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Locations and Landmarks</title>
        <p>Geolocations were added by users in 19,782 (65.69%) of the Instagram posts, so the locations of
the remaining photos were detected using the Computer Vision API landmark properties. A total of
2.26% of the pictures were located by this method. Table 4 shows the identified places with
the highest counts in the Instagram photos.</p>
        <p>The identified locations include famous monuments and buildings, such as the Eiffel Tower,
the Sagrada Familia, the Pantheon in Rome, Grand Central Terminal, the Brooklyn Bridge, and the Trevi Fountain,
among others, which were mapped to their specific city or country through the GeoPy
tool3. These values can be contrasted with information from TripAdvisor, the largest travel website in the
world, where Paris, New York, London, Rome, Barcelona, Bali, and Prague, among others, are
mentioned as the most popular locations in the world. Therefore, the results
presented in Table 4 could be a good reference for worldwide tourism statistics.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. User Demographics</title>
        <p>We used a scraping process to retrieve a total of 17,752 unique user profile photos from the
Instagram posts. The Face API process was applied to the profile photo collection to recognize
facial properties. Once the process was finished, we selected the photos with an exposure value
greater than 0.5 for which the gender and age properties could be detected: 5,560 photos (31.32%) in total.</p>
        <p>For the rest of the user profile photos, among other reasons, the photos did not show the face of the user,
belonged to business profiles, or had low quality that did not allow identification of the gender and
age properties. Table 5 shows the percentages belonging to the user gender groups, and Table 6
shows the percentages belonging to the user groups by age range.</p>
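<p>The demographic aggregation can be sketched as a threshold filter followed by a percentage tabulation; the record layout and the use of a single score field are simplifying assumptions about the face-detection output.</p>
<preformat>
```python
from collections import Counter

# Keep faces whose detection score exceeds the 0.5 threshold, then
# tabulate gender percentages as in Table 5.

def gender_percentages(faces, threshold=0.5):
    kept = [f for f in faces if f["confidence"] > threshold]
    counts = Counter(f["gender"] for f in kept)
    total = sum(counts.values())
    return {g: round(100 * c / total, 2) for g, c in counts.items()}

faces = [{"gender": "female", "confidence": 0.9},
         {"gender": "male", "confidence": 0.8},
         {"gender": "female", "confidence": 0.3},  # discarded: low score
         {"gender": "female", "confidence": 0.7}]
print(gender_percentages(faces))
```
</preformat>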
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Emotion Recognition and Text Sentiment Analysis</title>
        <p>An ideal visual experience on the Instagram social network happens when the sentiment and
emotions transmitted by the text and photo(s) or video(s) are similar. Classifying emotions
in publications requires a lot of effort and manual work from experienced teams. Therefore,
emotion recognition and text sentiment analysis can help predict the emotions of a social media
post.</p>
        <p>A sample of 114 photos that referred to a person with a visible face was taken. Using the
Face API, the feelings expressed in the images were automatically classified into each of the
following categories: anger, disgust, fear, joy, sadness, and surprise. In addition, we used our
Word2Vec-based model to classify the sentiment found in the text of the users’ Instagram publications. Figure 9
shows the sentiment and emotion percentages, where joy is the most frequent emotion
in people’s photos, and neutral is the most common sentiment in the text content.</p>
        <sec id="sec-4-4-1">
          <title>3GeoPy https://geopy.readthedocs.io/en/latest/</title>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The proposed methodology allows obtaining useful inferred information from any
collection of publications associated with a particular hashtag on Instagram or other social networks
at a low cost and effort.</p>
      <p>The similarity between the topics mined from the content written by users, usually tourists,
and the visual descriptions of the photos is low because users generally refer to situations or
opinions regarding the photos. In contrast, the visual analysis produces tags more related to the
actual content of the images. We can also determine that the emotions transmitted in Instagram
posts are better predicted using photos instead of the text written by users, but only when a quality
image containing a face detected with high confidence is available.</p>
      <p>The results of the most frequent worldwide photo locations are similar to the most popular
places on TripAdvisor. For this reason, the methodology of this work can be helpful in areas
such as digital marketing, market research, opinion polls, social studies, and other fields. Also,
the findings can be valuable for decision-making, creating new marketing strategies, and other
studies such as consumer profile analysis, as well as being complementary to textual content
from social network reports and third-party social listening platforms.</p>
      <p>In future work, we will consider exploring the visual content of stories and reels and the text
comments on user posts to evaluate whether they improve the predictions obtained from text.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Mohammad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kiritchenko</surname>
          </string-name>
          ,
          <article-title>Using hashtags to capture fine emotion categories from tweets</article-title>
          ,
          <source>Computational Intelligence</source>
          <volume>31</volume>
          (
          <year>2015</year>
          )
          <fpage>301</fpage>
          -
          <lpage>326</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Kunneman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liebrecht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>van den Bosch</surname>
          </string-name>
          ,
          <article-title>The (un)predictability of emotional hashtags in twitter</article-title>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Manikonda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kambhampati</surname>
          </string-name>
          ,
          <article-title>What we instagram: A first analysis of instagram photo content and user types</article-title>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Jang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>No reciprocity in "liking" photos: analyzing like activities in instagram</article-title>
          ,
          <source>in: Proceedings of the 26th ACM conference on hypertext &amp; social media</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>273</fpage>
          -
          <lpage>282</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Amanatidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Mylona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kamenidou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mamalis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stavrianea</surname>
          </string-name>
          ,
          <article-title>Mining textual and imagery instagram data during the covid-19 pandemic</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>11</volume>
          (
          <year>2021</year>
          )
          <fpage>4281</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Manikonda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Meduri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kambhampati</surname>
          </string-name>
          ,
          <article-title>Tweeting the mind and instagramming the heart: Exploring differentiated content sharing on social media</article-title>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K. J.</given-names>
            <surname>Cios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Pedrycz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Swiniarski</surname>
          </string-name>
          ,
          <article-title>Data mining and knowledge discovery, in: Data mining methods for knowledge discovery</article-title>
          , Springer,
          <year>1998</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>26</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Speech and language processing: An introduction to natural language processing</article-title>
          ,
          <source>computational linguistics, and speech recognition</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P. O.</given-names>
            <surname>Hoyer</surname>
          </string-name>
          ,
          <article-title>Non-negative matrix factorization with sparseness constraints</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>5</volume>
          (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>LeCun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bottou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Haffner</surname>
          </string-name>
          ,
          <article-title>Gradient-based learning applied to document recognition</article-title>
          ,
          <source>Proceedings of the IEEE</source>
          <volume>86</volume>
          (
          <year>1998</year>
          )
          <fpage>2278</fpage>
          -
          <lpage>2324</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Convolutional neural networks for sentence classification</article-title>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Berger</surname>
          </string-name>
          ,
          <article-title>Large scale multi-label text classification with semantic word vectors</article-title>
          ,
          <source>Technical report</source>
          , Stanford University (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>N.</given-names>
            <surname>Azam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <article-title>Comparison of term frequency and document frequency based feature selection metrics in text categorization</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>39</volume>
          (
          <year>2012</year>
          )
          <fpage>4760</fpage>
          -
          <lpage>4768</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>