<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Methods of public multimedia analysis for a social profile</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexander M. Bershadsky</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexey Y. Timonin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>doctor of technical sciences, professor, Penza State University</institution>
        </aff>
      </contrib-group>
      <fpage>311</fpage>
      <lpage>319</lpage>
      <abstract>
        <p>Analysis of public Internet data is a popular research topic. The prospect of using personalized data has led researchers to the problem of constructing a social profile: a structured set of information able to uniquely characterize a particular person. The social profile is built through the analysis of filtered open-source Internet data. Raw social profile data are subdivided into static and dynamic parts. Dynamic unstructured data include text and multimedia information and cannot be handled by classical analytic means. The analytical task on social profile data is addressed through the mathematical tools of set theory, Big Data software, NoSQL data stores and analytic tools for social media. Modern methods for the analysis of multimedia data are also helpful. A review of techniques for the analysis of multimedia content (graphics, sound) is offered. The analysis of multimedia resources is complicated by the variety of information types processed. The present work is devoted to combining existing experience in automating non-textual information processing in the task of social profile building. We consider such areas as Big Data analysis, visual analysis, Optical Character Recognition, speech recognition, neural networks and specialized algorithms for recognizing specific objects and linking them with a social profile. Automated processing of multimedia information will improve the completeness and accuracy of the final social profile. In addition, the results of this work can be used to study the social phenomenon of viral media.</p>
      </abstract>
      <kwd-group>
        <kwd>personal social profile</kwd>
        <kwd>public data sources</kwd>
        <kwd>social media</kwd>
        <kwd>multimedia</kwd>
        <kwd>unstructured data</kwd>
        <kwd>data analysis</kwd>
        <kwd>Big Data</kwd>
        <kwd>Data Mining</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Processing of heterogeneous open Internet data has a very wide range of
applications across many fields of human activity. Uses of such analysis range from
contextual sampling recommendations to large-scale sociological research aimed at
counteracting crime. Building a social profile is one of these generally applicable
tasks. A social profile [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] is a body of
information that characterizes the social qualities of a person, clearly structured for the
convenience of automated processing and human perception. The problem initially
reduces to creating a mathematical model and, optionally, data structures for storing
personalized information [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. A person is identified on the network through the
determination of entry points - web resource accounts that make the person stand
out from the mass of other users. The collected data are filtered from extraneous
information and divided, by degree of structuring, into a static part (an information card)
and a dynamic part.
      </p>
      <p>
        The next step, after the person has been identified on the Internet and the primary
social profile data have been collected and filtered, is the analysis of the obtained information [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ]. The purpose of this stage
is to form a complete, structured picture of the social personality. Processing the
information card data poses no problem. Dynamic data, however, are
divided by processing method (renewable, non-editable, complex graph data) and by
nature (text, graphics, audio, binary data, etc.). The non-relational distributed database
HBase [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is used to store data that do not require editing. This database runs on HDFS
(the Hadoop Distributed File System) and provides a reliable way to store very large
volumes of heterogeneous data. A simple way to store renewable data is to use MongoDB
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], a very popular open-source document-oriented solution. MongoDB also
does not require a table schema to be described.
      </p>
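As a sketch of how a renewable profile fragment might be upserted into MongoDB with pymongo (the collection layout and field names here are our own illustration, not prescribed by the paper), the document-building step can be kept separate from the database call so the logic is testable without a running server:

```python
# Sketch of storing a renewable (editable) dynamic-profile fragment in
# MongoDB. Splitting out a pure document-building function keeps the
# upsert logic testable without a live server.

def make_upsert(profile_id, fragment):
    """Build filter and update documents for an idempotent upsert:
    match on the profile id, set only the changed fields."""
    return {"_id": profile_id}, {"$set": fragment}

def store_fragment(collection, profile_id, fragment):
    """collection is a pymongo Collection, e.g. client["profiles"]["dynamic"]."""
    filt, update = make_upsert(profile_id, fragment)
    collection.update_one(filt, update, upsert=True)
```

Because MongoDB is schemaless, the fragment dict needs no prior table definition, which matches the renewable-data use case above.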
      <p>This paper raises issues of analyzing a person's social media profile using a variety of means (Big Data,
OCR, image analysis, neural networks, etc.). In particular, it describes the
processing of graphical and audio content that may be associated with a particular
person. It also attempts to automate the analysis of unstructured data:
extracting their statistical and semantic components and
binding them to the social profile data.</p>
    </sec>
    <sec id="sec-2">
      <title>Background</title>
      <p>
        Studies on the analysis of multimedia information are not as common as works on
processing text data. The reasons for this are simple:
• the diversity of multimedia information types;
• the semantic variability of content, depending on the recipient;
• the large volume of data compared with text;
• the presence of distortion;
• the difficulty of machine processing and of structuring the results.
The methodology of speech and visual object recognition is fairly well researched at
present, but the problem of differentiating recognized entities is not yet resolved. Likewise,
determining the meaning and symbolism of a certain media image in a particular
situation remains poorly studied. This theme is dealt with in the article "Unveiling the
multimedia unconscious: implicit cognitive processes and multimedia content
analysis" [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] by Marco Cristani, Alessandro Vinciarelli, Cristina Segalin and Alessandro
Perina. The work "Multimedia mining research – an overview" [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] provides the
basic concepts of multimedia mining and its essential characteristics. Multimedia
mining architectures for structured and unstructured data, research issues in
multimedia mining, data mining models used for multimedia mining, and applications are also
discussed in that paper.
      </p>
      <p>
        The authors of "Triangulating Social Multimedia Content for Event Localization using
Flickr and Twitter" [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] establish the connection between real events and media
content using the example of messages about natural disasters. Yilin Yan, Qiusha Zhu,
Mei-Ling Shyu and Shu-Ching Chen, in their article "A Classifier Ensemble
Framework for Multimedia Big Data Classification" [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], develop a Spark ensemble system
for multimedia big data processing. The paper "Distributed Multimedia Content
Analysis with MapReduce" [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] by Arto Heikkinen, Jouni Sarvanko, Mika Rautiainen
and Mika Ylianttila introduces a scalable solution for distributing content-based video
analysis tasks using the emerging MapReduce programming model. They present a
novel implementation that uses the popular Apache Hadoop MapReduce framework
for both analysis job scheduling and video data distribution.
      </p>
      <p>
        Jinglan Zhang, Kai Huang, Mark Cottman-Fields and others present an overview
of techniques for collecting, storing and analyzing large volumes of acoustic data
efficiently, accurately, and cost-effectively in their work called "Managing and
Analyzing Big Audio Data for Environmental Monitoring" [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>
        The paper "Fusing audio, visual and textual clues for sentiment analysis from
multimodal content" [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] introduces a novel methodology for multimodal sentiment
analysis that harvests sentiments from Web videos by demonstrating
a model that uses audio, visual and textual modalities as sources of information.
The authors used both feature-level and decision-level fusion methods to merge affective
information extracted from multiple modalities.
      </p>
      <p>
        The research work "Mining Melodic Patterns in Large Audio Collections of
Indian Art Music" [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] by Sankalp Gulati, Joan Serrà, Vignesh Ishwar and Xavier Serra
is devoted to extracting melodic patterns from a wide variety of audio recordings, which
can further be used in challenging computational tasks such as automatic raga recognition,
composition identification and music recommendation. The authors of the article "YouTube as a
source of chronic obstructive pulmonary disease patient education" [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
explored the potential of thematic YouTube videos to increase health literacy among
patients, using COPD as an example disease.
      </p>
      <p>The experience of the works discussed above can be useful for handling multimedia social
profile data.</p>
    </sec>
    <sec id="sec-3">
      <title>Preparation to the analysis of social profile media</title>
      <p>
        Before proceeding to multimedia processing, it is necessary to analyze the existing text data.
The information card is filled in during the collection and filtering stage of the source data.
Dynamic text data are processed by a subsystem for analyzing social entities and relationships.
It uses specialized NLP (natural language processing) tools, text sentiment detection and
predictive text mining methods. IBM Content Analytics software [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] was
used for this purpose. However, unstructured text data may contain hidden information
that can be determined only indirectly. Visual analysis tools are most suitable for addressing
these issues; a good choice is the IBM i2 software toolkit [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>The next step is to create a mathematical model for constructing the iBase social
profile database. The model is based on the results of textual data analysis using BigInsights
and Content Analytics. A social graph is built on this basis. It specifies the possible
relationships between the considered person and social profile entities: mentioned persons
(information about persons associated with the considered person in any context);
organizations (information about various facilities related to personalities from the social
profile); events (information about events bringing together a group of people by some
common feature); contact details of the person; and the person's activities, specialization, attainments
and hobbies. A schematic example of such a graph is shown in Figure 1.
The main work on identifying relationships and dependencies is performed in the
IBM i2 TextChart program. Data analysis is performed using the following algorithm:
1. Raw social profile data are input into the TextChart project by importing a CSV file
with the source information.
2. Important information is first extracted from the text; then a search is carried out for
repetitions and synonyms throughout the text using the Find tool.
3. The results are added to the project as social profile entities (the considered person,
activities, etc.).
4. Words expressing relationships between the generated objects are highlighted in the
processed text. The corresponding objects are then selected in the dependency
graph window.
5. The desired connection type is selected by clicking the Insert Link button, and the
expressions are added as connections.
6. Similarly, new objects and attributes are added to existing ones by
navigating between entity search results.
7. When conflicting information is found, the number of occurrences is calculated for each variant, and a
conclusion is drawn about its truth: false information is removed from the social profile, or an
additional clarifying search is performed.</p>
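Step 7 above, counting occurrences of each conflicting variant and keeping the clear majority, can be sketched as follows (the function and field names are illustrative; they are not part of the TextChart tooling):

```python
from collections import Counter, defaultdict

def resolve_conflicts(observations):
    """observations: (attribute, value) pairs harvested from different
    sources. Returns (resolved, needs_verification): attributes with a
    clear majority value, and attributes whose candidates are tied and
    therefore need an additional clarifying search."""
    by_attr = defaultdict(list)
    for attr, value in observations:
        by_attr[attr].append(value)
    resolved, needs_verification = {}, {}
    for attr, values in by_attr.items():
        counts = Counter(values)
        best, best_n = counts.most_common(1)[0]
        runner_up = max((n for v, n in counts.items() if v != best), default=0)
        if best_n > runner_up:
            resolved[attr] = best                    # clear majority: keep it
        else:
            needs_verification[attr] = dict(counts)  # tie: clarifying search
    return resolved, needs_verification
```

In the real workflow the `needs_verification` attributes would trigger the additional specifying search described in step 7 rather than being resolved automatically.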
      <p>The results of the visual analysis are the social graph and the iBase social profile database.
In addition to the hidden information identified directly during the analysis, implicit
relationships can be found through the constructed graph. Analyzing the social profile text in
this way forms the basis for relations with the social objects
obtained in the analysis of the multimedia information.</p>
    </sec>
    <sec id="sec-4">
      <title>Methods overview for analysis of the social profile media</title>
      <p>
        In contrast to textual data, multimedia content is difficult to analyze with traditional
means. It is therefore necessary to resort to big data solutions,
machine learning and neural networks. Examples of such systems include
Google Analytics, MS Azure, Multimedia Mining Marvel and Quaero. The most
common kinds of multimedia information are images and sound. There are a number of
multimedia data analysis strategies [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. A holistic strategy is preferred for handling the unstructured
information of a social profile.
      </p>
      <p>Multimedia content can appear in two different roles as part of social profile
building:
1. Multimedia data viewed or created by the considered person. Such data can
speak to the activities and preferences of the person; this also applies to
authored content. The analysis task consists in comparing the multimedia objects with
existing samples on the Internet and in the social profile (e.g. determining personal
musical preferences across multiple recordings).
2. Content that contains information about the considered person itself. The
purpose of the analysis is to extract the essential information from the media object
directly (e.g. recognizing emotions in a photo, selecting semantic expressions from
audio).</p>
      <p>
        The content-based analysis method is appropriate for processing multimedia information
of the first category [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Its essence is to split the data into constituent parts
and then compare them directly. For the
mentioned example of determining musical preferences, the following algorithm would run:
1. The ID3 tags of the audio records already in the social profile are checked for the
presence of the Artist and Genre fields.
(a) If such tags are found, they are recorded in the preference table.
(b) Otherwise, a comparative analysis is carried out: the audio record is compared with
samples from the Internet (by means of the AudioTag, Shazam or Google Sound
Search utilities). The required tags are added to the preference table after a
match is found.
2. Tag counting is performed after all available audio records have been processed.
3. A conclusion about the predominance of a particular genre or artist in the
sample is output.
      </p>
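The algorithm above might be sketched as follows; the record format and the `lookup_online` fallback (standing in for a service such as AudioTag, Shazam or Google Sound Search) are assumptions made for illustration:

```python
from collections import Counter

def musical_preferences(records, lookup_online):
    """records: dicts with optional 'artist'/'genre' ID3-like fields.
    lookup_online: fallback returning (artist, genre) for an untagged
    record - a stand-in for comparative analysis against Internet samples."""
    artists, genres = Counter(), Counter()
    for rec in records:
        artist, genre = rec.get("artist"), rec.get("genre")
        if not (artist and genre):            # step 1b: comparative analysis
            artist, genre = lookup_online(rec)
        if artist:
            artists[artist] += 1              # step 2: tag counting
        if genre:
            genres[genre] += 1
    # step 3: the predominant artist and genre in the sample
    top = lambda c: c.most_common(1)[0][0] if c else None
    return top(artists), top(genres)
```

The preference table itself is reduced here to the two counters; a fuller implementation would persist every found tag, not only the predominant one.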
      <p>User content may contain various identification labels, both in the service data
(information about the author, the recording device, spatial data) and within the multimedia
entity itself. Examples of such labels are: the author's signature or watermark, the typical
authorial style of the object, mentions of the author in comments, etc. Some of this
information is implicit, so the possibility of automatic processing of user-generated
content is very limited, and the use of visual analysis is desirable. The processing algorithm
differs from the one presented in Section 3 in that abstractions must be
considered in addition to the text information extracted from the multimedia objects.
These abstractions are mined manually and have a fuzzy interpretation.</p>
      <p>Another goal of user-generated content analysis is to determine how
multimedia entities are used. In this case, the analyst should link a particular media object to
the textual definition of the meaning of the event in which the object is mentioned. Big Data
tools search the Internet for all possible situations in which the multimedia entity is used.
Then, statistical analysis and sentiment detection are performed on the accompanying
text. As in the previous case, it is also recommended to apply visual analysis tools.</p>
      <p>
        Now consider the second data category. The content-interpretative method is used for its
analysis [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. In this method, parts of the multimedia data are assigned to concepts in
a formal language, and links are then drawn between them. The approaches to audio
and graphics analysis differ.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Audio information processing</title>
      <p>Speech and intonation recognition are the main focus of audio analysis in the
task of social profile construction. The results of the audio analysis are the vocal
characteristics of the considered person's voice, attached to the social profile, and the recognized text,
linked to the original recording. The voice characteristics of other people may also
be extracted from the analyzed audio file for comparison with other social profiles. It
should be noted that voice characteristics cannot be regarded as
information card elements: they can vary considerably depending on the age and
condition of the person, the environment and the recording quality.</p>
      <p>Currently, there is a sufficient number of freely distributed open-source speech recognition
systems: CMU Sphinx, Julius, HTK, Praat, SHoUt, VoxForge and
others. Many of them are based on hidden Markov models and neural
networks. Voice characteristics include spectral-temporal, cepstral,
amplitude-frequency and nonlinear-dynamics features.</p>
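As a tiny illustration, one of the simplest amplitude-domain features, the zero-crossing rate, can be computed directly from the waveform (the choice of this particular feature is ours, not prescribed by the systems above):

```python
def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ - a simple
    amplitude-domain voice feature often used alongside spectral and
    cepstral characteristics."""
    if len(samples) < 2:
        return 0.0
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    return crossings / (len(samples) - 1)
```

Voiced speech typically has a low zero-crossing rate, unvoiced fricatives a high one, which is why it appears in many speaker-characterization feature sets.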
      <p>
        It should be mentioned that there are seven types of intonation construction in the
Russian language (3 interrogative, 2 exclamatory, 2 narrative). Intonation recognition of
human speech can take place in three steps [
        <xref ref-type="bibr" rid="ref10 ref6">6, 10</xref>
        ]:
• recording human speech and dividing it into complete intonation constructions;
• extracting the voice tone in each part of the recording;
• developing a classifier of intonation constructions.
      </p>
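A drastically simplified caricature of the second and third steps: fit a slope to the extracted fundamental-frequency contour of each construction and classify by the direction of the final tone. The thresholds and two-way labels are illustrative only; a real classifier must distinguish all seven constructions.

```python
def pitch_slope(contour):
    """Least-squares slope of a fundamental-frequency contour (Hz per
    frame) - a crude stand-in for the voice-tone extraction step."""
    n = len(contour)
    mean_x, mean_y = (n - 1) / 2, sum(contour) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(contour))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def classify_intonation(contour, threshold=0.5):
    """Toy two-way classifier: a rising final tone suggests a question,
    a falling one a statement; flatter contours are left undetermined."""
    slope = pitch_slope(contour)
    if slope > threshold:
        return "interrogative"
    if slope < -threshold:
        return "narrative"
    return "undetermined"
```
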
      <p>The accuracy of this algorithm depends on the quality and duration of the recordings,
the characteristics of the speech, etc.</p>
    </sec>
    <sec id="sec-6">
      <title>Graphic data analytical approaches</title>
      <p>Analysis of graphical information includes image and text recognition and comparison
of the results with the creation date of the file under consideration. Approaches to
graphic information recognition fall into three categories: iterative
methods, artificial neural networks, and object search with subsequent study of object
properties. OCR (Optical Character Recognition) technology makes it possible to find
printed and (more rarely) handwritten text. The quality of the original image strongly
influences the final result of the recognition algorithms. As in the case of audio
analysis, the text recognized in the image should be attached to the original object, to
be processed further by text analytics. After image processing is complete, the
findings are collected in a resulting table with the following key
parameters: detected faces, their emotions, the list of labels, environmental data (the
recognized objects), and service image information (size, creation date, name, etc.).</p>
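A row of that resulting table can be represented, for example, by a simple record type (the field names here are our own, not a schema defined by the paper):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImageAnalysisRecord:
    """One row of the resulting table: recognition findings plus service
    information for a single processed image."""
    file_name: str
    size_bytes: int
    created: str                                          # service info
    faces: List[str] = field(default_factory=list)        # detected faces
    emotions: List[str] = field(default_factory=list)     # one per face
    labels: List[str] = field(default_factory=list)       # recognized labels
    environment: List[str] = field(default_factory=list)  # recognized objects
```
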
      <p>Pattern recognition in the task of social profile building is divided into searching for
people in images, determining their sentiments, and identifying environmental
elements. Face detection services are provided by projects such as ASID,
FaceID, FindFace, Vissage Gallery and others. Emotion recognition is a more
complicated procedure, the main problems of which are determining the face position and
color, the quality of illumination, and foreign objects in the image foreground. Nevertheless,
there are ready-made solutions: FaceReader, FaceSecurity, etc. The development of
systems for determining environmental elements is not sufficiently well researched
at present; it is therefore advisable to use a specially trained neural network for this
problem.</p>
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>The degree of maturity of the analytical subsystem for social profile building affects the
informativeness and correctness of the final results. This is especially true for
multimedia processing, because such data are diverse and their processing is difficult to
automate. This paper provides a review of existing approaches to multimedia content
analysis and considers the possibility of applying them to the task of social profile
building. It was found that existing algorithms can be used for
processing audio data, while the analysis of graphic information requires improved
recognition technology.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          Apache HBase™ Reference Guide (
          <year>2016</year>
          ), http://hbase.apache.org/book.html
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <article-title>Hidden communications identification on the basis of the textual analysis with i2. Center of competence for IBM Big Data technology</article-title>
          , Moscow (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Gulati</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Serrà</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ishwar</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Serra</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>Mining Melodic Patterns in Large Audio Collections of Indian Art Music</article-title>
          .
          <source>In: Signal-Image Technology and Internet-Based Systems (SITIS)</source>
          ,
          <source>2014 Tenth International Conference on</source>
          , pp.
          <fpage>264</fpage>
          -
          <lpage>271</lpage>
          . Publisher: IEEE (
          <year>2015</year>
          ). DOI= https://doi.org/10.1109/SITIS.2014.73
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <article-title>MongoDB for GIANT Ideas | MongoDB</article-title>
          (
          <year>2017</year>
          ), https://www.mongodb.com
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <article-title>The analysis of the structured and unstructured data with the Content Analytics. Center of competence for IBM Big Data technology</article-title>
          , Moscow (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Boykov</surname>
            ,
            <given-names>I. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ivanov</surname>
            ,
            <given-names>A. I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalashnikov</surname>
            ,
            <given-names>D. M.:</given-names>
          </string-name>
          <article-title>An algorithm for constructing a statistical description of the discrete-continuum duration meaningful speech speaker sound stream</article-title>
          .
          <source>In: Proceedings of higher educational institutions. Volga region. Technical science, №4</source>
          , pp.
          <fpage>64</fpage>
          -
          <lpage>78</lpage>
          . Penza: PSU Publisher, Penza (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Cristani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vinciarelli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Segalin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perina</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Unveiling the multi-media unconscious: implicit cognitive processes and multimedia content analysis</article-title>
          .
          <source>In: Proceedings of the 21st ACM international conference on Multimedia</source>
          , pp.
          <fpage>213</fpage>
          -
          <lpage>222</lpage>
          . ACM New York, NY, USA (
          <year>2013</year>
          ) DOI= https://doi.org/10.1145/2502081.2502280
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Davydov</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          :
          <article-title>Systemic Sociology: an analysis of multimedia information on the Internet</article-title>
          .
          <source>In: Official site of SI RAS</source>
          (
          <year>2009</year>
          ), http://www.isras.ru/publ.html?id=
          <fpage>1257</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Heikkinen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sarvanko</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rautiainen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ylianttila</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Distributed Multimedia Content Analysis with MapReduce</article-title>
          .
          <source>In: 24th International Symposium on Personal, Indoor and Mobile Radio Communications: Services, Applications and Business Track</source>
          , pp.
          <fpage>3502</fpage>
          -
          <lpage>3506</lpage>
          . Publisher: IEEE (
          <year>2013</year>
          ). https://www.researchgate.net/profile/Mika_Ylianttila/publication/257641284_Distributed_Multimedia_Content_Analysis_with_MapReduce/links/57399f8008ae9ace840d90d7.pdf
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Levin</surname>
            ,
            <given-names>A.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Minin</surname>
            ,
            <given-names>P.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Egorov</surname>
            ,
            <given-names>A.D.</given-names>
          </string-name>
          :
          <article-title>Recognition of intonation in human continuous speech</article-title>
          .
          <source>In: XIX International telecommunication conference of young scientists and students "Youth and science": Theses of reports</source>
          , edited by O. N. Golotyuk, pp.
          <fpage>109</fpage>
          -
          <lpage>110</lpage>
          . Moscow: National research nuclear university "MIFI" (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Panteras</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wise</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Croitoru</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Crooks</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stefanidis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Triangulating Social Multimedia Content for Event Localization using Flickr and Twitter</article-title>
          .
          <source>In: Transactions in GIS</source>
          , Volume
          <volume>19</volume>
          , Issue 5, pp.
          <fpage>694</fpage>
          -
          <lpage>715</lpage>
          (
          <year>2014</year>
          ). DOI= http://onlinelibrary.wiley.com/doi/10.1111/tgis.12122
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Poria</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cambria</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Howard</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>G.-B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hussain</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Fusing audio, visual and textual clues for sentiment analysis from multimodal content</article-title>
          .
          <source>In: Neurocomputing</source>
          , Volume
          <volume>174</volume>
          ,
          Part A
          , pp.
          <fpage>50</fpage>
          -
          <lpage>59</lpage>
          (
          <year>2016</year>
          ). DOI= https://doi.org/10.1016/j.neucom.2015.01.095
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Stellefson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chaney</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ochipa</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chaney</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haider</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hanik</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chavarria</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernhardt</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          :
          <article-title>YouTube as a source of chronic obstructive pulmonary disease patient education</article-title>
          .
          <source>In: Chronic Respiratory Disease Journal</source>
          , Volume
          <volume>11</volume>
          , issue 2, pp.
          <fpage>61</fpage>
          -
          <lpage>71</lpage>
          . (
          <year>2014</year>
          ) DOI= https://doi.org/10.1177/1479972314525058
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Timonin</surname>
            ,
            <given-names>A.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bozhday</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bershadsky</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          :
          <article-title>Research of filtration methods for reference social profile data</article-title>
          .
          <source>In: EGOSE '16 Proceedings of the International Conference on Electronic Governance and Open Society: Challenges in Eurasia</source>
          , pp.
          <fpage>189</fpage>
          -
          <lpage>193</lpage>
          . ACM New York, NY, USA (
          <year>2016</year>
          ) DOI= https://doi.org/10.1145/3014087.3014090
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Timonin</surname>
            ,
            <given-names>A.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bozhday</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bershadsky</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          :
          <article-title>The Process of Personal Identification and Data Gathering Based on Big Data Technologies for Social Profiles</article-title>
          .
          <source>In: Digital Transformation and Global Society. DTGS 2016. Communications in Computer and Information Science</source>
          , vol.
          <volume>674</volume>
          , pp.
          <fpage>576</fpage>
          -
          <lpage>584</lpage>
          . Springer, Cham (
          <year>2016</year>
          ) DOI= https://doi.org/10.1007/978-3-319-49700-6_57
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Timonin</surname>
            ,
            <given-names>A.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bozhday</surname>
            ,
            <given-names>A.S.:</given-names>
          </string-name>
          <article-title>The use of Big Data technologies to build a human social profile on the basis of public data sources</article-title>
          .
          <source>In: Bulletin of Penza State University</source>
          , No.
          <volume>2</volume>
          (
          <issue>10</issue>
          ), pp.
          <fpage>140</fpage>
          -
          <lpage>144</lpage>
          (
          <year>2015</year>
          ) http://elibrary.ru/item.asp?id=24097671
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Vijayarani</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sakila</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Multimedia mining research - an overview</article-title>
          .
          <source>In: International Journal of Computer Graphics &amp; Animation (IJCGA)</source>
          , Vol.
          <volume>5</volume>
          , No.
          <issue>1</issue>
          , pp.
          <fpage>69</fpage>
          -
          <lpage>77</lpage>
          . (
          <year>2015</year>
          ) DOI= https://doi.org/10.5121/ijcga.2015.5105
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Yakovlev</surname>
            ,
            <given-names>V.E.</given-names>
          </string-name>
          :
          <article-title>Macromedia: multimedia information analysis. M-Lang</article-title>
          .
          <source>In: Journal "Young Scientist", No. 4</source>
          . vol.
          <volume>1</volume>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>108</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shyu</surname>
            ,
            <given-names>M.-L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>S.-C.</given-names>
          </string-name>
          :
          <article-title>A Classifier Ensemble Framework for Multimedia Big Data Classification</article-title>
          .
          <source>In: 2016 IEEE 17th International Conference on Information Reuse and Integration (IRI)</source>
          , pp.
          <fpage>615</fpage>
          -
          <lpage>622</lpage>
          . Publisher: IEEE (
          <year>2016</year>
          ). DOI= https://doi.org/10.1109/IRI.2016.88
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cottman-Fields</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Truskinger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roe</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Duan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dong</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Towsey</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wimmer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Managing and Analysing Big Audio Data for Environmental Monitoring</article-title>
          .
          <source>In: 2013 IEEE 16th International Conference on Computational Science and Engineering (CSE)</source>
          , pp.
          <fpage>997</fpage>
          -
          <lpage>1004</lpage>
          . Publisher: IEEE (
          <year>2014</year>
          ). DOI= https://doi.org/10.1109/CSE.2013.146
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>