<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>ChartParser: Automatic Chart Parsing for Print-Impaired</article-title>
      </title-group>
<contrib-group>
        <contrib contrib-type="author">
          <string-name>Anukriti Kumar</string-name>
          <email>anukumar@uw.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tanuja Ganu</string-name>
          <email>tanuja.ganu@microsoft.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Washington</institution>,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Microsoft</institution>
        </aff>
      </contrib-group>
<permissions>
        <copyright-statement>© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License.</copyright-statement>
      </permissions>
      <abstract>
        <p>Infographics are often an integral component of scientific documents for reporting qualitative or quantitative findings, as they make the underlying complex information much simpler to comprehend. However, their interpretation continues to be a challenge for blind, low-vision, and other print-impaired (BLV) individuals. In this paper, we propose ChartParser, a fully automated pipeline that leverages deep learning, OCR, and image processing techniques to extract all figures from a research paper, classify them into chart categories (bar chart, line chart, etc.), and obtain relevant information from them. We focus on bar charts (including horizontal, vertical, stacked horizontal, and stacked vertical charts), which already pose several interesting challenges. Finally, we present the retrieved content in a tabular format that is screen-reader friendly and accessible to BLV users. We present a thorough evaluation of our approach by applying our pipeline to annotated real-world bar charts from research papers.</p>
      </abstract>
      <kwd-group>
        <kwd>Infographics Accessibility</kwd>
        <kwd>Visualization Design</kwd>
        <kwd>Information Retrieval</kwd>
        <kwd>Human-centered computing</kwd>
      </kwd-group>
      <conference>
        <conf-name>The Third AAAI Workshop on Scientific Document Understanding</conf-name>
        <conf-date>Feb</conf-date>
      </conference>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>CEUR
ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <sec id="sec-2-1">
        <title>Given the remarkable progress in analyzing natural scene images observed in recent years, it is generally</title>
        <p>(T. Ganu)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License
and utilize color information for data association. It is
also robust to variations in the figure designs and has
no assumptions related to the position of axes, legend,
etc. And finally, we demonstrate the viability of our
approach by applying our pipeline to a real-world dataset
of research papers from diferent sources.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>This section discusses our proposed pipeline to convert</title>
        <p>
          bar charts from scientific publications into data tables.
2. Related Work The process is divided into three steps: First, we
extract figures from research papers. Second, we detect
Chart understanding in scientific literature has recently bar charts from the extracted figures. And finally, we
gained much traction and there have been several at- extract content from bar charts to obtain the desired data
tempts to classify charts using heuristics and expert rules. tables. These three steps are depicted in Figure 2.
Various machine learning-based algorithms that rely on
handcrafted features such as histogram of oriented
gradients (HOG), scale-invariant feature transform (SIFT), and 3.1. Figure Extraction
others have been proposed in the literature [
          <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
          ]. Sev- To segment all the figures from a research paper, we
eral deep learning algorithms for chart and table image use a pre-trained image segmentation model based on
classification have recently been introduced [
          <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
          ], and Mask R-CNN architecture from Detectron2 model zoo
[
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. to decompose a document into five categories: title, text
        </p>
        <p>There is another line of work on interpreting text com- block, list, figure, and table. The model is based on the
ponents in chart images [7, 8, 9, 10, 11, 12]. Although ResNet50 feature pyramid network (FPN) base config and
semi-automatic software solutions are available for data is trained on the PubLayNet dataset for document layout
extraction from charts, using them requires the user to analysis.
manually define the chart’s coordinate system, provide
metadata about the axes and data or click on the data
points [13, 14, 15]. 3.2. Figure Classification</p>
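        <p>The following minimal sketch illustrates this step, assuming the layoutparser wrapper around the Detectron2 model zoo; the score threshold and file paths are illustrative:</p>
        <preformat>
# Sketch of figure extraction with a PubLayNet-trained Mask R-CNN (ResNet50-FPN).
# Assumes the layoutparser wrapper around Detectron2; paths are illustrative.
import layoutparser as lp
import cv2

model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/mask_rcnn_R_50_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

page = cv2.imread("page.png")                      # one rendered PDF page
layout = model.detect(page)                        # all detected layout blocks
for i, block in enumerate(b for b in layout if b.type == "Figure"):
    x1, y1, x2, y2 = map(int, block.coordinates)   # box of one figure region
    cv2.imwrite("figure_{}.png".format(i), page[y1:y2, x1:x2])
        </preformat>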
        <p>One of the dificulties in accurately parsing bar charts Most of the figures extracted are charts including tree
is dealing with diferent types of bar charts in scientific diagrams, network diagrams, bubble charts, etc. This
literature. Previous work, for example, [16, 17], focused step describes the chart classification model employed to
on developing heuristic models that detect key elements detect bar charts.
such as bars, legends, etc. Similarly, machine learning
has also been used recently to detect chart components 3.2.1. Chart Images Dataset
(e.g., bar or legend) [18]. Also, a deep learning object
detection model is trained in [19] to identify sub-figures We create a chart dataset to train and evaluate our chart
in compound figures. However, neither of these works classification model. We use the Python module google
extracted data values from bar charts. Using synthetic images download to obtain charts from 13 categories
data produced by the matplotlib toolkit, [20] created a (scatter plots, bar charts, line charts, etc.), 1000 images
model to boost the accuracy while parsing bar values. from each category. Then, we manually identify and</p>
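          <p>A minimal sketch of this collection step, assuming the google_images_download package (the keyword list is truncated for illustration):</p>
          <preformat>
# Sketch of chart image collection for the 13-category dataset.
# Assumes the google_images_download package; keywords shown are a subset.
from google_images_download import google_images_download

downloader = google_images_download.googleimagesdownload()
categories = ["bar chart", "line chart", "scatter plot"]   # 13 in the full set
for keyword in categories:
    downloader.download({
        "keywords": keyword,
        "limit": 1000,                 # 1,000 images per category
        "output_directory": "chart_dataset",
        "image_directory": keyword.replace(" ", "_"),
    })
          </preformat>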
        </sec>
      <sec id="sec-3-2">
        <title>We try out diferent models pre-trained on the ImageNet</title>
        <p>dataset and fine-tune them on the figure dataset
created. All the layers but the final convolutional layer
were frozen. The fully-connected layer uses a softmax
function to classify figures into 13 chart categories. Using
Adadelta as the optimizer, we re-train the convolutional
layer and the additional fully-connected layer for 30
iterations. We also add a dropout layer with a rate of 0.3
before the final fully-connected layer to avoid overfitting.
Despite similar accuracy achieved by all the baselines,
we choose MobileNet as it uses far less parameters on
ImageNet than others.</p>
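          <p>A sketch of this classifier in Keras/TensorFlow; exactly which backbone layers remain trainable is simplified here:</p>
          <preformat>
# Sketch of the chart classifier: MobileNet pre-trained on ImageNet,
# dropout 0.3 before a 13-way softmax head, trained with Adadelta.
# (Which backbone layers stay trainable is simplified here.)
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                               # freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),                    # regularization
    tf.keras.layers.Dense(13, activation="softmax")  # 13 chart categories
])
model.compile(optimizer=tf.keras.optimizers.Adadelta(),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)
          </preformat>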
        <sec id="sec-3-2-1">
          <title>3.3. Content Extraction</title>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>Content Extraction from charts is a complex process and in this step, we employ OCR and image processing techniques to extract relevant content from bar charts through various modules.</title>
        <p>3.3.1. Axes Detection</p>
      </sec>
      <sec id="sec-3-4">
        <title>We convert the image into a binary one, and then, obtain</title>
        <p>the max-continuous ones along each row and column. 3.3.4. Axes Label Detection
For this, we scan the matrix vertically and horizontally
to trace the continuity of black pixels within the adjacent We filter the text boxes present below the x-axis ticks and
columns and rows. Finally, the y-axis is the first column again, run a sweeping line from the x-axis ticks to the
where the max-continuous 1s fall in the region [max bottom of the image. While doing so, the line intersecting
- threshold, max + threshold], where a predetermined with the maximum number of text boxes provides us with
threshold (=10) is assumed. Similarly, for the x-axis, the all the bounding boxes for the x-axis label. Similarly, we
last row is chosen based on where the maximum con- also obtain the y-axis label using a vertical sweeping line.
tinuous 1s fall within the range [max - threshold, max +
threshold]. 3.3.5. Legend Detection
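          <p>A sketch of this scan using NumPy and OpenCV; the binarization threshold and function names are illustrative:</p>
          <preformat>
# Sketch of axis detection: binarize, find the longest run of dark pixels in
# every row/column, then pick the first qualifying column (y-axis) and the
# last qualifying row (x-axis). Threshold values are illustrative.
import cv2
import numpy as np

def longest_run(bits):
    """Length of the longest run of consecutive 1s in a 0/1 vector."""
    best = cur = 0
    for b in bits:
        cur = cur + 1 if b else 0
        best = max(best, cur)
    return best

def detect_axes(gray, threshold=10):
    _, binary = cv2.threshold(gray, 128, 1, cv2.THRESH_BINARY_INV)  # 1 = dark
    col_runs = [longest_run(binary[:, c]) for c in range(binary.shape[1])]
    row_runs = [longest_run(binary[r, :]) for r in range(binary.shape[0])]
    # y-axis: first column whose run lies in [max - threshold, max + threshold]
    y_axis = next(c for c, v in enumerate(col_runs)
                  if v >= max(col_runs) - threshold)
    # x-axis: last row whose run lies in the same band around the row maximum
    x_axis = max(r for r, v in enumerate(row_runs)
                 if v >= max(row_runs) - threshold)
    return x_axis, y_axis
          </preformat>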
        </sec>
        <sec id="sec-3-3-2">
          <title>3.3.2. Text Detection</title>
          <p>We apply Azure Cognitive Services (ACS) Optical Character Recognition (OCR) to detect the text within a chart and extract the rectangular bounding boxes of all detected text.</p>
        </sec>
        <sec id="sec-3-3-3">
          <title>3.3.3. Axes Ticks Detection</title>
          <p>We filter all the text boxes below the x-axis and to the right of the y-axis. Further, we run a sweeping line from the x-axis to the bottom of the image; the line that intersects the maximum number of text boxes provides the bounding boxes for all the x-axis ticks. A similar algorithm with a vertical sweeping line is used for detecting the y-axis ticks.</p>
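          <p>A sketch of the horizontal sweeping line for x-axis ticks; the box representation and function names are illustrative:</p>
          <preformat>
# Sketch of the horizontal sweeping line for x-axis ticks: slide a scan row
# from the x-axis to the image bottom and keep the row crossing the most
# OCR text boxes. Boxes are (x1, y1, x2, y2) with y growing downward.
def x_tick_boxes(text_boxes, x_axis_row, img_height):
    candidates = [b for b in text_boxes if b[1] >= x_axis_row]  # below x-axis
    best_hits = []
    for row in range(x_axis_row, img_height):
        hits = [b for b in candidates if b[3] >= row >= b[1]]   # row crosses box
        if len(hits) > len(best_hits):
            best_hits = hits
    return best_hits          # bounding boxes of all x-axis tick labels
          </preformat>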
        </sec>
        <sec id="sec-3-3-4">
          <title>3.3.4. Axes Label Detection</title>
          <p>We filter the text boxes present below the x-axis ticks and again run a sweeping line from the x-axis ticks to the bottom of the image. While doing so, the line intersecting the maximum number of text boxes provides us with the bounding boxes for the x-axis label. Similarly, we obtain the y-axis label using a vertical sweeping line.</p>
        </sec>
        <sec id="sec-3-3-5">
          <title>3.3.5. Legend Detection</title>
          <p>Firstly, we remove the bounding boxes of the axes labels and ticks. Then, we also remove boxes containing only a single “I” character, because these are typically OCR readings of error bars, and finally, we remove text boxes with numeric values placed above bars. This implies that only legend names and color boxes are found in the remaining text boxes. We combine bounding boxes with distances under 10px into a single legend name, because legend names might have multiple words. We then organize these bounding boxes into groups where each member is either horizontally or vertically aligned with at least one other member. Finally, the maximum-length group gives the bounding boxes of all the legends.</p>
        </sec>
        <sec id="sec-3-3-6">
          <title>3.3.6. Legend Color Estimation</title>
          <p>The color boxes are assumed to be on the left or right side of the legend text, depending on the placement of the text bounding box extracted in the previous module. Ideally, all pixels within a color box would have the same value. Since these values can vary for several reasons (such as image compression, scanning, etc.), we start a new group with a random pixel and gradually add pixels whose R, G, and B values differ by no more than 5 from the average of all the pixels already in the group. The color of a legend label is determined by averaging the R, G, and B channels over all the pixels in the largest group. Later, bars matching a specific legend are identified using these colors.</p>
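          <p>A sketch of this grouping; the tolerance of 5 follows the description above, while the pixel-visit order is simplified:</p>
          <preformat>
# Sketch of legend color estimation: grow pixel groups whose R, G, B values
# stay within 5 of the group's running average, then report the mean color
# of the largest group. patch is an HxWx3 RGB crop of one legend color box.
import numpy as np

def legend_color(patch, tol=5):
    pixels = patch.reshape(-1, 3).astype(float)
    groups = []
    for px in pixels:
        for group in groups:
            mean = np.mean(group, axis=0)
            if np.all(tol >= np.abs(px - mean)):     # within 5 of group average
                group.append(px)
                break
        else:
            groups.append([px])                      # start a new group
    largest = max(groups, key=len)
    return tuple(np.mean(largest, axis=0).astype(int))   # (R, G, B)
          </preformat>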
      </sec>
      <sec id="sec-3-5">
        <title>The bounding boxes for each legend are whitened, and</title>
        <p>we eliminate all the white pixels from the original chart
image. The colors decided upon in the previous module
serve as the initial clusters as all of the image’s pixel
values are further divided into clusters. Then, we divide
the given plot into multiple plots, one for each cluster.
In other words, by clustering, we break down a stacked
bar chart into several simpler plots. Then, we obtain
all contours within the plot and subsequently, pick the
closest bounding rectangle for each label. Further, we
require a mapping function to map pixel values to actual
values in the chart. Hence, we use the value-tick ratio ( )
to estimate the height of each bar. To find this ratio, we
divide the average of the actual y-label ticks (  ) by
the average distance between ticks in pixels (Δ ).
 =  
/Δ
(1)
Finally, the bar chart’s y values are defined as y value =
 × H, where H is the bar’s height. After getting all the
relevant information, we create a data table using the
same as shown in Figure 2 (e.).</p>
      </sec>
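          <p>A sketch of Equation (1) and the pixel-to-value mapping; the average of the y-label ticks is interpreted here as the mean value gap between consecutive ticks:</p>
          <preformat>
# Sketch of the value-tick ratio r (Equation 1) and the pixel-to-value
# mapping. "Average of the y-label ticks" is interpreted here as the mean
# value gap between consecutive ticks, paired with the mean pixel gap.
import numpy as np

def value_tick_ratio(tick_values, tick_rows):
    """tick_values: numeric y-tick labels; tick_rows: their pixel rows."""
    avg_value_gap = np.mean(np.abs(np.diff(sorted(tick_values))))
    avg_pixel_gap = np.mean(np.abs(np.diff(sorted(tick_rows))))
    return avg_value_gap / avg_pixel_gap

def bar_value(bar_height_px, r):
    return r * bar_height_px      # y value = r x H

# Example: ticks 0, 10, 20 drawn 50 px apart give r = 0.2, so an 85 px
# bar maps to a value of 17.
          </preformat>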
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>This section focuses on creating a test dataset of bar charts from research papers and evaluating various components of our pipeline on this dataset to demonstrate the viability of our approach.</title>
        <sec id="sec-4-1-1">
          <title>4.2. Chart Classification</title>
          <p>The accuracy of our chart classification model is
calculated using stratified five-fold cross validation. Here, we
use 20% of the chart images dataset, created using google
images download API, as our validation set and the
category wise performance (average accuracy) of our model
is presented in Table 4.2. We observe that for bar charts,
our model achieves an accuracy of 97.8%.</p>
        </sec>
        <sec id="sec-4-1-2">
          <title>4.3. Text Recognition</title>
          <p>We use the Intersection Over Union (IoU) metric to assess
our text detection module. This metric determines the
bounding boxes that most closely match the predicted
and actual ones, calculates the area of the intersecting
region divided by the area of the union region for each
match, and considers the prediction successful if the IoU
measure is higher than the threshold, for example, 0.5.We
achieve an F1-score of 0.935 with an IoU threshold of
0.5 and this demonstrates that our module detects text
bounding boxes within the plot area fairly well.
Category
Bar Chart
Line Chart
Scatter Plot
Pareto chart</p>
          <p>Pie Chart
Venn Diagram</p>
          <p>Box Plot
Network Diagram</p>
          <p>Map
Tree Diagram
Area Graph
Flow Chart
Bubble Chart</p>
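        <p>A sketch of the IoU computation underlying this metric; the box format is illustrative:</p>
        <preformat>
# Sketch of the IoU check used to score text detection: intersection area
# over union area, with a prediction counted as correct when IoU is above
# the threshold (0.5 here). Boxes are (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def is_correct(pred_box, gold_box, threshold=0.5):
    return iou(pred_box, gold_box) > threshold
        </preformat>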
        </sec>
        <sec id="sec-4-1-3">
          <title>4.4. Content Extraction</title>
          <p>The performance of the final content extraction process
depends on the sequential performance of each module,
i.e., axis detection, axis tick values extraction, label
extraction, legend detection, and so on. First, we apply
the OCR and image processing techniques to the test
dataset and extract relevant content. Then, we compare
the outcome with the manually annotated data and
obtain module-wise evaluation metrics presented in Table
1.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Limitations and Future Work</title>
      <sec id="sec-5-1">
        <title>This section mentions the existing limitations of our fully</title>
        <p>automated pipeline and also proposes future works for
improvement.</p>
      <p>Currently, our proposed pipeline cannot successfully parse the plotted data when there is a lot of clutter. We can employ vascular tracking methods like those described in [<xref ref-type="bibr" rid="ref24">24</xref>] to address this.</p>
      <p>Our pipeline also fails to recognize the axes when there is no solid line indicating the y-axis. In this scenario, the y-axis can be identified by recognizing bounding boxes along a vertical line in the bar chart. Similarly, x-axis detection may fail when the x-axis is at the top of the graphic; this case can be handled by employing a bidirectional sweeping line with heuristic rules.</p>
      <p>We also realize that the axes, legend, and data extraction modules are currently modeled and trained independently in our figure analysis approach. Jointly modeling and training them within an end-to-end deep network could be an exciting direction.</p>
      <p>In our future work, we will extend our pipeline to other types of charts, including line charts, scatter plots, etc., which have an L-shaped axis similar to bar charts and follow a similar algorithm for the extraction of chart elements such as axes, labels, ticks, and legends. Instead of simply presenting the raw data in tabular form, we can also generate insights from the data by reasoning over chart images at a high level and finding relationships between the various chart elements.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper, we present our ongoing work on making scientific documents accessible to blind, low-vision, and print-disabled individuals. Our work focuses on the problem of poor accessibility of infographics and charts in research papers. We propose an end-to-end pipeline that extracts all figures from a research paper, classifies them into various chart categories, obtains relevant information from them (specifically bar charts), and presents the retrieved content as accessible data tables. Finally, we apply our pipeline to a test dataset of research papers from two different sources, arXiv and PMC, to demonstrate the viability of our approach. We continue to work towards making charts fully accessible to print-impaired individuals by overcoming the existing limitations of our work.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>arXiv</given-names>
            <surname>Monthly</surname>
          </string-name>
          <string-name>
            <surname>Stats</surname>
          </string-name>
          ,
          <article-title>Global survey monthly submissions</article-title>
          , https://arxiv.org/stats/monthly_ submissions,
          <year>2022</year>
          . November 14,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>W.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <article-title>A system for understanding imaged infographics and its applications</article-title>
          ,
          <source>in: Proceedings of the 2007 ACM symposium on Document engineering</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Savva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chhajta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fei-Fei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Agrawala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heer</surname>
          </string-name>
          ,
          <article-title>ReVision: Automated classification, analysis and redesign of chart images</article-title>
          ,
          <source>in: Proceedings of the 24th annual ACM symposium on User interface software and technology</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>393</fpage>
          -
          <lpage>402</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Song</surname>
          </string-name>
          , J.-i. Hwang,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Seo</surname>
          </string-name>
          , Chartsense:
          <article-title>Interactive data extraction from chart images</article-title>
          ,
          <source>in: Proceedings of the 2017 chi conference on human factors in computing systems</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>6706</fpage>
          -
          <lpage>6717</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Poco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heer</surname>
          </string-name>
          ,
          <article-title>Reverse-engineering visualizations: Recovering visual encodings from chart images</article-title>
          , in: Computer graphics forum, volume
          <volume>36</volume>
          , Wiley Online Library,
          <year>2017</year>
          , pp.
          <fpage>353</fpage>
          -
          <lpage>363</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Choo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Elmqvist</surname>
          </string-name>
          ,
          <article-title>Visualizing for the non-visual: Enabling the visually impaired to use visualization</article-title>
          , in: Computer Graphics Forum, volume
          <volume>38</volume>
          , Wiley Online Library,
          <year>2019</year>
          , pp.
          <fpage>249</fpage>
          -
          <lpage>260</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] S. Demir, S. Carberry, K. F. McCoy, Summarizing information graphics textually, Computational Linguistics 38 (2012) 527–574.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] S. R. Choudhury, S. Wang, C. L. Giles, Scalable algorithms for scholarly figure mining and semantics, in: Proceedings of the International Workshop on Semantic Big Data, 2016, pp. 1–6.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] A. Kembhavi, M. Salvato, E. Kolve, M. Seo, H. Hajishirzi, A. Farhadi, A diagram is worth a dozen images, in: European Conference on Computer Vision, Springer, 2016, pp. 235–251.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] S. Ebrahimi Kahou, V. Michalski, A. Atkinson, A. Kadar, A. Trischler, Y. Bengio, FigureQA: An annotated figure dataset for visual reasoning, arXiv e-prints (2017) arXiv–1710.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Z. Chen, M. Cafarella, E. Adar, DiagramFlyer: A search engine for data-driven diagrams, in: Proceedings of the 24th International Conference on World Wide Web, 2015, pp. 183–186.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] T. Hiippala, M. Alikhani, J. Haverinen, T. Kalliokoski, E. Logacheva, S. Orekhova, A. Tuomainen, M. Stone, J. A. Bateman, AI2D-RST: A multimodal corpus of 1000 primary school science diagrams, Language Resources and Evaluation 55 (2021) 661–688.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] D. Jung, W. Kim, H. Song, J.-i. Hwang, B. Lee, B. Kim, J. Seo, ChartSense: Interactive data extraction from chart images, in: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 6706–6717.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] L. Yang, W. Huang, C. L. Tan, Semi-automatic ground truth generation for chart image recognition, in: International Workshop on Document Analysis Systems, Springer, 2006, pp. 324–335.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] W. R. Shadish, I. C. Brasil, D. A. Illingworth, K. D. White, R. Galindo, E. D. Nagler, D. M. Rindskopf, Using UnGraph to extract data from image files: Verification of reliability and validity, Behavior Research Methods 41 (2009) 177–183.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] N. Yokokura, T. Watanabe, Layout-based approach for extracting constructive elements of bar-charts, in: International Workshop on Graphics Recognition, Springer, 1997, pp. 163–174.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Y. P. Zhou, C. L. Tan, Hough technique for bar charts detection and recognition in document images, in: Proceedings 2000 International Conference on Image Processing (Cat. No. 00CH37101), volume 2, IEEE, 2000, pp. 605–608.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] R. Al-Zaidy, C. Giles, A machine learning approach for semantic structuring of scientific charts in scholarly documents, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017, pp. 4644–4649.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] S. Tsutsui, D. J. Crandall, A data driven approach for compound figure separation using convolutional neural networks, in: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 1, IEEE, 2017, pp. 533–540.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] A. Balaji, T. Ramanathan, V. Sonathi, Chart-Text: A fully automated chart image descriptor, arXiv preprint arXiv:1812.10636 (2018).</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] Y. He, X. Yu, Y. Gan, T. Zhu, S. Xiong, J. Peng, L. Hu, G. Xu, X. Yuan, Bar charts detection and analysis in biomedical literature of PubMed Central, in: AMIA Annual Symposium Proceedings, volume 2017, American Medical Informatics Association, 2017, p. 859.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] Kaggle arXiv Dataset, https://www.kaggle.com/datasets/Cornell-University/arxiv, 2017.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] ibm-aur-nlp, PubLayNet, https://github.com/ibm-aur-nlp/PubLayNet, 2019.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] A. Sironi, V. Lepetit, P. Fua, Multiscale centerline detection by learning a scale-space distance transform, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 2697–2704.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>