<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Paris, France
∗ Corresponding author.
† These authors contributed equally.
Email: t.p.smits@uva.nl (T. Smits); wouter.haverals@princeton.edu (W. Haverals); loren.verreyen@uantwerpen.be
(L. Verreyen); mona.allaert@uantwerpen.be (M. Allaert); mike.kestemont@uantwerpen.be (M. Kestemont)
URL: https://thomassmits.eu/ (T. Smits); https://whaverals.github.io/ (W. Haverals);
https://mikekestemont.github.io/ (M. Kestemont)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Greetings From! Extracting Address Information From 100,000 Historical Picture Postcards</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Thomas Smits</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wouter Haverals</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Loren Verreyen</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mona Allaert</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mike Kestemont</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Amsterdam School of Historical Studies (ASH), University of Amsterdam</institution>
          ,
          <country country="NL">the Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Antwerp Center for Digital Humanities and Literary Criticism (ACDC), University of Antwerp</institution>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Center for Digital Humanities (CDH), Princeton University</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Institute for the Study of Literature in the Low Countries (ISLN), University of Antwerp</institution>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>This paper details the development and validation of computational methods aimed at creating a comprehensive dataset from a vast collection of historical picture postcards. By connecting three distinct locations - the sender's, the recipient's, and the depicted - the medium of the picture postcard has contributed to the formation of extensive spatial networks of information exchange. So far, the analysis of these spatial networks has been hampered by the fact that picture postcards are - literally and figuratively - hard to read. Using traditional methods, transcribing and analyzing a sizeable number of postcards would take a lifetime. To address this challenge, this paper presents a pipeline that leverages Computer Vision, Handwritten Text Recognition, and Large Language Models to extract and disambiguate address information from a collection of 102K historical postcards sent from Belgium, France, Germany, Luxembourg, the Netherlands, and the UK. We report a mAP of 0.94 for the CV model, a character error rate of 7.62% for the HTR model, and a successful extraction of 419 coordinates from an initial sample set of 500 postcards for the LLM. Overall, our pipeline demonstrates a reliable address information extraction rate for a significant proportion of the postcards in our data (with an average distance difference of 36.95 km between the HTR-determined addresses and the Ground Truth text). Deploying our pipeline on a larger scale, we will be able to reconstruct the spatial networks that the medium of the postcard enabled.</p>
      </abstract>
      <kwd-group>
        <kwd>Historical Postcards</kwd>
        <kwd>Spatial Networks</kwd>
        <kwd>Address Information Extraction</kwd>
        <kwd>Computer Vision</kwd>
        <kwd>Handwritten Text Recognition</kwd>
        <kwd>Large Language Models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Since the late nineteenth century, billions of picture postcards have connected people all over
the globe. Combining standardized pictures with room for a written message, the postcard is
often regarded as one of the earliest forms of mass media that allowed personal communication
on a large scale [2, <xref ref-type="bibr" rid="ref7">7, 25</xref>]. Produced in print runs of several ten-thousands, they contributed to
the forming of persistent visual stereotypes. However, these visual commonplaces were always
combined with personal texts: from a short “greetings from”, to longer messages scribbled
on every inch of available white space. Especially since the so-called Divided Back Period
[<xref ref-type="bibr" rid="ref28">29</xref>], where the front of the card was used for a photograph or illustration, and the verso side
for a written message (left), a stamp (top-right), and the address (middle-right), postcards became
a medium for conveying countless personal micro-narratives of lived experience that were
highly structured and multimodal in nature (see Figure 1).
      </p>
      <p>Next to these characteristics, the specific spatiality of the postcard has been described as one
of the medium’s defining features [26, 25]. Typically, postcards are sent from one specific
location (Place A) to a destination (Place B). In addition to this, they normally depict (and textually
relate to) a third location (Place C), as shown in Figure 1. While it may appear that Place A
and Place C are necessarily the same, this is not always the case. Essentially, postcards
create a triadic connection between the real-world locations of the sender (Place A) and recipient
(Place B), and the constructed location portrayed on the front of the card (Place C), which is
often idealized and possibly described in the text. A single postcard links these three places
for a specific duration: the period between its sending and receiving. When observed on a
larger scale, postcards create extensive, complex, and constantly changing spatial networks of
information exchange.</p>
      <p>
        The combination of handwritten messages – often scribbled down in varying styles and
without much attention to legibility – with images renders the postcard a challenging
historical source to decipher and study [8]. As a result, most studies focus on close reading a small
number of postcards. Capitalizing on the large-scale digitization of postcards by online
auction platforms, this paper presents the first step towards a comprehensive distant reading of the
postcard medium. It describes a pipeline that fuses Computer Vision (CV), Handwritten Text
Recognition (HTR), and Large Language Model (LLM) methods to extract and disambiguate
structured address information from a large collection of handwritten postcards. Although
we obtained a dataset of ∼102,000 cards (sent from Belgium, France, the UK, the Netherlands,
Germany, and Luxembourg), the present paper presents a pilot study on a representative subset
of these as a proof-of-concept. We (1) train a CV model (YOLOv8 [13]) to identify the address
regions on the back of the cards, (2) apply a transformer-based HTR model (Transkribus’
Text Titan I [<xref ref-type="bibr" rid="ref27">28</xref>]) to convert the identified regions into machine-readable text, and (3) use an LLM
(GPT-4 [<xref ref-type="bibr" rid="ref24">23</xref>]) to extract, disambiguate, and structure address information from these texts. This
paper presents results for the CV model (0.94 mAP), the HTR model (7.62% character error rate),
and the GPT-4 disambiguation task. For this last task, we propose a simple metric that
adequately captures the average distance between the proposed address and the correct address.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Background: postcards and historical spatial information</title>
      <p>
        Despite their omnipresence as a medium of mass communication in the last 150 years,
scholarly attention for postcards has been described as ‘inconsistent at best’ [30]. Historians have
mostly viewed postcards as pieces of trivial and insignificant popular culture. As a result, most
work on the medium is popularizing in nature and anecdotal in coverage, providing readers
with images of bygone times of a specific place or subject [<xref ref-type="bibr" rid="ref26">27</xref>]. In recent years, the study of
picture postcards was reinvigorated by comparing them to (social) digital media, such as text
messages, email, and micro-blogging services for image sharing, such as Instagram. Scholars
noted that these contemporary ‘new media’ carry many features similar to the postcard and
provoked similar societal responses [27<xref ref-type="bibr" rid="ref21 ref32 ref8">, 22, 8, 33</xref>]. Using this analogy, historians have
studied how postcards popularized and standardized concepts and knowledge. For example, [20,
38] note how cards played an important role in popular visual nationalism. Pointing to the
same underlying process, others have shown that postcards contributed to disseminating and
popularizing colonial and orientalist stereotypes [1, 3<xref ref-type="bibr" rid="ref13 ref2 ref35 ref7">2, 36, 14, 7</xref>].
      </p>
      <p>
        The fact that the postcard is a complex historical source, which is – literally and figuratively
– hard to read, might explain the methodological focus of most studies on close reading. For
example, [<xref ref-type="bibr" rid="ref13">14</xref>] uses a sample of only ten cards to examine the ‘cross-imperial production and
reception of picture postcards from the Dutch East Indies.’ Similarly, [2] uses only six cards to
draw broad conclusions about the ‘voyeuristic economy of the colonial gaze,’ which transforms
‘other cultures into objects for analysis.’ While the meaning of individual postcards might be
complex, Pyne [<xref ref-type="bibr" rid="ref26">27</xref>] points out that, on a larger scale, the meaning of cards is often closely
related: ‘the more one looks through thousands of postcards [...] the more predictable and
samey [they] start to seem’. This paper argues that the specific medial features of the postcard,
especially its highly structured multimodal make-up, make it a perfect candidate for a distant
reading approach. In other words, computational means have the potential to uncover visual,
textual, and multimodal patterns in the vast reservoir of historic postcards – our paper hopes
to function as a prolegomenon to such an endeavour.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Data</title>
      <p>
        Historical postcards are omnipresent in libraries, archives, antique shops, and flea markets.
However, in their original analog form, they cannot be studied at scale. In the realm of
postcard address recognition, notable advancements have been made in deciphering handwritten
and machine-printed texts to enhance mail delivery systems [31<xref ref-type="bibr" rid="ref20">, 21</xref>]. Furthermore, there are
initiatives for analyzing historical postcards through query-by-example word spotting
methods [<xref ref-type="bibr" rid="ref6">6</xref>]. In the last twenty years, several institutions worldwide have started to digitize their
collections of picture postcards [16]. However, for the purposes of this paper, most of them are
unusable as they only contain unsent cards (without address information). Next to archives
and libraries, a large number of postcards have been digitized to be sold via online auction
platforms. We rely on Delcampe [url], a large economic stakeholder in this domain, which
offers millions of postcards for prices as low as €0.05. Using the website’s architecture, we
were able to download a maximum of ca. 10,000 images per country/spatial category.1 To lend
geographic focus to our work, we focus here on the postcards that depict places in Belgium
and its five neighboring countries: France, the UK, the Netherlands, Germany, and
Luxembourg. We collected the maximum number of cards from the general country category and
their capital cities: Brussels, Paris, London, Amsterdam, Berlin, and Luxembourg City. Next to
the front and verso sides of the cards, we extracted a title/description (provided by the seller),
the country/city category (provided by the auction site), and the listed price.
      </p>
      <p>We construct two sample datasets, one to train and validate the CV model and one to
validate the performance of the HTR and LLM models. The first dataset contains 1,220 randomly
sampled (backsides of) postcards. To provide the model with negative and positive examples,
the set contains both cards with and without an address. We manually annotated the address
regions using rectangle bounding boxes. To validate the HTR and LLM models, we use a
second subset containing the addresses of 500 randomly selected postcards. The address regions
have been detected and cropped using the trained CV model. We manually transcribed the
addresses and recorded the street address, city, and country to which the cards were sent. Using
the Google Maps API, we added a geolocation for each card.
1 The site displays a maximum of 10,000 results per search query. While the number changes on a daily basis,
Delcampe offers around 60 million cards for sale.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methods</title>
      <p>Our dataset – which is intrinsically hyper-diverse – presents significant challenges: we have
to deal with addresses spanning the entire globe, ranging from the late 19th to the 20th
century, and which are inscribed in various languages, characterized by an impressive array of
handwriting styles. Furthermore, addresses on these postcards are only semi-structured: some
contain detailed information, including the addressee’s name, street and house number, postal
code, place, and region, while others bear minimal instructions for postal services, such as simply a
name and a village. For instance, Figure 1 features a postcard sent to Sint-Amandsberg, merely
identified as ‘near Ghent’, without the mention of a postal code. In addition, the problem
becomes more complex when studying spatial networks that transcend linguistic borders. The
names of countries, places, and even streets can be spelled differently in different languages.
This is an especially pressing problem for multilingual countries, such as Belgium.</p>
      <p>Most historical geocoding studies utilize (fuzzy) string matching between addresses from a
historical dataset and entries in historical gazetteers or contemporary databases [5, 17].
However, this technique is highly sensitive to the quality and organization of the address strings in
the historical data [5]. Even when historical spatial data is transcribed manually from primary
sources – a task requiring significant effort – the resulting entries often contain textual
inaccuracies. Misunderstandings may also arise from naming conventions adopted for place name
variations, such as ‘The Hague’, ‘Den Haag’, ‘La Haye’, and ‘’s-Gravenhage’, all referring to the
same location.</p>
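      <p>The problem can be made concrete with a short stand-alone snippet (an illustration added here, not part of the study’s pipeline), using Python’s standard-library difflib: variant spellings of the same city score far below typical fuzzy-matching thresholds.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized similarity between two place-name strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Four spellings of the same city ('The Hague') score poorly against each
# other, so naive fuzzy string matching would treat them as different places.
variants = ["The Hague", "Den Haag", "La Haye", "'s-Gravenhage"]
pairs = {(a, b): round(similarity(a, b), 2)
         for a in variants for b in variants if a < b}
```

Historical gazetteers address this by listing variants explicitly; without such a list, pure string similarity fails on exactly the cases above.</p>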
      <p>Given the nature of our dataset, conventional (fuzzy) string-matching techniques are of
limited relevance. We initially extract the addresses from these diverse handwritten images using
Handwritten Text Recognition (HTR). However, while effective, this approach
introduces its own set of problems. Specifically, HTR inevitably introduces textual errors due
to the considerable variations in handwriting and language in our data. Therefore, our dataset’s
suitability for traditional geocoding methods is significantly diminished.</p>
      <p>In response to the challenges outlined above, we devise innovative strategies to correctly
extract machine-readable addresses that allow for effective geocoding. For this project, we
conceived a pipeline consisting of four key stages (see Figure 2), which operate sequentially to
provide a holistic solution to the task of address resolution in historic postcards:
1. Extraction: We use a Computer Vision (CV) model to pinpoint and segment address
regions on the digitized postcards’ back sides.
2. Transcription: These isolated address images are then processed using Handwritten
Text Recognition (HTR), converting the handwritten data into a machine-readable
format.
3. Parsing: After text extraction, we employ a Large Language Model (LLM) to
systematically structure the raw text into organized address formats.
4. Resolution: Finally, we assign geographic coordinates through geocoding and validate
the accuracy of the extracted addresses.</p>
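      <p>The four stages above can be sketched as a single sequential flow. The snippet below is a purely illustrative skeleton: the stage functions, their names, and the record layout are stand-ins introduced for this sketch, not the study’s actual code; the real stages call YOLOv8, Text Titan I, GPT-4, and a geocoding API.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class PostcardRecord:
    """One postcard flowing through the four stages (illustrative layout)."""
    image_path: str
    address_crop: Optional[str] = None            # stage 1: extraction
    raw_text: Optional[str] = None                # stage 2: transcription
    parsed: dict = field(default_factory=dict)    # stage 3: parsing
    coords: Optional[Tuple[float, float]] = None  # stage 4: resolution

# Stand-in stage functions returning canned values for demonstration.
def extract_region(path: str) -> str:
    return path + ".address_crop.jpg"

def transcribe(crop: str) -> str:
    return "Dreef 38\nGouda\nNetherlands"

def parse_address(text: str) -> dict:
    lines = text.splitlines()
    return {"Street and House Number": lines[0],
            "City/Village Name": lines[1],
            "Country": lines[2]}

def geocode(parsed: dict) -> Tuple[float, float]:
    return (52.0115, 4.7105)  # illustrative coordinates

def run_pipeline(path: str) -> PostcardRecord:
    rec = PostcardRecord(image_path=path)
    rec.address_crop = extract_region(rec.image_path)
    rec.raw_text = transcribe(rec.address_crop)
    rec.parsed = parse_address(rec.raw_text)
    rec.coords = geocode(rec.parsed)
    return rec
```

Keeping the intermediate outputs of every stage on one record makes it possible to diagnose, per postcard, which stage introduced an error.</p>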
      <p>In stage 1, we train a state-of-the-art YOLOv8 object detection model on the CV train set.</p>
      <p>[Figure 2: Overview of the pipeline: postcards hosted on Delcampe (front and back sides) → computer vision model → cropped address regions → layout analysis. Example of a cropped address region after layout analysis: “Fern. M Middé / Dreef 38 / Junda / Netherlands”, parsed as Postcard #39 (Name: Fern. M Middé; Street: Dreef 38; City/Village: Zundert; Country: Netherlands).]</p>
      <p>
Instead of manually selecting hyperparameters, we resorted to the default finetune method
and the default parameters (for 30 epochs). To train and validate the model, we use the first
subset containing 1,220 randomly sampled postcards, and apply a basic 80/20 split, meaning
that the model is trained on a sample of 975 postcards and validated on a second sample of
245 postcards. The model is trained to detect one object class: address region. The randomly
selected subset of postcards contains both postcards with and without an address region. The
postcards without an address are either left blank or contain writing but no address. The majority
of the postcards with an address have a divided back, with the address region located on the
right-hand side. A minority of the postcards with an address are undivided, meaning that the back of
the postcard only contains the address, written in the center of the card. The trained YOLOv8
model achieves an mAP50 of 0.94 and an mAP50-95 of 0.72 on our validation set (a predicted
bounding box is considered to be correct if it shows an overlap of at least 50% with the ground truth
bounding box). After training and validating the object detection model, we use the model to
detect and crop the address regions of the postcards used in the downstream tasks.</p>
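      <p>The overlap criterion behind mAP50 can be illustrated as follows (a stand-alone sketch, not the evaluation code used here): a predicted box counts as a hit when its Intersection over Union with the ground-truth box is at least 0.5.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def correct_at_map50(pred, truth):
    """mAP50 counts a predicted box as correct when IoU >= 0.5."""
    return iou(pred, truth) >= 0.5
```

mAP50-95 averages the same computation over IoU thresholds from 0.5 to 0.95, which is why it is the stricter of the two metrics.</p>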
      <p>
        In stage 2, we apply the HTR model Text Titan I to a random sample of 500 address regions.
The address regions are collected by applying the trained YOLOv8 model to our dataset and
randomly sampling 500 address regions detected by the model. Text Titan I, the recently
developed transformer-based ‘super model’ by Transkribus, is one of the most advanced HTR models
available today [2<xref ref-type="bibr" rid="ref8">8</xref>]. Given the significant variation within our data set – diversity in image
resolution, size, handwriting styles, and language – the decision was made to utilize this robust
engine, instead of training our own ad hoc model. Text Titan I is particularly suitable for our
needs because of its exceptional performance across different handwritings and languages.
Using the HTR evaluation package CERberus, we observed a Character Error Rate (CER) of 7.62%
for our subset of 500 automatically transcribed addresses [11].2
      </p>
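      <p>CER is conventionally defined as the edit distance between the reference transcription and the HTR output, divided by the length of the reference. The minimal stdlib illustration below follows that convention; CERberus’s exact normalization (e.g., the exclusion of case, punctuation, and personal names noted in footnote 2) may differ.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: minimum insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance over reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# The misreading discussed later ('rue' -> 'Kne') costs two substitutions.
example_cer = cer("rue Churchill", "Kne Churchill")
```
</p>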
      <p>
        In stage 3, we feed both the automatically transcribed text and the manually corrected text
to GPT-4, a Large Language Model (LLM). Following work in several fields that apply prompt-
engineering techniques to harness the capabilities of LLMs [10<xref ref-type="bibr" rid="ref14 ref34">, 35, 15</xref>], this was done with two
main objectives in mind: (1) to correct potential spelling errors within the addresses, and (2) to
impose structure on the raw text. This was achieved by creating a JSON object for each address
comprising the following fields: ‘Person’s Name’, ‘Street and House Number’, ‘City/Village
Name’, ‘Postal Code’, and ‘Country’ (where available).3
      </p>
      <p>We illustrate the output of our methodology with the example below. The raw text from
the HTR model – bearing a spelling error (‘Gouda’ had been misread as ‘Junda’) – when fed
into GPT-4, results in the LLM structuring this text into a more organized format.4 In this
process, it modifies ‘Junda’ to a somewhat similar, but incorrect, name ‘Zundert’:
{"Transcription level": "HTR",
"Person's Name": "Fern. M Middé",
"Street and House Number": "Dreef 38",
"City/Village Name": "Zundert",
"Postal Code": "",
"Country": "Netherlands"}
In the ‘Ground Truth’ version of this address, the spelling error has been corrected.5 When we
input this corrected text into the LLM, the place name ‘Gouda’ remains unchanged:
{"Transcription level": "Ground Truth",
"Street and House Number": "Dreef 38",
"City/Village Name": "Gouda",
"Country": "Netherlands"}
2 This calculation was performed for the address information only. Case-sensitivity, punctuation, and personal
names were excluded.
3 For this purpose, the following prompt was used: “As a sophisticated AI, you’re presented with several addresses,
each written in multi-line text following the format typically used on postcards. These addresses may comprise
a person’s name, the street name with house number, the name of a city or village, and occasionally, the country
name. However, not all details are consistently provided, and spelling errors may be present. Your task is to identify
and rectify these spelling errors, specifically in the city, village, and country names. Cross-reference these details
with a comprehensive list of geographical locations. For instance, “Douwersgracht Asteldam” should be corrected
to “Brouwersgracht, Amsterdam,” and “Brucfel” to “Brussels.” Finally, translate this cleaned-up information into
a single, uninterrupted structured JSON format. The structure should contain the following fields, if available:
‘Person’s Name’, ‘Street and House Number’, ‘City/Village Name’, ‘Postal Code’, and ‘Country’. The purpose of
this structured format is to facilitate easier data analysis and ensure uniformity in the dataset.”</p>
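      <p>Because the LLM is instructed to return a single JSON object per address, its output can be checked mechanically before geocoding. The helper below is an illustrative sketch (not the code used in this study) that parses one such object and retains only the expected fields:

```python
import json

EXPECTED_FIELDS = {"Transcription level", "Person's Name",
                   "Street and House Number", "City/Village Name",
                   "Postal Code", "Country"}

def parse_llm_address(payload: str) -> dict:
    """Parse one LLM-returned address object, keeping only known fields."""
    obj = json.loads(payload)
    return {k: v for k, v in obj.items() if k in EXPECTED_FIELDS}

example = '''{"Transcription level": "HTR",
"Person's Name": "Fern. M Middé",
"Street and House Number": "Dreef 38",
"City/Village Name": "Zundert",
"Postal Code": "",
"Country": "Netherlands"}'''
record = parse_llm_address(example)
```

A check like this catches malformed JSON or unexpected keys before the record reaches the geocoding stage.</p>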
      <p>While it is true that the HTR-generated text holds an error that impacts the derived
structured information from the LLM, it is noteworthy that other elements of the address data, such
as the country and street name, remain consistent and accurate. Nevertheless, the most
significant challenge in our approach arises when the handwriting is challenging for the HTR to
interpret, resulting in the introduction of numerous incorrect characters. An example of this
issue is when the HTR misreads ‘rue Churchill n 96, Courcelles (Hainaut)’ as ‘Kne Churchill n
96, Camelles (Hamant)’. It is also worth noting that the handwritten addresses can be
challenging even for humans to read. We suspect that such difficult readability might even be inherent
to our dataset. It is possible that many of the postcards that end up on auction websites were
actually left unmailed, due to their hard-to-decipher addresses. Evidence of this lies in the
notes on some of the postcards that are marked as ‘Poste restante’.</p>
      <p>Feeding both of these raw texts into the LLM, it interprets and structures them as follows.
For the manually corrected ground truth text, we get:
{"Transcription level": "Ground Truth",
"Street and House Number": "rue Churchill n 96",
"City/Village Name": "Courcelles (Hainaut)"}</p>
      <p>And for the erroneous HTR-generated text:6
{"Transcription level": "HTR",
"Person's Name": "Medames Dennit et Dubois",
"Street and House Number": "Kne Churchill n 96",
"City/Village Name": "Camelles (Hamant)",
"Postal Code": "",
"Country": "France"}
4 It is worth noting that even though the parsing instructions for both sets of text were identical, variations in
information structuring emerged. For instance, in the output for the HTR text, an empty ‘Postal Code’ field is
introduced, a feature that is notably absent in the output corresponding to the Ground Truth text.
5 For the construction of the Ground Truth text, five human annotators looked at the HTR output and suggested
improvements. They followed specific conventions during the correction: using ‘#’ for unreadable characters,
prefixing lines without address information (e.g., a person’s name) with ‘*’, and prefixing irrelevant lines with ‘@’.
Only text pertaining to the geographical address information was corrected.
6 It is worth highlighting that the LLM, in this scenario, adds a country (France) to the structured output, even
though Courcelles is located in Belgium. This not only underscores the occasional unpredictable nature of LLM
outputs but also their potential for inaccuracies.</p>
      <p>
        The logical sequence of our approach now prompts us to consider the following: how will a
geocoder, tasked with translating this structured address information into tangible real-world
coordinates, respond?7 To tackle this, the fourth and final stage in our pipeline involves
both validating and geocoding these addresses. To accomplish this, we rely on two distinct
APIs: the Address Validation API offered by the Google Maps Platform and OpenStreetMap’s
Nominatim geocoding service [9<xref ref-type="bibr" rid="ref4">, 4, 24</xref>]. These APIs transform the address data into
geographical coordinates, accurately describing their physical locations. Google’s API comes with the
added advantage of handling potential typing errors, misspelled words, and abbreviations of
address elements, efficiently conforming them to both national and international postal
address norms. Nevertheless, it also has a downside: its country coverage is somewhat limited,
currently only extending to 34 countries. In contrast, Nominatim, while providing support for
a substantially broader list of countries and regions, shows little tolerance for spelling errors
[<xref ref-type="bibr" rid="ref11">12</xref>].
      </p>
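      <p>The interplay of the two services can be sketched as a simple fallback. Both callables below are stand-ins for the real API clients, and the ordering (typo-tolerant Google first, broader-coverage Nominatim second) is an assumption made for illustration:

```python
from typing import Callable, Optional, Tuple

Coords = Tuple[float, float]
Geocoder = Callable[[str], Optional[Coords]]

def geocode_with_fallback(address: str,
                          google_geocode: Geocoder,
                          nominatim_geocode: Geocoder) -> Optional[Coords]:
    """Try the first geocoder; fall back to the second when it returns None.
    Returning None from both corresponds to the NONE granularity level."""
    coords = google_geocode(address)
    if coords is not None:
        return coords
    return nominatim_geocode(address)
```
</p>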
      <p>Beyond merely obtaining coordinates, we also derive the level of geocoding granularity. This
measure serves as an indication of the precision, or the level of detail, offered by the geocoding
process. One of Google’s API’s unique features is its ability to differentiate between various
granularity levels for the interpreted addresses. For our data, both for the HTR addresses and
the Ground Truth addresses, we distinguish among the following levels:
• PREMISE: The geocode is accurate up to the level of an individual house or building.
• PREMISE_PROXIMITY: The geocode provides an approximate location at the
building level.
• ROUTE: The geocode offers granularity at the level of a street, road, or highway.
• OTHER: The geocode returned corresponds to a larger area.
• NONE: Both Google’s Address Validation API and Nominatim were unable to suggest
coordinates.</p>
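      <p>Since these levels form an ordered scale from most to least precise, comparing the granularity of the HTR-derived and Ground-Truth geocodes reduces to a ranking; a minimal illustrative helper (ours, not part of the pipeline code):

```python
# Granularity levels ordered from most to least precise;
# NONE means neither API could suggest coordinates.
GRANULARITY_ORDER = ["PREMISE", "PREMISE_PROXIMITY", "ROUTE", "OTHER", "NONE"]

def more_precise(a: str, b: str) -> bool:
    """True if granularity level a is strictly more precise than level b."""
    return GRANULARITY_ORDER.index(a) < GRANULARITY_ORDER.index(b)
```
</p>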
      <p>
        Our process culminates in this final stage, which also involves quantifying the precision
of the suggested coordinates. This step entails determining the average geographic distance,
in kilometers, between two sets of coordinates. Each pair consists of one set extracted from
the correct address text, and another derived from the text processed by the HTR model. The
haversine formula, a mathematical equation frequently employed in navigation, is utilized to
perform these calculations. This formula is particularly suitable for determining distances
between two points on a sphere using their longitudes and latitudes [34, <xref ref-type="bibr" rid="ref18">19</xref>].
7 We also attempted to request the coordinates from the LLM, but the model hallucinated too often for this to be
workable.
[Table 1, Ground Truth address column: Herkingen, Holland · Berkelweg 1, 7218 AS Almen, Holland · Rue Jean l’Aveugle N 7, Arlon, Belgique, Europe · Dreef 38, Gouda, Netherlands · Niška 16/II, Beograd, Jougoslavie · rue du Clair Matin, 71100, St Remy, FRANCE · ######straat 2, Den Haag · rue Churchill n 96, Courcelles, (Hainaut) · 58 Rue Ga##d, St Cl###, #####]
      </p>
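      <p>The haversine computation itself is compact; a standard implementation using a mean Earth radius of 6371 km (a self-contained illustration of the formula referenced above):

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```
</p>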
      <p>
        To provide a clearer illustration of the final result of our methodology, we present Table 1.
This table demonstrates the Ground Truth and HTR-processed structured address information,
alongside their associated coordinates and the granularity at which these coordinates are given.
The right-most column of the table quantifies the distance in kilometers between these two sets
of coordinates, representing the level of precision achieved through the application of our HTR
system and geocoding APIs. This distance is calculated using the haversine formula, which
provides a reliable measurement of the geographical distance between two sets of coordinates
[<xref ref-type="bibr" rid="ref18 ref33">34, 19</xref>]. In doing so, the table also provides insights into the specific discrepancies that arise
during the address decoding process.
      </p>
      <p>A review of the examples provided in Table 1 yields several noteworthy results which shed
light on both the successes and challenges of our pipeline. One major success of the HTR
system and geocoding APIs is demonstrated by their ability to pinpoint accurate geographical
coordinates even when slight alterations are made in the structured address.</p>
      <p>[Table 1, HTR-processed address column: Herkingen, Netherlands · Berkelweg, Almen, 7218 AS, Netherlands · Rue Jean l’Aveugle, Liège, Belgium · Dreef 38, Zundert, Netherlands · Niska 16/II, Belgrade, Yugoslavia · 21.100. St Remy., Saoué, France · Hendszstraat 2, Den Hage, Netherlands · Kne Churchill n 96, Camelles (Hamant), France · 58 Kur Gounod, S Clone, Deuinctaire]</p>
      <p>This is seen in the cases of “Herkingen, Holland” and “Berkelweg 1, 7218 AS Almen, Holland”. The former produced an
identical result, while the latter demonstrated a difference of only 0.04 km. Nonetheless, the
table also testifies to the obstacles our method faces. Major discrepancies arise when
interpreting addresses with multiple possible interpretations or when important elements of the address
are misread by the HTR model. For instance, in the case of “Rue Jean l’Aveugle N 7, Arlon,
Belgique, Europe”, the coordinates deviated significantly, resulting in a 109.51 km difference, as the
LLM that was fed the HTR text misinterpreted the location “Arlon” as “Liège”. A similar issue
occurs with “Dreef 38, Gouda, Netherlands” and “rue du Clair Matin, 71100, St Remy, FRANCE”,
leading to a substantial distance error. Furthermore, unreadable addresses represented another
challenge, as in the case of “58 Rue Ga##d, St Cl###, ######”, which could not be processed and
resulted in non-applicable (N/A) outputs. These cases underline the necessity for high-quality
text recognition to ensure accurate geocoding results.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>We present results for all four steps of our pipeline: the CV, the HTR, structuring the data using
LLMs, and assigning exact coordinates through geocoding.</p>
      <p>1: Identify address regions – Using a small number of training examples, the YOLOv8
model achieves a mAP50 of 0.94, as highlighted in Table 2. As we only train the model to detect
a single category (address regions) this high performance was expected. While the mAP50-95 is
slightly lower (0.72), we feel confident that the model performs well enough to function in our
pipeline. The difference between both metrics can be explained by different standards in how
much the bounding boxes of the model and the ground truth should overlap (Intersection over
Union). For our task, drawing near-perfect bounding boxes is not of the highest importance
and recall should be favoured over precision. After all, most textual information (our focal
point of interest) gravitates toward the middle of the box.</p>
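The gap between the two metrics comes down to the Intersection over Union threshold at which a predicted box counts as a hit: mAP50 accepts any overlap of at least 0.5, while mAP50-95 averages over thresholds up to 0.95. A minimal sketch of how IoU behaves for a slightly offset detection (the box coordinates below are hypothetical, not drawn from our data):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A slightly loose prediction around a (hypothetical) address region:
gt   = (100, 100, 300, 200)   # ground-truth box
pred = (110, 105, 310, 210)   # predicted box, shifted and enlarged
score = iou(gt, pred)          # ~0.79: a hit at the 0.5 threshold (mAP50),
                               # but a miss at stricter thresholds (0.8+)
```

A box like this is counted correct for mAP50 but penalized in mAP50-95, which is exactly why imperfectly drawn boxes depress the latter metric without harming our text-centred use case.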
      <p>2: Automatically transcribe handwritten addresses – Using the general Text Titan I
HTR model from Transkribus, we report a CER of 7.62% on the address information of the 500
postcards in our dataset. We use CERberus to inspect the CER [11]. This CER is encouraging as
a proof of concept, but remains relatively high in comparison to other published work, which is
probably caused by the hyper-diversity in the informal handwriting on the cards. However, it is
important to emphasize that our dataset essentially boasts as many handwriting styles as there
are postcards, a unique challenge that truly puts HTR technology to the test. In this context,
only supermodels like Text Titan I that are trained on massive corpora encompassing a wealth
of variations can handle such a complex task. This highlights the significance of leveraging
top-tier HTR models when dealing with data imbued with inherent richness and variety.</p>
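The CER reported above is, in essence, the character-level edit distance between the transcription and the ground truth, normalized by the length of the ground truth. CERberus [11] is the dedicated inspection tool; the standalone sketch below shows only the underlying metric, with illustrative strings:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein distance / reference length."""
    m, n = len(reference), len(hypothesis)
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] / m if m else 0.0

# Two errors (one substituted digit, one spurious letter) over an
# 18-character reference yield a CER of about 0.11:
print(round(cer("Berkelweg 1, Almen", "Berkelweg 7, Alment"), 4))
```

On address snippets this short, a single misread house number already moves the CER by several percentage points, which is one reason informal handwriting keeps the aggregate figure relatively high.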
      <p>3: Disambiguate address information – Our sample subset originally consisted of 500
postcard images. Unfortunately, five of these were of such low resolution that the Handwritten
Text Recognition (HTR) model could not recognize any text regions.8 Consequently, these five
cards were omitted from the dataset and all subsequent analyses, leaving us with 495 postcards.</p>
      <p>From these 495 postcards, both Ground Truth (GT) and HTR derived text were fed into GPT-4.
In some cases, the Language Model did not structure the extracted text as an address but rather
treated it as irrelevant text regions. Such content includes messages like “Mit freundlichen
Grüssen”, which likely results from too greedy an extraction by the object detection. In these
instances, the LLM did not propose an address. This led to 34 Ground Truth texts and 14 HTR
derived texts marked by the LLM as void of relevant address information.9
8Specifically, the problem arises when text regions need to be recognized by the layout analysis model. For these
particular 5 postcards, the resolution is too low to recognize any text regions at all.</p>
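This structuring step can be sketched as prompting the model for a fixed JSON schema and handling the "no address" case on the parsing side. The prompt wording and schema below are illustrative assumptions, not the exact ones used in our pipeline, and the canned replies stand in for real GPT-4 responses:

```python
import json

# Hypothetical instruction in the spirit of our disambiguation step;
# the exact prompt sent to GPT-4 is not reproduced here.
PROMPT = (
    "Extract the postal address from this transcription. "
    'Reply with JSON: {"street": ..., "city": ..., "country": ...} '
    'or {"address": null} if the text contains no address.'
)

def parse_llm_reply(reply: str):
    """Return a structured address dict, or None for non-address text
    (e.g. a stray greeting picked up by the object detector)."""
    data = json.loads(reply)
    if data.get("address", "") is None:  # model flagged: no address present
        return None
    return {k: data.get(k) for k in ("street", "city", "country")}

# Canned model replies, standing in for real API responses:
print(parse_llm_reply('{"street": "Dreef 38", "city": "Zundert", "country": "Netherlands"}'))
print(parse_llm_reply('{"address": null}'))
```

Keeping the "void of address information" signal explicit in the schema is what allows the 34 GT and 14 HTR non-address texts to be filtered out before geocoding.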
      <p>
        It is worth mentioning that the LLM was not prompted to suggest geographic coordinates for
the processed addresses immediately. This decision was informed by a preliminary test where
the LLM was observed to have a strong propensity to ‘hallucinate’ by suggesting coordinates
that did not match the address information at all. Such hallucinations are a risk at this stage of
the method nonetheless (and a danger that has been highlighted in other research as well, see
e.g. [
        <xref ref-type="bibr" rid="ref17">18, 39</xref>
        ]). An example of this would be when the LLM suggests the country ‘France’ for
French-sounding address text (e.g., because the word “Rue” appears), even when the original
postcard does not provide this information. An example of this can be observed in Table 1,
where the non-existent but French-sounding place name in the HTR text, “Camelles (Hamant)”, is
located in France, while the GT indicates that it is actually a place in the French-speaking
Belgian province of Hainaut.
      </p>
      <p>4: Resolution through Geocoding and Validation – In the final step of our method, we
assigned geographic coordinates through geocoding and validated the accuracy of the extracted
addresses, following the process of address disambiguation. This led us to assess the degree of
divergence between the proposed locations for the GT text and the HTR text.</p>
      <p>To assess the degree of divergence between the proposed locations for the GT text and the
HTR text, two analyses were conducted. Initially, we evaluated the granularity of the suggested
geocodes. Figure 3 presents the count of geocodes returned at each granularity level for both
GT and HTR extracted text provided to the LLM. Our observations show that the “PREMISE”
level has the highest count for GT, while the “OTHER” level tops the count for HTR. This
suggests that the manual correction of the geocoded text refines the precision of the address
information.</p>
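The tally behind Figure 3 amounts to counting, per method, the granularity label the geocoder returns for each address. A toy sketch with made-up labels (the real ones come from the geocoding API responses):

```python
from collections import Counter

# Hypothetical granularity labels, one per geocoded address;
# real values are returned by the geocoding APIs.
gt_levels  = ["PREMISE", "PREMISE", "ROUTE", "OTHER", "PREMISE"]
htr_levels = ["PREMISE", "OTHER", "ROUTE", "OTHER", "OTHER"]

print(Counter(gt_levels).most_common(1))   # PREMISE dominates for GT
print(Counter(htr_levels).most_common(1))  # OTHER dominates for HTR
```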
      <p>Despite these improvements, there were still instances where place names remained
unresolved and did not yield any coordinates from the 495 addresses (70 instances for GT text and
76 for HTR text, as seen in Figure 3). The reasons for this vary. Some texts were not addresses
at all but incorrectly recognized text regions on the postcard - 34 instances were noted for the
GT text. Additionally, two postcards were found to contain a so-called ‘Feldpost’ number, a
special postcode for items sent via military mail, which cannot be converted into coordinates
with our method [3].10 The remaining texts for which no coordinates could be retrieved by
the geocoding APIs were either incomplete, entirely illegible, or simply erroneous addresses.
A significant overlap exists between the GT and the HTR text: out of the 70 unlocalizable GT
addresses, there were 39 instances where the APIs couldn’t suggest a location for the HTR text
either. In summary, out of the original 500 postcards, there were 425 suggested coordinates
for the GT text and 419 for the HTR text. If we further filter this data to consider only those
cases where coordinates were proposed for both the GT and HTR text, we end up with 388
pairs of coordinates. This subset forms the basis for our next stage of analysis: the comparison
of distances between the locations suggested by the GT and the HTR methods.</p>
      <p>9The difference in number primarily stems from the GT text being manually checked by human annotators. If a
text region was deemed to not contain address information, it was excluded.</p>
      <p>[Figure 3: count of geocodes per granularity level – PREMISE, PREMISE_PROXIMITY, ROUTE, OTHER, None – for GT and HTR text.]</p>
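The reduction from 425 and 419 suggested coordinates to 388 comparable pairs is a set intersection over postcard identifiers: a pair is kept only when both methods yielded coordinates. A toy sketch with invented card ids and coordinates:

```python
# Toy stand-in for the filtering step: keep only postcards for which BOTH
# the GT and the HTR pipeline produced coordinates. Ids and coordinates
# are illustrative, not from our dataset.
gt_coords  = {"card01": (51.9, 4.1), "card02": (52.1, 6.3), "card03": (50.6, 5.6)}
htr_coords = {"card01": (51.9, 4.1), "card03": (50.7, 5.5), "card04": (48.8, 2.3)}

paired_ids = sorted(gt_coords.keys() & htr_coords.keys())
pairs = [(gt_coords[i], htr_coords[i]) for i in paired_ids]
print(paired_ids)  # only the cards localized by both methods survive
```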
      <p>In the subsequent phase, we quantified the distances between the sets of coordinates
proposed by the GT and the HTR methods. Out of the 388 comparisons, we obtained an average
distance of around 36.95 km (see Table 3). Intriguingly, the median value, along with the 25th
percentile, registers at 0 km. This indicates that more than half of the time, both
techniques returned the same set of coordinates. However, the standard deviation of 206.54
km reveals a considerable divergence in certain cases. The maximum distance observed was a
sizable 3585.99 km. This extreme result was due to a particularly hard-to-read address. As the
human annotator noted “#eg ###, ####, ####”, it resulted in the coordinates for “Egypt” (the
only legible letters ‘eg’ forced this interpretation by the geocoding API). On the other hand,
the HTR model made an attempt – albeit not very successful – and read “Vig Car, rens Stang”,
which translated into coordinates for the Danish town ‘Vig’, which is, indeed, a long way from
Egypt.</p>
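The distances above are great-circle distances, presumably computed along the lines of the haversine formula [19, 34]. A self-contained sketch, using a mean Earth radius of 6371 km; the example coordinates are rough, illustrative values:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Identical coordinates (the majority of GT/HTR pairs) give distance 0:
print(haversine_km((51.83, 4.07), (51.83, 4.07)))
# A misread place name can land roughly 100 km away
# (approximate coordinates for Liège vs Arlon):
print(round(haversine_km((50.63, 5.57), (49.68, 5.81)), 1))
```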
      <p>
        To better understand these results and go beyond just the numerical summaries, we
ultimately constructed a map that can serve as a powerful tool to visually compare and
understand the variations between the GT and HTR coordinates. Figure 5 shows the result of this
map, which graphically depicts the geographical locations proposed by both GT and HTR
methods, with each method having its own markers. The color of these markers is determined by
the distance between the GT and HTR coordinates, with the colormap ranging from dark blue
(indicating a smaller distance) to orange (indicating a larger distance). This visual approach
allows an intuitive understanding of the geographical spread of the addresses and, more
importantly, of the variance between the GT and HTR suggested coordinates. A closer inspection of
the map highlights areas of low deviation, represented by clusters of blue-colored points. This
visual representation supports our initial finding that more than half the time, the two methods
returned identical coordinates. However, the scattering of intensely colored points across the
map visually emphasizes instances of substantial divergence.
      </p>
      <p>
        10Furthermore, geocoding APIs like that of Google might not always reflect historical geographies or naming
conventions, especially concerning places that had their names changed due to colonial rule and subsequent
decolonization [
        <xref ref-type="bibr" rid="ref36">37</xref>
        ]. Quantifying the extent of this issue poses an additional challenge.
      </p>
      <p>[Figure: distribution of distances between GT and HTR (km).]</p>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>This paper presented the first step towards a computational distant reading of the postcard
medium. In general, we show that our pipeline is effective in extracting spatial information
from digitized picture postcards. There are several ways by which we can improve the different
steps of our pipeline. For example, the CV model might be improved by providing a larger
training set. We achieved notable success with the Text Titan I HTR model when dealing
with the immense diversity in handwriting. This underscores the necessity and the utility of
employing large-scale HTR supermodels for such intricate tasks. Additionally, fine-tuning the
prompts might further boost the performance of the GPT-4-based address disambiguation.</p>
      <p>An important reflection to make is on the financial scalability and reproducibility of our
approach. In our pipeline, we incorporated three commercial products: Transkribus HTR,
OpenAI’s GPT-4 and Google’s Address Validation API. While these offer efficiency and accuracy,
they introduce financial implications and potential challenges for widespread reproducibility.11
To address these challenges, future implementations could explore the use of open-source
models or free alternatives that provide similar capabilities.</p>
      <p>In future work, we plan to use similar models to extract more and different kinds of
information from digitized postcards. For example, as Figure 1 shows, most sent postcards contain
a stamp and a postmark. Combined with the address, these elements can be used to fully
reconstruct the journey of the card: where it was sent from (and to), how long this journey took,
and how much it cost. In a second avenue of research, we can apply an HTR model to extract
the message on the left side of a picture card. Combined with a computational analysis of the
pictures on the front of the cards, a distant reading of these texts might tell us a lot about
the popularization of specific visual concepts, which can be linked to nationalism, colonialism,
Orientalism, and other cultural categories.
11For our dataset of 500 postcards, the total approximate cost was $11.3, composed of charges from Transkribus (5
credits were used, which amounts to approximately $0.8), OpenAI’s GPT-4 (ca. $2 for both prompt and
completion), and Google’s Address Validation API ($8.5 for 500 postcards). Costs mentioned are based on current pricing
as of July 2023. It is noteworthy that these calculations are made without considering potential free tiers or free
credits that some services may offer.</p>
      <p>While picture postcards have often been dismissed as a trivial or insignificant form of
communication, we note that, by approaching them computationally, they offer us the opportunity
to discover more about the personal lives of people in the past. In fact, digitized cards offer
a vast historic reservoir of untapped micro-spatial narratives of lived experiences. As these
personal messages are combined with visual commonplaces, they can also be used to discover
more about the connection between personal experience and cultural phenomena, such as
nationalism and colonialism. If we are willing to make a trade-off between precision and scale,
the presented pipeline offers an interesting instrument for future postcard studies.</p>
      <p>W. Haverals. CERberus: guardian against character errors. Version 1.0. 2023. url:
https://github.com/WHaverals/CERberus.</p>
      <p>A. Wilson. “Image wars: the Edwardian Picture Postcard and the Construction of Irish
Identity in the early 1900s”. In: Media Connections between Britain and Ireland. Routledge,
2022, pp. 30–44.</p>
      <p>M. Zhang, O. Press, W. Merrill, A. Liu, and N. A. Smith. “How language model
hallucinations can snowball”. In: arXiv preprint arXiv:2305.13534 (2023).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Alloula</surname>
          </string-name>
          .
          <article-title>The Colonial Harem</article-title>
          . Vol.
          <volume>21</volume>
          . Manchester University Press,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Burns</surname>
          </string-name>
          . “
          <article-title>Six Postcards from Arabia: A Visual Discourse of Colonial Travels in the Orient”</article-title>
          .
          <source>In: Tourist Studies 4.3</source>
          (
          <issue>2004</issue>
          ), pp.
          <fpage>255</fpage>
          -
          <lpage>275</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Cape</surname>
          </string-name>
          . Youth at War.
          <article-title>Feldpost Letters of a German Boy to His Parents,</article-title>
          <year>1943</year>
          -
          <fpage>1945</fpage>
          . New York: Peter Lang Verlag,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>K.</given-names>
            <surname>Clemens</surname>
          </string-name>
          . “
          <article-title>Geocoding with openstreetmap data”</article-title>
          .
          <source>In: GEOProcessing</source>
          <year>2015</year>
          10 (
          <year>2015</year>
          ). url: https://nominatim.org.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Cura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Dumenieu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Costes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Perret</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Gribaudi</surname>
          </string-name>
          . “
          <article-title>Historical Collaborative Geocoding”</article-title>
          .
          <source>In: ISPRS International Journal of Geo-Information 7.7</source>
          (
          <issue>2018</issue>
          ), p.
          <fpage>262</fpage>
          . doi:
          <volume>10</volume>
          .3390/ijgi7070262.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Fink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rothacker</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Grzeszick</surname>
          </string-name>
          . “
          <article-title>Grouping Historical Postcards Using Queryby-Example Word Spotting”</article-title>
          .
          <source>In: 14th International Conference on Frontiers in Handwriting Recognition</source>
          .
          <year>2014</year>
          , pp.
          <fpage>470</fpage>
          -
          <lpage>475</lpage>
          . doi:
          <volume>10</volume>
          .1109/icfhr.
          <year>2014</year>
          .
          <volume>85</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Geary</surname>
          </string-name>
          and V.-L. Webb, eds. Delivering Views:
          <article-title>Distant Cultures in Early Postcards</article-title>
          . Washington D.C.: Smithsonian Institution Scholarly Press,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Gillen</surname>
          </string-name>
          .
          <article-title>The Edwardian Picture Postcard as a Communications Revolution: A Literacy Studies Perspective</article-title>
          . Taylor &amp; Francis,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Google</surname>
          </string-name>
          .
          <article-title>Address Validation API</article-title>
          . https://developers.google.com/maps/documentation/address-validation/.
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Hatakeyama-Sato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Yamane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Igarashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nabae</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Hayakawa</surname>
          </string-name>
          . Prompt Engineering of GPT-4 for Chemical Research: What Can/Cannot Be Done?
          <year>2023</year>
          . doi: 10.26434/chemrxiv-2023-s1x5p.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hoffmann</surname>
          </string-name>
          . Quo Vadis.
          <year>2020</year>
          . url: https://nominatim.org/2020/09/14/Quo-Vadis.html
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G.</given-names>
            <surname>Jocher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chaurasia</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Qiu</surname>
          </string-name>
          .
          <source>Ultralytics YOLOv8. Version 8.0.0</source>
          .
          <year>2023</year>
          . url: https://github.com/ultralytics/ultralytics
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Junge</surname>
          </string-name>
          . “Familiar Distance:
          <article-title>Picture Postcards from Java from a European Perspective</article-title>
          , ca.
          <year>1880</year>
          -
          <fpage>1930</fpage>
          ”. In: Bijdragen en Mededelingen betreffende de
          <source>Geschiedenis der Nederlanden 134.3</source>
          (
          <issue>2019</issue>
          ), pp.
          <fpage>96</fpage>
          -
          <lpage>121</lpage>
          . doi:
          <volume>10</volume>
          .18352/bmgn-lchr.
          <volume>10743</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kumar</surname>
          </string-name>
          .
          <article-title>Geotechnical Parrot Tales (GPT): Harnessing Large Language Models in Geotechnical Engineering</article-title>
          .
          <year>2023</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2304.02138. arXiv:
          <volume>2304</volume>
          .02138 [physics].
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ladd</surname>
          </string-name>
          . “
          <article-title>Access and Use in the Digital Age: A Case Study of a Digital Postcard Collection”</article-title>
          .
          <source>In: New Review of Academic Librarianship 21.2</source>
          (
          <issue>2015</issue>
          ), pp.
          <fpage>225</fpage>
          -
          <lpage>231</lpage>
          . doi: 10.1080/13614533.2015.1031258.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lan</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Longley</surname>
          </string-name>
          . “
          <article-title>Geo-Referencing and Mapping 1901 Census Addresses for England and Wales”</article-title>
          .
          <source>In: ISPRS International Journal of Geo-Information 8.8</source>
          (
          <issue>2019</issue>
          ), p.
          <fpage>320</fpage>
          . doi:
          <volume>10</volume>
          .3390/ijgi8080320.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bubeck</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Petro</surname>
          </string-name>
          . “Benefits, limits, and
          <article-title>risks of GPT-4 as an AI chatbot for medicine”</article-title>
          .
          <source>In: New England Journal of Medicine 388.13</source>
          (
          <year>2023</year>
          ), pp.
          <fpage>1233</fpage>
          -
          <lpage>1239</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Mapado</surname>
          </string-name>
          . Haversine. https://github.com/mapado/haversine.
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Meikle</surname>
          </string-name>
          .
          <source>Postcard America: Curt Teich and the Imaging of a Nation</source>
          ,
          <year>1931</year>
          -
          <fpage>1950</fpage>
          . University of Texas Press,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>V.</given-names>
            <surname>Mekala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Manimegalai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sasipriya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Selvakani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Gautham</surname>
          </string-name>
          . “
          <article-title>Digital Address Identification From Handwritten Address In Postcards”</article-title>
          .
          <source>In: International Journal of Scientific &amp; Technology Research</source>
          <volume>9</volume>
          (
          <year>2020</year>
          ), pp.
          <fpage>1663</fpage>
          -
          <lpage>1667</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>E.</given-names>
            <surname>Milne</surname>
          </string-name>
          . Letters, Postcards, Email: Technologies of Presence. Routledge,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23] OpenAI. “GPT-4
          <source>Technical Report”. In: ArXiv abs/2303</source>
          .08774 (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [24]
          <article-title>OpenStreetMap contributors</article-title>
          . Planet dump retrieved from https://planet.osm.org.
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>J.-O.</given-names>
            <surname>Östman</surname>
          </string-name>
          . “
          <article-title>The Postcard as Media”</article-title>
          .
          <source>In:Text &amp; Talk 24.3</source>
          (
          <issue>2004</issue>
          ), pp.
          <fpage>423</fpage>
          -
          <lpage>442</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>D.</given-names>
            <surname>Prochaska</surname>
          </string-name>
          and J. Mendelson. Postcards: Ephemeral Histories of Modernity. Pennsylvania State University Press,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>L.</given-names>
            <surname>Pyne</surname>
          </string-name>
          .
          <article-title>Postcards: The Rise and Fall of the World's First Social Network</article-title>
          .
          <source>Reaktion Books</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [28]
          <article-title>Read-coop</article-title>
          .
          <source>Introducing Transkribus Super Models</source>
          .
          <year>2023</year>
          . url: https://readcoop.eu/introducing-transkribus-super-models-get-access-to-the-text-titan-i/
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>B.</given-names>
            <surname>Rogan</surname>
          </string-name>
          . “
          <article-title>An Entangled Object: The Picture Postcard as Souvenir and Collectible, Exchange and Ritual Communication”</article-title>
          .
          <source>In: Cultural Analysis 4.1</source>
          (
          <issue>2005</issue>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>M.</given-names>
            <surname>Simpson</surname>
          </string-name>
          . “
          <article-title>Postcard Culture in America: The Tra昀케c in Tra昀케c”</article-title>
          .
          <source>In: The Oxford History of Popular Print Culture</source>
          . Ed. by G. Kelly,
          <string-name>
            <given-names>J.</given-names>
            <surname>Raymond</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Bold</surname>
          </string-name>
          . Oxford: Oxford University Press,
          <year>2011</year>
          , pp.
          <fpage>169</fpage>
          -
          <lpage>191</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>S. N.</given-names>
            <surname>Srihari</surname>
          </string-name>
          . “
          <article-title>Recognition of handwritten and machine-printed text for postal address interpretation”</article-title>
          .
          <source>In:Pattern Recognition Letters 14.4</source>
          (
          <issue>1993</issue>
          ), pp.
          <fpage>291</fpage>
          -
          <lpage>302</lpage>
          . doi: 10.1016/0167-8655(93)90095-u.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>E. R.</given-names>
            <surname>Stevenson</surname>
          </string-name>
          . “
          <article-title>Home, Sweet Home: Women and the “Other Space” of Domesticity in Colonial Indian Postcards, ca. 1880-1920”</article-title>
          .
          <source>In: Visual Anthropology 26.4</source>
          (
          <year>2013</year>
          ), pp.
          <fpage>298</fpage>
          -
          <lpage>327</lpage>
          . doi: 10.1080/08949468.2013.804383.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>G.</given-names>
            <surname>Teulié</surname>
          </string-name>
          . “
          <article-title>Orientalism and the British Picture Postcard Industry: Popularizing the Empire in Victorian and Edwardian Homes”</article-title>
          .
          <source>In: Cahiers Victoriens et Édouardiens 89 Spring</source>
          (
          <year>2019</year>
          ). doi: 10.4000/cve.5178.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>G. Van</given-names>
            <surname>Brummelen</surname>
          </string-name>
          .
          <article-title>Heavenly mathematics: The forgotten art of spherical trigonometry</article-title>
          . Princeton University Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Liu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          .
          <source>Prompt Engineering for Healthcare: Methodologies and Applications</source>
          .
          <year>2023</year>
          . doi: 10.48550/arXiv.2304.14670. arXiv: 2304.14670 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>R.</given-names>
            <surname>Wehbe</surname>
          </string-name>
          . “
          <article-title>Seeing Beirut through Colonial Postcards: A Charged Reality”</article-title>
          . In: Department of Architecture and Design, Beirut, Lebanon (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>B.</given-names>
            <surname>Williamson</surname>
          </string-name>
          . “
          <article-title>Historical geographies of place naming: Colonial practices and beyond”</article-title>
          .
          <source>In: Geography Compass 17.5</source>
          (
          <year>2023</year>
          ),
          e12687
          .
          doi: 10.1111/gec3.12687.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>