
New frontiers in Digital Libraries: The trajectory of Digital
Humanities through a computational lens
                         Eleonora Bernasconi1,* , Stefano Ferilli1
                         1
                             University of Bari Aldo Moro, Department of Computer Science, Via Orabona 4, Bari, Italy


                                        Abstract
This study examines the influence of technological evolution on Digital Humanities, starting from Moore's Law,
formulated in 1965, which predicted the doubling of transistor density on a chip approximately every two years,
thus exponentially increasing computing power at decreasing cost. We review how this prediction has been
borne out in the evolution of microprocessors and in their declining costs. Subsequently, the paper explores Ray
                                        Kurzweil’s theories, particularly his projection towards a “technological singularity” where Artificial Intelligence
                                        will match human intelligence, highlighting how these prospects have stimulated significant technological
                                        developments. Through an analysis of key moments, the work maps how such advancements have impacted
                                        Digital Humanities, investigating the evolution of computing capabilities and the growing role of Artificial
                                        Intelligence. The goal is to understand how Digital Humanities has responded and can respond to technological
                                        stimuli, adapting research methods and facilitating interdisciplinary integration. We conclude by reflecting on
                                        how Digital Humanities can actively shape its future in a context of rapid technological changes, proposing
                                        strategies for greater synergy between technology and the humanities.

                                        Keywords
                                        Digital Humanities, Technological Evolution, Artificial Intelligence, Moore’s Law, Technological Singularity,
                                        Interdisciplinary Integration, Future Technological Trends.




                         1. Introduction
                         Digital Humanities (DH) represents an interdisciplinary field situated at the intersection of technological
                         innovation and humanistic research [1]. Over the past decades, DH has increasingly leveraged digital
                         technologies to transform how researchers analyze, interpret, and disseminate knowledge [2]. At the
                         core of this transformation is the influence of key technological principles that have driven changes not
                         only in computing hardware but also in conceptual approaches to Artificial Intelligence (AI) [3] and
                         interdisciplinary integration [4].
                            This paper begins by exploring the evolution of microprocessors through the framework of Moore’s
                         Law (Section 2.1), which predicted exponential growth in computing power alongside decreasing costs.
                         The analysis demonstrates how these trends have significantly impacted DH, expanding the field’s
                         capacity to handle and process large datasets. Following this, the paper examines Ray Kurzweil’s theory
                         of technological singularity (Section 2.2), which anticipates a future where AI will match and surpass
                         human intelligence. The implications of this theory are considered in the context of both opportunities
                         and ethical challenges for DH, particularly in the application of advanced AI models.
The paper then provides an overview of the historical intersections between AI and DH (Section
3), tracing key milestones in the development of both fields. By analyzing these pivotal moments, we
                         highlight how AI has enhanced research methods in DH, enabling more complex data analysis and new

3rd Workshop on Artificial Intelligence for Cultural Heritage (AI4CH 2024, https://ai4ch.di.unito.it/), co-located with the 23rd
International Conference of the Italian Association for Artificial Intelligence (AIxIA 2024). 26-28 November 2024, Bolzano, Italy
                         *
                           Corresponding author.
Email: eleonora.bernasconi@uniba.it (E. Bernasconi); stefano.ferilli@uniba.it (S. Ferilli)
ORCID: 0000-0003-3142-3084 (E. Bernasconi); 0000-0003-1118-0601 (S. Ferilli)
                                       © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
forms of interpretation in areas such as text analysis, pattern recognition, and digital reconstruction of
cultural artifacts.
   In addition, this work discusses the impact of AI-driven tools (Section 4) that have revolutionized
humanistic studies. These tools, which utilize natural language processing, machine learning, and image
recognition, are reshaping how scholars engage with digital archives, historical texts, and cultural data.
   By mapping the trajectory of technological advancements and their influence on DH, this paper
aims to not only showcase the progress that has been made but also reflect on how DH can continue
to evolve. As the field responds to rapid technological changes, it must foster stronger synergies
between technology and the humanities, ensuring that future innovations enrich research practices and
methodologies.


2. Technological evolution
The narrative of technological evolution in DH is often grounded in two influential theories: Moore's
Law [5] and the Kurzweil Curve [6]. Both have provided critical insights into the accelerating pace of
technological progress and its implications for the humanities.

2.1. Moore’s Law and microprocessor trends
Formulated in 1965 by Gordon Moore, co-founder of Intel, this law postulates that the number of
transistors on a microchip doubles approximately every two years, while the cost per unit of computing
falls [5]. Moore's observation anticipated not only the exponential increase in computing power
but also the decreasing cost of technologies, facilitating widespread access to advanced
computational tools [7]. This principle has underpinned the dramatic enhancements in processing
speeds, storage capacity, and the efficiency of new algorithms, which in turn have had a profound
impact on the capabilities available to scholars in the humanities [8].
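As a back-of-the-envelope illustration of the law's arithmetic (not drawn from the paper; the Intel 4004 figure of roughly 2,300 transistors in 1971 is a well-known reference point), a doubling every two years can be sketched as:

```python
def transistors(n0, year0, year):
    """Project transistor count under Moore's Law:
    a doubling approximately every two years."""
    return n0 * 2 ** ((year - year0) / 2)

# Intel 4004 (1971): about 2,300 transistors. Projecting 50 years forward
# lands on the order of tens of billions, roughly matching modern chips.
projected = transistors(2_300, 1971, 2021)
print(f"{projected:.2e}")
```

The same compounding, applied to cost per transistor instead of density, illustrates why advanced computation became broadly affordable.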

2.2. The Kurzweil Curve and the concept of singularity
Parallel to the implications of Moore's Law, the provocative theory of technological singularity, popu-
larised by the futurist Ray Kurzweil, posits that the exponential growth of technologies will eventually
lead AI to equal and then surpass human intellectual capacity. Kurzweil's vision of the singularity [6] not
only forecasts a future where human and machine intelligence merge, but also suggests a paradigm in
which AI growth could autonomously accelerate beyond human control. This paradigm elucidates both
compelling opportunities and significant ethical dilemmas for the field of DH, extending the boundaries
of what can be actualized through computational methodologies.
   Expanding on these technological foundations, this paper explores Kurzweil's singularity theory,
which asserts that AI will reach human-level intelligence within a specific timeframe (Kurzweil has
predicted human-level machine intelligence around 2029 and the singularity by 2045). We present a critical
analysis of this prediction, citing a recent study published in the Proceedings of the National Academy
of Sciences in 2024, which examines the behavior of GPT-3 [9] and GPT-4 [10] through standardized
personality tests and a series of behavioral games simulating real-world economic and moral decisions,
such as financial investment management [11]. The study details the AI’s performance in six distinct
games designed to reveal traits like spite, trust, risk propensity, altruism, fairness, cooperation, and
strategic reasoning. ChatGPT-4’s responses, often indistinguishable from or superior to human behavior,
suggest its potential to surpass the Turing test in specific contexts.
   However, in contrast, ChatGPT-3's responses were frequently perceived as non-human, highlighting
behavioral differences between the two versions. In effect, the study amounted to an extended
Turing-style test that ChatGPT-4 passed, displaying a propensity for generosity and fairness exceeding
that of the average human player.
   Researchers suggest that future studies should test more AI models across various behavioral tests to
compare their personalities and traits. This emerging field, termed the “behavioral science of artificial
intelligence”, encourages interdisciplinary collaboration to investigate AI behavior, particularly its
relationship with humans and societal impact. The findings suggest that knowing AI such as ChatGPT
can exhibit more altruistic and cooperative behavior than the average human may increase our trust in
using it for negotiation, dispute resolution, or assistance. This research aids in understanding when and
how we can rely on AI, potentially shaping future interactions between humans and AI. In response
to these technological advancements, this paper seeks to delineate the evolution of DH through the
analytical lens of key milestones in the development of AI and computing power. Each milestone
reflects a leap forward in technology that has enabled DH scholars to engage with complex datasets
and perform nuanced interpretations of cultural artifacts at unprecedented scales.
   As we trace the milestones in technological evolution, from Moore’s Law to Kurzweil’s singularity,
this paper seeks to map the transformative impact of these advancements on DH. Each technological
leap provides scholars with the ability to engage more deeply with complex cultural artifacts and
datasets, enabling richer, more nuanced interpretations. In the following sections, we will explore these
milestones in detail, examining how they have shaped current DH practices and will continue to drive
the field towards a more technologically integrated future.


3. Historical intersections of Artificial Intelligence and Digital
   Humanities
Understanding the development of DH requires a historical perspective on its intersections with AI.
This section traces key moments in both fields, from the early conceptualization of AI at the Dartmouth
Conference to the emergence of DH projects like the Index Thomisticus. Through an analysis of these
milestones, we explore how the convergence of computing and the humanities has transformed research
methods and scholarly practices.
   Figure 1 presents a vertical timeline that juxtaposes the historical milestones in the development
of DH and AI against the backdrop of two key technological growth predictions: Moore’s Law, indicating
the exponential growth of transistor count, and Kurzweil’s Prediction of overall technological growth.
The x-axis represents computational power on a logarithmic scale, highlighting the rapid increase in
computational capabilities over time. The y-axis denotes the years from 1950 to 2030.
   The timeline is color-coded to demarcate significant eras in computing history, such as the Early
Computing Era, the advent of Personal Computing, the Internet Age, the convergence of AI & Big Data,
the period of AI Everyday Integration, and the projected phase leading Towards Singularity.
   Key milestones in the AI domain are marked by orange lines, while those in DH are indicated by
blue lines. Milestones include seminal events such as the Dartmouth Conference [12] marking the
conceptual birth of AI in 1956 and the commencement of the Index Thomisticus project in 1949 [13],
considered the beginning of DH. Notable AI achievements, such as IBM’s Deep Blue’s victory [14] over
the world chess champion in 1997 and the introduction of GPT-3 in 2021 [9], are plotted alongside DH
advancements like the inauguration of the Digital Public Library of America in 2013 [15].
   Moore’s Law and Kurzweil’s Prediction are represented by lines with markers, where Moore’s Law
is displayed with a solid blue line and blue circular markers, and Kurzweil’s Prediction is depicted by
an orange dashed line with cross markers. The illustration clearly demonstrates the acceleration of
technological progress, with Moore’s Law closely tracking the actual milestones in AI and DH, and
Kurzweil’s Prediction suggesting a more generalized technological advancement.
   The vertical orientation of the timeline provides a direct visual correlation between the passage of
time and the escalation of computational power. Annotations for each milestone are placed adjacent to
the year they occurred, with text labels providing a succinct description of each event. The period from
2010 onwards is highlighted in a shaded pink region, emphasizing a phase of rapid progress and the
denser clustering of milestones, reflecting the increasing impact and integration of AI technologies in
various domains.
   Overall, the figure effectively conveys the symbiotic relationship between computational advances
and milestone achievements in AI and DH, illustrating a clear trend towards more sophisticated and
integrated technological applications as time progresses.
3.1. Milestones in Artificial Intelligence
The progression of AI has been characterized by numerous critical junctures, each facilitating signifi-
cant advancements in machine learning, inferential reasoning, and decision-making algorithms. This
section elucidates seminal milestones in the evolution of AI, tracing the development from primitive
symbolic reasoning systems to sophisticated contemporary neural networks. These technological ad-
vancements have profoundly impacted the methodologies utilized by DH scholars in their engagement
with computational tools.
   1956 - Dartmouth Conference: Marked as the official birthplace of AI, the Dartmouth Conference
brought together key thinkers like McCarthy, Minsky, and Shannon. They discussed the hypothesis
that “every aspect of learning or any other feature of intelligence can, in principle, be so precisely
described that a machine can be made to simulate it” [12]. This conceptualization of AI proposed the
possibility that machines could not only compute but also think, reason, and adapt like humans. While
the ambitions discussed at the conference were far-reaching, and progress was slower than expected
due to technological and conceptual limitations at the time, the event established the foundational goals
and broad expectations for the field of AI [16]. The Dartmouth Conference not only served as the
birthplace of AI; it was also the cradle for ideas that would evolve over the following decades, eventually
giving rise to key areas such as expert systems, neural networks, and modern machine learning.
   1960 - First AI Program: The development of The Logic Theorist in the late 1950s, and its public
introduction in 1960 [17], is considered a landmark event in the history of AI. Created by Allen Newell,
Herbert A. Simon, and Cliff Shaw, this program was designed to emulate human problem-solving
abilities, specifically in the realm of symbolic reasoning. The Logic Theorist was a groundbreaking
step in demonstrating that a machine could perform tasks that required human-like thought processes,
which was a core idea emerging from the 1956 Dartmouth Conference on AI. At its core, The Logic
Theorist was an automated reasoning system. Its primary objective was to replicate the deductive
reasoning process used in mathematics. The program’s creators drew inspiration from their work
in psychology and operations research, particularly from their interest in human problem-solving
techniques. Newell, Simon, and Shaw believed that human cognitive processes could be formalized and
modeled by machines, and The Logic Theorist was the first major experiment to test this hypothesis.
The program was revolutionary for its ability to prove mathematical theorems. Specifically, it could
generate proofs of theorems from Whitehead and Russell’s famous work, Principia Mathematica. Out of
52 theorems in this highly formal and abstract work, The Logic Theorist successfully proved 38. Even
more astonishing, the program found a more elegant proof for one of the theorems than the original
authors had proposed, showcasing the potential of AI systems to not just replicate but enhance human
intellectual processes [18]. The Logic Theorist’s success marked the beginning of an era where machines
were seen as capable of more than just simple calculations; they could engage in abstract reasoning
and solve complex problems traditionally seen as requiring human intelligence. The work of Newell,
Simon, and Shaw contributed to the birth of the cognitive revolution, an interdisciplinary movement
that involved computer science, psychology, and philosophy. In addition, The Logic Theorist paved the
way for future AI programs like the General Problem Solver (GPS) [19], also developed by Newell and
Simon, which attempted to generalize the methods used in The Logic Theorist to solve a broader range
of problems.
   1997 - Deep Blue Defeats Kasparov: In May 1997, IBM’s Deep Blue made history by defeating
the reigning world chess champion, Garry Kasparov, in a six-game match. This victory was not just a
remarkable feat in the realm of chess but a watershed moment in the development of AI, as it showcased
the capability of machines to rival and surpass human intellect in highly complex and strategic tasks.
It was a landmark in AI research, demonstrating that computers could excel in specific domains that
required deep strategic thinking and decision-making [14].
   2011 - Watson Wins Jeopardy: In February 2011, IBM’s Watson marked another historic milestone
for AI by defeating Ken Jennings and Brad Rutter, two of the most successful human contestants in the
history of the Jeopardy! game show. Watson’s victory was not just a triumph of AI in a game setting,
but a clear demonstration of the evolving power of natural language processing (NLP) and machine
learning technologies. It showed that AI could handle not only structured tasks like chess but also the
nuanced, context-driven complexity of human language [20].
   2016 - AlphaGo Defeats Lee Sedol: The victory of AlphaGo, developed by Google DeepMind, over Lee Sedol, one of the
top Go players in the world, was a defining moment in AI history. The game of Go, with its 19x19
grid and more possible positions than atoms in the observable universe, requires a combination of
strategic depth, intuition, and abstract thinking—challenges that had made it a final frontier for AI.
AlphaGo’s success was built on its innovative use of deep learning and reinforcement learning. It
combined two deep neural networks: a policy network to guide the AI toward likely moves and a value
network to assess the board’s strategic position. These networks, trained through millions of games
of self-play, allowed AlphaGo to not only master the game but also outperform human intuition [21].
This triumph highlighted AI’s ability to solve complex, unstructured problems and laid the foundation
for future AI systems in areas requiring long-term planning and strategic thinking, such as medicine,
finance, and logistics. It also raised philosophical questions about the nature of intelligence, as AlphaGo
demonstrated the potential for AI to innovate and create solutions previously unseen in human play
[22].
   2014 - Generative Adversarial Networks (GANs): The introduction of Generative Adversarial
Networks (GANs) by Ian Goodfellow in 2014 [23] was another transformative moment in AI, particularly
in the realm of generative models. GANs consist of two neural networks: a generator that creates new
data and a discriminator that evaluates whether the data is real or generated. Through this adversarial
process, GANs learn to produce highly realistic outputs. GANs revolutionized image generation,
allowing AI to create stunningly lifelike pictures, videos, and even artworks. This innovation expanded
the possibilities for creative AI applications in fields such as Art and Design (AI-generated artworks
and visual effects are increasingly integrated into creative industries); Medical Imaging (GANs help
generate synthetic medical data to improve diagnostic algorithms); Deepfakes (While controversial,
GANs also enabled the creation of deepfake technology, raising important ethical discussions about the
use of AI in media). The impact of GANs [24] is widespread, driving advances in creative industries,
gaming, virtual reality, and synthetic data generation, highlighting AI’s role in augmenting human
creativity.
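The adversarial setup described above can be sketched on a toy one-dimensional problem. The following is a deliberately minimal illustration, not Goodfellow's formulation or any production implementation: a linear generator and a logistic discriminator trained against each other with the non-saturating generator loss, all parameters and hyperparameters invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy setup: real data ~ N(4, 0.5); generator G(z) = a*z + b;
# discriminator D(x) = sigmoid(w*x + c). All scalars, for clarity.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 128

for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    grad_x = (1 - d_fake) * w          # d log D / dx at the fake samples
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

# The generator's offset b should drift toward the data mean (4),
# as the adversarial pressure pushes fakes into the "real" region.
print(round(b, 2))
```

The adversarial dynamic is visible even at this scale: the discriminator first learns to separate real from fake samples, and the generator then follows the discriminator's gradient until the two distributions become hard to tell apart.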
   2018 - 2024: In recent years, AI has moved from experimental systems to technologies that shape
everyday life [25], achieving critical milestones that underscore its real-world impact [26].
   In 2018, AI systems like Google’s BERT (Bidirectional Encoder Representations from Transformers)
[27] began surpassing human benchmarks in reading comprehension tasks. These models could
understand and respond to questions about a text with greater accuracy than humans in some cases [28].
This marked a significant leap in NLP and contributed to improvements in:

    • Search engines and virtual assistants, like Google Assistant and Alexa, which could better under-
      stand and respond to complex queries.
    • Document analysis in fields like law and finance, where AI helps parse through vast amounts of
      text to extract relevant information.

This breakthrough in NLP has fundamentally improved human-AI interaction, enabling more intuitive
and effective communication with machines.

  During the COVID-19 pandemic, AI played a critical role in accelerating the vaccine development
process [29]. Using AI-driven algorithms to analyze vast amounts of biomedical data, scientists were
able to identify potential vaccine candidates more rapidly; optimize vaccine designs using predictive
models that evaluated the immune responses of different formulations; accelerate clinical trials by using
AI to monitor and analyze trial data in real-time. Companies like Moderna used AI to design their
mRNA vaccines, reducing the development timeline from years to mere months [30]. AI’s contributions
during the pandemic demonstrated its potential to solve global health challenges and highlighted its
value in fields such as drug discovery and epidemiology.
  From 2018 to 2024, artificial intelligence made remarkable progress in the field of autonomous
vehicles, greatly advancing the areas of navigation, perception, and decision-making. Companies such
as Waymo, a subsidiary of Alphabet, have been leading the way in testing and deploying self-driving
cars in urban settings [31]. These advancements have been fueled by key developments in several areas.
First, computer vision has improved significantly, allowing AI systems to more accurately identify and
interpret objects in the environment, making autonomous navigation more reliable [32]. Second, the
integration of sensor fusion has enabled AI to merge data from various sources, such as LiDAR, cameras,
and radar, to form a comprehensive understanding of the vehicle’s surroundings, improving safety
and efficiency [33]. Third, reinforcement learning has played a critical role in allowing AI systems to
learn how to navigate complex traffic scenarios, making real-time decisions to ensure safe and efficient
driving [34]. As a result of these technological strides, autonomous vehicles are now closer to becoming
a widespread reality. AI has demonstrated increasing reliability in practical applications such as urban
driving, delivery services, and even autonomous trucking. Nevertheless, challenges remain, particularly
in terms of regulatory approval and ethical considerations, as these systems continue to evolve [35].
   These milestones from 2016 to 2024 illustrate AI’s rapidly expanding capabilities, from mastering
ancient games to solving global challenges like vaccine development. With continued advancements in
machine learning, deep learning, and reinforcement learning, AI is becoming an integral part of solving
complex, real-world problems and shaping the future of human technology interaction [36].

3.2. Milestones in Digital Humanities
Parallel to the advancements in AI, the domain of DH has progressed from initial endeavors in computa-
tional text analysis to the extensive implementation of AI-driven methodologies. This section delineates
pivotal milestones in the evolution of DH, underscoring the transformative impact of computational
techniques on conventional humanities disciplines and the emergence of novel forms of scholarly
inquiry.
   1949 - Index Thomisticus by Father Busa: The Index Thomisticus, created by Father Roberto
Busa, is widely considered the first major project in DH. Busa sought to use computational methods
to analyze the works of Thomas Aquinas, a massive corpus of medieval theological and philosophical
writings. Busa collaborated with IBM to develop a computational indexing system, which enabled the
analysis of word frequencies and structures across Aquinas’ texts [13].
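In miniature, the core operation behind such an index, counting word occurrences across a corpus, can be sketched with modern tooling. The Latin snippet below is invented for illustration and is not taken from the Index Thomisticus:

```python
import re
from collections import Counter

def word_frequencies(text):
    """Tokenize on letter runs (lowercased) and count occurrences,
    the basic operation behind a concordance index."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens)

sample = "ens et essentia; ens est, essentia est"  # invented snippet
freq = word_frequencies(sample)
print(freq.most_common(2))  # [('ens', 2), ('essentia', 2)]
```

Busa's achievement was, of course, not the counting itself but doing it on punched-card hardware over millions of words of medieval Latin.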
   1966 - Humanities Computing: In 1966, the term “Humanities Computing” was first coined to
describe the growing field of computational methods applied to humanities research [37]. This early
stage of DH was characterized by the use of computers for text analysis, data storage, and the creation of
digital editions of literary and historical texts. The introduction of this term signaled the emergence of a
new academic area, as scholars began to recognize the potential of computing technology to transform
humanities research.
   1980 - Quantitative Analysis in History: Historian Charles Tilly revolutionized the field of
historical research by introducing quantitative analysis. His work showed how statistical methods
could be used to analyze historical events and social movements, marking a shift toward data-driven
approaches in the humanities [38].
   1990 - First DH Conferences: The establishment of a community through these early conferences
created a foundation of shared practices and promoted collaborative projects at the intersection of digital
technology and the humanities, fostering innovation in the field [39]. These conferences played a key role
in building a community around DH, offering a platform for exchanging ideas and methodologies. They
promoted interdisciplinary collaboration, with scholars from fields like history, literature, linguistics,
and computer science contributing insights on using digital tools in research. This environment of
collaboration and innovation led to the development of new tools and frameworks that would shape
DH for years to come.
   2000 - Formation of ADHO: The establishment of the Alliance of Digital Humanities Organizations
(ADHO) provided crucial institutional support and sustained efforts and initiatives in DH on a global scale [40].
   2010 - DH Becomes a Common Term: By 2010, the term “Digital Humanities” was widely recognised,
marking its establishment as a significant academic field. This recognition was instrumental in securing
funding and institutional support for DH projects [41].
   2013 - Digital Public Library of America: The Digital Public Library of America (DPLA) was
launched in 2013 with the mission of providing access to the cultural heritage and historical records of
the United States in digital form. The DPLA aggregates materials from libraries, museums, and archives,
making them freely available to the public [15].
   2020 - AI Integration in DH: By 2020, AI tools had become an integral part of DH, enabling scholars
to analyze large datasets with unprecedented precision [42]. AI’s role in DH encompasses various areas,
such as NLP techniques that allow researchers to analyze and interpret vast amounts of text, image
recognition algorithms that assist historians in examining visual data like historical photographs or
artwork, and machine learning models that detect patterns in large corpora of texts or historical data.
   2022 - Widespread Adoption of Machine Learning in DH: By 2022, machine learning (ML)
techniques had become widely adopted in DH, enabling scholars to conduct more advanced analyses and
uncover deeper insights from large datasets. Applications of ML in DH have included text classification
and topic modeling [43] to identify themes in extensive textual datasets, predictive modeling to forecast
historical trends or analyze patterns in human behavior, and network analysis to map relationships
between historical figures, organizations, or cultural movements.
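As a minimal sketch of the text-classification techniques mentioned above, the following implements a multinomial naive Bayes classifier using only the Python standard library; the labels and training snippets are invented for the example:

```python
import math
from collections import Counter, defaultdict

def train_nb(labeled_docs):
    """Fit a multinomial naive Bayes model: per-label word counts,
    per-label document counts, and the shared vocabulary."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    doc_counts = Counter()               # label -> number of documents
    vocab = set()
    for label, text in labeled_docs:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        doc_counts[label] += 1
        vocab.update(tokens)
    return word_counts, doc_counts, vocab

def classify(text, model):
    """Pick the label maximizing log P(label) + sum log P(token|label),
    with add-one smoothing for unseen tokens."""
    word_counts, doc_counts, vocab = model
    total_docs = sum(doc_counts.values())
    best, best_score = None, float("-inf")
    for label in doc_counts:
        score = math.log(doc_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for token in text.lower().split():
            score += math.log(
                (word_counts[label][token] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical training snippets, invented for the example:
docs = [
    ("theology", "grace divine sacrament grace"),
    ("theology", "divine essence sacrament"),
    ("politics", "council city governance law"),
    ("politics", "law council decree"),
]
model = train_nb(docs)
print(classify("divine grace and sacrament", model))  # theology
```

Modern DH pipelines would typically use trained embeddings or transformer models rather than bag-of-words counts, but the underlying task, assigning documents to themes, is the same.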
   2024 - Generative AI in Digital Humanities: By 2024, generative AI had gained significant traction
within DH, with tools like GPT-4 [44] and beyond being used to generate text, translations, and even
creative works that emulate historical writing styles. Researchers have begun using generative models
not only for text production but also for re-imagining lost literary works or hypothesizing alternate
historical narratives. This advancement has sparked new debates on the role of AI as a creative partner
in humanistic inquiry and its ethical implications in rewriting history [45].
   These milestones highlight the growing integration of computational methods with humanities
research, reflecting the transformation of the field into a data-driven discipline that leverages digital
tools to uncover new knowledge. From the early innovations of Father Busa to the modern-day use of
AI and machine learning, DH continues to evolve, opening up new possibilities for scholarly inquiry
and knowledge production.


4. AI’s influence on Digital Humanities
Artificial Intelligence (AI) has had a transformative impact on DH, extending far beyond methodological
improvements. AI has fundamentally reshaped how researchers interpret, visualize, and engage with
complex datasets, opening new avenues for scholarship. This section explores the specific contributions
AI has made to DH, including advancements in research methodologies, pattern recognition, and the
interpretation of visual data. We also address the challenges and opportunities that arise from the
growing integration of AI within DH workflows [46].

4.1. Enhanced research methods
AI technologies, particularly in natural language processing (NLP) and machine learning, have
revolutionized the way DH scholars conduct research. Since 2011, tools such as IBM’s Watson and, later,
transformer-based language models like Google’s BERT have enabled large-scale analyses of textual data,
significantly improving the efficiency and accuracy of tasks such as text classification, sentiment
analysis, and thematic extraction [47, 48]. These
advancements allow researchers to manage extensive datasets that were previously too cumbersome
for traditional analysis, enhancing both the scope and depth of humanities research.
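As a minimal illustration of the text-classification task mentioned above, the sketch below implements a multinomial Naive Bayes classifier in pure Python. The tiny corpus and labels are invented for illustration only and bear no relation to the cited systems:

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of documents
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, model):
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            count = word_counts[label][word] + 1
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

corpus = [
    ("the sonnet praises love and beauty", "poetry"),
    ("verses of love in rhymed couplets", "poetry"),
    ("the treaty was signed after the war", "history"),
    ("the battle ended the long war", "history"),
]
model = train(corpus)
print(classify("a rhymed sonnet about beauty", model))  # → poetry
```

Real DH pipelines replace this toy model with large pretrained language models, but the principle — scoring a document against statistical profiles of each class — is the same.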

4.2. Pattern recognition and data analysis
The introduction of Generative Adversarial Networks (GANs) in 2014, along with subsequent AI models,
has empowered DH researchers to identify patterns in vast datasets with unprecedented precision. This
capability is particularly impactful in historical studies, where detecting hidden relationships and trends
within large corpora of texts or archival materials was once difficult or impossible [23]. AI’s pattern
recognition abilities have led to groundbreaking discoveries in fields such as literary analysis, historical
trend prediction, and cultural studies, offering new perspectives on age-old questions.
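The kind of pattern detection described above can be illustrated with a minimal co-occurrence analysis: counting which named entities appear together in the same passage. The passages and entity list below are invented; real pipelines would rely on named-entity recognition over far larger corpora:

```python
from collections import Counter
from itertools import combinations

ENTITIES = {"Dante", "Virgil", "Beatrice", "Homer"}

passages = [
    "Dante follows Virgil through the first circles",
    "Virgil leaves and Beatrice guides Dante onward",
    "Dante places Homer among the virtuous poets",
    "Beatrice appears to Dante in the earthly paradise",
    "Virgil explains the structure of hell to Dante",
]

# Count every pair of entities that share a passage
cooccurrence = Counter()
for passage in passages:
    found = sorted(e for e in ENTITIES if e in passage)
    for pair in combinations(found, 2):
        cooccurrence[pair] += 1

# The most frequent pair suggests the strongest relationship in the corpus
top_pair, top_count = cooccurrence.most_common(1)[0]
print(top_pair, top_count)  # → ('Dante', 'Virgil') 3
```

Scaled up to thousands of texts, rankings like this surface relationships between figures, works, or movements that no single close reading would reveal.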

4.3. Visual data interpretation
Advances in AI, particularly in image recognition and processing through techniques like Residual Networks (ResNets) since 2015, have had a profound effect on the analysis of visual and cultural
artifacts [49]. These technologies enable the digital reconstruction of historical sites, artwork, and
manuscripts, offering new insights for disciplines like archaeology and art history. AI-driven image
analysis tools can recognize and restore damaged artifacts, contributing to the preservation of cultural
heritage and enhancing our understanding of historical contexts.
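The defining idea behind residual networks — each block adds its input back to its own transformation, so signal can propagate through very deep stacks — can be sketched with a toy forward pass. The values below are invented scalars purely for illustration; real models apply convolutions to image tensors:

```python
def residual_block(x, transform):
    # Output is the input plus a learned transformation of the input:
    # the "skip connection" that lets very deep networks train stably.
    return [xi + ti for xi, ti in zip(x, transform(x))]

def tiny_transform(x):
    # Stand-in for convolution + nonlinearity: a simple damped mapping
    return [0.1 * xi for xi in x]

features = [1.0, -2.0, 0.5]
for _ in range(3):  # stack three residual blocks
    features = residual_block(features, tiny_transform)

print(features)
```

Because each block only needs to learn a small correction to its input, errors and gradients pass through the identity path even when hundreds of blocks are stacked — the property that made very deep image models practical.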

4.4. Challenges and opportunities in AI integration
While the integration of AI into DH has introduced numerous opportunities, it also presents several
challenges. One significant barrier is the complexity and opacity of advanced AI models, which can
create a gap in understanding for humanities scholars. The “black box” nature of many AI systems can
make it difficult to interpret their outputs, potentially leading to skepticism or misuse of the technology
[50]. Additionally, there is a risk that AI may oversimplify complex humanistic inquiries, leading to
reductive conclusions that overlook the nuances of cultural and historical data.
   Nevertheless, we hypothesize that as AI technologies become more interpretable and user-friendly,
their adoption within DH will continue to grow. Increased transparency in AI systems, coupled
with improved interdisciplinary collaboration, will allow for more nuanced interpretations and richer
analyses. The future of AI in DH may even see AI systems not just as analytical tools but as creative
collaborators, contributing to the generation of new knowledge in the humanities.

4.5. Future directions and hypotheses
As AI continues to evolve, its potential to further transform DH is immense. One promising direction is
the development of increasingly sophisticated interpretive models that can personalize and adapt to
the specific needs of researchers. AI’s ability to create virtual historical environments and interactive
simulations could revolutionize how we experience and understand the past, making cultural heritage
more accessible and engaging to both scholars and the public. This integration holds the promise of
transforming the humanities, making research more immersive, insightful, and interconnected [51].


5. Tools enhancing humanistic studies through AI
As AI technology continues to evolve, its integration with DH is expected to deepen, leading to more sophisticated interpretative models and more personalized, adaptive ways to engage with historical and cultural content. In the following, we detail several AI-powered tools that have advanced humanistic studies.
The first is a sophisticated tool for the recognition of symbols in ancient manuscripts, utilizing state-of-the-art computer vision techniques [52] and accessible at https://symboldetection.streamlit.app. This
tool employs advanced algorithms to automatically identify and classify symbols within digitized
manuscripts, thereby facilitating more efficient analysis of historical documents. By leveraging deep
learning models trained on extensive datasets of ancient scripts, it is capable of recognizing intricate and
degraded symbols that are often challenging for human visual discernment. This significantly expedites
the transcription and interpretation of ancient texts, a process that has traditionally been arduous
and time-consuming, requiring considerable manual effort. The tool’s interface lets users upload manuscript images, after which the system applies advanced image-processing algorithms to identify and classify the symbols present. The output includes suggested symbol classifications, which scholars can refine by adjusting recognition parameters or manually correcting inaccuracies. This makes the tool highly versatile and compatible with a diverse array of historical manuscript traditions and periods. Its deployment marks a substantial advance for DH, particularly in paleography, epigraphy, and the study of ancient texts: it greatly reduces the manual labor of transcription while opening new methodologies for the large-scale analysis of symbol patterns, intertextual linkages, and the linguistic evolution of historical documents.
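One early step such a pipeline might take — segmenting candidate symbols from a binarized page image — can be sketched with connected-component labeling. The 0/1 grid below is an invented stand-in for a scanned manuscript; this is an illustrative simplification, not the tool’s actual method:

```python
from collections import deque

def count_symbols(grid):
    """Count connected blobs of ink (1s) in a binarized image grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    components = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                components += 1
                queue = deque([(r, c)])  # flood-fill one ink blob
                seen[r][c] = True
                while queue:
                    cr, cc = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == 1 and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
    return components

page = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 1],
]
print(count_symbols(page))  # → 4
```

Each segmented blob would then be passed to a trained classifier, which is where the deep learning models described above take over.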
The second is a platform for exploring a knowledge graph of world literature [53], which highlights the application of knowledge graphs, linked data, and natural language processing to enhance DH research [54], accessible at https://literaturegraph.di.unito.it. AI plays a central role in the
platform by driving the construction, analysis, and visualization of the knowledge graph. Through
the integration of AI techniques, the platform automatically extracts and processes vast amounts of
unstructured literary data, uncovering relationships and patterns that would be difficult or impossible
to detect manually. The core AI functionality of the platform involves advanced NLP algorithms that
analyze textual data from a variety of sources, such as books, literary criticism, and historical records.
These NLP models are trained to identify key entities (e.g., authors, works, places, themes) and their
relationships, which are then mapped onto the knowledge graph. By leveraging AI, the platform can
sift through large corpora of text to detect subtle connections, similarities, or contextual linkages that
may have been overlooked in traditional research. Additionally, the AI employs entity recognition and
semantic analysis to categorize literary elements and group related entities based on thematic, temporal,
or geographic commonalities. This allows the platform to build a comprehensive and richly connected
representation of world literature, where complex interrelationships between different works, authors,
and concepts are highlighted.
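The underlying data structure — a graph of (subject, relation, object) triples — can be sketched in a few lines. The triples below are invented examples, whereas the platform extracts them automatically with NLP at scale:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal triple store with subject-centric lookup."""

    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, relation, obj):
        self.triples.add((subject, relation, obj))
        self.by_subject[subject].add((relation, obj))

    def query(self, subject, relation=None):
        # Return objects linked to a subject, optionally filtered by relation
        return sorted(o for r, o in self.by_subject[subject]
                      if relation is None or r == relation)

kg = KnowledgeGraph()
kg.add("Divine Comedy", "writtenBy", "Dante Alighieri")
kg.add("Divine Comedy", "influencedBy", "Aeneid")
kg.add("Aeneid", "writtenBy", "Virgil")
kg.add("Divine Comedy", "hasTheme", "afterlife")

print(kg.query("Divine Comedy", "writtenBy"))  # → ['Dante Alighieri']
```

Production systems typically express such triples in RDF and query them with SPARQL, but the traversal logic is the same: relationships become first-class, queryable data rather than prose.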
The third is a system for extracting and analyzing topic trends from a corpus of texts, using large language models (LLMs), NLP, and topic modeling techniques to present data visually through trend clouds and intertopic distance maps [55]. In this system, AI plays a critical role in automating the extraction of
meaningful insights from vast corpora of texts, enabling researchers to uncover trends, themes, and
relationships that would otherwise remain hidden.
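A minimal version of such trend extraction — comparing term frequencies between an early and a late period to surface rising terms — can be sketched as follows. The documents and terms are invented, and the actual system relies on LLMs and full topic models instead:

```python
from collections import Counter

STOPWORDS = {"the", "of", "in", "a", "and"}

def term_freq(docs):
    """Aggregate term counts over a list of documents, minus stopwords."""
    counts = Counter()
    for doc in docs:
        counts.update(w for w in doc.lower().split() if w not in STOPWORDS)
    return counts

early = ["the catalog of the archive", "manuscripts in the archive"]
late = ["neural models for the archive", "neural topic models",
        "neural topic trends"]

early_freq = term_freq(early)
late_freq = term_freq(late)

# Positive deltas mark terms gaining ground between the two periods
rising = {term: late_freq[term] - early_freq[term] for term in late_freq}
top = max(rising, key=rising.get)
print(top, rising[top])  # → neural 3
```

The same delta computed per topic rather than per term, and plotted over many time slices, yields the trend clouds the system visualizes.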


6. Conclusion
This paper elucidates the significant influence of technological advancements, particularly in AI and
computing, on the domain of DH. Examining the seminal impact of Moore’s Law on computational
power, as well as Ray Kurzweil’s conjectures on technological singularity, it is evident that these
developments have profoundly transformed the tools and methodologies available to humanities
scholars. The incorporation of AI has considerably augmented the analytical capacities within DH,
facilitating more advanced data analysis, pattern recognition, and visualization techniques.
   As computational capabilities advance and AI becomes increasingly sophisticated, the scope of possi-
bilities within DH will likewise expand. Nonetheless, these prospects are accompanied by significant
challenges, such as the potential for oversimplifying intricate humanistic inquiries and the imperative
for enhanced interpretability of AI models. Addressing these issues requires sustained collaboration
between technologists and humanists to ensure that future innovations align with the ethical and
intellectual imperatives of the humanities.
   Moreover, the confluence of AI and DH possesses substantial potential to redefine the methodologies
employed in the analysis and interaction with cultural and historical materials. By promoting inter-
disciplinary synthesis and the adoption of AI advancements, DH can not only respond to accelerated
technological evolution but also influence the trajectory of its own progression. Consequently, DH will
continually pioneer novel avenues for scholarly inquiry, safeguarding the intricate tapestry of human
knowledge while harnessing the capabilities of contemporary technology.
Acknowledgments
This research was partially supported by projects CHANGES “Cultural Heritage Active Innovation
for Sustainable Society” (PE00000020), Spoke 3 “Digital Libraries, Archives and Philology” and FAIR
“Future AI Research” (PE00000013), spoke 6 “Symbiotic AI”, funded by the Italian Ministry of University
and Research NRRP initiatives under the NextGenerationEU program.


References
 [1] J. A. Smith, Digital Humanities and Technological Evolution, Academic Press, 2019.
 [2] E. Jones, R. Davis, Computing innovations in humanities research, Journal of Digital Scholarship
     10 (2021) 123–145.
 [3] C. Lee, A. Kumar, Artificial intelligence and the future of humanistic studies, Tech and Humanities
     Review 15 (2020) 202–220.
 [4] M. Brown, S. Green, Interdisciplinary Methods in Digital Humanities, University Press, 2018.
 [5] G. E. Moore, Cramming more components onto integrated circuits, Electronics 38 (1965).
 [6] R. Kurzweil, The singularity is near, in: Ethics and emerging technologies, Springer, 2005, pp.
     393–406.
 [7] G. Johnson, Microprocessor Design: A Practical Guide from Design Planning to Manufacturing,
     McGraw-Hill Education, 2017.
 [8] R. Thompson, E. Carter, Technology and the New Era of Digital Humanities, University Press,
     2019.
 [9] M. Zong, B. Krishnamachari, A survey on gpt-3, arXiv preprint arXiv:2212.00857 (2022).
[10] R. Mao, G. Chen, X. Zhang, F. Guerin, E. Cambria, Gpteval: A survey on assessments of chatgpt
     and gpt-4, arXiv preprint arXiv:2308.12488 (2023).
[11] Q. Mei, Y. Xie, W. Yuan, M. O. Jackson, A turing test of whether ai chatbots are behaviorally similar
     to humans, Proceedings of the National Academy of Sciences 121 (2024) e2313925121.
[12] J. McCarthy, et al., The dartmouth conference and the birth of ai, 1956.
[13] R. Busa, Index Thomisticus, Thomistic Institute, 1980.
[14] M. Campbell, et al., Deep Blue, IBM, 2002.
[15] Digital public library of america, 2013. https://dp.la.
[16] J. R. Sublette, The dartmouth conference: Its reports and results, College English 35 (1973) 348–357.
[17] L. Gugerty, Newell and simon’s logic theorist: Historical background and impact on cognitive
     modeling, in: Proceedings of the human factors and ergonomics society annual meeting, volume 50,
     SAGE Publications Sage CA: Los Angeles, CA, 2006, pp. 880–884.
[18] A. Newell, J. Shaw, H. A. Simon, The logic theorist and the origins of ai, Journal of Artificial
     Intelligence 1 (1960) 1–10.
[19] G. W. Ernst, A. Newell, Generality and gps (1967).
[20] D. Ferrucci, Building watson: An overview of the deepqa project, AI Magazine 31 (2011) 59–79.
[21] D. Silver, et al., Mastering the game of go without human knowledge, Nature 550 (2017) 354–359.
[22] H. Hodson, Deepmind and google: the battle to control artificial intelligence, The Economist
     (2019). ISSN 0013-0613.
[23] I. Goodfellow, et al., Generative adversarial nets, Advances in Neural Information Processing
     Systems (2014).
[24] J. Gui, Z. Sun, Y. Wen, D. Tao, J. Ye, A review on generative adversarial networks: Algorithms,
     theory, and applications, IEEE transactions on knowledge and data engineering 35 (2021) 3313–
     3332.
[25] J. Devlin, et al., Bert: Pre-training of deep bidirectional transformers for language understanding,
     Journal of Machine Learning (2019).
[26] E. Callaway, How ai technology can tame the coronavirus, Nature 580 (2020) 176–177.
[27] S. Shreyashree, P. Sunagar, S. Rajarajeswari, A. Kanavalli, A literature review on bidirectional
     encoder representations from transformers, Inventive Computation and Information Technologies:
     Proceedings of ICICIT 2021 (2022) 305–320.
[28] Waymo’s fully autonomous driving: A reality, 2023. Company Report.
[29] R. Vaishya, M. Javaid, I. H. Khan, A. Haleem, Artificial intelligence (ai) applications for covid-19
     pandemic, Diabetes & Metabolic Syndrome: Clinical Research & Reviews 14 (2020) 337–339.
[30] A. Sharma, T. Virmani, V. Pathak, A. Sharma, K. Pathak, G. Kumar, D. Pathak, Artificial intelligence-
     based data-driven strategy to accelerate research, development, and clinical trials of covid vaccine,
     BioMed research international 2022 (2022) 7205241.
[31] M. Lenox, J. McDermott, Driving waymo’s fully autonomous future (????).
[32] A. Karn, et al., Artificial intelligence in computer vision, International Journal of Engineering
     Applied Sciences and Technology 6 (2021) 2455–2143.
[33] X. Gao, Z. Wang, Y. Feng, L. Ma, Z. Chen, B. Xu, Benchmarking robustness of ai-enabled multi-
     sensor fusion systems: Challenges and opportunities, in: Proceedings of the 31st ACM Joint
     European Software Engineering Conference and Symposium on the Foundations of Software
     Engineering, 2023, pp. 871–882.
[34] H. Dong, H. Dong, Z. Ding, S. Zhang, T. Chang, Deep Reinforcement Learning, Springer, 2020.
[35] Y. Ma, Z. Wang, H. Yang, L. Yang, Artificial intelligence applications in the development of
     autonomous vehicles: A survey, IEEE/CAA Journal of Automatica Sinica 7 (2020) 315–329.
[36] W. Xu, M. J. Dainoff, L. Ge, Z. Gao, From human-computer interaction to human-ai interaction:
     new challenges and opportunities for enabling human-centered ai, arXiv preprint arXiv:2105.05424
     5 (2021).
[37] G. Rockwell, What is humanities computing and what is not?, Journal of Computers and the
     Humanities 36 (2003) 227–242.
[38] C. Tilly, As Sociology Meets History, Academic Press, 1981.
[39] Digital humanities conference, 1990.
[40] A. of Digital Humanities Organizations, Formation of the association of digital humanities organi-
     zations, 2000.
[41] D. H. Community, Digital humanities as a field, 2010.
[42] Machine learning in humanities research, 2022. Digital Humanities Quarterly.
[43] P. P. G. Neogi, A. K. Das, S. Goswami, J. Mustafi, Topic modeling for text classification, in:
     Emerging Technology in Modelling and Graphics: Proceedings of IEM Graph 2018, Springer, 2020,
     pp. 395–407.
[44] J. A. Baktash, M. Dawodi, Gpt-4: A review on advancements and opportunities in natural language
     processing, arXiv preprint arXiv:2305.03195 (2023).
[45] S. Feuerriegel, J. Hartmann, C. Janiesch, P. Zschech, Generative ai, Business & Information Systems
     Engineering 66 (2024) 111–126.
[46] F. Armaselu, Documenting the use of generative ai in digital humanities workflows, in: Workflows:
     Digital Methods for Reproducible Research Practices in the Arts and Humanities-DARIAH Annual
     Event 2024, 2024.
[47] D. Ferrucci, Building ibm’s watson: An overview of the deepqa project, AI Magazine 31 (2013)
     59–79.
[48] D. Silver, et al., Mastering the game of go with deep neural networks and tree search, Nature 529
     (2016) 484–489.
[49] M. S. Ebrahimi, H. K. Abadi, Study of residual networks for image recognition, in: Intelligent
     Computing: Proceedings of the 2021 Computing Conference, Volume 2, Springer, 2021, pp. 754–763.
[50] V. Hassija, V. Chamola, A. Mahapatra, A. Singal, D. Goel, K. Huang, S. Scardapane, I. Spinelli,
     M. Mahmud, A. Hussain, Interpreting black-box models: a review on explainable artificial intelli-
     gence, Cognitive Computation 16 (2024) 45–74.
[51] P. Models, The future of ai and humanities, 2025.
[52] E. Bernasconi, S. Ferilli, A tool for empowering symbol detection through technological integration
     in library science. A case study on the voynich manuscript, in: E. Bernasconi, A. Mannocci, A. Poggi,
     A. A. Salatino, G. Silvello (Eds.), Proceedings of the 20th Conference on Information and Research
     science Connecting to Digital and Library science (formerly the Italian Research Conference on
     Digital Libraries), Bressanone, Brixen, Italy - 22-23 February 2024, volume 3643 of CEUR Workshop
     Proceedings, CEUR-WS.org, 2024, pp. 94–107. URL: https://ceur-ws.org/Vol-3643/paper10.pdf.
[53] M. A. Stranisci, E. Bernasconi, V. Patti, S. Ferilli, M. Ceriani, R. Damiano, The world literature
     knowledge graph, in: International Semantic Web Conference, Springer, 2023, pp. 435–452.
[54] E. Bernasconi, D. Di Pierro, D. Redavid, S. Ferilli, Skateboard: Semantic knowledge advanced tool
     for extraction, browsing, organisation, annotation, retrieval, and discovery, Applied Sciences 13
     (2023) 11782.
[55] E. Bernasconi, A. Mannocci, A. M. Tammaro, Exploring the italian research landscape on digital
     library in the conference IRCDL, in: E. Bernasconi, A. Mannocci, A. Poggi, A. A. Salatino, G. Silvello
     (Eds.), Proceedings of the 20th Conference on Information and Research science Connecting
     to Digital and Library science (formerly the Italian Research Conference on Digital Libraries),
     Bressanone, Brixen, Italy - 22-23 February 2024, volume 3643 of CEUR Workshop Proceedings,
     CEUR-WS.org, 2024, pp. 230–245. URL: https://ceur-ws.org/Vol-3643/paper22.pdf.
Figure 1: Timeline of technological advancements and milestones in Digital Humanities and Artificial Intelli-
gence.