<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Computational Creativity by Heuristic Search and Machine Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Amit Konar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sayantani Ghosh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Artificial Intelligence Laboratory, Electronics and Telecommunication Engineering Department, Jadavpur University</institution>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Computational creativity refers to the synthesis of human-like creativity on machines. This paper aims at synthesizing computational creativity by taking up an interesting problem: to develop chapter-end problems of mathematics and physics using Artificial Intelligence techniques. The methodology employed to address the problem includes traditional random search and heuristic search, and modern techniques including Generative Adversarial Networks and Large Language Models. The true spirit of these models is to autonomously enhance the diversity of problems by random exploration, structured search and learning. In addition, the paper demonstrates the scope of inductive learning to learn problem-solving from analogous problems, and to employ it to solve or generate similar problems. A set of metrics is proposed to compare the relative merits of the proposed and existing algorithms with a view to having a uniform framework of comparison for present and future research.</p>
      </abstract>
      <kwd-group>
<kwd>Computational creativity</kwd>
        <kwd>diversity</kwd>
        <kwd>problem generation</kwd>
        <kwd>scientific domain</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The present work focuses on designing innovative approaches to generate chapter-end
problems in science, particularly in mathematics and physics, by leveraging both classical and
modern AI techniques. Classical methods for fostering creativity include random search and
heuristic search, while modern techniques involve Generative Adversarial Networks (GANs) [8] and
Large Language Models (LLMs). The study also explores the potential of inductive learning for developing computationally
creative systems by drawing insights from analogous problems and applying them to solve
or generate similar ones. The methodologies for problem synthesis and problem-solving are
demonstrated through illustrative examples. Furthermore, a set of metrics is proposed to
evaluate and compare the effectiveness of the proposed generative algorithms against existing
state-of-the-art (SOTA) techniques. While this work provides a foundation for achieving
scientific creativity through machines, it acknowledges the limitations of the current models
and emphasizes the need for future advancements in integrating contextual and environmental
understanding to enhance the diversity and/or quality of generated problems, making them
comparable to those created by humans.</p>
      <p>The rest of the paper is organized as follows. Section 2 covers computational creativity using
random search techniques, while Section 3 focuses on heuristic-guided algorithms for generating
trigonometric identity problems. Section 4 explores inductive learning in mathematical creativity,
and Sections 5 and 6 discuss scientific problem generation using GANs and LLMs. Section 7
presents the performance analysis of the proposed algorithms in comparison to SOTA methods.
Section 8 addresses limitations and future directions, and Section 9 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Creativity by Random Search and Experiments</title>
      <p>Random experimentation refers to the process of conducting experiments in which certain
variables or conditions are assigned randomly. This approach often leads to unexpected
discoveries by enabling researchers to explore outcomes without preconceived notions or biases,
fostering an environment of genuine exploration and creativity. History demonstrates that
random experimentation can yield groundbreaking results, serving as a quintessential example
of innovation in science.</p>
      <p>Many revolutionary breakthroughs have stemmed from the process of random
experimentation, where unexpected outcomes have paved the way for transformative insights [9]. For
instance, Faraday’s Law of Electromagnetic Induction resulted from simple experimental setups
that, through systematic and open-ended exploration, fundamentally reshaped our
understanding of electromagnetism. Similarly, the discovery of carbon’s atomic structure and its various
allotropes including graphite, diamond, and graphene, underscores how randomness and
curiosity in experimental approaches can lead to groundbreaking findings. Another striking example
is the discovery of DNA’s double-helix structure by Watson and Crick. Their work was guided
by experimental data and serendipitous results, where seemingly random but insightful
observations played a crucial role in unraveling the mysteries of genetic inheritance, revolutionizing
biology in the process.</p>
      <p>
        To demonstrate the power of random search, let us instantiate the simple identity, often
taught in middle schools:
(a − b)² = a² + b² − 2ab
(1)
where a and b are real numbers. Let us randomly select a = √x and b = √y for positive real
numbers x and y. The above substitution yields,
      </p>
      <p>(√x − √y)² = x + y − 2√(xy) ≥ 0,
(2)
as the square of a real number is always positive or zero.</p>
      <p>⇒
 + 
2
≥
√
⇒ arithmetic mean of 2 numbers:  and  ≥ geometric mean of the same numbers.</p>
      <p>
        The above result indeed is an innovative outcome that unexpectedly follows from a
random substitution for a and b. Now let us substitute a = e^(iθ), b = e^(−iθ) in (1), just as a
random selection. The substitution yields:
(e^(iθ) − e^(−iθ))² = e^(2iθ) + e^(−2iθ) − 2e^(iθ)e^(−iθ)
⇒ (2i sin θ)² = 2 cos 2θ − 2
⇒ −4 sin² θ = 2(cos 2θ − 1)
      </p>
      <p>⇒ cos 2θ = 1 − 2 sin² θ
(4)</p>
      <p>
        It is noteworthy that the expression (4) is an important identity in trigonometry. However,
its sudden appearance makes us rethink that random substitution may give rise to interesting
and often unexpected results. Can we enhance the random search by adding structure to the
search process? This is undertaken in our next problem by exploring the search space and
pruning its unwanted regions with a heuristic algorithm. To guide the exploration process, we add a
new term, the diversity cost, which estimates the diversity of a generated trial solution with
respect to an initial trial solution, called the root node of a search-tree.
      </p>
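      <p>The random-substitution procedure above can be sketched in a few lines. The following is a minimal SymPy illustration (the pool of candidate substitutions is a hypothetical choice for this example) that substitutes a randomly chosen pair into identity (1) and confirms the substituted instance still holds:
```python
import random
import sympy as sp

x, y, theta = sp.symbols('x y theta', positive=True)
a, b = sp.symbols('a b')

# Template identity (1): (a - b)**2 = a**2 + b**2 - 2*a*b
lhs = (a - b)**2
rhs = a**2 + b**2 - 2*a*b

# A pool of random substitutions to try (hypothetical choices).
candidates = [
    {a: sp.sqrt(x), b: sp.sqrt(y)},                   # yields the AM-GM step
    {a: sp.exp(sp.I*theta), b: sp.exp(-sp.I*theta)},  # yields cos 2θ = 1 - 2 sin²θ
]

subst = random.choice(candidates)
new_lhs = sp.expand(lhs.subs(subst))
new_rhs = sp.expand(rhs.subs(subst))
# The substituted identity still holds: the difference simplifies to 0.
assert sp.simplify(new_lhs - new_rhs) == 0
print(sp.Eq(new_lhs, new_rhs))
```
In practice the "innovative outcome" (the AM-GM inequality or the double-angle identity) emerges when the expanded sides are further rearranged, exactly as in the derivations above.</p>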
    </sec>
    <sec id="sec-3">
      <title>3. Heuristic Search Approach to Computational Creativity</title>
      <p>Consider the problem of generating trigonometric identities as the chapter-end problems for
a first learner of trigonometry. Here, an approach similar to the A* algorithm [10] is proposed to
handle the afore-said identity generation problem. The rules utilized for generating identities for
a first course in trigonometry are provided in Table 1. We here try to maximize d(n) − g(n),
where d(n) is the diversity cost of node n with respect to the root node and g(n) is the cost of
generating n from the root node. The diversity cost between a parent and a child node is the number
of mismatched symbols on the left sides of the parent and child node plus the number of mismatched
symbols on the right sides of the parent and child node. The diversity cost of a node n is the sum of
these pairwise costs over all the nodes lying between the root node and the node n.</p>
      <sec id="sec-3-1">
        <title>3.1. Computation of Diversity Cost</title>
        <p>
          The diversity cost of a node n with respect to its parent node p is evaluated by (5):
d_p(n) = |L_n − L_p| + |R_n − R_p|
(5)
where
L_n = set of terms (operands) on the left side of node n,
        </p>
        <p>
          R_n = set of terms (operands) on the right side of node n. Similarly, L_p and R_p represent
the left- and right-hand terms of the node p respectively, and |S| represents the number of elements
in a given set S. The diversity cost of the root node R, d(R), is always 0. Now if the path from the
root node R to node n covers a sequence of nodes n_1, n_2, ..., n_k, then the diversity cost of node
n with respect to the root is evaluated by (6):
d(n) = d(R) + Σ_{i=1}^{k} d_p(n_i)
(6)
        </p>
      <p>Let us now illustrate the computation of the diversity cost of a node from the root node. Let the
node n be the identity 1/cot θ = sin θ/cos θ, its parent node p be the identity tan θ = sin θ/cos θ,
and the root node be 1 = 1. Here, there is no mismatch between the right sides of the parent
and the current node, i.e., R_p = {sin θ, cos θ}, R_n = {sin θ, cos θ}, and thus |R_n − R_p| = 0. But
there exist 2 mismatches on the left-hand sides of the parent and child node, i.e., L_p = {tan θ},
L_n = {1, cot θ}, and so |L_n − L_p| = 2. Thus, d_p(n) = 0 + 2 = 2. Since d(p) = 0,
d(n) = 0 + 2 = 2.</p>
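      <p>The set-based costs (5) and (6) translate directly into code. Below is a minimal sketch (term tokenization into left/right operand sets is assumed to be given), reproducing the worked example above:
```python
def pairwise_diversity(parent, child):
    """Diversity cost of a child node w.r.t. its parent, per Eq. (5):
    each node is a (left_terms, right_terms) pair of sets, and the cost is
    the number of child terms absent on the corresponding parent side."""
    (lp, rp), (lc, rc) = parent, child
    return len(lc - lp) + len(rc - rp)

def path_diversity(path):
    """Diversity cost accumulated along a root-to-node path, per Eq. (6):
    the root contributes 0 and each edge contributes its pairwise cost."""
    return sum(pairwise_diversity(path[i - 1], path[i])
               for i in range(1, len(path)))

# Worked example: parent tanθ = sinθ/cosθ, child 1/cotθ = sinθ/cosθ.
parent = ({'tanθ'}, {'sinθ', 'cosθ'})
child = ({'1', 'cotθ'}, {'sinθ', 'cosθ'})
print(pairwise_diversity(parent, child))   # 2
```
The right-side sets match (cost 0) while the child's left side contributes two new terms, giving the pairwise cost of 2 computed in the example.</p>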
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Algorithm for Heuristic Guided Search</title>
        <p>The algorithm for the automatic generation of trigonometric identities utilizing the heuristic
guided search is outlined in Algorithm 1. For the present context, the terminating condition is
imposed on the number of node selections for expansion. An illustration of the heuristic guided
search tree is provided in Figure 1 for a branching factor of 2 and a pre-defined search depth of 3.
The value of the cost function (i.e., d(n) − g(n)) is provided in {·} beside each generated node.</p>
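        <p>The selection loop of Algorithm 1 can be sketched as follows (a minimal sketch; the rule-based expansion function and the cost functions d and g are placeholders to be supplied, and the toy usage at the end is purely illustrative):
```python
import random

def heuristic_identity_search(root, expand, d, g, branching=2, selections=3):
    """Best-first loop of Algorithm 1, maximizing d(n) - g(n).
    expand(n) returns the children reachable by one rule application;
    d and g are the diversity and generation costs w.r.t. the root."""
    open_list = [root]
    for _ in range(selections):          # terminating condition: number
        if not open_list:                # of node selections for expansion
            break
        node = open_list.pop(0)
        children = expand(node)
        open_list.extend(random.sample(children, min(branching, len(children))))
        # Keep the list in descending order of the cost function d(n) - g(n),
        # so the most diverse, cheapest-to-generate node is expanded next.
        open_list.sort(key=lambda n: d(n) - g(n), reverse=True)
    return open_list[0] if open_list else root

# Toy usage with integer "identities": deeper nodes are more diverse.
best = heuristic_identity_search(0, expand=lambda n: [n + 1, n + 2, n + 3],
                                 d=lambda n: n, g=lambda n: n // 2)
```
With real rules 1 to 5 as the expansion function, the returned node is the generated identity with the best diversity-versus-cost trade-off.</p>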
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Inductive Learning Approach to Computational Creativity</title>
      <p>An alternative modality of problem generation is through inductive learning [11, 12]. Here,
the program learns a theme from examples and the learnt knowledge is utilized to develop
a new problem. As a simple example, suppose a program learns the rule: um → a to
transform the plural form: bacteria from the singular form: bacterium, and employs the
derived knowledge to compute the plural form of penicillium. Similar examples hold for
learning the knowledge us → i for the transformation of the singular form alumnus to alumni,
and using it to derive fungi from its singular form fungus. Naturally, the question arises:
can we utilize similar inductive knowledge for the generation of new problems? The
answer to this question is illustrated through an example from integration problems in mathematics.</p>
      <p>Algorithm 1: Heuristic guided search algorithm for automatic trigonometric identity
generation
1. Initialize a list L with a start-up element n: 1 = 1.
2. While the terminating condition is not reached do
Begin
a) Expand n by applying rules 1 to 5 randomly b times to generate b offspring of n,
called n_i, i = 1 to b.
b) Add them to L in descending order of their d(n_i) − g(n_i), i = 1 to b. Delete n from the list.
c) Rename the first element of the list as n.
End-While</p>
      <p>Suppose a machine learns some examples of solving integrals which include:
Example 1: ∫ (1/x) dx = ln(x) + C
Example 2: ∫ 2x/(x² + 1) dx = ln(x² + 1) + C</p>
      <p>
        If the machine can accidentally identify that the generalization of the above integrals is
∫ f′(x)/f(x) dx = ln(f(x)) + C
(7)
which can be learnt by random experiments with known tables of f(x) and f′(x) (Table 2),
then using the acquired knowledge (7), the machine can generate several problems on integrals
such as
      </p>
      <p>∫ (2x + 5)/(x² + 5x + 2) dx = ln(x² + 5x + 2) + C</p>
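      <p>The learnt generalization (7) can be turned into a problem generator directly. Below is a minimal SymPy sketch (the particular choice of f is a hypothetical example matching the integral above):
```python
import sympy as sp

x = sp.Symbol('x')

def make_log_integral_problem(f):
    """Instantiate the learnt generalization (7), ∫ f'(x)/f(x) dx = ln f(x) + C,
    as a (problem, answer) pair for a chosen differentiable f."""
    integrand = sp.diff(f, x) / f
    answer = sp.log(f)
    # Sanity check: differentiating the answer recovers the integrand.
    assert sp.simplify(sp.diff(answer, x) - integrand) == 0
    return sp.Integral(integrand, x), answer

# Hypothetical choice of f, as in the worked example above.
problem, answer = make_log_integral_problem(x**2 + 5*x + 2)
print(problem)   # Integral((2*x + 5)/(x**2 + 5*x + 2), x)
print(answer)    # log(x**2 + 5*x + 2)
```
Sampling f from a known table of f(x) and f′(x) pairs (Table 2) yields an arbitrarily large family of chapter-end integration problems with guaranteed answers.</p>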
    </sec>
    <sec id="sec-5">
      <title>5. Computational Creativity by Generative Adversarial Network</title>
      <p>A GAN [8] is a deep learning framework consisting of two primary components: a generator
and a discriminator. These components work in a competitive setup to enable the creation of
realistic outputs. The generator is tasked with producing synthetic data, while the discriminator
acts as a classifier that evaluates the authenticity of the generated data. The competition arises
as the generator simultaneously tries to fool the discriminator by improving the quality of its
outputs. This adversarial process continues until the generator produces outputs so realistic that
the discriminator fails to identify them as fake. The LeakGAN (Generative Adversarial Network
with leaked Information) [2] is an advanced variant of GAN designed for text generation, where,
in addition to the adversarial setup, the discriminator leaks high-level features of partially
generated sequences to guide the generator, improving coherence and diversity in the generated
text outputs. This feedback mechanism allows LeakGAN to produce more structured and
meaningful sequences compared to standard GANs. The present work utilizes the LeakGAN
framework for generating trigonometric identity problems.</p>
      <p>In LeakGAN, the discriminator, implemented as a Convolutional Neural Network (CNN), and the
generator, which employs Long Short-Term Memory (LSTM) units, are initially pre-trained on
a dataset of mathematical identities sourced from textbooks. During the pre-training phase, all
identity problems are tokenized and converted into embedding vectors, as detailed in Appendix
[13].</p>
      <p>The generator consists of two LSTM modules: the manager and the worker. The generation
process begins with the selection of a random term, represented as an embedding vector, which
serves as the starting point for generating a new identity. This vector is first passed to the
discriminator, which produces a pooled vector. This pooled vector is then leaked to the manager
module. The manager (LSTM) generates a goal vector that predicts whether the next term is an
operand or an operator. This goal vector is concatenated with the initial vector and fed into the
worker module, which comprises an LSTM and a Multi-Layer Perceptron (MLP). The worker
predicts the next term based on the above concatenated input. This iterative process continues
until the complete identity is generated. The detailed procedure for generating new identities
is described in Algorithm 2 and its schematic view is shown in Figure 2. Additionally, due to
space limitations, an exemplar problem demonstrating the generation process is included in the
Appendix [13].</p>
      <p>Once an identity is generated, it is passed to the discriminator to classify it as either real or fake.
Identities classified as real are further evaluated for mathematical correctness using the SymPy
module (a Python library for equation solving) as shown in Figure 3. If an identity is verified as
correct, it is considered for inclusion as a chapter-end exercise; otherwise, it is discarded.</p>
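      <p>The SymPy-based correctness check of Figure 3 can be sketched as follows (the textual format of a generated identity, with a single "=" separating the two sides, is an assumption for illustration):
```python
import sympy as sp

def verify_identity(identity):
    """Classify a generated identity string, e.g. 'cos(2*t) = 1 - 2*sin(t)**2',
    as mathematically correct iff the two sides simplify to the same expression."""
    lhs, rhs = (sp.sympify(side) for side in identity.split('='))
    return sp.simplify(lhs - rhs) == 0

print(verify_identity('cos(2*t) = 1 - 2*sin(t)**2'))   # True: keep as exercise
print(verify_identity('sin(2*t) = 2*sin(t)'))          # False: discard
```
Identities passing this check are retained as chapter-end exercises; the rest are discarded, exactly as in the pipeline described above.</p>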
    </sec>
    <sec id="sec-6">
      <title>6. Large Language Models for Scientific Problem Generation</title>
      <p>The Blackboard approach in AI [14] offers an effective framework for generating physics
problems by integrating symbolic reasoning with the capabilities of large language models
(LLMs). At the heart of this system is the blackboard manager. The blackboard serves as a
shared workspace (see Figure 4) that displays the initial variables, such as final velocity v, time
t, and initial velocity u, making this information accessible to various agents for computation
and problem generation via LLMs. Suppose the blackboard manager displays the values of u, v,
and t. These values are passed to Agent 1, which calculates the value of acceleration a. Next,
this agent transfers the value of a back to the blackboard manager. The blackboard manager
then displays the value of a along with u, v, and t. The values of u, v, and a are then taken up
by Agent 2, which uses them to calculate the value of the distance s.</p>
      <p>Algorithm 2: Generation of identity problems by LeakGAN
1. Initialize any random embedded vector as the start-up element.
2. While the entire identity is not generated do
Begin
a) Transfer the current embedded vector to the discriminator, which performs convolution
and pooling upon this vector.
b) Transfer the pooled vector to the manager module to generate a goal vector.
c) Concatenate the goal vector and the current embedded vector and feed them to the worker
module to predict the next vector.
d) Append the next vector to the current vector. Also, update the current vector with
the next vector.
End-While
3. Return the completely generated trigonometric identity problem by decoding its embedded
vector.</p>
      <p>The detailed procedure of this approach is provided in Algorithm 3. The instance "When
u = 5 m/s, v = 10 m/s, a = 5 m/s², find t and s" is represented by the LLM as: “A ball is thrown
with an initial velocity u = 5 m/s downwards. When the ball reaches a distance s, the velocity
becomes v = 10 m/s. Find the time of traversal of the ball and the distance traversed."</p>
      <p>Algorithm 3: Blackboard approach for problem generation in physics
1. Initialize Agent 1 to compute v = u + at, Agent 2 to compute v² = u² + 2as, Agent 3 to
compute s = ut + (1/2)at², and iteration r = 0.
2. For each agent do in parallel
a) If all but one parameter of the agent's equation are known, evaluate the unknown parameter.
b) Submit it to the blackboard manager for updating.
c) r ← r + 1
End-For
3. The blackboard manager updates the newly recorded parameters on the blackboard.
4. a) If the number of known parameters ≤ the maximum number of parameters and the elapsed
time ≤ the user-defined runtime, loop through Step 2.
b) If the elapsed time &gt; the user-defined runtime, stop.
5. If the number of known parameters = the maximum number of parameters, the blackboard
manager passes the known and unknown parameters on to the LLM to prepare a question.</p>
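      <p>The blackboard loop of Algorithm 3 can be sketched with a shared dictionary standing in for the blackboard (a minimal sketch; the agents fire sequentially rather than in parallel, and the parameter values are the illustrative ones used above):
```python
def blackboard_solve(known, max_iters=10):
    """Agents fire when all but one parameter of their equation is known,
    posting the solved value back to the shared blackboard (a dict)."""
    bb = dict(known)

    def has(*keys):
        return all(k in bb for k in keys)

    def agent1():                        # v = u + a*t
        if has('u', 'a', 't') and not has('v'):
            bb['v'] = bb['u'] + bb['a'] * bb['t']
        elif has('v', 'u', 't') and not has('a') and bb['t'] != 0:
            bb['a'] = (bb['v'] - bb['u']) / bb['t']
        elif has('v', 'u', 'a') and not has('t') and bb['a'] != 0:
            bb['t'] = (bb['v'] - bb['u']) / bb['a']

    def agent2():                        # v**2 = u**2 + 2*a*s
        if has('v', 'u', 'a') and not has('s') and bb['a'] != 0:
            bb['s'] = (bb['v'] ** 2 - bb['u'] ** 2) / (2 * bb['a'])

    def agent3():                        # s = u*t + a*t**2 / 2
        if has('u', 'a', 't') and not has('s'):
            bb['s'] = bb['u'] * bb['t'] + bb['a'] * bb['t'] ** 2 / 2

    for _ in range(max_iters):
        before = dict(bb)
        for agent in (agent1, agent2, agent3):
            agent()
        if bb == before:                 # no agent could add anything: stop
            break
    return bb

# u = 5 m/s, v = 10 m/s, a = 5 m/s**2  ->  t = 1 s, s = 7.5 m
result = blackboard_solve({'u': 5.0, 'v': 10.0, 'a': 5.0})
print(result['t'], result['s'])   # 1.0 7.5
```
Once no agent can post anything new, the fully populated blackboard (known and derived parameters) would be handed to the LLM to be narrated as a word problem, as in Step 5.</p>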
    </sec>
    <sec id="sec-7">
      <title>7. Experiments: Relative Performance Analysis</title>
      <p>The performance of the proposed techniques in comparison to the existing ones is discussed
below.</p>
      <sec id="sec-7-1">
        <title>7.1. Comparative Study of Search Based Methods for Identity Generation</title>
        <p>Table 3 presents a comparative analysis of identity problems generated by SOTA methods and
the proposed approaches, evaluated using two key metrics: i) Mean Diversity per Unit Depth
(MDPUD), defined as the mean diversity of the best node normalized by its depth across 150
problems, and ii) average run-time complexity over 150 instances. For this comparative study,
the problems generated by SOTA methods were back-traced to the root, and the proposed
algorithms generated identities at the same depth within the search tree to ensure consistency.
The findings demonstrate that the proposed approaches outperform traditional methods by
producing more diverse and less predictable problems while achieving greater efficiency in
run-time performance.</p>
        <p>Although the MDPUD for both the random and heuristic-based search methods are similar,
the random search method requires more time to generate identities. This is because the
random search explores a larger search space to find a solution (creative outcome). In contrast,
the heuristic-guided approach limits the depth of the search tree, resulting in lower runtime
complexity.</p>
      </sec>
      <sec id="sec-7-2">
        <title>7.2. Comparative Study of Generative Neural Models for Identity Generation</title>
        <p>Table 4 presents the performance comparison between the LeakGAN model and traditional
generative neural networks in generating 150 trigonometric identities. The metrics used for
comparison include the Bilingual Evaluation Understudy (BLEU) score (a metric commonly employed
to assess the quality of both text and equation generation [2]) and runtime complexity. The
BLEU score evaluates how closely a machine-generated sequence matches reference data by measuring
n-gram overlap between the generated and reference sequences. Common n-grams include
tri-grams (3-grams), quadri-grams (4-grams), and penta-grams (5-grams). To avoid favoring
shorter outputs, a brevity penalty is applied. The final score is the geometric mean of the n-gram
precisions, adjusted by this penalty.</p>
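        <p>The BLEU computation just described can be sketched directly (a minimal single-reference version for illustration; production evaluations typically rely on a library implementation, and the tokenized identity used in the usage line is a hypothetical example):
```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Geometric mean of modified n-gram precisions times a brevity penalty,
    for one tokenized candidate/reference pair."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        total = sum(cand.values())
        if total == 0:
            return 0.0
        # Clipped counts: a candidate n-gram scores at most its reference count.
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        if clipped == 0:
            return 0.0
        precisions.append(clipped / total)

    # Brevity penalty: penalize candidates shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = 'cos ( 2 * t ) = 1 - 2 * sin ( t ) ** 2'.split()
print(bleu(ref, ref))   # 1.0 for an exact match
```
Scoring each generated identity against its nearest reference identity and averaging over the 150 instances yields the BLEU figures reported in Table 4.</p>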
        <p>The results in Table 4 show that the proposed approach significantly outperforms all
traditional networks in terms of BLEU scores. However, the runtime complexity of the LeakGAN
model is higher than that of the traditional networks. This trade-off between improved quality
and increased complexity reflects a balance between performance and efficiency.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>8. Further Scope of Computational Creative Models</title>
      <p>Existing computationally creative models offer numerous advantages but also face significant
limitations. One notable drawback is their inability to effectively contextualize problems within
environmental representations. For instance, Figure 5 (a) and (b) illustrate two such scenarios,
described as follows.</p>
      <p>Scenario 1: A lotus of height x + h stands upright in a pond, where h is above water and
x is submerged. A gust of wind bends the lotus, making its tip touch the water a units away
from its original position. Given h and a, find x. The above scenario can be addressed using the
Pythagorean theorem by equating (x + h)² = x² + a².</p>
      <p>Scenario 2: An electric pole of height H has an electric wire attached to it at the top, with
the lower end initially free. Next, the wire is stretched to H + 1 inches and fixed 2 inches from
the base. Find H. The above scenario can be addressed using the Pythagorean theorem by equating
(H + 1)² = H² + 2².</p>
      <p>A machine capable of understanding the concepts depicted in Scenarios 1 and 2, recognizing
the Pythagorean theorem as the underlying principle across these problems, and tagging this
rule to generate a novel problem in a different scenario would demonstrate remarkable creative
reasoning. This type of learning mechanism is referred to as cross-domain learning [27]. The
afore-said novel problem generation in a different scenario is illustrated in Figure 5 (c) and is
described as follows.</p>
      <p>Developed New Scenario: A coconut tree breaks and falls across a canal after a storm. A
goat, tied to a string held by a man, walks along the trunk to the other side. Determine the
canal’s width.</p>
      <p>However, current CC models fall short of achieving this. This limitation stems from the lack
of perceptual knowledge about real-world entities, such as ponds or electric poles, and the
inability to grasp the physical and spatial relationships [28, 29] inherent in these objects and
their environments. In other words, these models are constrained by their limited capacity
for environmental context, perceptual reasoning [30], and spatial understanding. Addressing
this gap requires the development of more advanced computational frameworks that integrate
environmental context and reasoning capabilities. By incorporating a richer understanding of
the physical world and its representations, future models could enable spatial reasoning and
generate innovative solutions or problems across diverse domains.</p>
    </sec>
    <sec id="sec-9">
      <title>9. Conclusions</title>
      <p>The present study highlights the potential of CC to replicate human-like creative processes
within the scientific domain. By combining classical AI techniques, such as random
experimentation, heuristic search, and inductive learning, with modern approaches like GANs and
LLMs, this work demonstrates innovative methodologies for generating complex chapter-end
problems in mathematics and physics. A comparative performance analysis confirms the
efficacy of the proposed models in synthesizing creativity compared to SOTA methods. While this
research lays a strong foundation for computational creativity in scientific problem generation,
it also emphasizes the need for future advancements that integrate contextual reasoning and
perceptual knowledge to unlock AI’s full potential for cross-domain learning, enabling the
application of acquired knowledge across diverse domains.</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
      <p>[19] B. T. Tabuguia, Hypergeometric-type sequences, Journal of Symbolic Computation 125
(2024) 102328.
[20] I. Briggs, P. Panchekha, Synthesizing mathematical identities with e-graphs, in: Proceedings
of the 1st ACM SIGPLAN International Symposium on E-Graph Research, Applications,
Practices, and Human-factors, 2022, pp. 1–6.
[21] A. Graves, Long short-term memory, Supervised Sequence Labelling with Recurrent
Neural Networks (2012) 37–45.
[22] S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, S. Bengio, Generating
sentences from a continuous space, arXiv preprint arXiv:1511.06349 (2015).
[23] L. Yu, W. Zhang, J. Wang, Y. Yu, SeqGAN: Sequence generative adversarial nets with policy
gradient, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[24] K. Lin, D. Li, X. He, Z. Zhang, M.-T. Sun, Adversarial ranking for language generation,
Advances in Neural Information Processing Systems 30 (2017).
[25] M. Imran, N. Almusharraf, Google Gemini as a next generation AI educational tool: a
review of emerging educational technology, Smart Learning Environments 11 (2024) 22.
[26] M. Masalkhi, J. Ong, E. Waisberg, N. Zaman, P. Sarker, A. G. Lee, A. Tavakkoli, A
side-by-side evaluation of Llama 2 by Meta with ChatGPT and its application in ophthalmology,
Eye (2024) 1–4.
[27] H. Xu, Y. Liu, L. Liu, S. Zhi, S. Sun, T. Liu, M. Cheng, Step-wise distribution alignment
guided style prompt tuning for source-free cross-domain few-shot learning, arXiv preprint
arXiv:2411.10070 (2024).
[28] S. Ghosh, A. Konar, A. K. Nagar, Decoding subjective creativity skill from visuo-spatial
reasoning ability using capsule graph neural network, in: 2021 International Joint Conference
on Neural Networks (IJCNN), IEEE, 2021, pp. 1–8.
[29] S. Ghosh, A. Konar, A. K. Nagar, Cognitive assessment of scientific creative-skill by
brain-connectivity analysis using graph convolutional-interval type-2 fuzzy network, IEEE
Transactions on Cognitive and Developmental Systems (2024).
[30] M. Laha, S. Ghosh, A. Konar, Exploration of depth perception in human binocular vision
using EEG-based neuro-fuzzy classifier, in: 2023 8th International Conference on Computers
and Devices for Communication (CODEC), IEEE, 2023, pp. 1–2.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Mateja</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. Heinzl,</surname>
          </string-name>
          <article-title>Towards machine learning as an enabler of computational creativity</article-title>
          ,
          <source>IEEE Transactions on Artificial Intelligence</source>
          <volume>2</volume>
          (
          <year>2021</year>
          )
          <fpage>460</fpage>
          -
          <lpage>475</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Konar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Nagar</surname>
          </string-name>
          ,
          <article-title>Computational creativity by generative adversial network with leaked information</article-title>
          , in: 2024
          <source>International Joint Conference on Neural Networks (IJCNN)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Liu</surname>
          </string-name>
          , Q.-L. Han,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>A brief overview of chatgpt: The history, status quo and potential future development</article-title>
          ,
          <source>IEEE/CAA Journal of Automatica Sinica</source>
          <volume>10</volume>
          (
          <year>2023</year>
          )
          <fpage>1122</fpage>
          -
          <lpage>1136</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Porter</surname>
          </string-name>
          , E. Machery,
          <article-title>Ai-generated poetry is indistinguishable from human-written poetry and is rated more favorably</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <fpage>26133</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L. R.</given-names>
            <surname>Varshney</surname>
          </string-name>
          ,
          <article-title>Mathematical limit theorems for computational creativity</article-title>
          ,
          <source>IBM Journal of Research and Development</source>
          <volume>63</volume>
          (
          <year>2019</year>
          )
          <fpage>2</fpage>
          -
          <lpage>1</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Veale</surname>
          </string-name>
          , R. Pérez y Pérez,
          <article-title>Leaps and bounds: An introduction to the field of computational creativity</article-title>
          ,
          <source>New Generation Computing</source>
          <volume>38</volume>
          (
          <year>2020</year>
          )
          <fpage>551</fpage>
          -
          <lpage>563</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Colton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Wiggins</surname>
          </string-name>
          ,
          <article-title>Computational creativity: The final frontier?</article-title>
          ,
          <source>in: ECAI 2012</source>
          , IOS Press,
          <year>2012</year>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>26</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pouget-Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Warde-Farley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ozair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <article-title>Generative adversarial networks</article-title>
          ,
          <source>Communications of the ACM</source>
          <volume>63</volume>
          (
          <year>2020</year>
          )
          <fpage>139</fpage>
          -
          <lpage>144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Kaufman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Sternberg</surname>
          </string-name>
          ,
          <source>The Cambridge Handbook of Creativity</source>
          , Cambridge University Press,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Konar</surname>
          </string-name>
          ,
          <source>Artificial intelligence and soft computing: behavioral and cognitive modeling of the human brain</source>
          , CRC Press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Konar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Nagar</surname>
          </string-name>
          ,
          <article-title>Identification of brain activation regions in inductive learning based scientific creativity test</article-title>
          ,
          <source>in: 2018 IEEE Symposium Series on Computational Intelligence (SSCI)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>950</fpage>
          -
          <lpage>957</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>AlMana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aksoy</surname>
          </string-name>
          ,
          <article-title>An overview of inductive learning algorithms</article-title>
          ,
          <source>International Journal of Computer Applications</source>
          <volume>88</volume>
          (
          <year>2014</year>
          )
          <fpage>20</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <article-title>The appendix is available on Google Drive</article-title>
          ,
          <year>2024</year>
          . URL: https://drive.google.com/file/d/1OsEUWFsEJRj87k0SQZNTWa7Okz5kpIp7/view?usp=drive_link.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Nii</surname>
          </string-name>
          ,
          <article-title>The blackboard model of problem solving and the evolution of blackboard architectures</article-title>
          ,
          <source>AI Magazine</source>
          <volume>7</volume>
          (
          <year>1986</year>
          )
          <fpage>38</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>O.</given-names>
            <surname>Polozov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>O'Rourke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gulwani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Popović</surname>
          </string-name>
          ,
          <article-title>Personalized mathematical word problem generation</article-title>
          ,
          <source>in: Twenty-Fourth International Joint Conference on Artificial Intelligence</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Papasalouros</surname>
          </string-name>
          ,
          <article-title>Automatic exercise generation in euclidean geometry</article-title>
          ,
          <source>in: Artificial Intelligence Applications and Innovations: 9th IFIP WG 12.5 International Conference, AIAI 2013, Paphos, Cyprus, September 30-October 2, 2013, Proceedings 9</source>
          , Springer,
          <year>2013</year>
          , pp.
          <fpage>141</fpage>
          -
          <lpage>150</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pearce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>McKinney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Alvin</surname>
          </string-name>
          ,
          <article-title>Query-based generation of trigonometric identity problems and solutions</article-title>
          ,
          <source>in: The Thirty-Third International Flairs Conference</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Learning to prove trigonometric identities</article-title>
          , arXiv preprint.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>