<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Six Thinking Chatbots: A Creativity Technique deployed via a Large Language Model</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alexander</forename><surname>Rachmann</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">CBS International Business School</orgName>
								<address>
									<addrLine>Hammer Landstraße 89</addrLine>
									<postCode>41460</postCode>
									<settlement>Neuss</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Six Thinking Chatbots: A Creativity Technique deployed via a Large Language Model</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">7062951488A847399198AA16313788A7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:07+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>ai supported creativity</term>
					<term>six thinking hats</term>
					<term>large language model</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>AI-supported creativity is a hotly debated topic due to the rise of LLMs. This paper explores a prototypical implementation of the classic Six Thinking Hats method via an LLM, using the OpenAI API. The results suggest that such AI support is useful in a creative process, but will not replace human creativity.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Large Language Models (LLMs), generative mathematical models of the statistical distribution of tokens in a vast public corpus of human-generated texts <ref type="bibr" target="#b0">[1]</ref>, are very prominent due to the success of ChatGPT.</p><p>The current Gartner Hype Cycle places LLMs at their zenith <ref type="bibr" target="#b1">[2]</ref>. Some practical disadvantages and weaknesses of the technology are already known, such as hallucinations <ref type="bibr" target="#b2">[3]</ref> or copyright violations <ref type="bibr" target="#b3">[4]</ref>. It is currently unclear when and to what extent disillusionment will set in and at what level productivity will level out. What is clear, however, is that this technology is predicted to have a major impact on creative work and knowledge work <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b6">[7]</ref>. One research line in this direction is AI-driven creativity / AI-driven inspiration / AI-powered creativity <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b7">[8]</ref>, <ref type="bibr" target="#b8">[9]</ref>, <ref type="bibr" target="#b9">[10]</ref>.</p><p>This paper describes a prototypical implementation of the Six Thinking Hats method via an LLM (Six Thinking Chatbots). It shows what a combination of a creativity technique and a large language model might look like. Section 2 reviews related work. Section 3 describes the design assumptions, the program flow, and the corresponding prompts. In Section 4, the results of the prototype are evaluated. Section 5 concludes the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related works</head><p>Related work from the requirements-engineering perspective, such as <ref type="bibr" target="#b10">[11]</ref>, describes the use of LLMs in each major stage of requirements engineering according to <ref type="bibr" target="#b11">[12]</ref>. For the elicitation phase, a single-shot prompt is used; this is comparable to the web-browser prompt in section 4.2 of this paper. This perspective on the topic is not yet well explored; first experiments such as <ref type="bibr" target="#b12">[13]</ref> are being carried out.</p><p>Related work on LLM usage is in full bloom. Amid the ChatGPT hype, prompting strategies such as <ref type="bibr" target="#b13">[14]</ref>, <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b16">[17]</ref> provide standardized approaches to prompting. These approaches address the technical side of prompting, such as "write clear instructions" or "provide reference texts" <ref type="bibr" target="#b16">[17]</ref>. Though very useful, they often do not tackle the complex situations of human conversations.</p><p>The main new idea of this paper is the simulation of a human workshop, using the LLM in multiple, programmed steps.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Design</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Design Considerations for a programmed Six Thinking Hats workshop</head><p>The Six Thinking Hats methodology is well known in requirements engineering. Although it entails several ways to use the different hats<ref type="foot" target="#foot_0">2</ref>, this paper explores only one application, i.e. the selection of one alternative from a group of three.</p><p>A practical workshop needs an introduction by the moderator (blue hat). With the OpenAI API, this type of introduction is not necessary, or rather it is covered by providing appropriate information before the prompt: before the actual prompt, a system content is given that contains the content of an introduction to a classic workshop. The order of colors is white, green, yellow, black, red, and blue.</p><p>To limit the necessary computing power, the contribution of each hat is limited to three bullet points, without an explicit limit on words or characters.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Program flow</head><p>Python<ref type="foot" target="#foot_1">3</ref> was used as the implementation environment. A simple object-oriented design of the prototype was chosen. The source code is available at https://GitHub.com/rachmann-alexander/sixthinkingchatbots.</p><p>ChatGPT-4 was used as the LLM. In general, ChatGPT-4 is considered the market leader and is therefore suitable as an implementation tool for the prototype. It is unclear whether other LLMs would show a substantially different result. Choosing ChatGPT entails the use of the corresponding API, which in turn entails certain prompt forms.</p><p>Six Thinking Chatbots follows this flow:</p><p>1. The user delivers a problem statement.</p><p>2. The messages for white, green, yellow, black, and red are computed. The message to each hat follows this structure:</p><p>• System content gives the LLM the context in which to interpret the prompt. System content is a required parameter of the ChatGPT API. • User content, also a required parameter of the ChatGPT API, is subdivided into two parts: the problem statement, a description of the topic to be discussed by the hats, and the task, a description of what the hat should do.</p><p>The messages are submitted to the ChatGPT API and the responses are received. 3. The prompt for the blue hat is computed and submitted to the ChatGPT API. The prompt to the blue hat follows the same structure as above, but incorporates the responses of the other hats. The response for the blue hat is received. 4. All prompts and responses are exported to a Markdown-formatted document.</p></div>
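The message structure described above can be sketched in a few lines of Python. This is a minimal illustration, not the repository's code: the function name build_hat_message and the constant SYSTEM_PREFIX are hypothetical, and the commented-out submission step assumes the openai Python package and an OPENAI_API_KEY in the environment.

```python
# Hypothetical sketch of the per-hat message structure (not the repository's code).
SYSTEM_PREFIX = "You are a participant of a workshop. "

def build_hat_message(system_content, problem_statement, task):
    """Assemble one hat's message: the system content (context) plus the
    user content, which combines the problem statement and the task."""
    return [
        {"role": "system", "content": SYSTEM_PREFIX + system_content},
        {"role": "user", "content": problem_statement + "\n" + task},
    ]

# Submitting the message would then look roughly like this
# (assumes the openai package and a valid API key):
#   client = openai.OpenAI()
#   response = client.chat.completions.create(model="gpt-4", messages=msgs)
#   answer = response.choices[0].message.content
```

Building the message list separately from the API call keeps the prompt structure testable without network access.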
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Prompts to the hats</head><p>OpenAI expects the user to give context to a prompt, which is given in the system content. This context gives the LLM a better understanding of what the task should be. Each hat receives a different system content, depending on its nature as defined in the Six Thinking Hats methodology. Table 1 lists the system content of each hat. The prompts of all hats, except for the blue hat, contain a problem statement and a generic task. Table <ref type="table">2</ref> gives the content for the white hat, table <ref type="table">3</ref> for the blue hat.</p><p>The blue hat holds the moderator role and therefore has the task of summarizing the outputs of the other hats. The prompt to the blue hat is accordingly more complex: first the context is given, then the task itself, and after that all other contributions. Table <ref type="table">4</ref> gives an example of a response to the blue hat.</p></div>
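The assembly of the blue hat's user content (task, then problem statement, then the other hats' contributions) can be sketched as follows. The function name build_blue_prompt and the exact contribution labels are illustrative assumptions, not the repository's wording.

```python
# Hypothetical sketch of how the blue hat's prompt is assembled.
def build_blue_prompt(problem_statement, task, contributions):
    """Concatenate the blue hat's user content: the task, the problem
    statement, and the contributions of all other hats, in that order.
    contributions maps a hat name (e.g. "white") to its response text."""
    parts = [
        task,
        "The problem statement of the workshop is as follows: " + problem_statement,
    ]
    for hat, text in contributions.items():
        parts.append("Contribution from the " + hat + " hat: " + text)
    return "\n\n".join(parts)
```

The resulting string is then sent as the user content of one further API call, together with the blue hat's moderator system content.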
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>System contents of the hats. All system contents start with "You are a participant of a workshop". The contents in the table complete this prefix.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Hat</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>System content White hat</head><p>Your decision making is fact based and your argumentation is very structured.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Green hat</head><p>Your decision making is based on the possibilities of innovations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Yellow hat</head><p>You are an optimist.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Black hat</head><p>You are a pessimist.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Red hat</head><p>Your decision making is emotion based.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Blue hat</head><p>You are the moderator of a workshop, using the Six Thinking Hats methodology. You are neutral to the contributions from the participants and value each contribution.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2</head><p>Example of a prompt, structured by system content (example of the White Hat), problem statement (example according to "Sales Channel" as used in the prototype) and task (generic for all Hats except Blue Hat).</p><p>System content: You are a participant of a workshop. Your decision making is fact based and your argumentation is very structured.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Prompt (Problem statement):</head><p>The goal of the workshop is to evaluate three alternatives to a problem. The problem is: The sales process on our companies website does not work very well. Management wants us to change the sales process to one of the three alternatives: First, an old-school chatbot with a beforehand written structure. Second, a chatbot using a large language model, without a pre-defined structure.</p><p>Third, an old-school contact form. Prompt (Task): Describe your point of view to this problem, using only three bullet points.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 3</head><p>Example of the prompt to the blue hat. The problem statement is the same as in the prompts to the other hats and is therefore omitted ("…") in the table. The prompt contains the contributions from all other hats; these are quite long and therefore also omitted in the table. All contents are available at the GitHub repository.</p><p>System content: You are the moderator of a workshop, using the Six Thinking Hats methodology. You are neutral to the contributions from the participants and value each contribution. Prompt (Task): Summarize the contributions of the participants, as listed beneath. Think about the problem statement and decide, based on the contributions of the participants, which alternative should be chosen.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Prompt (Problem statement):</head><p>The problem statement of the workshop is as follows: … Prompt (Contribution White): Contribution from a fact-oriented participant: … Prompt (Other Contributions): Contribution from …hat: …</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 4</head><p>Example of the response from blue hat.</p><p>Response from the LLM to the Blue Hat Prompt After analyzing all of these contributions, it appears that the majority lean towards investing in a large language model chatbot for the company. Those in favor of this option highlighted several benefits including its adaptability to changing customer needs, the potential for learning and improvement over time, and the ability to provide tailored, efficient service on the fly. They also lauded its innovative approach and the potential for the drastically improved client interaction it could offer. It's recommended to consider the concerns raised by the pessimist too, ensuring to mitigate potential issues like consistency, the maintenance of brand tone, and avoiding technical glitches. Availability of resources, cost implications and consideration for the long-term impact are also factors that should be taken into account when implementing a large language model chatbot solution.</p><p>The participants showed less enthusiasm for the old-school chatbot and the contact form, identifying limitations like lack of flexibility and evolution, impersonal service, slow turnaround, and inability to handle complex inquiries. These options could result in a process that may feel outdated and inefficient, potentially deterring customers rather than facilitating sales and improving the customer experience. Therefore, based on these contributions, the large language model chatbot stands out as the most promising alternative.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Similarity between exports (Q4)</head><p>To answer Q4, five texts were generated. To compute how similar these texts are, one can use the shingle algorithm to compute the Jaccard similarity. The Jaccard similarity describes how similar two documents are to each other: a similarity of 1 means the two documents are identical, a 0 means the documents have no intersection. There is no established threshold for the Jaccard similarity above which two documents count as the same or nearly the same <ref type="bibr" target="#b19">[20]</ref>. The average similarity of the different texts is 0.608. Considering that there is no common understanding of how similar this is, one may conclude that the outputs are always distinctly different, yet always alike.</p></div>
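The similarity measure used above can be reproduced with a short, generic Python sketch of character k-shingling and Jaccard similarity. This is not the exact evaluation script; in particular, the shingle length k=4 is an assumption.

```python
# Generic sketch of shingle-based Jaccard similarity (k=4 is an assumption).
def shingles(text, k=4):
    """Return the set of overlapping character k-shingles of a text."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b, k=4):
    """Jaccard similarity of the k-shingle sets of two texts: the size of
    the intersection divided by the size of the union. 1.0 means the
    shingle sets are identical, 0.0 means they have no overlap."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa and not sb:
        return 1.0
    return len(sa.intersection(sb)) / len(sa.union(sb))
```

Averaging jaccard over all pairs of the five exported texts yields the reported mean similarity.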
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">Relation to a real world workshop (Q5)</head><p>The answer to Q5 is not as easy to compute as the other answers. First, the quality of the outcomes of a workshop depends highly on the participants, their motivation, the moderator, the conduct of the workshop, etc. Whether an in-person workshop would provide different or better outcomes is not easy to determine. However, the arguments made by the chatbots are mostly common knowledge and practice; it would be no surprise if non-experts had elaborated these outcomes. The chosen application scenario was generic, and it is also unclear whether the same results would arise with very specific use cases or domain experts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>This paper introduces a prototypical implementation of the Six Thinking Hats method, called Six Thinking Chatbots. The source code is available on GitHub, as are all evaluation cases. The experiment showed that the Six Thinking Chatbots deliver distinctly different results from regular prompting (Q1). The different hats play out their specific roles (Q2 and Q3). The script always delivers distinctly different outputs (Q4). It is unclear whether, and in what way, the responses of the chatbots differ from those of humans (Q5). However, the author would use the script only to support human creativity (e.g. as preparation for a Six Thinking Hats workshop), not as a substitute for human creativity.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>In: D. Mendez, A. Moreira, J. Horkoff, T. Weyer, M. Daneva, M. Unterkalmsteiner, S. Bühne, J. Hehn, B. Penzenstadler, N. Condori-Fernández, O. Dieste, R. Guizzardi, K. M. Habibullah, A. Perini, A. Susi, S. Abualhaija, C. Arora, D. Dell'Anna, A. Ferrari, S. Ghanavati, F. Dalpiaz, J. Steghöfer, A. Rachmann, J. Gulden, A. Müller, M. Beck, D. Birkmeier, A. Herrmann, P. Mennig, K. Schneider. Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters &amp; Tools Track, and Education and Training Track. Co-located with REFSQ 2024. Winterthur, Switzerland, April 8, 2024. Email: a.rachmann@cbs.de (A. Rachmann). Website: https://cbs.de/ (A. Rachmann). ORCID: 0000-0001-8217-7149 (A. Rachmann)</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_0">E.g. generating initial ideas (hat sequence: blue, white, green, blue), identifying solutions (blue, white, green, yellow, black, red, blue), etc. See also <ref type="bibr" target="#b17">[18]</ref>.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_1">In the opinion of the author, one could have chosen almost any other modern programming language, as no exclusive Python-specific capabilities were used.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_2">The prompt and its response are available on GitHub, at https://GitHub.com/rachmann-alexander/sixthinkingchatbots/blob/main/opt/StrategyQA</note>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Evaluation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Evaluation question and methods</head><p>One can evaluate the Six Thinking Chatbots in terms of:</p><p>• Q1: Would a regular prompt deliver the same results with less effort? • Q2: Are the arguments made by the colored hats disjoint? Did the hats take different positions? • Q3: Did the blue hat summarize the contributions from the other chatbots correctly? • Q4: How "creative" is the script? How much do the responses differ from each other if the script runs several times? • Q5: Are the contributions valid, do they represent the state of the art, and are they comparable to a workshop with people?</p><p>To answer Q1 and Q4, one needs to generate material for comparison. As a comparison for Q1, the StrategyQA prompting style was chosen <ref type="bibr" target="#b18">[19]</ref>. The prompt is structured in a context part (see table <ref type="table">5</ref>), which informs the LLM about the workshop. The second part is the task itself. The third part defines the output format<ref type="foot" target="#foot_2">4</ref>. This last part was not necessary in Six Thinking Chatbots, because the export is handled by the script.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 5</head><p>Prompt to generate comparison material. This prompt is not deployed via software, so that the code repository is not equipped with code that would be executed only once. All contents are available at the GitHub repository. The web-browser interface of ChatGPT was used to execute this prompt.</p><p>The goal of the workshop is to evaluate three alternatives to a problem. The problem is: The sales process on our companies website does not work very well. Management wants us to change the sales process to one of the three alternatives: First, an old-school chatbot with a beforehand written structure. Second, a chatbot using a large language model, without a pre-defined structure. Third, an old-school contact form. Evaluate which solution one would prefer. Give at most 15 arguments. After listing the arguments, write a conclusion including a selection on a solution. Use markdown syntax.</p><p>The response to this prompt is structured along the three alternatives (table <ref type="table">6</ref>); each alternative is structured into a pro and a contra section with three to five arguments each.</p><p>To answer Q4, one needs to compute several results. After developing the Six Thinking Chatbots script (and thereby generating several preliminary results), five results were computed. All five results are available on GitHub. Only one text (29.12.2023, 13.06.md in the GitHub repository) was used as the standard analysis material in tables 7 and 8.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Semantic cohesion of the Responses (Q1-Q3)</head><p>To answer Q1 and Q2, table 7, page 8, shows the arguments made by the chatbots. One sees that the arguments are mostly disjoint; this holds within the results of the colored hats as well as between the Six Thinking Chatbots result and the web-browser result. The chatbots contributed different aspects and therefore fulfilled the expected outcome. To answer Q3, see table <ref type="table">8</ref>. The blue hat voted for the AI chatbot, which represents the majority of the votes of the other chatbots.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Talking about large language models</title>
		<author>
			<persName><forename type="first">M</forename><surname>Shanahan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">67</biblScope>
			<biblScope unit="page" from="68" to="79" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">What&apos;s new in artificial intelligence from the 2023 gartner hype cycle</title>
		<author>
			<persName><forename type="first">L</forename><surname>Perri</surname></persName>
		</author>
		<ptr target="https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2023-gartner-hype-cycle" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Hua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Jia</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2309.06794</idno>
		<title level="m">Cognitive mirage: A review of hallucinations in large language models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Karamolegkou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Søgaard</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2310.13771</idno>
		<title level="m">Copyright violations and large language models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">When and how artificial intelligence augments employee creativity</title>
		<author>
			<persName><forename type="first">N</forename><surname>Jia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Liao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Academy of Management Journal</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Conceiving creativity and learning in a world of artificial intelligence: A thinking model</title>
		<author>
			<persName><forename type="first">E</forename><surname>Creely</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Creative provocations: Speculations on the future of creativity, technology &amp; learning</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="35" to="50" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The intersection of human and artificial creativity</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">H</forename><surname>Cropley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">E</forename><surname>Medeiros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Damadzic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Creative provocations: Speculations on the future of creativity, technology &amp; learning</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="19" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Artificial intelligence in software requirements engineering: State-of-the-art</title>
		<author>
			<persName><forename type="first">K</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Reddivari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Reddivari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 23rd International Conference on Information Reuse and Integration for Data Science (IRI), IEEE</title>
				<imprint>
			<date type="published" when="2022">2022. 2022</date>
			<biblScope unit="page" from="106" to="111" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Artistic user expressions in ai-powered creativity support tools</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J Y</forename><surname>Chung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="4" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Fashionq: an ai-driven creativity support tool for facilitating ideation in fashion design</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Jeon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">C</forename><surname>Shih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Han</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2021 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Arora</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Grundy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdelrazek</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2310.13976</idno>
		<title level="m">Advancing requirements engineering through generative ai: Assessing the role of llms</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Lamsweerde</surname></persName>
		</author>
		<title level="m">Systematic requirements engineering-from system goals to uml models to software specifications</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Brand</surname></persName>
		</author>
		<ptr target="https://www.linkedin.com/posts/anja-brand_selbstversuch-covid-chatgpt-activity-7136342918503489537-W0Jm/" />
		<title level="m">Linkedin post: Was wurde denn nun eigentlich aus dem vortrag in leipzig zu meinem selbstversuch?</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Why johnny can&apos;t prompt: how non-ai experts try (and fail) to design llm prompts</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zamfirescu-Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">Y</forename><surname>Wong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hartmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</title>
				<meeting>the 2023 CHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="21" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">Z</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2310.04959</idno>
		<title level="m">Towards better chain-of-thought prompting strategies: A survey</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Jeong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2311.03754</idno>
		<title level="m">Which is better? exploring prompting strategy for llm-based metrics</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><surname>OpenAI</surname></persName>
		</author>
		<ptr target="https://platform.openai.com/docs/guides/prompt-engineering" />
		<title level="m">Prompt engineering</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">De</forename><surname>Bono</surname></persName>
		</author>
		<title level="m">Six Thinking Hats: The multi-million bestselling guide to running better meetings and making faster decisions</title>
				<imprint>
			<publisher>Penguin UK</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies</title>
		<author>
			<persName><forename type="first">M</forename><surname>Geva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Khashabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Segal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Khot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Roth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Berant</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Transactions of the Association for Computational Linguistics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="346" to="361" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Information Retrieval and Search Engines</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Aggarwal</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2022">2022</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="257" to="302" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
