<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards Automatic Generation of iStar Models Using ChatGPT</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Yoshitake</forename><surname>Hirabayashi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Nanzan University</orgName>
								<address>
									<postCode>466-8673</postCode>
									<settlement>Nagoya</settlement>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Motoshi</forename><surname>Saeki</surname></persName>
							<email>saekimot@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Nanzan University</orgName>
								<address>
									<postCode>466-8673</postCode>
									<settlement>Nagoya</settlement>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards Automatic Generation of iStar Models Using ChatGPT</title>
					</analytic>
					<monogr>
						<meeting>The 17th International i* Workshop<address><settlement>Pittsburgh</settlement><region>Pennsylvania</region><country key="US">USA</country></address></meeting>
						<imprint>
							<date type="published" when="2024-10-28">October 28, 2024</date>
						</imprint>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E2AA762CCC9A723DD6E0C1C244F029DE</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:03+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Goal Model Generation</term>
					<term>Bad smell</term>
					<term>ChatGPT</term>
					<term>Self-Instruct</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The goal-oriented requirements analysis method is useful in requirements analysis; however, due to the lack of guidelines and methodologies, generating goal models can be challenging, especially for beginners. Therefore, this research aims to support the creation of models in iStar, one of the goal-oriented requirements analysis methods, using ChatGPT. Our approach consists of two steps: generating iStar models without syntactic constraint violations and modifying the generated iStar models. For the generation of iStar models without syntactic constraint violations, efforts were made in designing input prompts, describing syntactic rules, and refining the generation procedure. In the modification of iStar models, the focus was placed on the "Bad Smells" in iStar models, as defined by us; correction candidates were categorized into omissions, ambiguities, and inconsistencies, and modifications were made by referencing specific examples. As a result, our technique successfully generated iStar models without syntactic constraint violations and improved the quality of the iStar models through modification. However, challenges remain, such as the overly generic nature of the generated iStar models, the inability to represent conflicts between elements, and insufficient detail.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>One of the requirements analysis methods aimed at requirements elicitation is Goal-Oriented Requirements Analysis (GORA). This method takes customers' requirements as goals and iteratively decomposes and elaborates them to derive system or software requirements. In GORA, the elicited goals and their relationships are represented as a graph called a goal model. However, due to the lack of guidelines and methodologies for goal analysis and refinement, generating models is extremely challenging for beginners, and there is a risk of producing low-quality goal models. In recent years, research leveraging artificial intelligence and generative AI technologies to assist beginners in GORA has increased. Arruda et al. conducted research using a chatbot that employs natural language processing (NLP) as an assistant for KAOS modeling to support novice requirements engineers in eliciting requirements <ref type="bibr" target="#b0">[1]</ref>. Marques et al. investigated how large language models (LLMs), particularly ChatGPT, can be utilized in software engineering, focusing on their role in requirements elicitation, documentation, and validation, based on several studies <ref type="bibr" target="#b1">[2]</ref>. Chen et al. examined the extent of ChatGPT's knowledge of goal-oriented requirements analysis methods and its ability to generate GRL <ref type="bibr" target="#b2">[3]</ref>. Nakagawa et al. proposed a semi-automatic goal generation process that utilizes LLMs and the MAPE-K loop in the goal model generation process; this approach uses LLMs as domain experts <ref type="bibr" target="#b3">[4]</ref>.</p><p>As far as we know, the quality of iStar models generated by AI remains low in all existing studies. 
Therefore, this research aims to automatically generate high-quality iStar models using ChatGPT-4o <ref type="bibr" target="#b4">[5]</ref>, focusing specifically on iStar within the context of GORA.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Approach</head><p>As highlighted in the research by Chen et al. <ref type="bibr" target="#b2">[3]</ref>, while generative AI possesses a certain level of knowledge related to goal-oriented requirements analysis, there is a risk that the AI may generate models with syntactic constraint violations and/or produce models that, while syntactically correct, are still of low quality. To address these problems, we take an approach in which ChatGPT first generates an iStar model, and we then detect and correct low-quality or erroneous parts in the generated model. Accordingly, in this paper, we adopt the following two steps when modeling with iStar using ChatGPT.</p><p>1. The generated iStar model should avoid as many syntactic constraint violations as possible.</p><p>To achieve this, we develop prompt designs that include the constraints in the prompt for generating an iStar model. We call the resulting prompts initial input prompts in this paper. 2. Although an iStar model generated using our initial input prompt does not contain syntactic constraint violations, it may contain lower-quality parts such as ambiguities, omissions, and inconsistencies. We use the concept of "Bad Smells" <ref type="bibr" target="#b5">[6]</ref> to detect and modify them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Initial input prompt</head><p>Based on the research by Chen et al. <ref type="bibr" target="#b2">[3]</ref>, the basic input prompts are defined by dividing them into a single sentence, a domain paragraph, and a syntactic explanation. In this paper, in addition to these three prompts, we input a context, which is generally considered to improve the quality of outputs from ChatGPT, along with a text file containing the syntactic definitions of iStar 2.0. The syntactic definitions describe the constraints on which links between iStar elements are permissible, as well as the constraints defined in piStar <ref type="bibr" target="#b6">[7]</ref>. The constraints defined in piStar were described together with incorrect and correct examples to make them easier for ChatGPT to understand. Therefore, the initial prompt included 1) context, 2) single sentence, 3) domain paragraph, 4) syntactic explanation, 5) syntactic constraints, 6) incorrect examples, and 7) correct examples. The actual prompt is explained in Section 3.1. The model generation process starts with entering the initial prompt, followed by sequentially providing instructions 1 through 4 to ChatGPT.</p><p>1. Create each actor and one goal for each actor.</p><p>2. Generate the elements (goals, tasks, resources) necessary to achieve the goals created in step 1, and connect them with appropriate links. 3. Identify the required "qualities" from the initial input prompt, create them for the appropriate actors, and link the elements related to those qualities. 4. For each element, if there are dependencies in which other actors need to be involved, describe those dependencies.</p><p>The reason for adopting this procedure is that if the entire iStar model is generated directly from the input prompt, the constraints of the JSON format require the elements to be described first, with dependencies and links then created based on those elements. This can lead to violations of the syntactic rules. Additionally, if the entire iStar model is generated first and the syntactic rules are then input to correct violations, the corrections often cause other parts of the model to violate the rules, leading to an endless loop of corrections. It is also worth noting that the quality of the iStar model remains consistent regardless of the method used, whether it follows the step-by-step procedure, generates the entire model at once, or corrects the syntax after generation.</p></div>
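The kind of link constraint supplied to ChatGPT can be illustrated with a small checker. The following is a minimal sketch assuming a simplified, piStar-like JSON layout; the field names (`actors`, `nodes`, `links`) and the specific rule checked are illustrative assumptions, not the exact piStar schema or constraint set. It enforces one example constraint: refinement links may only connect intentional elements that actually exist in the model, never actors.

```python
import json

# Hypothetical, simplified iStar model in a piStar-like JSON layout.
# The field names here are illustrative assumptions, not the exact
# piStar schema.
MODEL = json.loads("""
{
  "actors": [
    {"id": "a1", "name": "Customer",
     "nodes": [{"id": "g1", "type": "goal", "name": "Order Liquor"},
               {"id": "t1", "type": "task", "name": "Place Online Order"}]}
  ],
  "links": [
    {"type": "and-refinement", "source": "t1", "target": "g1"}
  ]
}
""")

def check_refinements(model):
    """One example of a syntactic constraint on permissible links:
    refinement links must connect intentional elements (goal/task),
    never actors, and both endpoints must exist in the model."""
    elements = {n["id"]: n["type"]
                for a in model["actors"] for n in a["nodes"]}
    actor_ids = {a["id"] for a in model["actors"]}
    errors = []
    for link in model["links"]:
        if "refinement" not in link["type"]:
            continue
        for end in (link["source"], link["target"]):
            if end in actor_ids:
                errors.append(f"refinement link touches actor {end}")
            elif end not in elements:
                errors.append(f"unknown element {end}")
    return errors

print(check_refinements(MODEL))  # → [] (no violations in this toy model)
```

A checker of this shape can be run over the JSON that ChatGPT returns after each of the four instructions, rather than only once at the end, which matches the step-by-step rationale above.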
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Modification of the iStar model output by ChatGPT</head><p>The iStar models generated by ChatGPT, while not syntactically incorrect, may still be of low quality. In particular, the quality of the content within the elements is influenced by the input prompt, which can result in omissions, ambiguities, and inconsistencies. Therefore, this paper focuses on the Semantic Bad Smells, as defined by us, among the various Bad Smells <ref type="bibr" target="#b5">[6]</ref>. Semantic Bad Smells refer to issues related to the content descriptions of iStar elements, and we have defined five types of such Bad Smells. Detection of each Semantic Bad Smell was previously attempted by focusing on the similarity of element descriptions, but the accuracy was not very high. In this paper, we therefore revisited the concept of Semantic Bad Smells, categorizing them into those that cause omissions, ambiguities, and inconsistencies, and added new types of Semantic Bad Smells. Table <ref type="table" target="#tab_0">1</ref> shows the Semantic Bad Smells used in this paper.</p><p>For the detection and modification of Bad Smells, we adopted a method in which ChatGPT generates its own modification prompts, based on the Self-Instruct approach by Wang et al. <ref type="bibr" target="#b7">[8]</ref>. This approach was chosen because ChatGPT may be able to identify areas that humans might find difficult to judge as Semantic Bad Smells. Additionally, it was expected that this method could detect additional Bad Smells beyond those defined by us. In the Self-Instruct method used in this paper, the input consists of a prompt for self-generation and a CSV file. In the CSV file, each input example is an iStar model (JSON) containing a Bad Smell, and each output example includes the locations of the Bad Smell, the reasons for it, and the corrected iStar model. The following explains the content of each description. 
The input prompt is: "The input CSV file contains instructions (prompts), inputs, and outputs for detecting bad smells in iStar 2.0. Refer to this CSV file to self-generate prompts for bad smell detection with iStar 2.0 and modify the model accordingly." The instructions are sentences directing how to modify the input examples; the input examples are iStar models containing bad smells; and the output examples are the iStar models with the identified bad smell locations, the reasons for them, and the modifications. ChatGPT generates new instructions and output examples using Self-Instruct, and these are then used to perform instruction tuning on ChatGPT.</p></div>
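The seed CSV described above can be sketched as follows. This is a minimal illustration assuming three columns (instruction, input, output); the concrete example rows, JSON field names, and element names are invented for illustration and are not taken from the paper's actual seed data.

```python
import csv
import io
import json

# Hypothetical seed pair: an iStar model (JSON) with an omission,
# and its corrected counterpart. Field names are illustrative.
smelly = json.dumps({
    "actors": [{"id": "a1", "name": "Customer",
                "nodes": [{"id": "g1", "type": "goal",
                           "name": "Order Liquor"}]}],
    "links": []})
fixed = json.dumps({
    "actors": [{"id": "a1", "name": "Customer",
                "nodes": [{"id": "g1", "type": "goal",
                           "name": "Order Liquor"},
                          {"id": "t1", "type": "task",
                           "name": "Track Order"}]}],
    "links": [{"type": "and-refinement", "source": "t1",
               "target": "g1"}]})

# One row of the seed CSV: instruction, input example, output example.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["instruction", "input", "output"])
writer.writeheader()
writer.writerow({
    "instruction": "Detect omissions in the iStar 2.0 model and add "
                   "the missing elements.",
    "input": smelly,
    "output": "Omission: goal 'Order Liquor' has no refining task. "
              "Corrected model: " + fixed})
seed_csv = buf.getvalue()
print(seed_csv.splitlines()[0])  # → instruction,input,output
```

A file in this shape is then attached to the self-generation prompt quoted above, from which ChatGPT derives new detection instructions.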
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">An Example Applied to the Inventory Problem at a Liquor Store</head><p>An example application was conducted using the inventory problem of a liquor store to evaluate the effectiveness of the approach proposed in this paper. The inventory problem used here is an actual problem studied by university students learning requirements engineering. First, the initial input prompt was entered and ChatGPT generated an iStar model. Next, within the same thread, a prompt was entered to detect and correct Bad Smells using the Self-Instruct approach on the output result. The outcome of this process was considered the example result.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">The Example procedure for the input prompt and its Results</head><p>In this example, ChatGPT-4o was first given an input prompt that included all elements: context, a single sentence, a domain paragraph, and a syntactic explanation, together with a text file containing the syntactic definitions of iStar 2.0. Table <ref type="table">2</ref> shows the context, the single sentence, and a part of the domain paragraph. The syntactic explanation was derived from the iStar 2.0 Language Guide <ref type="bibr" target="#b8">[9]</ref>, where the iStar model was described using piStar, and parts unnecessary for generating the iStar model, such as positional information, were removed from the JSON file. Regarding the text file containing the syntactic definitions, experiments were conducted to evaluate how well ChatGPT adhered to the syntactic rules when provided with only the constraints, constraints with incorrect examples, or constraints with both correct and incorrect examples. The results showed that when only the constraints or constraints with incorrect examples were provided, no syntactic constraint violations occurred. However, when both correct and incorrect examples were provided alongside the constraints, syntactic constraint violations occurred. The text file used in this example therefore contains constraints with incorrect examples. Figure <ref type="figure" target="#fig_0">1</ref> shows the example results of the input prompt. ChatGPT generated an error-free iStar model according to the syntactic definitions, accurately describing the necessary elements for both the sale and purchase of alcohol. However, the content described in the iStar model appears somewhat generic. Although the domain paragraph specifies that Yakumo Liquor Store only sells wine and beer, this information is not reflected anywhere in the iStar model. Furthermore, all generated iStar models merely represent the content of the input prompt as is, without addressing any conflicts between elements or proposing new ideas.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Ambiguities</head><p>The descriptions of parent and child nodes in an iStar model graph are semantically the same. The refinement between parent and child nodes has not been carried out, and it is unclear whether and how the parent nodes are refined.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Inconsistency</head><p>The descriptions of "dependum" and "dependerElmt" are not closely semantically related to each other. The reason why the actor having the dependerElmt requires the dependum is unclear. The descriptions of "dependum" and "dependeeElmt" are not closely semantically related to each other. It is unclear how the actor having the dependeeElmt can achieve or realize the dependum. The descriptions of parent and child nodes in an iStar model graph are not closely semantically related to each other. The refinement between parent and child nodes is inappropriate and semantically unclear. The links of "is-a" or "participates-in" relationships connect from abstract actors to concrete actors. The direction of arrows, such as for inheritance, is reversed, altering the meaning of the iStar model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2 Input Prompt</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Context</head><p>You are an excellent requirements analyst. Now you are going to model customer requirements using iStar 2.0. Model the following customer requirements in JSON format, ensuring that you follow the iStar2.0 description rules entered.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Single Sentence</head><p>Using the requirement modeling language for the iStar2.0, please provide a goal model for a business systems for liquor stores meant to provide online stores for customers, in stock, shipping, reservation, cancellation including different actors such as the liquor stores, customer, supplier and business systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Domain Paragraph</head><p>For example, consider the following requirements. Yakumo Sake Brewery sells wine and beer on reserve. Every day at 10:00 a.m., the manufacturer sends a predetermined number of each of these two types of liquor (e.g., 50 bottles of wine, 100 bottles of beer, etc.)....</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Modification of the iStar Model and its Results</head><p>Next, based on the results from Section 3.1, a prompt was entered within the same thread to detect and correct Bad Smells, and the output results were used as the example outcomes. After inputting the modification prompt, ChatGPT generated detection prompts for each type: inconsistency, ambiguity, and omission. It then generated the detection results and the modified iStar model. The following is a prompt for detecting omissions that was generated by ChatGPT using the CSV file and the prompt for generating prompts, as described in Section 2.2: "Detect any omissions in the iStar 2.0 model, where actors have incomplete tasks or goals that don't fully achieve their objectives." Using this prompt, ChatGPT identified the missing element "Track Order" within the Customer actor needed to achieve the "Order Liquor" goal. The modified iStar model is shown in Figure <ref type="figure" target="#fig_1">2</ref>. For other ambiguities and inconsistencies, ChatGPT performed checks and determined that they were not Bad Smells. In another example, redundancy was added as a new Bad Smell and was checked accordingly. Additionally, to resolve an ambiguity, "Automate Ordering" was revised to "Automate Customer Order Processing," clarifying the ambiguous expression.</p><p>Our findings indicate that we successfully generated iStar models without syntactic constraint violations and effectively detected and corrected Bad Smells. However, challenges remain regarding the quality of the generated iStar models as models; they tend to be too generic and fail to address conflicts between elements or propose new ideas.</p></div>
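The similarity-based detection of Semantic Bad Smells mentioned in Section 2.2 can be approximated with a simple string-similarity heuristic. The sketch below uses `difflib.SequenceMatcher` with an illustrative 0.8 threshold (our assumption, not a value from the paper); it flags a refinement as potentially ambiguous when the parent and child descriptions are nearly identical, as in the "Automate Ordering" example above.

```python
from difflib import SequenceMatcher

def near_duplicate(parent: str, child: str, threshold: float = 0.8) -> bool:
    """Flag a parent/child refinement whose descriptions are almost the
    same text, a candidate for the 'Ambiguities' Semantic Bad Smell.
    The 0.8 threshold is an illustrative choice."""
    ratio = SequenceMatcher(None, parent.lower(), child.lower()).ratio()
    return ratio >= threshold

# Identical up to case: flagged as a potential ambiguity.
print(near_duplicate("Automate Ordering", "Automate ordering"))  # → True
# The revised, more specific description is no longer flagged.
print(near_duplicate("Automate Ordering",
                     "Automate Customer Order Processing"))       # → False
```

As noted in Section 2.2, this kind of surface-similarity check alone was not very accurate, which motivated delegating the detection to ChatGPT via Self-Instruct.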
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion and Future Works</head><p>In this paper, it was highlighted that generating iStar models, one of the goal-oriented requirements analysis methods, is particularly challenging for beginners due to the lack of guidance on how to refine the models, which often leads to the creation of low-quality iStar models. To address this issue, a method for automatically generating iStar models using ChatGPT-4o was proposed. This method involved inputting syntactic definitions alongside the requirements definitions and generating the models sequentially as part of the input prompt design. Furthermore, to improve the quality of the iStar models generated by ChatGPT-4o, a method was proposed to detect and correct Semantic Bad Smells in the iStar models using Self-Instruct. As a result, the method successfully generated iStar models without syntactic constraint violations and detected and corrected Semantic Bad Smells. However, the generated iStar models tended to be somewhat generic in content, leaving room for further refinement to support deeper analysis. Additionally, while the models captured the current state of the requirements definitions, they failed to identify conflicts between elements or propose new ideas. Future works include the following: 1) creating prompts to further refine the generated iStar models; 2) creating prompts to propose new ideas, not just reflect the current state of the requirements definition; 3) creating prompts to incorporate domain information from the requirements definition into the iStar models; 4) developing a tool to assist in generating iStar models; and 5) examining the necessity of the domain paragraph in the initial input prompts.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The Results of the Input Prompt</figDesc><graphic coords="5,72.00,65.63,451.21,272.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Results of the iStar Model Modification</figDesc><graphic coords="6,188.64,65.67,217.93,134.54" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Semantic Bad Smells</figDesc><table><row><cell>Category</cell><cell>Bad Smells</cell><cell>Grounds</cell></row><row><cell>Omissions</cell><cell>An element necessary to achieve a specific goal is missing.</cell><cell>The omission of elements necessary to achieve a goal hinders the fulfillment of the requirements.</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was partly supported by JSPS Grant-in-Aid for Scientific Research No. JP21K11823 and the Nanzan University Pache Research Subsidy (I-A-2) for the 2024 academic year.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A chatbot for goal-oriented requirements modeling</title>
		<author>
			<persName><forename type="first">D</forename><surname>Arruda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Marinho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Souza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wanderley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Science and Its Applications-ICCSA 2019: 19th International Conference</title>
				<meeting><address><addrLine>Saint Petersburg, Russia</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">July 1-4, 2019</date>
			<biblScope unit="page" from="506" to="519" />
		</imprint>
	</monogr>
	<note>Proceedings, Part IV 19</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Using ChatGPT in software requirements engineering: A comprehensive review</title>
		<author>
			<persName><forename type="first">N</forename><surname>Marques</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bernardino</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Future Internet</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page">180</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">On the use of GPT-4 for creating goal models: an exploratory study</title>
		<author>
			<persName><forename type="first">B</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hassani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amyot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lessard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Mussbacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sabetzadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Varró</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 31st International Requirements Engineering Conference Workshops (REW), IEEE</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="262" to="271" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">MAPE-K loop-based goal model generation using generative AI</title>
		<author>
			<persName><forename type="first">H</forename><surname>Nakagawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Honiden</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 31st International Requirements Engineering Conference Workshops (REW), IEEE</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="247" to="251" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><surname>OpenAI</surname></persName>
		</author>
		<ptr target="https://chat.openai.com/" />
		<title level="m">ChatGPT: GPT-4o model</title>
				<imprint>
			<date type="published" when="2024-08-18">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Defining bad smells and automating their detection in goal-oriented requirement analysis method iStar</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Hirabayashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ohota</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fujii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Saeki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2023 30th Asia-Pacific Software Engineering Conference (APSEC)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="349" to="358" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">piStar tool - A pluggable online tool for goal modeling</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pimentel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Castro</surname></persName>
		</author>
		<ptr target="https://github.com/jhcp/piStar" />
	</analytic>
	<monogr>
		<title level="m">26th IEEE International Requirements Engineering Conference</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="498" to="499" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Kordi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Khashabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hajishirzi</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2212.10560</idno>
		<title level="m">Self-instruct: Aligning language models with self-generated instructions</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">iStar 2.0 Language Guide</title>
		<author>
			<persName><forename type="first">F</forename><surname>Dalpiaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Franch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Horkoff</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1605.07767v3.pdf" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
