<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Learning Fuzzy Cognitive Maps by a Hybrid Method Using Nonlinear Hebbian Learning and Extended Great Deluge Algorithm</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Zhaowei</forename><surname>Ren</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electronic and Computing Systems</orgName>
								<orgName type="institution">University of Cincinnati</orgName>
								<address>
									<addrLine>2600 Clifton Ave</addrLine>
									<postCode>45220</postCode>
									<settlement>Cincinnati</settlement>
									<region>OH</region>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Learning Fuzzy Cognitive Maps by a Hybrid Method Using Nonlinear Hebbian Learning and Extended Great Deluge Algorithm</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">1BB1255CB67E2BAC7E5A72733CD9CD9B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T17:01+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Fuzzy Cognitive Maps (FCM) is a technique to represent models of causal inference networks. Data driven FCM learning approach is a good way to model FCM. We present a hybrid FCM learning method that combines Nonlinear Hebbian Learning (NHL) and Extended Great Deluge Algorithm (EGDA), which has the efficiency of NHL and global optimization ability of EGDA. We propose using NHL to train FCM at first, in order to get close to optimization, and then using EGDA to make model more accurate. We propose an experiment to test the accuracy and running time of our methods.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction:</head><p>Fuzzy Cognitive Maps (FCM) (1) is a modeling methodology that represents graph causal relations of different variables in a system. One way of developing the inferences is by a matrix computation. FCM is a cognitive map with fuzzy logic <ref type="bibr" target="#b1">(2)</ref>.FCM allows loops in its network, and it can model feedback and discover hidden relations between concepts (3). Another advantage is that Neuron network techniques are used in FCM, e.g. Hebbian learning(4), Genetic Algorithm (GA) <ref type="bibr" target="#b4">(5)</ref>, Simulated Anealling (SA) (6).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1 An example of FCM</head><p>The structure of FCM is similar to an artificial neuron network, e.g. Figure <ref type="figure">1</ref>. There are two elements in FCM,  Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. concepts and relations. Concepts reflect attributes, qualities and states of system. The value of concepts ranges from 0 to 1. Concepts can reflect both Boolean and quantitative value. For example, a concept can reflect either the state of light (while 0 means off and 1 means on), or water level of a tank. If it reflects a quantitative value, equation <ref type="bibr" target="#b0">[1]</ref> can be used for normalization. <ref type="bibr" target="#b0">[1]</ref> where A is the concept value before normalization, and and are the possible maximum and minimum value of A. Relations reflect causal inference from one concept to another. Relations have direction and weight value.</p><p>is denoted as the weight value from concept to concept . For a couple of nodes, there may be two, one or none relations between them. There are three possible types of causal relations:  the relation from concept to concept is positive. When concept increases (decreases), concept also increases (decreases).  the relation from concept to concept is negative. When concept increases (decreases), on the contrary, concept decreases (increases).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head></head><p>there is no relations between and When initial state of FCM is given, FCM will converge to a steady state through iteration process. One concept value is computed by the sum of weighted sum of all concepts that may be related to it. In each iteration, concept value is calculated by equation <ref type="bibr" target="#b1">[2]</ref>.</p><formula xml:id="formula_0">∑ [2]</formula><p>where is the value of conceptin iteration k+1, is the value of concept in iteration, and is the weight value of the edge from concept to concept . And , which is a transfer function to normalize weight value to <ref type="bibr">[-1,1]</ref>. is a parameter that determines its steepness.</p><p>For example, figure <ref type="figure">2</ref> is It is a problem an industrial process control problem <ref type="bibr" target="#b7">(8)</ref>. There is a tank with two valves where liquids flow into the tank. These two liquid had reaction in this tank. There is another valve which empties the fluid in the tank. There is also a sensor to gauge the gravity of produced liquids (proportional to the rate of reaction) in tank. As described in the figure below</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 2 An industrial control problem</head><p>There are two constraints of this problem. The first one is to maintain value of G in a particular range, and the second one is to keep height of liquids (T) in a range. Parsopoulos et al. <ref type="bibr" target="#b7">(8)</ref> proposes that there should be five concepts: (a) height of liquid in the tank, (b) the state of valve 1, (c) the state of valve 2, (d) the state of valve 3, and (e) the gravity of produced liquid in the tank. Our aim is to find out the causal inference value from one concept to another one.</p><p>There are mainly two strategies to learn an FCM. One is to exploit expert domain knowledge and formulate a specific application's FCM <ref type="bibr" target="#b6">(7)</ref>, this can be used when there is no good automated or semi-automated methods to build this model. If there are multiple domain experts available, each expert choose a value (e.g. very weak, weak, medium, strong, very strong) for the causal effect from one concept to another one; then the values are quantified and combined together into one value between -1 and 1. This strategy has its own shortage: the problem is complex and need a large number of concepts to describe a system, the cost of expert strategy is very high; moreover, it is difficult to discover new hidden relations by this strategy. Another strategy is to develop a data driven learning method. Input, output and state of a system are recorded when it is running, and these records are used as a neuron network training dataset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Background</head><p>One branch of Fuzzy Cognitive map (FCM) learning is Hebbian learning. Different Hebbian learning has been proposed, for example, Differential Hebbian Learning (DHL)(4), and its modified version Banlanced Differential Hebbian Learning (BDHL) <ref type="bibr" target="#b8">(9)</ref>. DHL changes weight matrix by the difference of two records, but it did not consider the scenario that multiple concepts have effect on one mutually. BDHL covers this situation, but it is costly owe to lack of optimization. The two Hebbian learning methods that have been used in real world is Active Hebbian learning (AHL) <ref type="bibr" target="#b9">(10)</ref> and Nonlinear Hebbian Learning(NHL) <ref type="bibr" target="#b10">(11)</ref>, and both of them require expert knowledge before computation. AHL explores a method to determine the sequence of active concepts. For each concept, only concepts that may affect it are activated. AHL is fast but requires expert intervention. Experts should determine the desired set of concepts and initial structure of FCM. NHL is a nonlinear extension of DHL. In NHL, before iteration starts,experts have to indicate an initial structure and sign of each non-zero weight. Weight values are updated synchronously, and only with concepts that experts indicate.</p><p>Another branch of learning FCM structure is populationbased method. Koulouriotis <ref type="bibr" target="#b11">(12)</ref>proposes evolution strategies to train fuzzy cognitive maps.</p><p>In 2007 Ghazanfari et al. <ref type="bibr" target="#b5">(6)</ref> proposes using Simulated Annealing (SA) to learn FCM, and he compared Genetic Algorithm (GA) <ref type="bibr" target="#b4">(5)</ref> and SA. They concluded that when there are more concepts in the network, SA has a better performance than genetic algorithm. In 2011, Baykasoglu and Adil <ref type="bibr" target="#b12">(13)</ref> proposed an algorithm called extended great deluge algorithm (EGDA) to train FCM. EGDA is quite similar to SA, but it demands smaller number of parameters than SA. Population-based method is capable to reach global optimization even if the initial weight matrix is not good, but it is usually computationally costly, especially when the initial weight matrix is far from optimal position. Moreover, population-based methods have many parameters that have to be set before processing. The parameters are set usually by experiences, and then duplicated experiments with different parameters should be made to get better performance. Hebbian learning methods are relatively fast, but their performance depends on initial weight matrix and predefined FCM structure very much. Expert intervention is usually essential. Experts need to indicate a structure before experiments.</p><p>The third branch is hybrid method, which takes both the effectiveness of Hebbian learning and global search capability of population-based methods. Papageorgiou and Groumpos (14) proposed a hybrid learning method that combines NHL and Differential Evolution algorithm (DE). First, NHL is used to learn FCM, and then its result is feed to DE algorithm. This method makes uses of both the effectiveness of Hebbian learning and the global search ability of population-based method. The three experiments they did show this hybrid method is capable to train FCM effectively. Zhu et.al(15) proposes another hybrid method which combines NHL and Real-coded Genetic Algorithm (RCGA)</p><p>Here I suggest a hybrid method combing NHL and EGDA. 
EGDA has global search ability and relatively less demand of parameters. If its initial weight matrix is close to optimal condition, it will save much computing expense. Here we use NHL to train FCM roughly first, and then feed its result to EGDA. NHL is picked because it is simple and fast, and it can deal with continuous range of value of concepts</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Hybrid Method Using NHL and EGDA</head><p>This hybrid method is processed by two stages.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Stage 1 use nonlinear Hebbian learning (NHL) (11)to train FCM</head><p>Step 1: Initialize weight matrix with help of experts and read input concept . We feed the initial weight matrix to feed</p><p>Step2: Calculate (concept value in iteration 1.Initial values can be denoted as values in iteration 0) by the equation <ref type="bibr" target="#b2">[3]</ref> ∑ <ref type="bibr" target="#b2">[3]</ref> where . λis a parameter that determines increasing rate of curve. It is a transfer function. When x changes from -∞ to ∞ , f(x) changes from 0 to 1. Therefore, final result of concept value is still from zero to one.</p><p>Step 3:</p><p>Use equation <ref type="bibr" target="#b3">[4]</ref> to update weights, <ref type="bibr" target="#b3">[4]</ref> where is learning rate function, and it decreases as k increases.</p><p>Step 4: At the end of each updating, the error function is computed as equation <ref type="bibr" target="#b4">[5]</ref> ∑ 2 <ref type="bibr" target="#b4">[5]</ref> where k is the iteration number. There are two termination conditions. One is that value of error function <ref type="bibr" target="#b2">[3]</ref> is below a threshold, and the other is there are enough times of iterations. If one of the termination conditions is reached, the iteration ends. Otherwise, go on the next iteration.</p><p>For example, now we have time series data of each concept value as For example, we want to update 2 and 3 using the first tuple of data. First, we use equation <ref type="bibr" target="#b2">[3]</ref> to calculate .</p><p>is set to 1 here.</p><formula xml:id="formula_1">2 2 3 3</formula><p>Then we use equation <ref type="bibr" target="#b1">[2]</ref> to update 2 and 3 .Here the learning rate If J is larger than termination threshold, then go to step 2. Otherwise, terminate this algorithm and got to stage 2.</p><p>Stage 2: use extended great deluge algorithm (EGDA)(13) to train FCM.</p><p>Step 1: Initialize the weight matrix with the suggested value from stage 1. The output of step 1 is feed to this step. Assume the weight matrix we got from last step is as Table <ref type="table">3</ref> </p><formula xml:id="formula_2">W 1 2 3 1 N/A 0.3 0 2 0.6 N/A 0.1 3 -0.4 -0.3 N/A</formula><p>Table <ref type="table">3</ref> Weight matrix after stage 1</p><p>Step 2: find a new neighbor of the current weight matrix. For each non-zero weight (because the edge with zero weight does not exist by expert prediction)in the matrix, use the equation below to generate their neighbor.</p><p>[6] where random( ) is a function to generate random value from 0 to 1, and then is a function to generate random value from -1 to 1. is a step size of moving. It is gradually decreased so this algorithm can have a more detailed search during the end of the search.</p><p>Step 3: Use equation <ref type="bibr" target="#b0">[1]</ref> and new weight matrix to calculate estimated concept value. Then calculate fitness function to determine if new configuration is better than current one. Here we use the total error to be fitness function. The equation is as below ∑ <ref type="bibr" target="#b6">[7]</ref> where K is the number of iteration, and N is the number of concepts.</p><p>Step 4: If the fitness function value of the neighbor configuration is better than tolerance, it is picked as current configuration. And then go to step 5, otherwise, go to step 2. 
Then reduce the tolerance.</p><p>Step 5: If the value of fitness function is better than best condition, update best condition.</p><p>For example: First we find a new neighbor for this. is set as 0.2. If this value is above a tolerance, it is denoted as current configuration, and then it is compared with best configuration to see if it is the best so far. If the new neighbor is not below tolerance, find another neighbor near current one. Reduce tolerance after each search. If the total error is below a threshold or there is enough number of iteration, then this algorithm terminates.</p></div>
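<div xmlns="http://www.tei-c.org/ns/1.0"><p>The following Python sketch summarizes the two stages as we read them. It is an illustrative reconstruction, not the authors' code: the NHL stage implements the Oja-style rule of equation [4] on the expert-given non-zero weights, and the EGDA stage implements steps 1-5 with a shrinking step size and a falling tolerance ("water level"); all function names, schedules and constants are our assumptions.</p><p>
import numpy as np

def transfer(x, lam=1.0):
    """f(x) = 1 / (1 + exp(-lam * x)), as in equation [3]."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def total_error(W, records, lam=1.0):
    """Equation [7]: summed squared one-step prediction error over the records."""
    err = 0.0
    for A, A_target in zip(records[:-1], records[1:]):
        err += np.sum((transfer(W.T @ A, lam) - A_target) ** 2)
    return err

def nhl_train(W0, records, eta0=0.1, lam=1.0, thresh=1e-3, max_iter=100):
    """Stage 1 (NHL): update only the non-zero weights indicated by experts."""
    W = W0.copy()
    mask = W != 0.0                        # expert-defined structure stays fixed
    for k in range(max_iter):
        eta = eta0 / (1.0 + k)             # assumed decreasing learning-rate schedule
        for A in records:
            A_next = transfer(W.T @ A, lam)            # equation [3]
            # Equation [4]: dw_ji = eta * A_j * (A_i_next - w_ji * A_j)
            dW = eta * (np.outer(A, A_next) - (A ** 2)[:, None] * W)
            W = np.clip(W + dW * mask, -1.0, 1.0)
        if thresh > total_error(W, records, lam):      # termination condition
            break
    return W

def egda_refine(W0, records, delta=0.2, decay=0.999, max_iter=5000):
    """Stage 2 (EGDA): perturb non-zero weights, accept below the water level."""
    mask = W0 != 0.0
    best_W = W0.copy()
    best_err = total_error(best_W, records)
    cur_W, tol, step = best_W.copy(), best_err, delta
    for _ in range(max_iter):
        # Equation [6]: move each non-zero weight by step * uniform(-1, 1).
        noise = step * (2.0 * np.random.rand(*W0.shape) - 1.0)
        cand = np.clip(cur_W + noise * mask, -1.0, 1.0)
        err = total_error(cand, records)
        if tol > err:                      # better than the tolerance: accept
            cur_W = cand
            if best_err > err:             # step 5: track the best configuration
                best_W, best_err = cand.copy(), err
        tol *= 0.999                       # reduce the tolerance after each search
        step *= decay                      # finer search toward the end
    return best_W
</p><p>With the running example, W0 would be the expert matrix of Table 2 and records the tuples of Table 1; nhl_train supplies the warm start that egda_refine then polishes.</p></div>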
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Experiment Design:</head><p>There are two steps of our experiment. First we are going to test our method by simulated data, and try to find out the scenario that our method can be most efficient and accurate. On the second step, we will use our method in a real application.</p><p>In this experiment, data is generated by a random process. First the number of concepts and density of relations are set. We can try different number of concepts, from small to large, in order to test the performance of this method in network with different complexity. Density represents how many percent of edges exist in a network. It is defined as equation <ref type="bibr" target="#b7">[8]</ref>. <ref type="bibr" target="#b7">[8]</ref> For example, if we set number of concepts as 5, and density as 0.4, number of edges is computed as below then there would be eight edges in this network.</p><p>After number of concepts and edges are set, a model can be generated with random weight, and we name it original model. Then random data is generated, and they are fed to equation <ref type="bibr" target="#b0">[1]</ref> iteratively, until it reaches a steady state (the error in equation <ref type="bibr" target="#b6">[7]</ref> is lower than threshold). The steady state would be a record for simulated data. After a certain time of iteration, if it still cannot reach steady state, a new tuple of data would be generated randomly and fed to equation <ref type="bibr" target="#b0">[1]</ref>. After hundreds of times, we will have a series of data as training set. This data is used to learn FCM by our method. The weight matrix we get would be compared with the original model. The error is calculated as equation <ref type="bibr" target="#b8">[9]</ref> ∑ ∑ ̅̅̅̅ 2 <ref type="bibr" target="#b8">[9]</ref> where N is the number of concepts in this model. Some other methods (NHL, EGDA, SA) can also be programmed, and compared with this method. These methods will be compared in accuracy and running time, in several conditions.</p><p>After simulated experiment, based on the best conditions for our method, we will apply it on a real practical problem.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>We propose a hybrid method to learn FCM. Our method has taken advantages of fast speed of NHL and global search ability of EGDA. Moreover, we propose an experiment to test our algorithm, and try to apply it into practice.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc></figDesc><table><row><cell>A1</cell><cell>A2</cell><cell>A3</cell><cell></cell></row><row><cell>0.5</cell><cell>0.5</cell><cell>0.1</cell><cell></cell></row><row><cell>0.6</cell><cell>0.4</cell><cell>0.2</cell><cell></cell></row><row><cell>0.5</cell><cell>0.3</cell><cell>0.3</cell><cell></cell></row><row><cell></cell><cell cols="2">Table 1 Concept value record</cell><cell></cell></row><row><cell cols="3">Each tuple is a record of three concept value.</cell><cell></cell></row><row><cell cols="4">Initial weight matrix is predicted by experts or generated</cell></row><row><cell cols="2">randomly. Here it is as Table 2</cell><cell></cell><cell></cell></row><row><cell>W</cell><cell>1</cell><cell>2</cell><cell>3</cell></row><row><cell>1</cell><cell>N/A</cell><cell>0.3</cell><cell>0</cell></row><row><cell>2</cell><cell>0.7</cell><cell>N/A</cell><cell>0.2</cell></row><row><cell>3</cell><cell>-0.6</cell><cell>-0.3</cell><cell>N/A</cell></row><row><cell></cell><cell cols="2">Table 2 Initial weight matrix</cell><cell></cell></row><row><cell></cell><cell cols="3">(the weight from concept I to concept j) is the value</cell></row><row><cell cols="3">in line I and column j. For example, 2</cell><cell></cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Fuzzy cognitive maps</title>
		<author>
			<persName><forename type="first">Kosko</forename><surname>Bart</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Man-Machine Studies</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page">65</biblScope>
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Fuzzy engineering</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kosko</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1997">1997</date>
			<publisher>Prentice-Hall, Inc</publisher>
			<pubPlace>Upper Saddle River, NJ, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Unsupervised learning techniques for fine-tuning fuzzy cognitive map causal links</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">I</forename><surname>Papageorgiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stylios</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Groumpos</forename><surname>Pp</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Human-Computer Studies</title>
		<imprint>
			<biblScope unit="volume">64</biblScope>
			<biblScope unit="page">727</biblScope>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Virtual worlds as fuzzy cognitive maps</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Dickerson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kosko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Virtual Reality Annual International Symposium</title>
				<imprint>
			<date type="published" when="1993">1993. 1993. 1993</date>
			<biblScope unit="page" from="471" to="477" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Genetic learning of fuzzy cognitive maps</title>
		<author>
			<persName><forename type="first">W</forename><surname>Stach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kurgan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Pedrycz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Reformat</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fuzzy Sets Syst</title>
		<imprint>
			<biblScope unit="volume">153</biblScope>
			<biblScope unit="page" from="371" to="401" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Comparing simulated annealing and genetic algorithm in learning FCM</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ghazanfari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Alizadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fathian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Koulouriotis</forename><surname>De</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Mathematics and Computation</title>
		<imprint>
			<biblScope unit="volume">192</biblScope>
			<biblScope unit="page">56</biblScope>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Group decision support using fuzzy cognitive maps for causal reasoning</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Khan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Quaddus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Group Decis Negotiation</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="463" to="480" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">A first study of fuzzy cognitive maps learning using particle swarm optimization</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">E</forename><surname>Parsopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">I</forename><surname>Papageorgiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">P</forename><surname>Groumpos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Vrahatis</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2003">2003</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page">1440</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">A balanced differential learning algorithm in fuzzy cognitive maps</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Huerga</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Active hebbian learning algorithm to train fuzzy cognitive maps</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">I</forename><surname>Papageorgiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Stylios</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Groumpos</forename><surname>Pp</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Approximate Reasoning</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page">219</biblScope>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Fuzzy Cognitive Map Learning Based on Nonlinear Hebbian Rule</title>
		<author>
			<persName><forename type="first">E</forename><surname>Papageorgiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stylios</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Groumpos</forename><forename type="middle">P</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">T</forename><surname>Gedeon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Fung</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin / Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2003">2003. 2003</date>
			<biblScope unit="page" from="256" to="268" />
		</imprint>
	</monogr>
	<note>AI</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Learning fuzzy cognitive maps using evolution strategies: A novel schema for modeling and simulating high-level behavior</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Koulouriotis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">E</forename><surname>Diakoulakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Emiris</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2001">2001</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">364</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">An integrated framework for learning fuzzy cognitive map using RCGA and NHL algorithm</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baykasoglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zdu</forename><surname>Durmusoglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V ;</forename><surname>Kaplanoglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">I</forename><surname>Papageorgiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Groumpos</forename><surname>Pp ; Yanchun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Soft Computing</title>
		<imprint>
			<biblScope unit="volume">62</biblScope>
			<biblScope unit="page">409</biblScope>
			<date type="published" when="2005">2011. 2005. 2008</date>
		</imprint>
	</monogr>
	<note>Training fuzzy cognitive maps via extended great deluge algorithm with applications</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
