<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Fusing Modalities in Forensic Identification with Score Discretization</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Y</forename><forename type="middle">L</forename><surname>Wong</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">Soft Computing Research Group</orgName>
								<orgName type="institution">Universiti Teknologi Malaysia</orgName>
								<address>
									<postCode>81310</postCode>
									<settlement>Johor</settlement>
									<country key="MY">Malaysia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Shamsuddin</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">Soft Computing Research Group</orgName>
								<orgName type="institution">Universiti Teknologi Malaysia</orgName>
								<address>
									<postCode>81310</postCode>
									<settlement>Johor</settlement>
									<country key="MY">Malaysia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Yuhaniz</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">Soft Computing Research Group</orgName>
								<orgName type="institution">Universiti Teknologi Malaysia</orgName>
								<address>
									<postCode>81310</postCode>
									<settlement>Johor</settlement>
									<country key="MY">Malaysia</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Sargur</forename><forename type="middle">N</forename><surname>Srihari</surname></persName>
							<email>srihari@cedar.buffalo.edu</email>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science and Engineering</orgName>
								<orgName type="institution" key="instit1">University at Buffalo</orgName>
								<orgName type="institution" key="instit2">The State University of New York</orgName>
								<address>
									<postCode>14260</postCode>
									<settlement>Buffalo</settlement>
									<region>NY</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Fusing Modalities in Forensic Identification with Score Discretization</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">EB6E6C6B3B08AB4950B27FC1A6D44D1A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T11:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>forensic</term>
					<term>multimodal</term>
					<term>discretization</term>
					<term>matching scores</term>
					<term>fusion</term>
					<term>identification</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The fusion of different forensic modalities for deciding whether evidence can be attributed to a known individual is considered. Since close similarity and high dimensionality can adversely affect the process, a method of score fusion based on discretization is proposed. It is evaluated on signatures and fingerprints. Discretization is performed as a filter to find the unique and discriminatory features of each modality in an individual class before their use in matching. Since fingerprints and signatures are not compatible for direct integration, the idea is to convert the features into the same domain. The features are assigned a matched score, MS bp, based on their lowest distance. The final scores are then fed to the fusion, FS bp. The top matches with FS bp less than a predefined threshold value, η, are expected to contain the true identity. Two standard fusion approaches, namely Mean and Min fusion, are used to benchmark the efficiency of the proposed method. The results of these experiments show that the proposed approach produces a significant improvement in the forensic identification rate of fingerprint and signature fusion, and these findings support its usefulness.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>The goal of forensic analysis is to determine whether observed evidence can be attributed to an individual. The final decision of forensic analysis can take one of three values: identification/no-conclusion/exclusion. Biometric systems have a similar goal of going from input to conclusion, but with different goals and terminology: biometric identification means determining the best match in a closed set of individuals, and verification means determining whether the input and the known sample have the same source. While biometric systems attempt to do the entire process automatically, forensic systems narrow down the possibilities among a set of individuals, with the final decision being made by a human examiner. Automatic tools for forensic analysis have been developed for several forensic modalities including signatures <ref type="bibr" target="#b0">[1]</ref>, fingerprints <ref type="bibr" target="#b1">[2]</ref>, handwriting <ref type="bibr" target="#b2">[3]</ref>, and footwear prints or marks <ref type="bibr" target="#b3">[4]</ref>. In both forensic and biometric analysis, more than one modality of data can be used to improve accuracy <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b5">[6]</ref>. Examples of the need to combine evidence in forensic analysis are: signature and fingerprints on the same questioned document, pollen found on the clothing of an assailant together with human DNA <ref type="bibr" target="#b6">[7]</ref>, multiple shoe-prints in a crime scene <ref type="bibr" target="#b7">[8]</ref>, etc. In this paper we explore how evidence of different modalities can be combined for the forensic decision. Unlike token-based and password-based identification systems, unimodal biometric identification recognizes a user by "who the person is", using a one-to-many (1:M) matching process, rather than by "what the person carries". 
Conventional systems suffer from numerous drawbacks such as forgotten passwords, misplaced ID cards, and forgery. To address these problems, unimodal biometric identification was developed and has seen extensive improvements in reliability and accuracy. However, several studies have shown that poor-quality image samples, or the methodology itself, can lead to a significant decrease in the performance of a unimodal identification system <ref type="bibr" target="#b8">[9]</ref>, <ref type="bibr" target="#b9">[10]</ref>, <ref type="bibr" target="#b10">[11]</ref>. The common issues include intra-class variability, spoof attacks, non-universality, and noisy data. To overcome these difficulties in unimodal identification, multimodal identification systems (MIS) have been developed. As the name suggests, in an MIS the identification process is based on evidence presented by multiple modality sources from an individual. Such systems are more robust to variations in sample quality than unimodal systems due to the presence of multiple (and usually independent) pieces of evidence <ref type="bibr" target="#b11">[12]</ref>. A key to successful multimodal system development for forensic identification is an effective methodology and fusion process, capable of integrating and handling important information such as the distinctive characteristics of an individual. An individual's distinctive characteristics are central to forensics. Therefore, in this paper, a multi-matched-score discretization method is proposed for forensic identification of an individual from different modalities. Compared to previous methods, the proposed method is unique in the sense that the extracted features correspond to the individuality of a particular person and are discretized and represented in standard sizes. The method is robust and capable of overcoming dimensionality issues without requiring image normalization. 
The low-dimensional, standardized features make the design of the post-processing phase (classifier or decision) straightforward. Moreover, the discretized features have clear physical meanings, are distinctive, and can be used in more complex systems (e.g., expert systems for interpretation and inference).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. RELATED WORK</head><p>In identification systems, fusion takes into account a set of features that reflect the individuality and characteristics of the person under consideration. However, it is difficult to extract and select features that are discriminatory, meaningful, and important for identification. Different sets of features may perform better for different groups of individuals, and therefore a technique is needed to represent each sample's set of features. In this paper, multi-matched-score fusion based on discretization is proposed for forensic identification, to represent the distinctiveness of the multiple modalities of an individual.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Representation of individuality features</head><p>Extracting and representing relevant features that contain the natural characteristics of an individual is essential for good performance of identification algorithms. Existing multimodal identification systems assume that each modality's feature set from an individual is local, wide-ranging, and static. Thus, these extracted feature sets are commonly fed directly to individual matching or classification algorithms.</p><p>As a result, the identification system becomes more complex, time consuming, and costly, because a classifier is needed for each modality. Furthermore, concatenating features from different modalities after feature extraction leads to the need to compare high-dimensional, heterogeneous data, which is a nontrivial issue. Much work has been proposed to overcome the dimensionality issues in extracted features, such as applying normalization techniques after extraction. Careful observation and experimental analysis are needed to improve identification performance: too much normalization will diminish the original characteristics of an individual across different modality images. Thus, another process is needed to produce a more discriminative, reliable, unique, and informative representation that turns these inherently continuous features into standardized discrete features (per individual). This leads to the multi-matched-score fusion discretization approach introduced in this paper, which is explored in the context of forensic identification with different modalities for determining the true identity of a person.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. The discretization algorithm</head><p>Discretization is a process whereby a continuous-valued variable is represented by a collection of discrete values. It has attracted considerable interest and work in several domains <ref type="bibr" target="#b12">[13]</ref>, <ref type="bibr" target="#b13">[14]</ref>, <ref type="bibr" target="#b14">[15]</ref>. The discretization method introduced here is based on the discretization defined in <ref type="bibr" target="#b15">[16]</ref>.</p><p>Given a set of features, the discretization algorithm first computes the size of the interval, i.e., it determines the upper and lower bounds. The range is then divided by the number of features, which gives each interval its upper and lower approximations. The number of intervals generated is equal to the dimensionality of the feature vectors, maintaining the original number of features extracted by the different extraction methods in this study. Subsequently, a single representation value for each interval, or cut, is computed by taking the midpoint of the lower approximation, Approx lower, and the upper approximation, Approx upper, of the interval. Algorithm 1 shows the discretization steps discussed above.</p></div>
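The interval construction described above can be sketched in Python (a minimal sketch under the equal-width reading of Algorithm 1; the function name `discretize` and the NumPy-based layout are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def discretize(features):
    """Equal-width discretization of one individual's feature vectors.

    features: 2-D array-like (samples x f). Following Algorithm 1, the
    number of bins equals the number of extracted features f, and each
    value is replaced by the midpoint (RepValue) of the bin it falls into.
    """
    features = np.asarray(features, dtype=float)
    f = features.shape[1]                    # number of extracted features
    lo, hi = features.min(), features.max()  # Min and Max over the samples
    edges = np.linspace(lo, hi, f + 1)       # f equal-width bins over [Min, Max]
    rep = (edges[:-1] + edges[1:]) / 2.0     # RepValue: midpoint of each bin
    # Map every feature value to the representation value of its bin
    idx = np.clip(np.digitize(features, edges) - 1, 0, f - 1)
    return rep[idx]
```

For example, with two features spanning [0, 10], the two bins are [0, 5) and [5, 10], with representation values 2.5 and 7.5.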
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Processing and extraction of Signature and Fingerprint</head><p>For signature, the input image is first binarized by adaptive thresholding, followed by morphology operations (i.e., remove and skel) to obtain a clean, universe-of-discourse (UOD) signature image, as illustrated in Fig. <ref type="figure" target="#fig_0">1</ref>. The UOD of the signature is extracted using a geometry-based extraction approach <ref type="bibr" target="#b16">[17]</ref>, which is based on a 3x3 window concept. The process is applied to individual windows instead of the whole image to give more information about the signature image, including the positions of different line structures. Unimodal extraction and the discretization step are illustrated in Table <ref type="table" target="#tab_1">I</ref> for the signature data of individual 1, and in Table <ref type="table" target="#tab_1">II</ref> for the fingerprint data of the same individual.</p><p>In each of these tables, the feature values are divided into a predefined number of bins, based on the number of features for each modality image.</p><p>In the top portion of these tables, for each bin, the lower and upper values are recorded in columns two and three respectively, and the representation value of the bin, RepValue, the average of the lower and upper values, is recorded in column four. Max and Min values are highlighted in bold face. In the bottom portion of the tables, the discretized features for signature and fingerprint are displayed. These tables show an example of how the actual feature sets of an individual are discretized.</p><p>As can be seen from Table <ref type="table" target="#tab_1">I</ref>, the feature value 35.259 occurs in every column of the nine features for the signature data of the same individual. This means that the first individual is uniquely recognized by this discriminatory value. 
A similar discussion holds for Table <ref type="table" target="#tab_1">II</ref>, where the discriminatory value for the fingerprint data of the first individual, obtained from four different images, is 104.</p><p>The selected features are the representation values (discriminatory features, DF, of an individual) that describe the unique characteristics of an individual and are used in the matching process. In the matching module, the distances between the discretized values and the stored feature values are computed with the Euclidean distance as defined in <ref type="formula" target="#formula_0">(1)</ref>.</p><formula xml:id="formula_0">ED bp = √( N ∑ i=1 ( Df bp,i − Df (r) bp,i ) 2 )<label>(1)</label></formula><p>where Df bp,i represents the ith discretized feature of the new modality image, Df (r) bp,i is the ith discretized feature of the reference modality image in the stored template, and bp represents either the behavioral or the physiological trait of the individual. The total number of features extracted from a single modality image is denoted by N. Let X sign = ED sign (x), where X sign = (x 1 , ...x d ) denotes the distances for the discretized signature features, and Y finger = ED finger (y), where Y finger = (y 1 , ...y d ) denotes the distances for the discretized fingerprint features. The lowest distance for signature is denoted min[ED sign (x)] and the lowest distance for fingerprint is min[ED finger (y)]. Then, we assign the modality features with the lowest distance match score 1 (MS bp = 1), the modality features with the second lowest distance MS bp = 2, and so on; bp here denotes either the behavioral (i.e., signature) or the physiological (i.e., fingerprint) trait of the individual. The match score, MS bp, is then fed to the fusion approach.</p></div>
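The matching step, i.e. the Euclidean distance in (1) followed by the rank-based match scores MS bp, can be sketched as follows (an illustrative sketch; `match_scores` and the dict-based template store are assumptions, not the paper's implementation):

```python
import numpy as np

def euclidean_distance(df_query, df_ref):
    """ED in (1): distance between discretized query and reference features."""
    diff = np.asarray(df_query, dtype=float) - np.asarray(df_ref, dtype=float)
    return float(np.sqrt(np.sum(diff ** 2)))

def match_scores(df_query, templates):
    """Rank stored templates by distance: the lowest distance gets match
    score 1, the second lowest gets 2, and so on.

    templates: dict mapping identity -> stored discretized feature vector.
    Returns a dict mapping identity -> match score.
    """
    dists = {ident: euclidean_distance(df_query, ref)
             for ident, ref in templates.items()}
    ranked = sorted(dists, key=dists.get)  # identities, closest first
    return {ident: rank + 1 for rank, ident in enumerate(ranked)}
```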
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D. Multi-modality fusion</head><p>After matching, the matched scores of signature and fingerprint are fed to the fusion method. Let X sign = MS sign (1), MS sign (2), ...MS sign (n) denote the computed signature match scores and Y finger = MS finger (1), MS finger (2), ...MS finger (n) the computed match scores for fingerprint. In this work, the final fused score, FS bp, of the individual is computed using Equation (<ref type="formula" target="#formula_1">2</ref>), where k represents the number of different modalities of an individual. The MS for fingerprint and signature are combined and divided by k to generate a single score, which is then compared to a predefined threshold to make the final decision.</p><formula xml:id="formula_1">F S bp = M S sign + M S f inger k<label>(2)</label></formula><p>Two fusion approaches, namely Mean fusion, MeanFS bp, and Min fusion, MinFS bp, as defined in (3) and (<ref type="formula" target="#formula_3">4</ref>), are chosen for comparison to show the efficiency of the proposed method on multi-modality identification.</p><formula xml:id="formula_2">M eanF S bp = (xM S sign + yM S f inger )/2<label>(3)</label></formula><formula xml:id="formula_3">M inF S bp = min(M S sign , M S f inger )<label>(4)</label></formula><p>Finally, FS bp is forwarded to the next phase for identification.</p><p>In the identification process of one-to-many matching (1:M), FS bp is compared with a predefined identification threshold, η, in order to identify the individual among M individuals. In this work, the identity of a person is identified if</p><formula xml:id="formula_4">F S bp ≤ η<label>(5)</label></formula></div>
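Equations (2)-(5) amount to a few one-liners; a sketch (unweighted Mean fusion is assumed for (3), and all function names are illustrative):

```python
def fused_score(ms_sign, ms_finger, k=2):
    """FS in (2): combined match scores divided by the number of modalities k."""
    return (ms_sign + ms_finger) / k

def mean_fusion(ms_sign, ms_finger):
    """Mean fusion as in (3), taken here as the unweighted average."""
    return (ms_sign + ms_finger) / 2

def min_fusion(ms_sign, ms_finger):
    """Min fusion as in (4): keep the better (lower) match score."""
    return min(ms_sign, ms_finger)

def identify(fs, eta):
    """Decision rule (5): the identity is accepted if FS <= eta."""
    return fs <= eta
```

Because lower match scores are better (rank 1 is the closest template), Min fusion keeps the strongest single-modality evidence, while the averaged score in (2) rewards agreement between the two modalities.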
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. EXPERIMENTAL RESULTS</head><p>Performance is evaluated using ROC curves, which plot the Genuine Acceptance Rate (GAR) of a system against the False Acceptance Rate (FAR). In this work, GAR is equal to 1-FRR. Fig. <ref type="figure" target="#fig_2">3</ref> shows the performance of unimodal identification for signature and fingerprint. Discretization is applied in this experiment; no normalization or fusion method is implemented. The performance of identification on the discretized signature and fingerprint data is compared with that on the non-discretized dataset. The ROC graph clearly shows that the use of discretization on the unimodal dataset enhances the overall identification performance significantly over identification without discretization. Given the efficiency of the discretization method on unimodal identification, the same technique is applied to multimodal identification in order to improve the accuracy of identification on multiple modalities. Fig. <ref type="figure" target="#fig_3">4</ref> and Fig. <ref type="figure" target="#fig_3">5</ref> show the ROC performance of two different fusion methods, namely the Mean fusion rule and the Min method, with the implementation of Z-score normalization and of the matched-score fusion based on discretization on multiple modalities. From the ROC graph depicted in Fig. <ref type="figure" target="#fig_3">4</ref>, it can be seen that the implementation of the proposed discretization-based method on the multi-modality fusion of signature and fingerprint shows better performance than the standard signature and fingerprint identification system. 
At FAR of 0.1%, 1.0%, and 10.0%, the proposed discretization-based method has a GAR of 96.9%, 98.9%, and 99.9% respectively, better than Z-score normalization with Mean fusion on the signature and fingerprint modalities, which achieves 93.5%, 93.7%, and 96.4%. Fig. <ref type="figure" target="#fig_3">5</ref> shows the GAR performance of Min fusion with Z-score normalization and with the proposed multi-matched-score discretization. Again, in Fig. <ref type="figure" target="#fig_3">5</ref>, the proposed discretization-based method on the signature and fingerprint modalities yields the best performance over the range of FAR. At FAR of 0.1%, 1.0%, and 10.0%, the Min fusion method performs best with the proposed method, achieving 95.0%, 97.99%, and 99.40% respectively. Therefore, it can be summarized that the use of discretization and the proposed fusion of fingerprint and signature modalities generally outperforms the use of normalization and conventional fusion approaches for personal identification. </p></div>
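The ROC quantities reported above can be computed from genuine and impostor fused scores as follows (a sketch; the threshold sweep and the function name `roc_points` are assumptions, and the accept-if-lower convention follows rule (5)):

```python
import numpy as np

def roc_points(genuine_scores, impostor_scores, thresholds):
    """For each threshold eta, accept when FS <= eta, as in (5).

    GAR = fraction of genuine scores accepted (equals 1 - FRR);
    FAR = fraction of impostor scores accepted.
    Returns a list of (FAR, GAR) operating points.
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    points = []
    for eta in thresholds:
        gar = float(np.mean(genuine <= eta))
        far = float(np.mean(impostor <= eta))
        points.append((far, gar))
    return points
```

Sweeping eta over the observed score range traces out the full ROC curve; the GAR values quoted at FAR of 0.1%, 1.0%, and 10.0% are read off at those operating points.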
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. CONCLUSION</head><p>A key to successful multimodal system development for forensic identification is an effective methodology and fusion process, capable of integrating and handling important information such as the distinctive characteristics of an individual. In this paper, match-score discretization is proposed and implemented on different modality datasets of an individual. The experiments are done on signature and fingerprint datasets collected from 156 students (both female and male), where each student contributes 4 samples each of signature and fingerprint.</p><p>Ten features describing the bifurcation and termination points of the fingerprint were extracted using a minutia-based extraction approach, whereas the signature features were extracted using a geometry-based extraction approach. In the matching process, each template-query pair of feature sets is compared using the Euclidean distance. Two fusion approaches, namely Mean and Min fusion, are performed to assess the efficiency of the proposed method in multimodal identification. The experimental results show that the proposed multi-matched-score discretization performs well on multiple sets of individual traits, consequently improving the identification performance.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Examples of preprocessed signature image (a)Original image (b)Binarized image (c)Skeletonized image (d)UOD.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Examples of preprocessed fingerprint image (a)Original image (b)Binarized image (c)Thinned image (d)Minutia Points (e)False Minutia removed (f)ROI.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Performance of uni-modality identification.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 4 .Fig. 5 .</head><label>45</label><figDesc>Fig. 4. Performance of Multi-modality fusion methods for signature and fingerprint.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Find the Approx lower and Approxupper Compute the midpoints of all Approx lower and Approxupper end for</figDesc><table><row><cell>Form a set of all discrete values, Dis F eatures:</cell></row><row><cell>for 1 to numb extracted f eature do</cell></row><row><cell>for each bin do</cell></row><row><cell>if (feature in range of interval) then</cell></row><row><cell>Dis F eature = RepV alue</cell></row><row><cell>end if</cell></row><row><cell>end for</cell></row><row><cell>end for</cell></row><row><cell>end for</cell></row></table><note>1: Discretization Algorithm Require: Dataset with f continuous features, D samples and C classes; Require: Discretized features, D ′ ; for each individual do Find the M ax and the M in values of D samples numb bin = numb extracted f eature Divide the range of M in to M ax with numb bin Compute representation values, RepV alue:for each bin do</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>TABLE I EXAMPLE</head><label>I</label><figDesc>OF DISCRETIZATION PROCESS FOR SIGNATURE FEATURES OF FIRST INDIVIDUAL</figDesc><table><row><cell cols="4">LOW and UPPER BIN for Individual: 1</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell cols="4">MIN Value 10.8096 MAX Value 98.8273</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Bin</cell><cell>Lower</cell><cell>Upper</cell><cell>RepValue</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>0</cell><cell>10.8096</cell><cell>20.5893</cell><cell>15.69945</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>1</cell><cell>20.5893</cell><cell>30.3691</cell><cell>25.4792</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>2</cell><cell>30.3691</cell><cell>40.1488</cell><cell>35.259</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>3</cell><cell>40.1488</cell><cell>49.9286</cell><cell>45.0387</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>4</cell><cell>49.9286</cell><cell>59.7083</cell><cell>54.8184</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>5</cell><cell>59.7083</cell><cell>69.4881</cell><cell>64.5982</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>6</cell><cell>69.4881</cell><cell>79.2678</cell><cell>74.3779</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>7</cell><cell>79.2678</cell><cell>89.0476</cell><cell>
84.1577</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>8</cell><cell>89.0476</cell><cell>98.8273</cell><cell>93.9374</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell cols="2">DISCRETIZED DATA</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>f 1</cell><cell>f 2</cell><cell>f 3</cell><cell>f 4</cell><cell>f 5</cell><cell>f 6</cell><cell>f 7</cell><cell>f 8</cell><cell>f 9</cell><cell></cell><cell>Class</cell></row><row><cell>15.69945</cell><cell>15.69945</cell><cell>35.259</cell><cell>35.259</cell><cell>15.69945</cell><cell>15.69945</cell><cell>15.69945</cell><cell>25.4792</cell><cell>25.4792</cell><cell cols="2">1s Discriminatory</cell></row><row><cell>54.8184</cell><cell>64.5982</cell><cell>93.9374</cell><cell>35.259</cell><cell>15.69945</cell><cell>35.259</cell><cell>54.8184</cell><cell>45.0387</cell><cell>25.4792</cell><cell>1s</cell><cell>Value is</cell></row><row><cell>25.4792</cell><cell>35.259</cell><cell>35.259</cell><cell>25.4792</cell><cell>25.4792</cell><cell>54.8184</cell><cell>35.259</cell><cell>45.0387</cell><cell>45.0387</cell><cell>1s</cell><cell>35.259</cell></row><row><cell>64.5982</cell><cell>35.259</cell><cell>25.4792</cell><cell>25.4792</cell><cell>35.259</cell><cell>74.3779</cell><cell>45.0387</cell><cell>15.69945</cell><cell>15.69945</cell><cell>1s</cell><cell>for 1st ind.</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENT</head><p>This work is supported by the Ministry of Higher Education (MOHE) under the Research University Grant (GUP) and My-brain15. The authors would especially like to thank Universiti Teknologi Malaysia, Skudai, Johor Bahru, Malaysia for the support, and the Soft Computing Research Group (SCRG) for their excellent cooperation and contributions to improving this paper.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Computational methods for handwritten questioned document examination</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename><surname>Srihari</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
	<note type="report_type">National Criminal Justice Research Report</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Evaluation of rarity of fingerprints in forensics</title>
		<author>
			<persName><forename type="first">C</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Srihari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="1207" to="1215" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Role of automation in the examination of handwritten items</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename><surname>Srihari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Singer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2012 International Conference on Frontiers in Handwriting Recognition (ICFHR). IEEE</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="619" to="624" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An efficient clustering-based retrieval framework for real crime scene footwear marks</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kasiviswanathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename><surname>Srihari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Granular Computing, Rough Sets and Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="327" to="360" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Forensic identification reporting using automatic biometric systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalez-Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ortega-Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-L</forename><surname>Sanchez-Bote</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Biometric Solutions</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="169" to="185" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Bayesian analysis of fingerprint, face and signature evidences with automatic biometric systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalez-Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fierrez-Aguilar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ramos-Castro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ortega-Garcia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Forensic Science International</title>
		<imprint>
			<biblScope unit="volume">155</biblScope>
			<biblScope unit="issue">2-3</biblScope>
			<biblScope unit="page" from="126" to="140" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Application of plant DNA markers in forensic botany: Genetic comparison of Quercus evidence leaves to crime scene trees using microsatellites</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">J</forename><surname>Craft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Owens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Ashley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Forensic Science International</title>
		<imprint>
			<biblScope unit="volume">165</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="64" to="70" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Footwear print retrieval system for real crime scene marks</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename><surname>Srihari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kasiviswanathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Corso</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Forensics</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="88" to="100" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Introduction to biometrics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ross</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Biometrics</title>
		<imprint>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Introduction to multibiometrics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Ross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Nandakumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Biometrics</title>
		<imprint>
			<biblScope unit="page" from="271" to="292" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A survey of unimodal biometric methods</title>
		<author>
			<persName><forename type="first">N</forename><surname>Solayappan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Latifi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2006 International Conference on Security and Management</title>
				<meeting>the 2006 International Conference on Security and Management</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="57" to="63" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Information fusion in biometrics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Ross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">13</biblScope>
			<biblScope unit="page" from="2115" to="2125" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Discretization: An enabling technique</title>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Hussain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">L</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dash</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="393" to="423" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Pendiskretan set kasar menggunakan taakulan boolean terhadap pencaman simbol matematik</title>
		<author>
			<persName><forename type="first">R</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Darus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M H</forename><surname>Shamsuddin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Bakar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Jurnal Teknologi Maklumat &amp; Multimedia</title>
		<imprint>
			<biblScope unit="page" from="15" to="26" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Feature discretization for individuality representation in twins handwritten identification</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">O</forename><surname>Mohammed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Shamsuddin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Computer Science</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="1080" to="1087" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Invariants discretization for individuality representation in handwritten authorship</title>
		<author>
			<persName><forename type="first">A</forename><surname>Muda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shamsuddin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Darus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Forensics</title>
		<imprint>
			<biblScope unit="page" from="218" to="228" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Off-line signature verification based on geometric feature extraction and neural network classification</title>
		<author>
			<persName><forename type="first">K</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Yan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="9" to="17" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
