<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Coupled Feedback Attention Networks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Rong</forename><surname>Wang</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">College of Mathematics and Computer Science</orgName>
								<orgName type="institution">Zhejiang Normal University</orgName>
								<address>
									<settlement>Jin Hua, Zhejiang</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Chunjiang</forename><surname>Duanmu</surname></persName>
							<email>duanmu@zjnu.cn</email>
							<affiliation key="aff1">
								<orgName type="department">College of Physics and Electronic Information Engineering</orgName>
								<orgName type="institution">Zhejiang Normal University</orgName>
								<address>
									<addrLine>Jin Hua</addrLine>
									<settlement>Zhejiang</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Coupled Feedback Attention Networks</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">ACFAF673E99CF3FF228D51654B512F59</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In their daily lives, people frequently need images with both a high dynamic range and a high resolution. Due to the limitations of imaging equipment, high dynamic range images are typically produced by multi-exposure fusion (MEF) of low dynamic range images, while high resolution images are frequently obtained by super-resolution (SR) of low resolution images. MEF and SR are usually studied separately. This paper reviews existing approaches and proposes a coupled feedback attention network to address the issue that current models cannot achieve high dynamic range and high resolution simultaneously.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>High dynamic range (HDR) images contain a broader dynamic range and richer texture features than typical low dynamic range (LDR) and low resolution (LR) images, and high resolution (HR) images can enhance object detection accuracy. The technical methods for obtaining HDR images and HR images are, respectively, multi-exposure image fusion (MEF) and single image super-resolution (SISR).</p><p>By fusing two LDR images, the extreme-exposure image fusion approach creates an HDR image. Ma et al. <ref type="bibr" target="#b11">[8]</ref> proposed a fast approach for fusing multi-exposure images that refines the initial fusion weights with a guided filter. Later, Xu et al. <ref type="bibr" target="#b9">[7]</ref> proposed a unified unsupervised fusion method that overcomes the fusion barrier of most image types by constraining the similarity between the fused image and the source images.</p><p>With the continuous development of deep neural networks, many CNN-based methods have been proposed for SISR. RCAN <ref type="bibr" target="#b5">[4]</ref> introduces an attention mechanism to further improve reconstruction quality. SRFBN <ref type="bibr" target="#b2">[2]</ref> introduces a feedback structure that iteratively refines shallow features into deeper ones.</p><p>The above MEF and SISR methods address the LDR and LR problems separately, but in practice people often need to view HDR, HR images on phones or televisions, so a joint MEF and SR method is necessary. This paper proposes an image exposure fusion and super-resolution method based on a coupled feedback attention network, which effectively suppresses the accumulation of redundant information across iterations and improves parameter sharing and the propagation of exposure features.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Coupled Feedback Attention Network</head><p>To suppress the propagation of redundant features and enhance the propagation of useful features in the coupled feedback network, this paper combines a coupled attention mechanism with a feedback mechanism and proposes an image exposure fusion and image super-resolution method based on a coupled feedback attention network.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Basic network structure</head><p>The structure of the coupled feedback network is shown in Fig. <ref type="figure" target="#fig_0">1</ref>. The shallow features F_h and F_l pass through T rounds of iteration of the coupled feedback attention module in the upper and lower sub-networks, respectively. In each iteration, the feedback features of the other sub-network are combined with the shallow features of the current sub-network as the input of the next iteration, progressively refining the fused features. The coupled feedback attention layer contains multiple coupled feedback blocks and an attention module.</p><p>The extraction of the shallow features F_h and F_l from the LR images can be expressed as</p><formula xml:id="formula_0">F_h = f_{ext}(I_h), \quad F_l = f_{ext}(I_l)</formula><p>where f_{ext} contains two convolutional layers, Conv(3, 4×m) and Conv(1, m), which extract and compress the LR features, respectively. The extracted shallow features first pass through the SRB to obtain the deep features G_h^0 and G_l^0, which can be expressed as</p><formula xml:id="formula_1">G_h^0 = f_{SRB}(F_h), \quad G_l^0 = f_{SRB}(F_l)</formula><p>where f_{SRB} is the super-resolution block (SRB) operation. Next, the deep exposure features of the two sub-networks are fused over several iterations. At each iteration, the feedback features of the previous iteration are coupled with the shallow features F_h and F_l of the respective sub-networks as the input of the current iteration, so the feedback features G_h^t and G_l^t of the t-th iteration can be expressed as</p><formula xml:id="formula_c">G_h^t = f_{CFAB}([G_h^{t-1}, G_l^{t-1}, F_h]), \quad G_l^t = f_{CFAB}([G_l^{t-1}, G_h^{t-1}, F_l])</formula><p>where f_{CFAB} denotes the coupled feedback attention module described in Section 2.2.</p></div>
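<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the data flow above concrete, the following is a minimal PyTorch sketch of the coupled forward pass, read from Fig. <ref type="figure" target="#fig_0">1</ref> and the formulas above; it is not the authors' released code. The channel width m, the number of iterations T, and the placeholder blocks standing in for the SRB and the coupled feedback attention module (Section 2.2) are our own assumptions.</p><code lang="python">
# Minimal sketch of the coupled forward pass (our reading of Fig. 1).
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """f_ext: Conv(3, 4*m) extracts LR features, Conv(1, m) compresses them."""
    def __init__(self, in_ch=3, m=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 4 * m, kernel_size=3, padding=1),
            nn.Conv2d(4 * m, m, kernel_size=1),
        )

    def forward(self, x):
        return self.body(x)

class PlaceholderBlock(nn.Module):
    """Stand-in for the SRB and the coupled feedback attention module."""
    def __init__(self, in_ch, m):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, m, kernel_size=3, padding=1)

    def forward(self, *feats):
        return self.conv(torch.cat(feats, dim=1))

class CoupledForward(nn.Module):
    def __init__(self, m=32, steps=4):
        super().__init__()
        self.steps = steps
        self.ext_h, self.ext_l = FeatureExtractor(m=m), FeatureExtractor(m=m)
        self.srb_h, self.srb_l = PlaceholderBlock(m, m), PlaceholderBlock(m, m)
        self.cfab_h = PlaceholderBlock(3 * m, m)  # input [G_h, G_l, F_h]
        self.cfab_l = PlaceholderBlock(3 * m, m)  # input [G_l, G_h, F_l]

    def forward(self, I_h, I_l):
        F_h, F_l = self.ext_h(I_h), self.ext_l(I_l)  # shallow features
        G_h, G_l = self.srb_h(F_h), self.srb_l(F_l)  # deep features G^0
        for _ in range(self.steps):
            # each branch couples the other branch's feedback with its
            # own shallow features before the next iteration
            G_h, G_l = self.cfab_h(G_h, G_l, F_h), self.cfab_l(G_l, G_h, F_l)
        return G_h, G_l
</code></div>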
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Coupled Feedback Attention Module</head><p>This section describes the iterative process of the coupled feedback block (CFB) and the channel attention (CA) module.</p><p>As shown in Fig. <ref type="figure">2</ref>, the coupled feedback attention structure mainly consists of the CFB, built from iterated convolutional and deconvolutional layers, and channel attention gates.</p><p>Following Section 2.1, in the upper sub-network the inputs of the coupled feedback attention module are G_h^{t-1}, G_l^{t-1} and F_h. First, channel compression is performed through the convolutional layer Conv(1, m) to obtain the input L_t(0) of the coupled feedback attention module:</p><formula xml:id="formula_2">L_t(0) = f_{in}([G_h^{t-1}, G_l^{t-1}, F_h])</formula><p>Next, the features pass through multiple working groups consisting of convolutional and deconvolutional layers. The HR feature H_t(n) of the n-th working group in the t-th iteration can be expressed as</p><formula xml:id="formula_3">H_t(n) = f_{deconv}([L_t(0), L_t(1), \ldots, L_t(n-1)])</formula><p>where f_{deconv} is the deconvolution layer Deconv(3, m); the HR features are generated by jointly upsampling the LR features of the preceding working groups. Similarly, the LR feature L_t(n) can be expressed as</p><formula xml:id="formula_4">L_t(n) = f_{conv}([H_t(1), H_t(2), \ldots, H_t(n)])</formula><p>where f_{conv} is the convolutional layer Conv(3, m). The output of the final, N-th working group is generated from the joint LR features of all N working groups passing through the convolution layer Conv(1, m):</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><formula xml:id="formula_out">G_h^t = f_{out}([L_t(1), L_t(2), \ldots, L_t(N)])</formula><p>The above describes the iterative process of the extreme high-exposure branch; the iterative process of the extreme low-exposure branch is identical.</p><p>The feedback features G_h^t and G_l^t output at each iteration pass through the channel attention module (CA) for feature optimization. The CA in this paper consists of three steps: global information compression, squeeze and excitation, and recalibration.</p></div>
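<div xmlns="http://www.tei-c.org/ns/1.0"><p>A sketch of one branch's working groups is given below, following the formulas above: group n upsamples the concatenation of all earlier LR features, then downsamples the concatenation of all HR features so far, and a final Conv(1, m) fuses L_t(1..N). The number of groups N and the deconvolution/convolution geometry (kernel 2s, stride s, padding s/2 for scale factor s) are our assumptions, not values given in the text.</p><code lang="python">
# Sketch of the coupled feedback block's dense projection groups.
import torch
import torch.nn as nn

class CoupledFeedbackBlock(nn.Module):
    def __init__(self, m=32, n_groups=6, scale=2):
        super().__init__()
        k, s, p = 2 * scale, scale, scale // 2           # assumed geometry
        self.compress_in = nn.Conv2d(3 * m, m, 1)        # Conv(1, m) on [G_h, G_l, F_h]
        self.up = nn.ModuleList()
        self.down = nn.ModuleList()
        for n in range(n_groups):
            # group n sees the concatenation of all n+1 earlier LR (resp. HR) features
            self.up.append(nn.ConvTranspose2d((n + 1) * m, m, k, stride=s, padding=p))
            self.down.append(nn.Conv2d((n + 1) * m, m, k, stride=s, padding=p))
        self.compress_out = nn.Conv2d(n_groups * m, m, 1)  # Conv(1, m) on [L(1..N)]

    def forward(self, G_self, G_other, F_shallow):
        L = [self.compress_in(torch.cat([G_self, G_other, F_shallow], dim=1))]  # L(0)
        H = []
        for n in range(len(self.up)):
            H.append(self.up[n](torch.cat(L, dim=1)))    # HR feature from all LR features so far
            L.append(self.down[n](torch.cat(H, dim=1)))  # LR feature from all HR features so far
        return self.compress_out(torch.cat(L[1:], dim=1))  # feedback feature G^t
</code></div>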
<div xmlns="http://www.tei-c.org/ns/1.0"><head>1) Global information compression</head><p>To obtain the global information of each channel, this paper represents each channel by its global average pooling value:</p><formula xml:id="formula_5">g_h = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} G_h(i, j), \quad g_l = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} G_l(i, j)</formula><p>where G_h(i, j) and G_l(i, j) are the values at each spatial position of the output extreme-exposure features; pooling compresses each channel into a single value, yielding a one-dimensional feature vector.</p><p>2) Squeeze and excitation. To more fully exploit the dependencies between individual channels, this paper introduces a gate mechanism that learns a nonlinear mapping between channels and uses a sigmoid activation function to avoid the formation of adversarial relationships between channels, which can be expressed as</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><formula xml:id="formula_se">s_h = \sigma(W_2 \, \delta(W_1 g_h)), \quad s_l = \sigma(W_2 \, \delta(W_1 g_l))</formula><p>where W_1 and W_2 are the convolutional layer weights and \sigma is the sigmoid function.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3) Recalibration</head><p>The individual channels of the original input features G_h^t and G_l^t are scaled by the channel attention weights just learned, enhancing useful features and suppressing useless ones:</p><formula xml:id="formula_rc">\tilde{G}_h^t = s_h \cdot G_h^t, \quad \tilde{G}_l^t = s_l \cdot G_l^t</formula></div>
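<div xmlns="http://www.tei-c.org/ns/1.0"><p>The three steps of the CA module map directly onto a squeeze-and-excitation style block. A minimal sketch follows; the reduction ratio r and the ReLU inside the gate are our assumptions (the text only fixes the sigmoid).</p><code lang="python">
# Sketch of the channel attention module: squeeze, excitation, recalibration.
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # g: global average over H x W
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // r, 1), # W1 (squeeze)
            nn.ReLU(inplace=True),                 # assumed nonlinearity delta
            nn.Conv2d(channels // r, channels, 1), # W2 (excitation)
            nn.Sigmoid(),                          # sigma, weights in (0, 1)
        )

    def forward(self, G):
        s = self.gate(self.pool(G))  # per-channel attention weights s
        return G * s                 # recalibration: rescale each channel of G
</code></div>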
<figure xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 2</head><label>2</label><figDesc>Coupled feedback attention structure</figDesc></figure>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Loss Function</head><p>The method in this paper performs image super-resolution and image multi-exposure fusion jointly, so the model is optimized with a hierarchical loss function, expressed as</p><formula xml:id="formula_6">L = \lambda_h \, \ell(\hat{I}_h, I_h) + \lambda_l \, \ell(\hat{I}_l, I_l) + \sum_{t=1}^{T} \lambda_t \left( \ell(\hat{I}_{f,h}^{t}, I_f) + \ell(\hat{I}_{f,l}^{t}, I_f) \right)</formula><p>where \hat{I}_h and \hat{I}_l are the super-resolved outputs of the two extreme-exposure branches, I_h and I_l are the corresponding HR reference images, \hat{I}_{f,h}^{t} and \hat{I}_{f,l}^{t} are the fused outputs of the two branches at iteration t, and I_f is the HR, HDR reference image, the target of the final fused image. \lambda_h, \lambda_l and {\lambda_t} are the weight coefficients of the loss terms; in this paper we set \lambda_h = \lambda_l = \lambda_t = 1.</p></div>
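<div xmlns="http://www.tei-c.org/ns/1.0"><p>A sketch of this hierarchical loss is given below, assuming an L1 reconstruction loss (the text does not fix the norm) and one fused output per branch per iteration; all names are illustrative.</p><code lang="python">
# Sketch of the hierarchical loss; weights default to 1 as in the text.
import torch.nn.functional as F

def hierarchical_loss(sr_h, sr_l, fused_h, fused_l,
                      ref_h, ref_l, ref_fused,
                      lam_h=1.0, lam_l=1.0, lam_t=1.0):
    # SR reconstruction terms for the two extreme-exposure branches
    loss = lam_h * F.l1_loss(sr_h, ref_h) + lam_l * F.l1_loss(sr_l, ref_l)
    # fusion terms: fused_h / fused_l hold one prediction per iteration t
    for f_h, f_l in zip(fused_h, fused_l):
        loss = loss + lam_t * (F.l1_loss(f_h, ref_fused) + F.l1_loss(f_l, ref_fused))
    return loss
</code></div>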
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Experiment and Analysis</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Experiment Establishment</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>1) Experimental setup</head><p>The model in this paper was trained on a GeForce GTX 1070 Ti. The experiments mainly use the SICE <ref type="bibr" target="#b7">[5]</ref> dataset, which contains 589 high-quality reference images and their corresponding multi-exposure image sequences; only the extreme-exposure images are used in this paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>2) Comparison Method</head><p>Since the proposed network performs both image super-resolution and image exposure fusion, we combine current image super-resolution methods with image exposure fusion methods as comparison methods. The image super-resolution methods are DBPN <ref type="bibr" target="#b4">[3]</ref>, RCAN <ref type="bibr" target="#b5">[4]</ref>, SRFBN <ref type="bibr" target="#b2">[2]</ref> and SwinIR <ref type="bibr" target="#b12">[9]</ref>, and the image exposure fusion methods are MGFF <ref type="bibr" target="#b14">[10]</ref>, FAST SPD-MEF <ref type="bibr" target="#b8">[6]</ref>, MEF-Net <ref type="bibr" target="#b11">[8]</ref> and U2Fusion <ref type="bibr" target="#b9">[7]</ref>. Combining each SR method with each MEF method in both orders (SR+MEF and MEF+SR) yields 32 comparison methods. CF-Net <ref type="bibr" target="#b0">[1]</ref> was also selected for comparison.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Objective evaluation</head><p>To verify the effectiveness of the proposed method at a magnification factor of 2, we evaluate it on the SICE dataset against the other advanced methods, i.e., the combinations of SR and MEF methods described above. Table <ref type="table" target="#tab_0">1</ref> reports the results of our method and the comparison methods at a magnification factor of 2 under three metrics.</p><p>In Table <ref type="table" target="#tab_0">1</ref>, the best value of each fusion quality metric is shown in bold and the second-best is underlined. Table <ref type="table" target="#tab_0">1</ref> shows that the proposed method achieves the best fusion quality, ranking first among the 34 methods on all metrics: compared with the second-place CF-Net, PSNR improves by 0.25 dB, SSIM by 0.0028, and MEF-SSIM by 0.0005.</p></div>
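<div xmlns="http://www.tei-c.org/ns/1.0"><p>For reference, the PSNR values in Table <ref type="table" target="#tab_0">1</ref> follow the standard definition sketched below; a 0.25 dB gain corresponds to roughly a 5.6% lower mean squared error.</p><code lang="python">
# Standard PSNR for images scaled to [0, max_val].
import torch

def psnr(pred, target, max_val=1.0):
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
</code></div>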
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Subjective evaluation</head><p>Fig. <ref type="figure" target="#fig_3">3</ref> visually compares the fused images produced by our method and other advanced methods at a magnification factor of 2. The results show that, compared with the SR+MEF and MEF+SR methods, our method greatly improves the details, and compared with the coupled feedback network, it alleviates the redundant information introduced into the image by the coupled feedback mechanism.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusion</head><p>Building on the strong image reconstruction ability of the feedback mechanism and the ability of the channel attention mechanism to distinguish the importance of features, this paper proposes a coupled feedback attention network that solves the image super-resolution and image exposure fusion problems simultaneously. The experimental results show that the proposed algorithm retains the detailed information of edges, region boundaries and textures of the original image sequence.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1</head><label>1</label><figDesc>Figure 1 Coupled feedback attention network</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3</head><label>3</label><figDesc>Figure 3 Comparison of different methods of "landscape" under 2×</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1.</head><label>1</label><figDesc>Comparison of the fusion results under the magnification factor of 2</figDesc><table><row><cell cols="13">Super Resolution + Image Fusion</cell></row><row><cell>Methods</cell><cell cols="3">MGFF[10]</cell><cell cols="3">FAST SPD-MEF[6]</cell><cell cols="3">MEF-Net[8]</cell><cell cols="3">U2Fusion[7]</cell></row><row><cell>Combinations</cell><cell>PSNR</cell><cell>SSIM</cell><cell>MEF-SSIM</cell><cell>PSNR</cell><cell>SSIM</cell><cell>MEF-SSIM</cell><cell>PSNR</cell><cell>SSIM</cell><cell>MEF-SSIM</cell><cell>PSNR</cell><cell>SSIM</cell><cell>MEF-SSIM</cell></row><row><cell>DBPN[3]</cell><cell>17.47dB</cell><cell>0.7434</cell><cell>0.9121</cell><cell>17.30dB</cell><cell>0.7615</cell><cell>0.8976</cell><cell>17.26dB</cell><cell>0.7660</cell><cell>0.8888</cell><cell>17.83dB</cell><cell>0.7423</cell><cell>0.8807</cell></row><row><cell>RCAN[4]</cell><cell>17.39dB</cell><cell>0.7406</cell><cell>0.9114</cell><cell>17.34dB</cell><cell>0.7618</cell><cell>0.8974</cell><cell>17.24dB</cell><cell>0.7653</cell><cell>0.8882</cell><cell>17.85dB</cell><cell>0.7409</cell><cell>0.8804</cell></row><row><cell>SRFBN[2]</cell><cell>17.48dB</cell><cell>0.7425</cell><cell>0.9130</cell><cell>17.34dB</cell><cell>0.7601</cell><cell>0.8983</cell><cell>17.29dB</cell><cell>0.7641</cell><cell>0.8895</cell><cell>17.84dB</cell><cell>0.7402</cell><cell>0.8811</cell></row><row><cell>SwinIR[9]</cell><cell>17.44dB</cell><cell>0.7436</cell><cell>0.9113</cell><cell>17.26dB</cell><cell>0.7618</cell><cell>0.8968</cell><cell>17.23dB</cell><cell>0.7667</cell><cell>0.8881</cell><cell>17.82dB</cell><cell>0.7436</cell><cell>0.8802</cell></row><row><cell cols="13">Image Fusion + Super Resolution</cell></row><row><cell>Methods</cell><cell cols="3">DBPN[3]</cell><cell cols="3">RCAN[4]</cell><cell cols="3">SRFBN[2]</cell><cell cols="3">SwinIR[9]</cell></row><row><cell>Combinations</cell><cell>PSNR</cell><cell>SSIM</cell><cell>MEF-SSIM</cell><cell>PSNR</cell><cell>SSIM</cell><cell>MEF-SSIM</cell><cell>PSNR</cell><cell>SSIM</cell><cell>MEF-SSIM</cell><cell>PSNR</cell><cell>SSIM</cell><cell>MEF-SSIM</cell></row><row><cell>MGFF[10]</cell><cell>17.27dB</cell><cell>0.7161</cell><cell>0.9144</cell><cell>17.18dB</cell><cell>0.7122</cell><cell>0.9135</cell><cell>17.38dB</cell><cell>0.7218</cell><cell>0.9158</cell><cell>17.19dB</cell><cell>0.7135</cell><cell>0.9131</cell></row><row><cell>Fast SPD-MEF[6]</cell><cell>17.26dB</cell><cell>0.7554</cell><cell>0.8954</cell><cell>17.24dB</cell><cell>0.7533</cell><cell>0.8949</cell><cell>17.31dB</cell><cell>0.7557</cell><cell>0.8962</cell><cell>17.21dB</cell><cell>0.7546</cell><cell>0.8944</cell></row><row><cell>MEF-Net[8]</cell><cell>17.25dB</cell><cell>0.7636</cell><cell>0.8886</cell><cell>17.23dB</cell><cell>0.7624</cell><cell>0.8882</cell><cell>17.27dB</cell><cell>0.7630</cell><cell>0.8892</cell><cell>17.20dB</cell><cell>0.7629</cell><cell>0.8878</cell></row><row><cell>U2Fusion[7]</cell><cell>17.81dB</cell><cell>0.7384</cell><cell>0.8843</cell><cell>17.82dB</cell><cell>0.7368</cell><cell>0.8837</cell><cell>17.85dB</cell><cell>0.7395</cell><cell>0.8850</cell><cell>17.76dB</cell><cell>0.7374</cell><cell>0.8835</cell></row><row><cell>CF-Net[1]</cell><cell cols="4">PSNR=21.24dB</cell><cell cols="4">SSIM=0.8140</cell><cell cols="4">MEF-SSIM=0.9332</cell></row><row><cell>Ours</cell><cell cols="4">PSNR=21.49dB</cell><cell cols="4">SSIM=0.8168</cell><cell cols="4">MEF-SSIM=0.9337</cell></row></table></figure>
		</body>
		<back>


			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Deep coupled feedback network for joint exposure fusion and image super-resolution</title>
		<author>
			<persName><forename type="first">X</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">T</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="3098" to="3112" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Feedback Network for Image Super-Resolution</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/CVF Conference on Computer Vision and Pattern Recognition</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Deep back-projection networks for single image super-resolution</title>
		<author>
			<persName><forename type="first">M</forename><surname>Haris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Shakhnarovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ukita</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="4323" to="4337" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Image Super-Resolution Using Very Deep Residual Channel Attention Networks</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">European Conference on Computer Vision</title>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Learning a deep single image contrast enhancer from multi-exposure images</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Gu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="2049" to="2062" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Fast multi-scale structural patch decomposition for multi-exposure image fusion</title>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">D</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">W</forename><surname>Yong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="5805" to="5816" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">U2Fusion: A unified unsupervised image fusion network</title>
		<author>
			<persName><forename type="first">H</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Jiang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="502" to="518" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Deep guided learning for fast multi-exposure image fusion</title>
		<author>
			<persName><forename type="first">K</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Duanmu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="2808" to="2819" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">SwinIR: Image Restoration Using Swin Transformer</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Z</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">L</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/CVF International Conference on Computer Vision Workshops</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Multi-scale guided image and video fusion: a fast and efficient approach</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Bavirisetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Circuits, Systems, and Signal Processing</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="5576" to="5605" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
