
Received September 3, 2019, accepted September 24, 2019, date of publication October 1, 2019, date of current version October 17, 2019.
Digital Object Identifier 10.1109/ACCESS.2019.2944862

Medical Image Super Resolution Using Improved Generative Adversarial Networks

XINYANG BING, WENWU ZHANG, LIYING ZHENG, AND YANBO ZHANG
School of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China
Corresponding author: Liying Zheng ([email protected])
This work was supported by the National Natural Science Foundation of China under Grant 61771155.

ABSTRACT Details of small anatomical landmarks and pathologies, such as small changes in the microvasculature and soft exudates, are critical to accurate disease analysis. However, actual medical images often suffer from limited spatial resolution due to imaging equipment and imaging parameters (e.g., the scanning time of CT images). Recently, machine learning, and especially deep learning, has revolutionized image super resolution reconstruction. Motivated by these achievements, in this paper we propose a novel super resolution method for medical images based on an improved generative adversarial network. To retain as many useful image details as possible while avoiding fake high-frequency information, the original squeeze and excitation block is improved so that it strengthens important features while weakening unimportant ones. Then, by embedding the improved squeeze and excitation block in a simplified EDSR model, we build a new image super resolution network. Finally, a new fusion loss that further strengthens the constraints on low-level features is designed for training our model. The proposed image super resolution model has been validated on public medical images, and the results show that the visual quality of the images reconstructed by our method, especially at high upscaling factors, outperforms state-of-the-art deep learning-based methods such as SRGAN, EDSR, VDSR and D-DBPN.

INDEX TERMS Generative adversarial network, medical image reconstruction, squeeze and excitation
block, super resolution.

I. INTRODUCTION
Details of small anatomical landmarks and pathologies are critical to accurate disease analysis. For example, small changes in the microvasculature around a tumor are an important biomarker for cancer diagnosis [1], and unapparent soft exudates are important pathologies for retinal condition diagnosis [2]. However, many actual medical images suffer from limited spatial resolution due to imaging equipment and imaging parameters (e.g., the scanning time of CT images). Such low resolution impedes the accurate detection or segmentation of small anatomical landmarks and pathologies, and hence the accurate diagnosis of some serious diseases at an early stage.

In the past 30 years, a large amount of work has been reported on improving the resolution of actual medical images. Early resolution enhancement methods, such as basic cubic interpolation and its variants, usually suffer from a great loss of sharp-edged details and high local contrast [3]. Super Resolution (SR) reconstruction techniques then became popular in the community of medical image resolution enhancement. Based on sparse representation, Yang et al. proposed a regularized single image SR method for medical images [4]; Rueda et al. reconstructed a high-resolution version of a low-resolution brain MR image [5]; and Wei et al. proposed a medical image SR algorithm [6] with good Peak Signal to Noise Ratio (PSNR) and visual quality. Recently, based on a random forest model selection strategy, Dou et al. proposed an SR method for obtaining more information from a low resolution medical image [7]. Based on multi-kernel support vector regression, Jebadurai and Peter proposed an SR algorithm for retinal images [8]. Though these methods are more effective than traditional interpolation-based techniques, they are still unable to restore high quality images at high upscaling factors.

The associate editor coordinating the review of this manuscript and approving it for publication was Kathiravan Srinivasan.

Motivated by the tremendous achievements of deep learning in computer vision, new SR techniques have been reported as well. Based on the VGG-net, Kim et al. presented a highly accurate SR method with a Very Deep CNN

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see http://creativecommons.org/licenses/by/4.0/
145030 VOLUME 7, 2019

FIGURE 1. The original SE block (a) and the improved SE block (b).

(VDSR) [9]. Dong et al. first introduced Convolutional Neural Networks (CNNs) to single image SR with their SRCNN model [10]. Based on the basic structure of CNNs, an SR method for grayscale medical images was proposed in [11]. He et al. proposed the Residual Network (ResNet), which makes the training of SR models easier [12]. Tai et al. proposed a 52-layer recursive network to further improve the SR performance of the ResNet [13]; Lim et al. removed unnecessary modules from the ResNet while expanding the model size [14] and achieved a significant improvement. Zhang et al. [15] adopted effective residual dense blocks in an SR model. They then explored a deeper network with channel attention [16] and achieved state-of-the-art PSNR performance.

Recently, thanks to the good performance of generative adversarial networks (GANs) in producing very realistic images, GAN-based image SR models are emerging and still growing in number. For example, SRGAN [17], EnhanceNet [18], and ESRGAN [19] are all GAN-based SR models. Specifically, Mahapatra et al. proposed a medical image SR algorithm using progressive generative adversarial networks (P-GANs) [2].

Though many methods have been reported, as mentioned above, medical image SR is still an open problem, and the reconstruction results remain unsatisfactory at high upscaling factors. Therefore, in this paper, we propose a new medical image SR method based on the GAN framework. We first improve the original Squeeze and Excitation (SE) block [20] by strengthening important features while weakening unimportant ones. Then, after simplifying the original EDSR [14], we embed the improved SE block in the simplified EDSR model. Finally, we design a new fusion loss that further strengthens the constraints on low-level features to train the proposed image SR model. Our experimental results on two medical image datasets show that the strategies of embedding the improved SE block and using the fusion loss give the proposed GAN-based SR model better visual quality than several state-of-the-art models, such as EDSR, VDSR, SRGAN, and D-DBPN, especially at high upscaling factors.

The remainder of the paper is organized as follows. Section II describes the method for improving the SE block. Section III gives the details of the proposed GAN-based SR model. Section IV presents performance assessments, followed by concluding remarks in Section V.

II. IMPROVED SE BLOCK
The SE building block shown in Fig. 1(a) was proposed by Hu et al. [20]. Its basic function is to adaptively recalibrate channel-wise feature responses by explicitly modelling interdependencies between channels. First, using the global pooling in (1), the SE block squeezes each feature map:

y_c = (1/HW) Σ_{i=1}^{H} Σ_{j=1}^{W} x_c(i, j)    (1)

where y_c represents the squeezed feature corresponding to the c-th feature map x_c, and H and W are the height and the width of x_c, respectively. Then, the squeezed features are fed to a fully connected 3-layer neural network whose input layer has the same dimension as its output layer.

The activation function of the original SE block is the following Sigmoid-based function:

s = E_org(y) = σ(W_2 δ(W_1 y))    (2)

where s = [s_1, s_2, ..., s_c] is the scale vector of the original feature maps and E_org(·) denotes the original activation function. y = [y_1, y_2, ..., y_c] is the input feature vector, σ and δ are respectively the Sigmoid function and the ReLU function, and W_1 and W_2 are the weights of the input layer and the output layer, respectively.

The final output of an SE block is obtained with (3):

x̃_c = x_c · s_c    (3)

where "·" denotes elementwise product.

On one hand, the activation function in (2) does not thoroughly utilize the response of the hidden layer. On the other hand, E_org in (2) ranges from 0 to 1. When multiple SE blocks are embedded in a network, such an E_org in (0, 1) will


FIGURE 2. The proposed SR model. I_LR: Low Resolution (LR) images; I_HR: High Resolution (HR) images.

FIGURE 3. The generator of our SR model. ISE: Improved SE block.

make the responses of the middle layers very small, and thus will greatly degrade the performance of the network. Therefore, as shown in Fig. 1(b), in this paper we substitute the activation function in (2) with (4) and obtain an improved SE block:

s = E_imp(y) = {k_1 × [σ(W_2 δ(W_1 y))] + k_2 × σ(y)} × 2    (4)

where k_1 and k_2 are positive numbers with k_1 + k_2 = 1; they control the contributions of the output of the 3-layer network and of its input, respectively.

This improvement to the SE block is beneficial in the following ways:
i) The residual manner in (4) utilizes both the inputs and the outputs of the 3-layer network, and only fine-tuning of the weights is required. Thus, the difficulty of the training process is alleviated.
ii) E_imp(·) in (4) ranges over (0, 2) rather than (0, 1). Therefore, the feature weakening caused by performing many multiplications with a scale less than 1 is effectively alleviated.

III. SUPER RESOLUTION METHOD WITH GAN AND IMPROVED SE
As shown in Fig. 2, our image SR model is built on the GAN framework and the improved SE blocks. Specifically, improved SE blocks are embedded in both the generator and the discriminator, and a fusion layer is appended to the discriminator.

A. THE GENERATOR AND THE DISCRIMINATOR
As shown in Fig. 3, the EDSR model proposed by Lim et al. [14] is simplified to serve as the generator of our GAN-based image SR model. After simplification, the new EDSR has 16 Resblocks and 64 kernels, and the other parameters are the same as those in the original EDSR. We then embed the improved SE blocks in the convolutional layers of the simplified EDSR.

The discriminator of our SR model is shown in Fig. 4. It consists of 8 main convolutional layers whose kernel counts increase from 64 to 512, as in VGG [21]. We embed the improved SE block in each convolutional layer to improve the accuracy of the discriminator. Next, a fusion layer that fuses the features of the last three convolutional layers is added to the discriminator. By doing so, the discriminator pays more attention to the low frequency features, and the freedom of our SR model is also reduced. Finally, the classification is completed by sequentially applying global pooling, a convolution structure, and a Sigmoid activation function. Here, the convolution structure consists of two layers with 1 × 1 kernels.

TABLE 1. Average PSNR on the test dataset with different w_MSE.
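As a concrete illustration, the squeeze-and-excite computation in (1)–(4) can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the weight shapes and the use of a reduction ratio on W_1/W_2 are assumptions consistent with [20].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

def se_forward(x, W1, W2, k1=0.8, k2=0.2, improved=True):
    """Forward pass of an SE block on a feature tensor x of shape (C, H, W).

    W1: (C//r, C) weights of the first FC layer, W2: (C, C//r) weights of
    the second FC layer (r is the dimension reduction ratio). k1 and k2
    follow Eq. (4) and must satisfy k1 + k2 = 1.
    """
    C, H, W = x.shape
    # Eq. (1): global average pooling squeezes each feature map to a scalar.
    y = x.reshape(C, -1).mean(axis=1)
    if improved:
        # Eq. (4): residual mix of the 3-layer network's output and its raw
        # input, rescaled so the gate ranges over (0, 2) instead of (0, 1).
        s = (k1 * sigmoid(W2 @ relu(W1 @ y)) + k2 * sigmoid(y)) * 2.0
    else:
        # Eq. (2): the original SE gate in (0, 1).
        s = sigmoid(W2 @ relu(W1 @ y))
    # Eq. (3): channel-wise rescaling of the feature maps.
    return x * s[:, None, None]
```

With zero weights the original gate collapses to 0.5 per channel, while the improved gate exceeds 1 for positive inputs, illustrating how the (0, 2) range avoids repeatedly shrinking features.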


FIGURE 4. The discriminator of our SR model. ISE: Improved SE block.

FIGURE 5. Example SR results for a test image in the DRIVE database.

B. LOSS FUNCTION
In this paper, we propose a new loss function for training our GAN-based SR model shown in Figs. 2–4. As given in (5), the proposed loss combines the L1 loss (L_1), the relativistic adversarial loss (L_RG) [22], the perceptual loss (L_VGG) [19], and the Mean Square Error loss (L_MSE) [10], [23]:

L_Fusion = L_VGG + w_1 L_1 + w_RG L_RG + w_MSE L_MSE    (5)

where w_1, w_RG and w_MSE are positive real numbers. They are hyper-parameters that control the contribution of each individual loss.

In (5), L_VGG captures higher-level semantic content rather than pixel-level structure in the feature space, and it is closely related to perceptual similarity. The second term, L_1, encourages the network to draw information from the ground truth images. Although both L_VGG and L_1 lead to a high PSNR of the reconstructed image, many high-frequency details are probably lost when only these two losses are adopted. Therefore, the third term, L_RG, is adopted to push the network to produce sharp and clear images. The last term, L_MSE, is used to minimize the MSE between the generated images and the corresponding ground truth.

IV. EXPERIMENTAL RESULTS AND ANALYSIS
The proposed GAN-based medical image SR model has been implemented in PyTorch 0.4.1 on Ubuntu 16.04 with CUDA 8, CUDNN 5.1, and an NVIDIA 1080Ti. All experiments were performed on two retina image datasets, DRIVE [24] and STARE [25]. DRIVE consists of 20 training images and 20 test images, while STARE consists of 397 images. The images in STARE are randomly divided into two parts, part A and part B; part A includes 20 images, and part B includes the other 377 images.

The training dataset consists of 397 retina images, 20 of which come from the training images in DRIVE while the others come from the part B images in STARE. The test dataset consists of 40 retina images that are independent of the training images; 20 of them come from the test images in DRIVE and the others come from part A in STARE. All images are first resized to 1024 × 1024 pixels to serve as reference High Resolution (HR) images.

TABLE 2. Training configuration for training the proposed model.

FIGURE 6. Some details of Fig. 5.
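For clarity, the combination in (5) can be sketched as below. This is a hedged NumPy sketch, not the training code: the perceptual loss l_vgg and the relativistic adversarial loss l_rg require a VGG network and the discriminator, so they are passed in as precomputed scalars, and the default values of w1 and w_rg are illustrative assumptions (the paper fixes them following [19] and selects w_MSE = 0.5 from Table 1).

```python
import numpy as np

def fusion_loss(sr, hr, l_vgg, l_rg, w1=0.01, w_rg=0.005, w_mse=0.5):
    """Eq. (5): L_Fusion = L_VGG + w1*L1 + w_RG*L_RG + w_MSE*L_MSE.

    sr, hr: generated and ground-truth images as float arrays.
    l_vgg, l_rg: perceptual and relativistic adversarial losses, assumed
    to be computed elsewhere. The w1 and w_rg defaults are illustrative.
    """
    l1 = np.mean(np.abs(sr - hr))      # L1 term: pixel-wise absolute error
    l_mse = np.mean((sr - hr) ** 2)    # MSE term: pixel-wise squared error
    return l_vgg + w1 * l1 + w_rg * l_rg + w_mse * l_mse
```

When the generated image equals the ground truth, only the network-based terms L_VGG and w_RG L_RG remain, which shows why the pixel-wise terms act as constraints on low-level features.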


FIGURE 7. Example super resolution results for a test image in STARE database.

TABLE 3. PSNR and SSIM of different models (the best results are in bold).

TABLE 4. The PSNR of the original SE and the improved SE.

A. TRAINING DETAILS
To further augment the training dataset, we randomly apply one of the following operations to the HR images during training: rotation by 90°, 180°, or 270°; horizontal flipping; or zero-mean normalisation. The corresponding Low Resolution (LR) images are obtained by down-sampling each high resolution image with scaling factors of 4, 8 and 16.

Under the constraint that the output of the improved SE block should contribute more than its input, parameters k_1 and k_2 in (4) are experimentally determined as 0.8 and 0.2, respectively. The proposed SR model is trained with the fusion loss in (5). Here, following previous work [19], we fix the values of w_1 and w_RG in (5) while varying w_MSE from 0 to 10. The average PSNR of the proposed model on all test images with a scaling factor of 4 is listed in Table 1.

From Table 1, one can notice that the proposed SR model cannot be successfully trained with w_MSE = 0.01. In our experiments, we find that the model is unstable with small w_MSE (e.g. 0–0.01); from this point of view, w_MSE should not be too small. In this paper, according to Table 1, we choose w_MSE = 0.5.

The dimension reduction ratio in the improved SE block is set to 16, the same as in the original SE block [20]. The ADAM optimizer [26] is adopted for training our SR model, and the training configuration is listed in Table 2. Our model has been trained for 10^6 updates with a batch size of 16.
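The augmentation and LR-generation steps above can be sketched as follows. This is a NumPy sketch under stated assumptions: the paper does not specify the down-sampling kernel, so plain block averaging is used here purely for illustration.

```python
import numpy as np

def augment_hr(hr, rng):
    """Randomly apply one of the training augmentations to an HR image
    (an (H, W) array): rotation by 90, 180, or 270 degrees, horizontal
    flip, or zero-mean normalisation."""
    op = rng.integers(5)
    if op < 3:
        return np.rot90(hr, k=op + 1)   # 90/180/270 degree rotation
    if op == 3:
        return hr[:, ::-1]              # horizontal flip
    return hr - hr.mean()               # zero-mean normalisation

def make_lr(hr, factor):
    """Derive the LR counterpart by down-sampling with the given scaling
    factor (4, 8, or 16); block averaging is an assumed kernel."""
    h, w = hr.shape
    hr = hr[:h - h % factor, :w - w % factor]   # crop to a multiple of factor
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

For the 1024 × 1024 reference images used here, `make_lr(hr, 16)` would yield a 64 × 64 LR input.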


FIGURE 8. Some details of Fig. 7.

B. EVALUATION ON MEDICAL IMAGES
In this section, we evaluate our image SR model on the 40 test images. The traditional Bicubic model and the state-of-the-art SR models EDSR [14], D-DBPN [27], VDSR [9], and SRGAN [17] are chosen for comparison. The parameter settings for each compared model are the same as those in its original paper.

Similar to [14], the last 10 images of the training dataset have been selected as the training validation set on which the evaluation is conducted. The objective metrics PSNR and structural similarity index (SSIM) for the above mentioned models are listed in Table 3, and some visual results are shown in Figs. 5–9. Here, the sample images in Fig. 5 and Fig. 7 are from the DRIVE database and the STARE database, respectively. To show more details, zoomed-in small areas of the reconstructed images in Fig. 5 and Fig. 7 are shown in Fig. 6 and Fig. 8, respectively. Since our model is more competitive at high upscaling factors, Fig. 9 presents more visual results for a scaling factor of 16.

From Table 3, one can see that in terms of PSNR and SSIM, our model outperforms the traditional SR method Bicubic and the state-of-the-art models VDSR and SRGAN. Moreover, though our model is a lightweight network and has many fewer layers than EDSR and D-DBPN (e.g. EDSR has almost twice as many layers as ours), it performs only slightly worse than EDSR and D-DBPN for scaling factors 4 and 8, and is superior to them for high scaling factors (e.g. 16). Specifically, our model is significantly superior to the state-of-the-art EDSR at scaling factor 16, with improvement margins of 8.09 dB (PSNR) and 0.0301 (SSIM). The major reason is that the improved SE blocks embedded in our model can effectively strengthen important features while weakening unimportant ones.
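The PSNR figures reported in Table 3 follow the standard definition; a minimal NumPy sketch (with an assumed peak value of 255 for 8-bit images) is:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak Signal-to-Noise Ratio (in dB) between a reference HR image and
    a reconstructed one; higher is better."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

An improvement margin such as the 8.09 dB reported above corresponds to roughly a 6.4-fold reduction in MSE, since PSNR is logarithmic in the pixel error.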


FIGURE 9. More results on scaling factor 16.

Table 4 lists the PSNR of the model with the original SE blocks and with the improved SE blocks. Here, the same network structure as in Figs. 2–4 is adopted, except that the improved SE blocks in the generator and the discriminator are replaced with the original SE blocks. We can see that our improvement to the SE blocks gives the model higher PSNR and higher SSIM, especially at medium and high upscaling factors (e.g. 8 and 16). From the results in Table 4, it is the improved SE blocks that give our model higher PSNR and SSIM at high upscaling factors.

Figs. 5–9 illustrate that our model can reconstruct SR images with more visual details than the other methods, especially at high upscaling factors (e.g. 16). For example, at an upscaling factor of 16, all compared SR models except ours fail to clearly reconstruct the thin blood vessels indicated by a green arrow in Fig. 5 and Fig. 7. Similar results can be seen in Fig. 6 and Fig. 8. Specifically, Fig. 8 and Fig. 9 show that for a scaling factor of 16 the small blood vessel is lost in the images reconstructed by Bicubic, EDSR, VDSR, D-DBPN, and SRGAN, whereas our model can still reconstruct it, though blurrily. Figs. 5–9 further illustrate that though the PSNR and SSIM of our model are lower than those of the models without an adversarial loss, this reduction in PSNR or SSIM does not degrade the visual quality of the reconstructed images. The major reason is that the new


fusion loss in (5) can effectively drive the model to produce images more similar to the ground truth ones.

V. CONCLUSION
In this paper, by embedding improved SE blocks in the generator and the discriminator of a GAN, and by using a new fusion loss, we have presented an effective lightweight medical image SR model. The experimental results on two retina image datasets have shown that our model outperforms state-of-the-art SR methods, including EDSR, SRGAN, VDSR and D-DBPN, in terms of visual quality and is comparable to existing image SR models in terms of PSNR and SSIM. Moreover, our method can reconstruct images with more detailed structures at higher scaling factors.

REFERENCES
[1] F. Lin, J. D. Rojas, and P. A. Dayton, ''Super resolution contrast ultrasound imaging: Analysis of imaging resolution and application to imaging tumor angiogenesis,'' in Proc. IEEE Int. Ultrason. Symp. (IUS), Sep. 2016, pp. 1–4.
[2] D. Mahapatra, B. Bozorgtabar, and R. Garnavi, ''Image super-resolution using progressive generative adversarial networks for medical image analysis,'' Comput. Med. Imag. Graph., vol. 71, pp. 30–39, Jan. 2019.
[3] T. M. Lehmann, C. Gönner, and K. Spitzer, ''Survey: Interpolation methods in medical image processing,'' IEEE Trans. Med. Imag., vol. 18, no. 11, pp. 1049–1075, Nov. 1999.
[4] S. Yang, Y. Sun, Y. Chen, and L. Jiao, ''Structural similarity regularized and sparse coding based super-resolution for medical images,'' Biomed. Signal Process. Control, vol. 7, no. 6, pp. 579–590, Nov. 2012.
[5] A. Rueda, N. Malpica, and E. Romero, ''Single-image super-resolution of brain MR images using overcomplete dictionaries,'' Med. Image Anal., vol. 17, no. 1, pp. 113–132, Jan. 2013.
[6] S. Wei, X. Zhou, W. Wu, Q. Pu, Q. Wang, and X. Yang, ''Medical image super-resolution by using multi-dictionary and random forest,'' Sustain. Cities Soc., vol. 37, pp. 358–370, Feb. 2018.
[7] Q. Dou, S. Wei, X. Yang, W. Wu, and K. Liu, ''Medical image super-resolution via minimum error regression model selection using random forest,'' Sustain. Cities Soc., vol. 42, pp. 1–12, Oct. 2018.
[8] J. Jebadurai and J. D. Peter, ''Super-resolution of retinal images using multi-kernel SVR for IoT healthcare applications,'' Future Gener. Comput. Syst., vol. 83, pp. 338–346, Jun. 2018.
[9] J. Kim, J. K. Lee, and K. M. Lee, ''Accurate image super-resolution using very deep convolutional networks,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 1646–1654.
[10] C. Dong, C. C. Loy, K. He, and X. Tang, ''Image super-resolution using deep convolutional networks,'' IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 2, pp. 295–307, Feb. 2016.
[11] H. Liu, J. Xu, Y. Wu, Q. Guo, B. Ibragimov, and L. Xing, ''Learning deconvolutional deep neural network for high resolution medical image reconstruction,'' Inf. Sci., vol. 468, pp. 142–154, Nov. 2018.
[12] K. He, X. Zhang, S. Ren, and J. Sun, ''Deep residual learning for image recognition,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, Jun. 2016, pp. 770–778.
[13] Y. Tai, J. Yang, and X. Liu, ''Image super-resolution via deep recursive residual network,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul. 2017, pp. 2790–2798.
[14] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, ''Enhanced deep residual networks for single image super-resolution,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, Jul. 2017, pp. 136–144.
[15] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, ''Residual dense network for image super-resolution,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 2472–2481.
[16] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, ''Image super-resolution using very deep residual channel attention networks,'' in Proc. Eur. Conf. Comput. Vis. (ECCV), Sep. 2018, pp. 286–301.
[17] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, ''Photo-realistic single image super-resolution using a generative adversarial network,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jul. 2017, pp. 4681–4690.
[18] M. S. Sajjadi, B. Scholkopf, and M. Hirsch, ''EnhanceNet: Single image super-resolution through automated texture synthesis,'' in Proc. IEEE Int. Conf. Comput. Vis., Oct. 2017, pp. 4491–4500.
[19] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, ''ESRGAN: Enhanced super-resolution generative adversarial networks,'' in Proc. Eur. Conf. Comput. Vis. (ECCV), Sep. 2018, pp. 1–16.
[20] J. Hu, L. Shen, and G. Sun, ''Squeeze-and-excitation networks,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 7132–7141.
[21] K. Simonyan and A. Zisserman, ''Very deep convolutional networks for large-scale image recognition,'' 2014, arXiv:1409.1556. [Online]. Available: https://arxiv.org/abs/1409.1556
[22] A. Jolicoeur-Martineau, ''The relativistic discriminator: A key element missing from standard GAN,'' 2018, arXiv:1807.00734. [Online]. Available: https://arxiv.org/abs/1807.00734
[23] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, ''Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 1874–1883.
[24] DRIVE: Digital Retinal Images for Vessel Extraction. Accessed: 2004. [Online]. Available: http://www.isi.uu.nl/Research/Databases/DRIVE/
[25] STructured Analysis of the Retina. Accessed: 2000. [Online]. Available: http://cecas.clemson.edu/~ahoover/stare/
[26] D. P. Kingma and J. Ba, ''Adam: A method for stochastic optimization,'' 2017, arXiv:1412.6980. [Online]. Available: https://arxiv.org/abs/1412.6980
[27] M. Haris, G. Shakhnarovich, and N. Ukita, ''Deep back-projection networks for super-resolution,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 1664–1673.

XINYANG BING was born in 1995. She received the B.S. degree in computer science and technology from Harbin Normal University, in 2017. She is currently pursuing the Ph.D. degree in computer science and technology with Harbin Engineering University, under the supervision of Prof. L. Zheng. Her research interests include image analysis and deep learning.

WENWU ZHANG was born in 1992. He received the B.S. degree in computer science and technology from the Northeastern University at Qinhuangdao and the M.S. degree in computer science and technology from Harbin Engineering University. He is currently with The 716 Institute, China Shipping Heavy Industry Group. His research interests include image analysis and machine learning.

LIYING ZHENG was born in 1976. She received the B.S., M.S., and Ph.D. degrees in control theory and control engineering from Harbin Engineering University, in 1999, 2001, and 2003, respectively, where she has been a Professor with the College of Computer Science and Technology, since 2013. She has authored one book, eight inventions, and more than 30 articles. Her research interests include machine learning and pattern recognition.

YANBO ZHANG was born in 1996. She received the B.S. degree in computer science and technology from the Changchun University of Technology, in 2018. She is currently pursuing the M.S. degree in computer science and technology with Harbin Engineering University, under the supervision of Prof. L. Zheng. Her research interests include video prediction and deep learning.

