Accepted Manuscript
Automatic blur type classification via ensemble SVM
Rui Wang, Wei Li, Rui Li, Liang Zhang
PII: S0923-5965(18)30014-6
DOI: https://doi.org/10.1016/j.image.2018.08.003
Reference: IMAGE 15431
To appear in: Signal Processing: Image Communication
Received date : 4 January 2018
Revised date : 6 August 2018
Accepted date : 8 August 2018
Please cite this article as: R. Wang, W. Li, R. Li, L. Zhang, Automatic blur type classification via
ensemble SVM, Signal Processing: Image Communication (2018),
https://doi.org/10.1016/j.image.2018.08.003
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to
our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form.
Please note that during the production process errors may be discovered which could affect the
content, and all legal disclaimers that apply to the journal pertain.
Automatic Blur Type Classification via Ensemble SVM
Rui Wang (a), Wei Li (a), Rui Li (a), Liang Zhang (b)
a. Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, School of Instrumentation
Science and Opto-electronics Engineering, Beihang University, Beijing, 100191, China
b. University of Connecticut, Department of Electrical and Computer Engineering, 371 Fairfield Way, U-4157, Storrs,
Connecticut 06269, United States
Abstract: Automatic classification of blur type is critical to blind image restoration. In this paper, we propose an original
solution for blur type classification of digital images using ensemble Support Vector Machine (SVM) structure. It is
assumed that each image is subject to at most one of three blur types: haze, motion, and defocus. In the proposed
technique, 35 blur features are first calculated from image spatial and transform domains, and then ranked using the
SVM-Recursive Feature Elimination (SVM-RFE) method, which is also adopted to optimize the parameters of the Radial
Basis Function (RBF) kernel of SVMs. Moreover, Support Vector Rate (SVR) is used to quantify the optimal number of
features to be included in the classifiers. Finally, the bagging random sampling method is utilized to construct the
ensemble SVM classifier based on a weighted voting mechanism to classify the types of blurred images. Numerical
experiments are conducted over a sample dataset called the Beihang Univ. Blur Image Database (BHBID), which consists
of 1188 simulated blurred images and 1202 natural blurred images collected from popular national and international
websites (Baidu.com, Flicker.com, Pabse.com, etc.). The experiments demonstrate the superior performance of the
proposed ensemble SVM classifier by comparing it with single SVM classifiers as well as other state-of-the-art blur
classification methods.
Key words: Blur image classification; Support vector machine-recursive feature elimination (SVM-RFE);
Support vector rate (SVR); Feature ranking; Feature selection; Ensemble SVM classifier.
Address all correspondence to: Wei Li, Beihang University, School of Instrumentation Science and
Opto-electronics Engineering, Laboratory of Precision Opto-Mechatronics Technology, No. 37 Xueyuan Road,
Haidian District, Beijing, China, 100191; Tel:+8601082338896; Email: liwei_beihang@buaa.edu.cn
1. Introduction
Image blur, a form of bandwidth reduction of an ideal image owing to an imperfect image formation process, is a
major source of image degradation. It can be caused by, e.g., a blur-incurring point spread function (PSF) in a
simulated environment, defocus of the imaging system, or target motion during the signal capturing process [1].
In addition, interference from natural weather can also result in blurred images during outdoor imaging (e.g., haze
pictures are blurred images produced under natural foggy/smoggy weather) [2].
Research regarding blur phenomena in digital image can be roughly categorized into three main branches:
blur detection, blur classification, and image restoration. Although the topics of image blur analysis have attracted
much attention in recent years, the most focused topic in existing studies is image restoration, i.e., deblurring. In
the current literature, various techniques have been developed for restoration of the blurred images. These
techniques can be further categorized as non-blind and blind methods. The non-blind methods [3-6] require prior
knowledge of the blur kernel parameters, whereas, the blurring operators are assumed to be unknown in advance
in the blind methods [7]. In real applications, deblurring a blurred image without the knowledge of its point spread
function (PSF) using blind methods is much more common and more challenging. The current literature offers
several methods to tackle this challenge. For instance, single-channel blind deconvolution within Bayesian
framework is proposed in [8]. Other methods employing either the single scattering model or the multiple
scattering model have been attempted to resolve the haze removal problem. These include single scattering
model-based guided filtering method in [9] and multiple scattering model-based remote sensing image restoration
method in [10].
In addition to the image deblurring problem, blur detection and blur classification, which are critical to image
deblurring, have become increasingly attractive in the field of image processing. As shown in [11-13], information
about the image blur type or blur parameters, which is obtained through blur detection and blur classification, is
necessary for blur image recovery. However, even though blur detection and blur classification are significant to
image deblurring, current research on these topics is relatively under-explored and the existing results are still far
from practical application. On the other hand, automatic image blur detection and
classification are not only useful but also critical to learning image information, which are indispensable in image
segmentation, depth recovery, and image retrieval, especially, the restoration of blurred photographs. Between
these two topics, blur detection aims at determining whether an image is clear, locally (non-uniform,
spatial-varying) blurred, or globally (uniform, spatial-invariant) blurred. For locally blurred images, segmenting
the image into blurry and non-blurry areas will be carried out during blur detection as well. However, it can be
shown that, without knowing at least the type of the blur, the deblurring filters will not perform up to expectations.
Moreover, if an incorrect blur model is assumed during blur classification, the image will be distorted rather than
restored after the deblurring operation. Therefore, the objective of blur classification is to determine the type of
the blurred areas according to their characteristics extracted from the original digital images. Clearly, blur
detection can be viewed as a preparation step for blur classification. For globally blurred/non-blurred images, blur
detection can be viewed as a special case of blur classification. This paper is intended to contribute to the area of
blur classification.
We know from recent literature that there are several blur detection and classification methods based on the
descriptors of blurs. For example, based on the observation that blurry regions are more invariant to low pass
filtering, Rugna et al [14] introduce a learning method to classify blurry or non-blurry regions in a single input
image. Unfortunately, this method just segments the blurry region from the whole picture, and no follow-up work
is carried out to identify the exact blur types. Another example is the Bayes classifier applied to automatic
detection of locally blurred regions in an image and identification of the blur types based on selected blur features
(e.g., local power spectrum slope, local autocorrelation congruency) [15]. A similar method based on the alpha
channel feature, which has different circularity of blur extension, has been proposed by Su et al. [16]. In addition,
some researchers have investigated the application of the neural network architecture to blur type classification
and parameter identification. A simple single-layered neural network based on multi-valued neurons is presented
by Aizenberg et al. [17] to identify four blur types: defocus, rectangular, motion, and Gaussian. In a recent study,
another learning-based method using a pre-trained deep neural network (DNN) and a general regression neural
network (GRNN) is developed by Yan and Shao [18] to classify three blur types (Gaussian, motion, and defocus)
and then estimate their respective blur parameters. In addition, Ciancio et al. [19] develop a blur assessment
algorithm based on multiple weak features to boost the performance of the final feature descriptor, and show that
the combined features work better than individual features under most circumstances. As a powerful tool in
classification problems with small samples and nonlinearity, Support Vector Machine (SVM) [20] has also been
applied to blur image assessment for both artificially-distorted images and naturally-blurred ones by deriving and
applying blur features from the power spectrum in the frequency domain [21]. Specifically, in [22], Laplace
distribution-based probabilistic SVM classifiers are proposed to detect and localize the blurred regions without
using prior knowledge about the image by implementing feature extraction in the discrete wavelet space. In [23],
an SVM classifier based on Sobel operator and local variance is developed for blur identification. Experimental
results demonstrate the feasibility and validity of the proposed method. Finally, in [24], the spatial features and
spectrum features are extracted to train a multi SVM for classifying the degraded images and evaluating the
quality of the image degradation.
Inspired by the preponderance of SVM in practical applications [20-24], in this paper, we propose a method,
which constructs an SVR-based ensemble SVM classifier by a unique voting weight strategy to identify four kinds
of images: motion, defocus, haze, as well as clear images. It should be noted that, while multiple blur types may
exist in one picture, in the current paper, we only consider the case where a picture contains no more than one blur
type. Extensions of the research to images containing multiple types of blurs will be pursued in future work. Note
also that while motion blur can be caused by the movement of either the object (resulting in locally blurred images)
or the camera (resulting in globally blurred images), this paper considers the latter. For the former, one can first
extract the blurred region and then apply the methods developed in this paper.
Our work contributes to the field of blur image classification techniques in the following three ways:

To the best of our knowledge, a voting weight-based ensemble SVM classifier is applied to blur image
classification for the first time in this paper, and its performance is justified using extensive experiments.

The Support Vector Rate (SVR)-based SVM-RFE method is adopted for the first time to implement the
ranking of multi-characteristic blur features and our experiments prove that SVR can successfully
estimate the significance of the extracted blur features.

Although haze blur images are very common in real applications, they are considered here for the first time in
blur image classification.
The rest of the paper is organized as follows. In Section 2, we overview different types of blur features and
their extraction; Section 3 details our SVR-based Ensemble SVM classification approach, including a brief review
on SVM theory, SVM-RFE-based blur feature selection, and ensemble SVM classifier design. In Section 4,
numerical experiments based on simulated datasets and real-world datasets, as well as the comparisons with the
state-of-the-art approaches, are described. Discussions of the numerical experiments are presented in Section 5.
Section 6 concludes the paper.
2. Blur feature extraction
Blur feature extraction is the basis for blur image classification and it directly determines whether the
classifier can accurately identify the blur images. Except for the recently emerged and very effective machine learning
techniques, such as deep convolutional networks, most classifiers require manual feature extraction. In
general, brainstorming the variables and metrics that might make good features, as well as exploring different
combinations of them, is carried out first. The criteria for selecting the 35 features are based on
analyzing the causes of the three kinds of blur. It is found that the fuzzy effect of a motion blurred image
mainly exists along the direction of motion; the blur of a defocused image can be regarded as a diffusion effect in all
directions, with a larger attenuation of the edge gradient; and image blurring caused by haze can be regarded
as noise interference caused by atmospheric particles, where the edge details of scenes are reduced but still
legible. In this section, we overview and define different types of blur features, including statistical features,
texture features and image quality metrics in the spatial domain, and spectrum features as well as local power
spectrum features in the frequency domain. These features will be later used in the proposed ensemble-SVM
classifier.

Statistical features
The statistical features extracted from an image measure coarseness, contrast, directionality, etc. In this paper,
mean and standard deviation of gray value and dark channel intensity are picked as the statistical features.
Specifically, these features are defined as follows:
1) Mean:

\mu = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I(i,j)    (1)

where I(i, j) represents the gray value or the dark channel intensity of the image, and M × N is the image size.
2) Standard deviation:

\sigma = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[I(i,j) - \mu\right]^{2}}    (2)

where \mu is defined in equation (1).
It should be noted that both the gray value and the dark channel value are very effective in handling haze blurred
images. Indeed, haze blurred images usually present a gray/white color due to the interference of natural fog,
while other types may be rich in color and the distribution of gray values is uneven. As a result, the gray scale
distribution scope of a haze blurred image is relatively narrower than that of the other kinds of blurred pictures, whose gray
values usually show a more random fluctuation pattern. The dark channel value, proposed by He et al [25], is
defined as follows:
J^{dark}(x) = \min_{y \in \Omega(x)}\left(\min_{c \in \{r,g,b\}} J^{c}(y)\right)    (3)

where J^{c}(y) is the color channel map and \Omega(x) is the local area around pixel x. The dark channel values of clear, motion, and
defocus images are shown to be relatively smaller than those of the haze images under the same light condition,
and, thus, may be effective in identifying and classifying such blurs.
In addition, the variance of saturation [26] of color image in HSV color space is also extracted as another
statistical feature for blur classification. Therefore, a total of 5 statistical features are adopted.
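For concreteness, the five statistical features could be computed as in the following Python sketch; the OpenCV color conversions and the 15×15 dark-channel patch size are our own illustrative choices, not values prescribed by the paper.

```python
import cv2
import numpy as np

def statistical_features(bgr_img, patch=15):
    """Five statistical blur features: mean/std of gray values,
    mean/std of the dark channel, and variance of HSV saturation."""
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY).astype(np.float64)

    # Dark channel (Eq. 3): per-pixel minimum over the color channels,
    # followed by a minimum filter (erosion) over the local patch Omega(x).
    min_rgb = bgr_img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(min_rgb, kernel).astype(np.float64)

    # Variance of the saturation channel in HSV color space.
    sat = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)[:, :, 1].astype(np.float64)

    return np.array([gray.mean(), gray.std(),   # Eqs. (1)-(2) on gray values
                     dark.mean(), dark.std(),   # Eqs. (1)-(2) on the dark channel
                     sat.var()])                # saturation variance
```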

Texture features
Texture feature [26], which gives us information about the spatial arrangement of color or intensities in an
image, is one of the most commonly used features in computer vision and image processing. Moreover, texture
feature can also show the iterative or alternative characteristics of not only the gray level between image pixels
but also the image color in space. Therefore, in our work, correlation, inertia, entropy, and energy [27] are
extracted through calculating gray level co-occurrence matrix (GLCM) of blur images in 0°, 45°, 90°, and 135°
directions. This leads to a total of 4 × 4 = 16 texture features to be used in the proposed ensemble SVM classifier.
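A possible implementation of the 16 GLCM texture features, assuming scikit-image's graycomatrix/graycoprops (inertia corresponds to the GLCM 'contrast' property, and entropy, which graycoprops does not provide, is computed directly from the normalized matrix); the pixel-pair distance of 1 is an assumption on our part.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray_uint8):
    """16 texture features: energy, inertia (contrast), entropy and correlation
    of the GLCM computed in the 0, 45, 90 and 135 degree directions."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray_uint8, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)

    energy = graycoprops(glcm, 'energy')[0]            # one value per angle
    inertia = graycoprops(glcm, 'contrast')[0]         # inertia == GLCM contrast
    correlation = graycoprops(glcm, 'correlation')[0]
    # Entropy is not a built-in property; compute it from the matrix itself.
    p = glcm[:, :, 0, :]
    entropy = -np.sum(p * np.log2(p + 1e-12), axis=(0, 1))

    return np.concatenate([energy, inertia, entropy, correlation])  # 16 values
```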

Image quality metrics
Image quality metrics are typically used to evaluate the different properties of a picture [28]. Indeed, it can be
shown that some objective image quality metrics possess certain correlation with the various image blur types,
often appearing as a uniform set of frequencies or some representative marks along special directions in (a certain
region of) an image. Taking motion blur images as an example, the blur effect mainly exists along the motion
direction, whereas perpendicular to the motion direction the blur effect is negligible. By survey analysis,
we find that defocus blur can be regarded as a diffusion effect in all directions, and its edge gradient
attenuation is relatively larger. Also, since the cause of haze blurred images can be attributed to the disturbance of
atmospheric particles, the edges of scene details are decreased (i.e., smoothed) but may still be legible. On the
other hand, these edges in clear image are usually sharp over the whole picture. Hence, three features, namely, the
gray mean grads, peak signal to noise ratio [29], and image Michelson contrast [30] are selected to characterize
the gradient information, noise level, and visual effect of different images. Their computational formulas are as
follows:
1) Image Michelson contrast:

contrast_{M} = \frac{L_{max} - L_{min}}{L_{max} + L_{min}}    (4)

where L_{max} is the maximum pixel gray value of the image and L_{min} is the minimum pixel gray value of the image.
2) Gray mean grads (GMG):

GMG = \frac{1}{(M-1)(N-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{[I(i,j+1)-I(i,j)]^{2} + [I(i+1,j)-I(i,j)]^{2}}{2}}    (5)

where I(i, j) is pixel (i, j)'s gray value, and M, N indicate the length and width of the image respectively.
3) Peak signal to noise ratio (PSNR):

PSNR = 10\log_{10}\left(\frac{255^{2}}{MSE}\right)    (6)

Here, MSE is the mean square error [31], which can be assumed to be the standard deviation of a single image.
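The three quality metrics might be computed as in the sketch below; interpreting the single-image MSE as the image variance is our own reading of the statement above, and the 0-255 gray-value range is assumed.

```python
import numpy as np

def quality_metrics(gray):
    """Michelson contrast (Eq. 4), gray mean grads (Eq. 5) and PSNR (Eq. 6)."""
    gray = gray.astype(np.float64)
    lmax, lmin = gray.max(), gray.min()
    contrast = (lmax - lmin) / (lmax + lmin + 1e-12)

    # Gray mean grads: average gradient magnitude over the (M-1) x (N-1) grid.
    dx = gray[:-1, 1:] - gray[:-1, :-1]      # I(i, j+1) - I(i, j)
    dy = gray[1:, :-1] - gray[:-1, :-1]      # I(i+1, j) - I(i, j)
    gmg = np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

    # PSNR with MSE approximated by the spread (variance) of the single image.
    mse = gray.var()
    psnr = 10.0 * np.log10(255.0 ** 2 / (mse + 1e-12))
    return contrast, gmg, psnr
```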

Spectrum and Transform features
Stable spectrum features, which usually describe the intrinsic characteristics of an image, can indicate the
differences of various images in the frequency domain. As an illustration, Fig.1 shows a set of images and their
corresponding spectrums obtained using the Fast Fourier transform (FFT) as a pre-processing step.
Obviously, there are dramatic differences among the spectral distributions of the four kinds of images. The
spectrum of the defocus blurred image appears as a concentric annulus that concentrates at the center of the
spectrum (Fig.1(f)); the spectrum of the motion blurred image presents parallel stripes with alternating light
and shade (Fig.1(g)); the clear image spectrum assumes an irregular quadrilateral pattern in Fig.1(h); and the
unique spectrum distribution of the haze blurred image, from a certain perspective, looks like a four-point star.
From this knowledge, the spectral parameters of the four types of images can be derived. In our case, we further
process these spectral images, which are transformed from real pictures containing noise, by morphological
erosion and the Canny edge detection method to better extract the main stripes of the various spectrum clusters.

Fig.1 Fourier transform spectrum diagram. (a)-(d): original images (haze, defocus, motion, clear); (e)-(h): spectrums.
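A minimal sketch of this spectrum pre-processing, assuming OpenCV for the morphological erosion and Canny edge detection; the log-magnitude scaling, the 3×3 erosion kernel and the Canny thresholds are illustrative choices rather than the paper's settings.

```python
import cv2
import numpy as np

def spectrum_image(gray):
    """Centered log-magnitude FFT spectrum, cleaned by erosion and Canny edges."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    mag = np.log1p(np.abs(f))                                    # log magnitude
    mag = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Erosion suppresses isolated noise in the spectrum; Canny edge detection
    # then extracts the main spectral stripes for the later fitting steps.
    eroded = cv2.erode(mag, np.ones((3, 3), np.uint8))
    edges = cv2.Canny(eroded, 50, 150)
    return mag, edges
```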
Fig.2 Linear fitting curves of respective spectrums. (a) haze; (b) defocus; (c) motion; (d) clear.
In order to quantify the differences of the main stripes obtained, the most straightforward way is to apply
linear fitting to the spectrums and select the fitting slope, intercept, and fitting error of the fitted line as part of the
blur spectrum features. By comparing the four kinds of spectrum fitting curves shown in Fig.2, we can find that
only the slope value of the motion blur spectrum may change, depending on the motion direction, while the other slope
values generally remain the same, i.e., almost zero. On the other hand, the value of the fitting error displays relatively
large differences for different types of spectrum.

Clearly, linear fitting cannot characterize the behavior of the spectrum of a blurred image very well.
Therefore, projection transform tools are necessary to measure the distribution of the spectrum in different
directions. To accomplish this, the Radon transform is applied to transform the spectrums of f(x, y) in
Fig.1 (e)-(h) into the transformed image H(r, \theta), which gives the distribution of the spectra in a specific direction
by computing the line integral along a set of lines characterized by different values of r and \theta. The formula of the Radon
transform can be expressed as follows:

H(r, \theta) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x, y)\,\delta(r - x\cos(\theta) - y\sin(\theta))\,dx\,dy    (7)
Take the Radon transform diagram of the motion blur image as an example, which is demonstrated in Fig.3
(a). During the Radon transform, Gaussian fitting is conducted along and perpendicular to the fitting line direction,
respectively, to estimate the projection distribution of the spectrums in both directions: normal and tangential.
The Gaussian fitting results for the four images above are shown in Fig.3 (b)(c).
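The four Radon-domain features could be extracted roughly as follows, assuming skimage's radon and SciPy's curve_fit; reading the "normal and tangential" directions as the dominant stripe angle and its perpendicular, and the Gaussian initial guess, are our assumptions.

```python
import numpy as np
from skimage.transform import radon
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

def radon_features(spectrum, stripe_angle_deg):
    """Amplitude and width of Gaussian fits to the Radon projections (Eq. 7)
    taken along and perpendicular to the spectral stripe direction."""
    feats = []
    for theta in (stripe_angle_deg, stripe_angle_deg + 90.0):
        proj = radon(spectrum, theta=[theta % 180.0], circle=False)[:, 0]
        x = np.arange(proj.size) - proj.size / 2.0
        p0 = [proj.max(), 0.0, proj.size / 8.0]            # rough initial guess
        (a, _, sigma), _ = curve_fit(gauss, x, proj, p0=p0, maxfev=5000)
        feats.extend([a, abs(sigma)])                      # amplitude and width
    return feats                                           # 4 Radon features
```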
Fig.3 Radon transform and Gaussian fitting curve diagram. (a) Radon transform diagram for the motion blur spectrum; (b) Gaussian fitting results of the Radon transform for the four kinds of blur images along the spectrum fitting direction; (c) Gaussian fitting results of the Radon transform for the four kinds of blur images in the tangential direction.
It can be observed from (b)(c) in Fig.3 that the fitting curves for the clear and haze images have larger width
and amplitude in both the normal and tangential directions, while the fitting curves of the defocus and motion blur images
show narrower width and lower amplitude. Based on this observation, we select the width and amplitude of the
Gaussian fitting curve as part of the spectrum features to measure the density degree of the spectral distributions.

Finally, note that the Hough transformation [32] can be used to detect straight lines and curves in an image. In
the case of continuous space, the Hough transform can be regarded as a special case of the Radon transform. The
basic principle of the Hough transformation is to use the duality between point and line to transform a given curve in
a binary image under the original space into a point in the parameter space through the curve representation. In
this way, the detection problem of the given curve in the original image can be transformed into searching for the
corresponding peak pixel point in the parameter space. In our implementation, the Hough transformation is utilized to
detect the number of straight lines in the spectrum image by counting the number of peak pixels in the respective
parameter space of the diverse spectrums shown in Fig.1 (e-h). From Fig.4, it is obvious that the clear image
has the largest number of straight lines since its Hough transform spectrum has the most highlight points (peak
values).
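A sketch of the Hough-based feature, assuming skimage's hough_line and hough_line_peaks applied to the binary spectrum edge image; the relative peak threshold is an illustrative choice.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def hough_line_count(edge_img, rel_threshold=0.5):
    """Number of highlight points (line peaks) in the Hough parameter space."""
    h, angles, dists = hough_line(edge_img)
    _, peak_angles, _ = hough_line_peaks(
        h, angles, dists, threshold=rel_threshold * h.max())
    return len(peak_angles)   # one peak per detected straight line
```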
Fig.4 Hough parameter space of the four types of spectrums. (a) haze; (b) defocus; (c) motion; (d) clear.

Local power spectrum features
Inspired by reference [15], the power spectrum is selected as another blur feature in the frequency domain to
further enrich the group of potential features for blur classification. Its expression is given by:
S(u, v) = \frac{1}{MN}\,|F(u, v)|^{2}    (8)

where F(u, v) represents the Fourier transform of the original M × N image. Let u = f\cos\theta, v = f\sin\theta. Then, the
power spectrum S(f, \theta) under two-dimensional polar coordinates can be attained. In addition, by calculating the
sum of S(f, \theta) over different directions, the power spectrum S(f) as a function of frequency f can be derived [33]:

S(f) = \sum_{\theta} S(f, \theta) \approx A / f^{\alpha}    (9)

where A is an amplitude scaling factor over different orientations and \alpha is the exponent of frequency, which is
commonly referred to as the slope of the local power spectrum. The logarithmic form of formula (9) is:

\log S(f) = \log A - \alpha \log f    (10)
Clearly, formula (10) shows the linear relationship between the amplitude and the frequency of spectrum images
under the logarithmic coordinate. Similar to the method used to define spectrum and transform features, the slope,
intercept, and fitting error of linear fitting curves of local power spectrum in the form of expression (10) are
selected as features for blur classification.
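The three local power spectrum features of Eqs. (8)-(10) might be obtained as in the following sketch; the integer-radius binning used to sum S(f, θ) over orientations is our own illustrative implementation.

```python
import numpy as np

def power_spectrum_features(gray):
    """Slope, intercept and fitting error of the log-log linear fit of the
    orientation-summed power spectrum S(f) (Eqs. 8-10)."""
    m, n = gray.shape
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    s = np.abs(f) ** 2 / (m * n)                              # Eq. (8)

    # Sum S(f, theta) over orientations by binning pixels on integer radii.
    yy, xx = np.indices((m, n))
    r = np.hypot(yy - m / 2.0, xx - n / 2.0).astype(int)
    radial_sum = np.bincount(r.ravel(), weights=s.ravel())    # Eq. (9)

    freqs = np.arange(1, min(m, n) // 2)                      # skip the DC term
    log_f, log_s = np.log(freqs), np.log(radial_sum[freqs] + 1e-12)
    slope, intercept = np.polyfit(log_f, log_s, 1)            # Eq. (10)
    fit_err = np.sqrt(np.mean((np.polyval([slope, intercept], log_f) - log_s) ** 2))
    return slope, intercept, fit_err
```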
Summarizing the above discussions, a total of 35 blur features are selected in this paper to characterize the
differences between the four types of images described above (i.e., haze, defocus, motion, clear). These features
are outlined in Table.1.
Table.1 Selected features for blur classification

Domain           | Feature categories     | Characteristics description                                                                                      | Corresponding index
Spatial domain   | Statistical            | Mean and variance of gray value and dark channel distribution, and variance of saturation channel.              | 1-5
                 | Texture                | Energy, inertia, entropy and correlation of the gray level co-occurrence matrix in 0°, 45°, 90°, 135° directions. | 6-21
                 | Image quality metrics  | Contrast, PSNR, GMG.                                                                                             | 22-24
Transform domain | Spectrum               | Slope, intercept, and fitting error of linear fitting curve of spectrums.                                        | 25-27
                 | Radon transform        | Amplitude and width of Gaussian fitting of the Radon transform curve in both normal and tangential directions.  | 28-31
                 | Hough transform        | Number of highlight points in Hough transform parameter space.                                                  | 32
                 | Local power spectrum   | Slope, intercept, and fitting error of linear fitting curve of local power spectrum.                            | 33-35
On the other hand, it should be noted that selecting more features does not necessarily guarantee higher
accuracy in classification tasks. Therefore, an important problem is to identify a subset of most informative
features that can best capture the blur behavior in images. In this paper, this will be accomplished by designing an
ensemble SVM classifier with support vector rate (SVR)-based feature ranking and selection mechanism, and is
detailed next.
3. Design of the Ensemble SVM Classifier with SVR-based Feature Ranking and
Selection
The proposed SVR-based ensemble SVM classifier consists of two phases, namely, the classifier design phase
and the blur feature selection and ranking phase. Each of them is discussed below.
3.1 SVM-based Classification and Overall Framework of the Ensemble SVM Classifier
Support Vector Machine (SVM) is a class of machine learning method developed by Vapnik et al. in the
early 1990s [34]. By projecting data into the feature space and then finding the separating hyperplane that
maximizes the margin between the data, SVM can transform a nonlinear separable problem into a linear separable
problem with different kernel functions [35]. Due to its advantage of enhanced generalization properties and its
efficiency without direct dependence on the dimension of the classified entities, SVM-based method is widely
used to solve classification problems. However, the original SVM is designed for binary classification tasks,
which is not directly applicable to multi-class classification problems. Since the blur classification problem
studied in this paper requires identification of four different types of images (haze, defocus, motion, and clear), a
modified SVM method should be used. In this paper, we investigate both one-against-one (OAO) and
one-against-all (OAA) methods. The implementation of the multi-class SVM classifiers is accomplished using
LIBSVM [36], which is an efficient open-source library tool supporting multi-class classification.
Moreover, it should be noted that, if a single classifier is used to solve multi-class classification problems,
unbalanced sets, and sometimes even rare categories, are likely to appear. In addition, applying sophisticated models
with high dimensional noise to classification problems using single SVMs may lead to limited reliability of the
features. As a result, the prediction accuracy of the SVM classifiers will decrease. Due to these technical barriers,
single SVM classifier may fail in tackling complex computer vision problems such as the four-type classification
one addressed in this paper. To overcome these challenges and limitations of single classifiers, ensemble-based
classifiers have shown strong potential in various domains [37, 38]. Therefore, the ensemble SVM approach is
employed in our classifier design.
3.1.1 The mathematical principle of SVMs
The method of SVM was initially introduced to solve two-class problems. The core idea is to find an
optimized hypothetical hyperplane to distinguish the positive and negative samples. The optimization of
hypothetical hyperplane is achieved through the structural risk function:
\min_{W, b, \xi}\ \frac{1}{2}\|W\|^{2} + C\sum_{i=1}^{N}\xi_{i}
\quad \text{s.t.}\quad y_{i}\left[W \cdot K(x_{i}, x_{j}) + b\right] \geq 1 - \xi_{i},\ \ i = 1, 2, \ldots, N    (11)
where W is the weight vector and b is the bias, both of which are determined only by the training samples. The
regular parameter C is a penalty factor, which can balance the model complexity and empirical risk. In addition,
ξi’s are positive parameters called slack variables, which represent the distance between the misclassified sample
and the optimal hyperplane. Function K(x_i, x_j) is the kernel function; here, K(x_i, x_j) is the Gaussian radial
basis function (RBF) kernel [33], which can be expressed as K(x_i, x_j) = \exp(-\|x_i - x_j\|^{2} / (2\gamma^{2})).
To enable multi-class classification using binary classifiers, one-against-one (OAO) and one-against-all
(OAA) methods [36] are common solutions. Take the four-category classification task as an example: the
respective classification model of OAA and OAO are illustrated in Fig.5.
Fig.5 Illustration of multi-class SVM classification methods. (a) OAA method; (b) OAO method.
Note that lines L1, L2, L3, L4 in Fig.5 (a) and L12, L13, L14, L23, L24, L34 in Fig.5 (b) are hyperplanes to
distinguish different groups of samples. In this figure, A, B, D, E and F are misclassified regions, referred to as
dead zones. As shown in the figure, OAA tends to create larger dead zones than OAO. As far as the computation
efficiency is concerned, for a K-category classification problem, while the number of optimal hyperplanes of OAO
is K(K-1)/2, greater than OAA’s K hyperplanes for K > 3, the computational efforts required by OAO are actually
less than the OAA method in model training. Therefore, the OAO method is employed for constructing the
proposed SVM classifier in this paper. In the numerical experiments carried out in this paper, such OAO-based
multi-class SVMs are implemented using LIBSVM.
Finally, as follows from expression (11), the optimization problem of SVM requires a pair of parameters,
C and  . Specifically, penalty factor C characterizes the tradeoff between the complexity and classification
accuracy of the classifier, while kernel width γ controls the radial effect range of the kernel. The commonly used
optimization method of SVM classifier is cross-validation accompanied with grid-search [36]. In our
implementation, a 10-fold cross-validation is adopted to enhance the performance of the proposed SVM classifier.
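As an illustration, a single member classifier could be set up as below with scikit-learn rather than LIBSVM directly (sklearn's SVC wraps LIBSVM and handles multi-class problems with the one-against-one scheme); the (C, γ) grid ranges are illustrative, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def train_member_svm(X_train, y_train):
    """RBF-kernel SVM with (C, gamma) chosen by 10-fold cross-validation and
    grid search; multi-class handling follows the one-against-one scheme."""
    param_grid = {'C': 2.0 ** np.arange(-5, 16, 2),
                  'gamma': 2.0 ** np.arange(-15, 4, 2)}
    grid = GridSearchCV(SVC(kernel='rbf', decision_function_shape='ovo'),
                        param_grid, cv=10)
    grid.fit(X_train, y_train)
    return grid.best_estimator_, grid.best_params_
```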
3.1.2 Design of the ensemble SVM classifier
As mentioned above, although a trained single SVM classifier may exhibit great classification
performance under some test datasets, it cannot guarantee the same level of performance on other test datasets,
especially when the number of samples is significantly increased. Moreover, for applications with feature
selection methods based on a single classifier, evaluation results are subject to instability. On the other hand, the
ensemble SVM technique can combine a set of single classifiers into a more accurate, stronger one to enhance the
generality and robustness of the SVM classifier. The implementation of the ensemble method relies on two factors:
How to construct each of the member classifiers in the ensemble and how to fuse the member classifiers to form a
strong classifier. This is achieved as follows. First, after calculating the blur features described in Section 2 for all
images in the entire training dataset (see Part A of Fig.6), the bagging-based random sampling method [39] will
be applied repeatedly to obtain a group of member classifiers. Specifically, to obtain member classifier SVM-i, ¾
of the original samples are randomly selected to form the training dataset (Tri), while the rest is used as the
temporary validation dataset (Tei) to evaluate its performance. Then, without investigating the optimal number of
members for an integrated classifier, we directly employ the 10 different classifiers generated during the 10 rounds
of random sampling of the 10-fold cross-validation SVM Recursive Feature Elimination (SVM-RFE) method as the base
learners. Thus, an ensemble SVM classifier with 10 members is constructed in this paper. During this step, the
member classifiers will select features based on the Support Vector Rate (SVR)-based ranking criterion (see
Section 3.2 for details). Clearly, since the member classifiers are constructed using different random samples of
the original dataset, the resulting classifiers may have different personalities and, thus, may have different
classification decisions for the same image. Finally, to fuse the classification decisions from all member classifiers
and form the ensemble SVM classifier, the outputs of the member classifiers are integrated based on a weighted
voting mechanism. The design process of our SVR based ensemble SVM classifier is shown in Part B of Fig.6.
After the ensemble classifier is obtained, it can be used for classification tasks beyond the training dataset, as
illustrated in Part C of Fig.6.
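A minimal sketch of this bagging step, reusing the hypothetical train_member_svm helper from the previous sketch; the 3/4 random split follows the ratio described above, and the accuracy-based weights follow Eq. (13).

```python
import numpy as np

def build_ensemble(X, y, n_members=10, seed=0):
    """Bagging-style construction of n member SVMs with accuracy-based voting weights."""
    rng = np.random.default_rng(seed)
    members, accuracies = [], []
    n = len(y)
    for _ in range(n_members):
        idx = rng.permutation(n)
        tr, te = idx[:3 * n // 4], idx[3 * n // 4:]   # 3/4 training, 1/4 validation
        clf, _ = train_member_svm(X[tr], y[tr])
        members.append(clf)
        accuracies.append(np.mean(clf.predict(X[te]) == y[te]))   # CA_i, Eq. (12)
    weights = np.array(accuracies) / np.sum(accuracies)           # A_i, Eq. (13)
    return members, weights
```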
Fig.6. The overall architecture of the proposed blur type classification system: SVR-based Ensemble SVM Classifier
As shown in Part B of Fig.6, after a member classifier has been trained, the remaining 1/4 of the samples in the
training dataset are used as a temporary validation dataset (Tei) to evaluate the performance of this member SVM
classifier. In our work, the classification accuracy (CA), defined in equation (12), is used to evaluate the
performance of a classifier:
CA = \frac{N_{correct}}{N_{all}} \times 100\%    (12)
where Ncorrect is the number of the correctly classified samples and Nall is the total number of samples. Thus, the
classification accuracy (CAi) for each of the n member SVM classifiers can be calculated by testing each
member classifier on its corresponding validation dataset Tei. Furthermore, the voting weight Ai of each member
classifier can be derived by normalizing CAi as follows:
A_i = \frac{CA_i}{\sum_{i} CA_i},\quad i = 1, 2, \ldots, n    (13)
Clearly, since better classifiers (i.e., the ones with greater CAi) are assigned with greater voting weights Ai,
the constructed ensemble classifier can effectively overcome the potential issues of the evenly weighted voting
method [40]. The details of constructing the ensemble SVM classifier are summarized in Algorithm 1. In this
paper, the ensemble SVM classifier consists of n = 10 member classifiers to ensure a balance between the
diversity of the member classifier and the simplicity of the integrated classifier structure.
Algorithm 1 The integration process of the SVR-based ensemble SVM classifier
Suppose that n member classifiers have been trained.
Input: Test sample image x
Output: The class label of the sample image. Class labels (j): 1-haze, 2-defocus, 3-motion, 4-clear
Begin:
1) Send the sample image x to all n member SVM classifiers and save the classification results Pre as a 1D array with n elements representing the obtained class labels;
2) Let W=[] represent the voting result vector and let Ai denote the voting weight of member classifier SVM-i. The voting result wj of class j is calculated as follows:
   for j = 1:1:4
       for i = 1:1:n
           if j == Pre[i]
               wj = wj + Ai
           end if
       end for
       W[j] = wj
   end for
3) Let wt = max(W), t ∈ {1,2,3,4}; the classification result of the ensemble classifier is t, i.e., sample x belongs to class t;
End
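For reference, a compact Python equivalent of Algorithm 1, assuming the member classifiers and weights produced by the sketches above.

```python
import numpy as np

def ensemble_predict(members, weights, x, n_classes=4):
    """Weighted-vote fusion of member SVM predictions (Algorithm 1)."""
    votes = np.zeros(n_classes)
    for clf, a_i in zip(members, weights):
        label = int(clf.predict(x.reshape(1, -1))[0])   # class labels 1..4
        votes[label - 1] += a_i                         # accumulate voting weight
    return int(np.argmax(votes)) + 1                    # winning class label t
```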
3.2. Feature ranking and selection
In the implementation of the proposed ensemble SVM classifier, a critical part is the selection of a subset of
blur features that maximize the performance of each member classifier. As we know, the irrelevant variables in
the extracted features will slow down computation during training and prediction processes as well as increase
data gathering and storage cost. Moreover, they can even lead to some disrupting effects on the concept to be
learned.
To enhance the generalization performance of the SVM-based classifier, we will discuss an effective feature
ranking and selection method to eliminate irrelevant variables from 35 extracted features presented in Section 2.
Traditional feature selection algorithms usually fall in the categories of wrapper methods and filter methods
[41]. The filter methods are typically computationally less expensive but do not consider the interactions among
the features. As a result, the optimality of the selected feature subset cannot be guaranteed. On the contrary, the
wrapper methods can evaluate features iteratively and jointly and, thus, are effective in capturing interactions
among multiple features. Due to these advantages, the wrapper method is used for feature selection during
construction of the proposed ensemble SVM classifier. Among available methods in the literature, the
SVM-Recursive Feature Elimination (RFE) approach [42] is regarded as an effective wrapper method for feature
selection in single SVM classifier training. In our previous study [43], we have successfully applied SVM-RFE
with correlation filter to screening features of medical images. In this paper, RFE is accomplished as part of the
RBF-SVM classification algorithm using the Support Vector Rate (SVR) metric to rank all 35 extracted features.
Here, SVR is defined as follows:

SVR = \frac{N_{sv}}{N_{total}} \times 100\%    (14)

where N_{sv} is the number of support vectors (i.e., the samples on the supporting plane) and N_{total} is the number of
total training samples. It is commonly known that fewer support vectors can reduce the computational load of
SVM and improve training efficiency. In our experiments, it can be seen from Fig.9 that a smaller SVR (i.e.,
fewer support vectors) also tends to correspond to better classification performance of the SVM classifier.
To better illustrate the ranking process, the flow chart of the whole feature ranking procedure is provided in Fig.7. Initialize
the feature set S with the 35 features and let R be the ranked feature set. Remove one feature from S and use the
remaining 34 features to train an SVM classifier, which is initialized with empirical parameters, to calculate the
support vector rate (SVR). This allows us to evaluate the contribution of the removed feature to the SVM classifier.
Repeat this for all 35 features; the feature whose removal leads to the biggest SVR is then placed into
the ranked set R. Such a feature is not a support vector, lies far away from the hyperplane of the SVM
classifier, and is easy to classify. After the first feature has been picked out, the selection of the second feature is
conducted among the remaining 34 features using the same method. Once the second feature is picked out, place it
into set R behind the first ranked feature. Repeat the same process until all 35 features are ranked. The features
near the front of the ranked set R play a more important role in classifier construction than the ones towards the end.
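The ranking loop of Fig.7 could be sketched as follows, assuming sklearn's SVC, whose n_support_ attribute gives the number of support vectors used in Eq. (14); the fixed empirical (C, γ) values are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def svr_rank_features(X, y, C=1.0, gamma=0.1):
    """Build the ranked set R by repeatedly moving into R the feature whose
    removal yields the largest support vector rate (SVR, Eq. 14)."""
    remaining = list(range(X.shape[1]))
    ranked = []
    while len(remaining) > 1:
        svr_scores = []
        for f in remaining:
            subset = [g for g in remaining if g != f]
            clf = SVC(kernel='rbf', C=C, gamma=gamma).fit(X[:, subset], y)
            svr_scores.append(clf.n_support_.sum() / len(y))   # SVR = N_sv / N_total
        picked = remaining[int(np.argmax(svr_scores))]
        ranked.append(picked)          # appended in the order prescribed by Fig.7
        remaining.remove(picked)
    ranked.extend(remaining)           # the last remaining feature closes the ranking
    return ranked
```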
Fig.7 The flow chart of SVR-based SVM-RFE feature ranking
After all 35 blur features have been ranked, SVR and CA will be used simultaneously to determine the
optimal number of features for training a single SVM classifier. Generally speaking, a higher CA implies that the
classifier possesses better prediction accuracy, while a lower SVR implies better generalization ability of the classifier.
Therefore, the values of SVR and CA can well reflect the classifier’s performance as functions of the number of
selected features. During the implementation of this step, one can study the behavior of SVR and CA as functions
of the number of selected features, and choose the one that optimizes both criteria simultaneously or achieves the
best balance between the two. An example is given in Section 4 through numerical experiments.
4. Experiment Results and Analysis
4.1 Datasets used in experiments
Simulated blurred image dataset: In this paper, we use a dataset with a total of 1188 sample images.
Among all samples, 908 are subject to simulated blurs and 280 are blur-free, clear images. In the experiment, the
training dataset has 418 blurred images (158 haze blur, 129 defocus blur, and 131 motion blur) and 200 clear
images. The remaining 210 blurred images (91 haze blur, 60 defocus blur, and 59 motion blur) and 80 clear
images are used as the testing dataset. In the simulated datasets, only the motion blur and defocus blur images
were generated by specified blur kernels; the other samples (i.e., haze blur and clear images) were directly
downloaded from the same websites as those in the natural blurred image dataset.
Fig.8 Parts of the blur samples. (a)-(d) come from the simulated blur dataset, (e)-(h) come from the natural blur dataset.
Natural blurred image dataset: The training dataset of the natural blurred image dataset consists of 210 haze
blur, 190 defocus blur, 190 motion blur, and 213 clear images for a total of 803 samples. The testing dataset of
natural blurred images contains 127 haze blur, 80 defocus blur, 86 motion blur, and 106 clear images for a total of
399 samples. These samples are collected from famous national and international websites: Baidu.com,
Flicker.com and Pabse.com. A few samples from the above datasets are illustrated in Fig.8. All of the aforementioned
samples are included in our database, denoted BHBID, which can be found online at
http://doip.buaa.edu.cn/info/1092/1073.htm.
4.2 SVM classifier setup
In the numerical experiments, the proposed ensemble SVM classifier is implemented with 10 member SVM
classifiers (see Subsection 3.1), integrated through the weighted voting method described in Subsection 3.1.2.
The feature ranking and selection process discussed in Subsection 3.2 is carried out as follows. First, we evaluate
SVR and CA of the member classifiers as functions of the number of features included based on the simulated
training datasets. The results are illustrated in Fig.9. As one can see from the figure, SVR decreases rapidly when
the number of selected features increases from 1 to about 10 and, as the blue elliptical region indicates, reaches
the lowest values when 9-20 features are selected and used by the member classifiers. Then, SVR starts to
increase slightly, when more than 20 features are used in the member classifiers. Similar observations can be
made for CA as well. Specifically, CA shows significant growth when the number of features is under 5. Then, a
plateau is reached in the red elliptical region, after which a slight decline occurs for larger number of features.
Hence, combining the behavior of both SVR and CA, it can be concluded that including 10-15 features in a
member classifier can lead to optimal classification performance for each member SVM classifier under the
training sets used in this experiment. In this paper, we choose 13 as the total number of features used in each
member classifier and provide the ranked indices of the selected features of each member classifier in Table 2.
Fig.9 SVR and CA of member classifiers as functions of the number of features included in each classifier. The bottom four curves show the relationship between SVR and the number of features, while the top four curves show the relationship between CA and the number of features. Here, the CAs are evaluated based on the validation datasets Tei (i = 1, 2, ..., 10), which consist of the remaining samples after the ith random sampling of the training dataset.
From Table 2, we can see that the feature subsets selected by the member classifiers do not necessarily
contain the same features due to the randomized training datasets. However, some features with strong
discriminating power, such as the mean of dark channel (feature index 3) and the Gaussian fitting amplitude and
width (feature indices 28-31) are selected by all member classifiers. There are also features (feature indices: 18,
22, 30, 31, 35) selected by majority (over 60%) of all member classifiers. Clearly, this observation implies that
these are the key features that have the ability to characterize the spatial or frequency domain of an image in blur
type identification. After the feature subsets are obtained, 10-fold cross-validation and grid-search method are
applied to optimizing parameter pair (C, γ) for each member classifier. Clearly, the unique characteristics of the
member classifiers should result in different optimal (C, γ) pairs as well. In the last column of Table 2, the voting
weight Ai of each member SVM classifier is calculated based on equation (13) in Subsection 3.2. The maximum
and minimum voting weights are 0.1034 (SVM-3) and 0.0978 (SVM-9), respectively. According to the table, the
contribution rate of those members with high voting weights, such as SVM-3, SVM-6, SVM-4, SVM-5 and
SVM-1, to the designed ensemble classifier is larger.
Table 2 The optimal feature subsets and voting weights for each member classifier after training

Member classifiers | Ranked feature indices                          | Voting Weight
SVM-1              | 30, 3, 28, 21, 22, 29, 35, 33, 8, 19, 27, 32, 7 | 0.1001
SVM-2              | 31, 3, 21, 28, 30, 35, 22, 33, 20, 6, 27, 32, 5 | 0.0984
SVM-3              | 30, 3, 28, 22, 9, 33, 31, 35, 8, 29, 18, 14, 11 | 0.1034
SVM-4              | 30, 3, 28, 21, 1, 31, 35, 10, 26, 33, 8, 16, 12 | 0.1012
SVM-5              | 29, 28, 3, 17, 1, 35, 33, 21, 23, 18, 19, 16, 12 | 0.1006
SVM-6              | 3, 30, 28, 10, 22, 35, 32, 31, 18, 14, 11, 15, 29 | 0.1029
SVM-7              | 3, 30, 13, 28, 1, 22, 8, 26, 17, 6, 12, 18, 33  | 0.0984
SVM-8              | 3, 30, 28, 24, 1, 34, 31, 33, 18, 9, 4, 25, 17  | 0.0984
SVM-9              | 31, 3, 28, 13, 22, 35, 33, 32, 12, 7, 25, 16, 18 | 0.0978
SVM-10             | 30, 3, 21, 28, 22, 31, 35, 33, 23, 2, 27, 26, 16 | 0.0989
5. Results and analysis
As described in Section 3.1.2, classification accuracy, which is defined in formula (12), is utilized as the
criterion to evaluate the performance of each member classifier. In this section, it is used again to evaluate the
performance of the ensemble classifier. We first study the effects of feature ranking and selection on the
classifier's performance. Specifically, two ensemble SVM classifiers were constructed: one with member
classifiers each having 13 optimally selected features, and the other with member SVM classifiers each having all
35 features. Our experiments show that 90% (9 out of 10) of the 13-feature member classifiers exhibit superior
performance to all 35-feature member classifiers. The comparison results of the ensemble SVM classifiers based
on the simulated testing dataset (i.e., the testing dataset of simulated blurred image dataset) are summarized in
Table3. As one can see, the classification accuracy of the ensemble classifier is significantly lower without feature
ranking and selection (i.e., when all 35 features are used by all member classifiers). In addition, as suggested by
the last column of Table3, the average classification time of a single blurred image using 13 optimized features is
only about 59% (0.054s) of that (0.092s) with all 35 features. In other words, the feature ranking and selection
not only increases classification accuracy but also reduces computational complexity. These observations
justify the inclusion of the SVR-based SVM-RFE and SVR-CA-based joint feature selection in the proposed
ensemble SVM classifier.
Table3 Comparison of the ensemble SVM classifiers under different feature sets.

Image type (CA, %)        | Haze blur | Defocus blur | Motion blur | Clear    | Total samples | Time
Features unselected (35)  | 90.7692%  | 96.4674%     | 70%         | 92.6829% | 89.3617%      | 0.092s
Features selected (13)    | 95.3486%  | 97.3333%     | 93.3333%    | 96.3415% | 95.7447%      | 0.054s
Next, we compare the performance of the ensemble classifier with that of individual member classifiers. The
results are summarized in Fig.10. For convenience, the ensemble classifier is denoted as “SVM-0” in the figure.
As shown in Fig.10(a), the ensemble classifier (highlighted in the green elliptical region of the figure) has the
highest accuracy in classifying all four types of blur in the entire dataset (see the red curve). On the other hand,
individual member classifiers may exhibit superior performance in classifying a certain type of blur and fail dramatically in
other cases. For instance, member classifier SVM-8 has almost perfect performance recognizing haze blur images,
even higher than the ensemble classifier. On the other hand, its classification accuracy for motion blur images is
under 70% and its overall accuracy in classifying all four types of blur images falls under 80%. Thus, as the
ensemble classifier takes advantage of the power from all member classifiers through the voting-based integration,
the resulting overall accuracy is also enhanced over individual member classifiers. In addition to classification
accuracy, we also evaluate the “stability” performance of the classifiers. To accomplish this, the standard
deviation of the classification accuracy of the ensemble classifier and each individual member classifier is shown
in Fig.10 (b). It can be observed that the ensemble SVM classifier has the lowest classification accuracy standard
deviation, which implies a robust performance in dealing with the blur type classification task. Therefore, based
on the experiment results, we claim that the ensemble classifier generally outperforms the member classifiers in
terms of both classification accuracy and stability.
Fig.10 The performance comparison among the ensemble classifier and the member classifiers. (a) Classification accuracy (CA) comparison; (b) Stability comparison.
Finally, we compare our method with several state-of-the-art methods using the simulated blurred image
dataset and the natural blurred image dataset. In the comparison Table 4, the classification accuracy data of the Naïve
Bayes classifier [15] and the two-step way [16], as well as the single-layered NN [17] and DNN [18], are quoted from the
respective original references. Specifically, the classification accuracy of the Naïve Bayes classifier and the two-step way
method, both constructed with handcrafted features, is evaluated on the respective natural blurred image datasets
adopted in references [15] and [16]. The classification accuracy
of the single-layered neural network classifier, constructed based on the handcrafted blur feature of Fourier
spectrum amplitude, is obtained by testing on a simulated blurred image dataset in reference [17]. The accuracy of
the learned feature-based DNN, which is capable of estimating the blur parameters accurately, is evaluated in
reference [18] based on a simulated dataset generated by blurring the original clear images derived from Berkeley
segmentation dataset (200 images) and Pascal VOC 2007 dataset. In addition to these benchmark accuracy
numbers taken from the literature, we also implement and evaluate the performance of several commonly used
classification methods using the same simulated and natural blurred image datasets described in Section 4.1.
These classifiers include k-nearest neighbor (KNN), Random Forest, Bayes, and Softmax, all of which are trained
with the same 35 blur feature vectors described in Section 2. Specifically, the hyper-parameter of Random Forest,
i.e., decision tree number, is set to 100, and the rest of the hyper-parameters of Random Forest, KNN, Bayes and
Softmax are set to their default values in MATLAB. The comparison results are given in Table 4. In this table,
CA_1 indicates the accuracy on the simulated blurred image dataset and CA_2 indicates the accuracy on the
natural blurred image dataset. Moreover, to distinguish the accuracy results obtained using our datasets from the
ones cited from the references, the experiment results are in italic in the table.
Table4 Comparison with state-of-the-art blur classification methods.

Method                | Blur features                                                | CA_1 (%)     | CA_2 (%)
Bayes [15]            | Local autocorrelation congruency et al. (four blur features) | -            | 65.45% [15]
Two step way [16]     | Alpha channel and singular value features                    | -            | 88.78% [16]
Single-layers NN [17] | Fourier spectrum amplitude                                    | 94%-97% [17] | -
DNN [18]              | Learned features                                              | 95.20% [18]  | -
KNN                   | 35 blur features                                              | 72.76%       | 62.02%
Random Forest         | 35 blur features                                              | 87.83%       | 84.12%
Bayes                 | 35 blur features                                              | 80.00%       | 72.75%
Softmax               | 35 blur features                                              | 89.66%       | 72.91%
Proposed              | 35 blur features                                              | 89.96%       | 86.10%
Proposed              | 13 optimal blur features                                      | 95.74%       | 93.42%
It can be observed from Table 4 that the classification accuracies of the methods tested are generally higher on
the simulated blurred image dataset than on the natural blurred image dataset. The reason can be attributed to the
fact that the simulated blurred images are usually polluted by a single factor, while the natural blurred images may
be contaminated by noise and multiple factors. Therefore, the natural blurred images are more difficult to
classify. In addition, according to the numerical experiment results, our proposed ensemble SVM classifier
outperforms the other blur classification approaches on both the simulated and natural blurred image datasets, and
is on par with the single-layered NN and DNN according to the accuracies reported in [17] and [18].
Furthermore, it should be noted that, while the single-layered NN achieves considerable performance, the Fourier
spectrum amplitude is the sole blur feature used in its construction, which may lead to poor generalization
ability. Moreover, it is widely known that large-scale training samples and considerable time consumption are
inevitable for DNN model training, which may make it impractical in many applications. On the other
hand, our proposed ensemble SVM classifier is constructed based on optimized multi-handcrafted features, and,
thus, possesses enhanced generalization ability and remarkable classification accuracy under small-scale blur
image datasets. Also, the time and facility costs of model training of the proposed classifier are apparently much
lower than those of the DNN method.
6. Conclusions
In this paper, an ensemble SVM classifier is designed to identify four types of images: defocus blur, haze blur,
motion blur and clear image. To accomplish this, SVR-based SVM-RFE theory is implemented to rank 35
extracted image features, which include the statistical features, texture features, image quality metrics, spectrum
features and local power spectrum features, for blur classification. Based on random sampling and the dual criteria
of support vector rate (SVR) and the classification accuracies (CA), 13 most critical features are selected to train
each individual member SVM classifier. Then, the ensemble SVM classifier is constructed by integrating a total
of 10 trained member classifiers based on appropriate weights assigned according to their respective classification
accuracy. By comparison with state-of-the-art blur classification methods, the ensemble classifier demonstrates
superior performance to other handcrafted-based methods and is head-to-head with the learned-based Deep
Learning method. However, the proposed method requires smaller number of samples, relative lower time
consumption and lower cost of equipment for training than those required by the Deep Learning method. This is
due to the fact that SVM is a powerful classification tool, especially for small samples. As shown in the numerical
experiments, the generalization and accuracy of SVM classification is influenced by the number of features
included and by the ensemble-based methods. The experiment results show that the proposed strong
ensemble-SVM classifier has excellent accuracy and stability compared to single SVM classifiers and, thus, can
achieve complex computer vision tasks. Using the convolution neural network(CNN) to achieve more stable and
accurate blur image classification under the big-sample data is our future research direction.
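As a rough illustration of the pipeline summarized above, the following sketch builds an accuracy-weighted bagging ensemble of RBF-kernel SVMs on a pre-selected feature subset. The use of scikit-learn, the function names, and the computation of member weights on the full training set are simplifying assumptions for exposition, not the authors' exact implementation.

```python
# Minimal sketch of the accuracy-weighted bagging ensemble of RBF-kernel SVMs
# summarized above. scikit-learn, the function names, and the use of training
# accuracy for the weights are simplifying assumptions, not the exact implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

def train_ensemble(X, y, selected_features, n_members=10, C=1.0, gamma="scale"):
    """Train n_members SVMs on bootstrap samples restricted to the selected features."""
    Xs = X[:, selected_features]              # e.g. the 13 top-ranked blur features
    members, weights = [], []
    for _ in range(n_members):
        Xb, yb = resample(Xs, y)              # bagging: bootstrap sample of the training set
        clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(Xb, yb)
        members.append(clf)
        weights.append(clf.score(Xs, y))      # member weight from its classification accuracy
    weights = np.asarray(weights) / np.sum(weights)
    return members, weights

def predict_ensemble(members, weights, X, selected_features, classes):
    """Combine member predictions by weighted majority voting."""
    Xs = X[:, selected_features]
    votes = np.zeros((Xs.shape[0], len(classes)))
    for clf, w in zip(members, weights):
        pred = clf.predict(Xs)
        for k, c in enumerate(classes):
            votes[:, k] += w * (pred == c)
    return np.asarray(classes)[np.argmax(votes, axis=1)]
```

In practice, each member's weight would come from held-out or cross-validated accuracy rather than accuracy on the full training set, which the sketch uses only for brevity.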
Acknowledgements
This work was supported in part by a grant from the National Natural Science Foundation of China (61673039).
References
[1] J.H. Elder, S.W. Zucker, Local scale control for edge detection and blur estimation, IEEE T Pattern Anal.
1998, 20(7), pp. 699-716.
[2] R. Wang, R. Li, H. Sun, Haze removal based on multiple scattering model with superpixel algorithm,
Signal Process. 2016, 127, pp. 24-36.
[3] D. Krishnan, R. Fergus, Fast image deconvolution using hyper-laplacian priors, supplementary material,
in:Neural Information Pro-cessing Systems Conference. 2009.
[4] S. Cho, S. Lee, Fast motion deblurring, ACM T Graphic. 2009, 28 (5), pp. 145.
[5] D. Zoran, Y. Weiss, From learning models of natural image patches to whole image restoration, in:2011
International Conference on Computer Vision. 2011, pp. 479-486.
[6] H. Hong, Y. Shi, Fast deconvolution for motion blur along the blurring paths, Canadian Journal of
Electrical and Computer Engineering. 2017, 40(4), pp. 266-274.
[7] M.S. Almeida, L.B. Almeida, Blind and semi-blind deblurring of natural images, IEEE T Image Process.
2010, 19 (1), pp. 36-52.
[8] A.C. Likas, N.P. Galatsanos, A variational approach for Bayesian blind image deconvolution, IEEE T Signal
Process. 2004, 52 (8), pp. 2222-2233.
[9] K. He, J. Sun, X. Tang, Guided image filtering, in:European conference on computer vision. 2010, pp.
1-14.
[10] S. Tao, H. Feng, Z. Xu, Image degradation and recovery based on multiple scattering in remote sensing and
bad weather condition, Opt Express. 2012, 20(15), pp. 16584-16595.
[11] Y.J. Li, X.G. Di, Image mixed blur classification and parameter identification based on cepstrum peak
detection, in: 2016 35th Chinese Control Conference (CCC). IEEE, 2016, pp. 4809-4814.
[12] R. Wang, W. Li, R. Qin, J. Wu, Blur image classification based on deep learning, in: 2017 IEEE
International Conference on Imaging Systems and Techniques (IST). IEEE, 2017, pp. 1-6.
[13] D. Yang, S. Qin, Restoration of degraded image with partial blurred regions based on blur detection and
classification, in: 2015 IEEE International Conference on Mechatronics and Automation (ICMA). IEEE,
2015, pp. 2414-2419.
[14] J. Da Rugna, H. Konik, Automatic blur detection for meta-data extraction in content-based retrieval context,
in:Electronic Imaging 2004. International Society for Optics and Photonics. 2003, pp. 285-294.
[15] R. Liu, Z. Li, J. Jia, Image partial blur detection and classification, in:2008 IEEE Conference on Computer
Vision and Pattern Recognition. 2008, pp. 1-8.
[16] B. Su, S. Lu, C.L. Tan, Blurred image region detection and classification, in:Proceedings of the 19th ACM
international conference on Multimedia. 2011, pp. 1397-1400.
[17] I.N. Aizenberg, C. Butakoff, V.N. Karnaukhov, Blurred image restoration using the type of blur and blur
parameter identification on the neural network, in:Electronic Imaging 2002.International Society for Optics
and Photonics. 2002, pp. 460-471.
[18] R. Yan, L. Shao, Image blur classification and parameter identification using two-stage deep belief
networks, in: BMVC. 2013.
[19] A. Ciancio, A.L.N.T. da Costa, E.A. da Silva, No-reference blur assessment of digital pictures based on
multifeature classifiers, IEEE T Image Process. 2011, 20 (1), pp. 64-75.
[20] A.V. Prishchepov, V.C. Radeloff, M. Dubinin, The effect of Landsat ETM/ETM+ image acquisition dates
on the detection of agricultural land abandonment in Eastern Europe, Remote Sensing of Environment.
2012, 126, pp. 195-209.
[21] E. Mavridaki, V. Mezaris, No-reference blur assessment in natural images using fourier transform and
spatial pyramids, in:2014 IEEE International Conference on Image Processing (ICIP). 2014, pp. 566-570.
[22] V. Kanchev, K. Tonchev, O. Boumbarov, Blurred image regions detection using wavelet-based histograms
and SVM, in:IDAACS (1). 2011, pp. 457-461.
[23] J.P. Qiao, J. Liu, A SVM-based blur identification algorithm for image restoration and resolution
enhancement, Lect Notes Artif Int. 2006, 4252, pp. 28-35.
[24] C. Charrier, O.Lézoray, G. Lebrun. Machine learning to design full-reference image quality assessment
algorithm. Signal Processing: Image Communication, 2012, 27(3), pp. 209-219.
[25] K. He, J. Sun, X. Tang, Single image haze removal using dark channel prior, in: 2009 IEEE
Conference on Computer Vision and Pattern Recognition, pp. 1956-1963.
[26] Y. Maret, F. Dufaux, T. Ebrahimi. Adaptive image replica detection based on support vector classifiers.
Signal Processing: Image Communication, 2006, 21(8), pp. 688-703.
[27] H. Wang, X.H. Guo, Z.W. Jia, et al., Multilevel binomial logistic prediction model for malignant
pulmonary nodules based on texture features of CT image, Eur J Radiol. 2010, 74 (1), pp. 124-129.
[28] X.-Y. Luo, D.-S. Wang, P. Wang, et al., A review on blind detection for image steganography, Signal
Process. 2008, 88 (9), pp. 2138-2157.
[29] Z. Wang, A. Bovik, Mean squared error: love it or leave it? A new look at signal fidelity measures, IEEE
Signal Processing Magazine. 2009, 26 (1), pp. 98-117.
[30] P.J. Bex, W. Makous, Spatial frequency, phase, and the contrast of natural images, JOSA A. 2002, 19 (6),
pp. 1096-1106.
[31] A. Rehman, Y. Gao, J. Wang, et al., Image classification based on complex wavelet structural similarity,
Signal Processing: Image Communication. 2013, 28(8), pp. 984-992.
[32] M. Turker, D. Koc-San, Building extraction from high-resolution optical spaceborne images using the
integration of support vector machine (SVM) classification, Hough transformation and perceptual grouping,
International Journal of Applied Earth Observation and Geoinformation. 2015, 34, pp. 58-69.
[33] D. Tolhurst, Y. Tadmor, T. Chao, Amplitude spectra of natural images, Ophthal Physl Opt. 1992, 12 (2), pp.
229-232.
[34] V.N. Vapnik, V. Vapnik, Statistical learning theory, Wiley New York, 1998.
[35] X.-Y. Wang, H.-Y. Yang, C.-Y. Cui, An SVM-based robust digital image watermarking against
desynchronization attacks, Signal Process. 2008, 88 (9), pp. 2193-2205.
[36] X.-X. Niu, C.Y. Suen, A novel hybrid CNN–SVM classifier for recognizing handwritten digits, Pattern
Recogn. 2012, 45 (4), pp. 1318-1325.
[37] M. Fernandez-Delgado, E. Cernadas, S. Barro, and D. Amorim. Do we need hundreds of classifiers to solve
real world classification problems? JMLR. 2014, 15, pp.3133–3181.
[38] C. Zhang, Y. Ma (Eds.), Ensemble machine learning: methods and applications, Springer Science &
Business Media, 2012.
[39] I. Guyon, J. Weston, S. Barnhill, et al., Gene selection for cancer classification using support vector
machines, Mach Learn. 2002, 46 (1-3), pp. 389-422.
[40] L. Zhang, M. Song, X. Liu, et al., Fast multi-view segment graph kernel for object classification, Signal
Process. 2013, 93 (6), pp. 1597-1607.
[41] D. Tuia, F. Pacifici, M. Kanevski, Classification of very high spatial resolution imagery using mathematical
morphology and support vector machines, IEEE T Geosci Remote. 2009, 47 (11), pp. 3866-3879.
[42] M. Atas, Y. Yardimci, A. Temizel, A new approach to aflatoxin detection in chili pepper by machine vision,
Comput Electron Agr. 2012, 87, pp. 129-141.
[43] R. Wang, R. Li, Y. Lei, et al., Tuning to optimize SVM approach for assisting ovarian cancer diagnosis
with photoacoustic imaging, Bio-medical materials and engineering. 2015, 26 Suppl 1, pp. S975-981.
Highlights
Our work contributes to the field of blur image classification techniques in the following three ways:
1) To the best of our knowledge, a voting-weight-based ensemble SVM classifier is applied to blur image classification for the first time in this paper, and its performance is justified through extensive experiments.
2) The Support Vector Rate (SVR)-based SVM-RFE method is adopted for the first time to rank multi-characteristic blur features, and our experiments show that SVR can successfully estimate the significance of the extracted blur features.
3) Haze blur images are considered for the first time in blur image classification.