An Overview of Traffic Sign Detection Methods
Karla Brkić
Department of Electronics, Microelectronics, Computer and Intelligent Systems
Faculty of Electrical Engineering and Computing
Unska 3, 10000 Zagreb, Croatia
Email: [email protected]
Abstract—This paper reviews the popular traffic sign detection methods prevalent in recent literature. The methods are divided into three categories: color-based, shape-based, and learning-based. Color-based detection methods from eleven different works are studied and summarized in a table for easy reference. Three shape-based detection methods are presented, and a recent method based on the Hough transform is studied in detail. In the section on learning-based detection, we review the Viola-Jones detector and the possibility of applying it to traffic sign detection. We conclude with two case studies which show how the presented methods are used to design complex traffic sign detection systems.

I. INTRODUCTION

Recent increases in computing power have brought computer vision to consumer-grade applications. As computers offer more and more processing power, the goal of real-time traffic sign detection and recognition is becoming feasible. Some new models of high-class vehicles already come equipped with driver assistance systems which offer automated detection and recognition of certain classes of traffic signs.

Traffic sign detection and recognition is also becoming interesting in automated road maintenance. Every road has to be periodically checked for missing or damaged signs, as such signs pose safety threats. The checks are usually done by driving a car down the road of interest and recording any observed problem by hand. The task of manually checking the state of every traffic sign is long, tedious and prone to human error. By using techniques of computer vision, the task could be automated and therefore carried out more frequently, resulting in greater road safety.

To a person acquainted with recent advances in computer vision, the problem of traffic sign detection and recognition might seem easy to solve. Traffic signs are fairly simple objects with heavily constrained appearances. Just a glance at the well-known PASCAL visual object classes challenge for 2009 indicates that researchers are now solving the problem of detection and classification of complex objects with a lot of intra-class variation, such as bicycles, aeroplanes, chairs or animals (see figure 1). Contemporary detection and classification algorithms will perform really well in detecting and classifying a traffic sign in an image. However, as research comes closer to commercial applications, the constraints of the problem change. In driver assistance systems or road inventory systems, the problem is no longer how to efficiently detect and recognize a traffic sign in a single image, but how to reliably detect it in hundreds of thousands of video frames without any false alarms, often using low-quality cheap sensors available in mass production. To illustrate the problem of false alarms, consider the following: one hour of video shot at 24 frames per second consists of 86400 frames. If we assume that in the video under consideration traffic signs appear every three minutes and typically span 40 frames, there are a total of 800 frames which contain traffic signs and 85600 frames which do not contain any signs. These 85600 frames without traffic signs will be presented to our detection system. If our system were to make an error of 1 false positive per 10 images, we would still be left with 8560 false alarms in one hour, or two false alarms every second, rendering the system completely unusable for any serious application! To make the problem even harder, we cannot expect the vehicle on which a commercial traffic sign detection system will be deployed to be equipped with a very high-resolution camera or other helpful sensors, as the addition of such sensors increases production costs.

Fig. 1: Labeled examples from the PASCAL visual object classes challenge 2009. [1]

This paper presents an overview of basic traffic sign detection methods. Using the presented methods as commercial stand-alone solutions is impossible, as they fail to provide the required true positive and false positive rates. However, combining the methods has a synergistic effect, so they are commonly used as building blocks of larger detection systems. In this paper, the traffic sign detection methods are divided into three categories: color-based methods, shape-based methods and methods based on machine learning. After introducing the methods, we present two traffic sign detection systems which use them.

A. Traffic sign classes - the Vienna convention

Before investigating common traffic sign detection methods, it is useful to briefly review the data on which these methods operate, i.e. the classes of traffic signs. In 1968, an international treaty aiming to standardize traffic signs across different countries, the so-called Vienna Convention on Road Signs and Signals, was signed [2]. To date, 52 countries have signed the treaty, among which 31 are in Europe. The Vienna Convention classifies road signs into seven categories, designated with letters A-H: danger warning signs (A), priority signs (B), prohibitory or restrictive signs (C), mandatory signs (D), information, facilities, or service signs (F), direction, position, or indication signs (G) and additional panels (H). Examples of Croatian traffic signs for each of the categories are shown in figure 2.

Fig. 2: Examples of traffic signs. From left to right: a danger warning sign, a prohibitory sign, a priority sign, a mandatory sign, an information sign, a direction sign and an additional panel.

When designing a traffic sign detection system, several constraints have to be considered:

• Sensor type: high resolution or low resolution camera, grayscale or color? Multiple cameras? Other sensors?
• Processing requirements: should the signs be detected in real time or is offline processing acceptable?
• Acceptable true positive and false positive rates: determined by the nature of the problem.

The nature of the problem, the availability of sensors and the target application determine which method to use. For example, color-based detection is pointless if we are working with a grayscale camera. On the other hand, it might be very useful if we are trying to detect traffic signs in high resolution color images taken in broad daylight with a high quality camera. Shape-based detection might not work if we are using a camera with interlacing. Learning-based approaches might be a perfect solution if we have a lot of labeled data, but if no labeled data is available we cannot use them.

II. COLOR-BASED DETECTION METHODS
In the RGB color model, colors are specified as mixtures of red, green and blue components. Figure 3¹ illustrates how two different shades of orange can be obtained by mixing red, green and blue. The differences between the RGB components of the first and the second color are -31, +24 and +59. The RGB color model is unintuitive from a human standpoint - a human might expect to vary just one parameter, namely illumination, to obtain the second color from the first. It would be hard for a human to guess the changes in R, G and B necessary for the required change in color. Similarly, it is hard for a computer to learn that these two colors are similar based purely on the distances between the numerical values of their R, G and B components.

Several color models were designed to address this problem. In the mid-1970s, researchers in computer graphics developed the HSL and HSV color models, which rearrange the RGB color space in cylindrical coordinates so that the resulting representation is closer to human visual perception. A very similar model is HSI, commonly used in computer vision. HSL, HSV and HSI differ only in the definition of the third component – L stands for lightness, V for value and I for intensity. The first two components are hue and saturation. In the HS* cylinder, the angle around the central vertical axis corresponds to hue, the radius to saturation and the height to lightness, value or intensity. In the HS* representation, the components of two similar colors are numerically much closer, which is why it is said to be less sensitive to illumination.

B. Color-based segmentation

The color of a traffic sign should be easily distinguishable from the colors of the environment. After all, traffic signs are specifically designed with this requirement in mind. In order to find a sign of a target color, one segments the image based on that color. Image segmentation is a process which assigns a label to each pixel of an image so that pixels with the same label share similar visual characteristics. The simplest method of image segmentation is thresholding: every pixel with a value above a certain threshold is marked with the appropriate label. Various authors have experimented with color thresholding, especially in the 1990s. High detection rates were reported, but the experiments were usually done on small testing sets. For example, simple thresholding formulas (see table II) are used by Varun et al. [5] and Kuo and Lin [8].

¹ Image adapted from http://en.wikipedia.org/wiki/File:Unintuitive-rgb.png
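To make the thresholding idea concrete, the following sketch labels red-sign candidate pixels using hue, saturation and value bounds in HSV space. The threshold values are illustrative assumptions, not values taken from any of the works cited above.

```python
import colorsys

def is_red_candidate(r, g, b, hue_tol=0.05, min_sat=0.5, min_val=0.3):
    """Label a pixel as a red-sign candidate using HSV thresholds.

    Illustrative thresholds (not from the cited papers): the hue must lie
    within hue_tol of pure red (hue 0.0, wrapping around 1.0), and the
    saturation and value must be high enough to reject gray and dark pixels.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_dist = min(h, 1.0 - h)  # distance to red; hue wraps at 1.0
    return hue_dist <= hue_tol and s >= min_sat and v >= min_val

def segment(image):
    """Binary segmentation mask: 1 for red-sign candidates, 0 otherwise."""
    return [[1 if is_red_candidate(*px) else 0 for px in row] for row in image]

image = [
    [(200, 30, 40), (120, 120, 120)],  # saturated red, gray
    [(60, 60, 200), (220, 60, 50)],    # blue, another red shade
]
print(segment(image))  # -> [[1, 0], [0, 1]]
```

In practice, the thresholds would be tuned on labeled data, and a vectorized implementation over whole image arrays would replace the per-pixel loop.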
External factors such as illumination changes, shadows and adverse weather conditions can greatly impact the success of color-based detection techniques. This significantly reduces the potential of color thresholding as a stand-alone solution for detection. In recent research, color thresholding commonly finds its purpose as a preprocessing step to extract regions of interest [14], [15].

Influences of daily illumination changes are recognized by Benallal and Meunier [3]. They present an interesting experiment in which they observe the color of a red STOP sign over 24 hours. They show that the red color component is dominant between approximately 6.30 am and 9 pm. During that time, the differences δRG and δRB between the red color component and the green and blue components remain high, the red color having a value of approximately 85 above the green and the blue components. Based on this experiment, they propose formulas for color segmentation intended to correctly segment red, green and blue signs (see table II).

Estevez and Kehtarnavaz [4] present an algorithm for detecting and recognizing a small subset of traffic signs which contain red components. The first stage of their algorithm is color segmentation, used to localize red edge areas. The formula for the segmentation, given in table II, relies on a parameter α which can be tuned to varying sensitivities based on intensity levels, in order to avoid sensitivity to illumination. Average intensity values are obtained by sparsely sampling the top line of the image, usually corresponding to the sky. From these values one can speculate about the weather conditions and choose the proper value of α. The exact values of α chosen are not given in the paper.

Broggi et al. [6] propose a way of overcoming the dependency of color on the light source. The default way to determine the light source color is to find a white object in the scene and compute the difference between the image white and theoretical white (RGB values 255, 255, 255). In road sequences one cannot count on having a white reference point, but the road is usually gray. Broggi et al. therefore find a piece of road (it is unclear whether this is an automated procedure or whether it needs to be done by hand) and estimate the light source color by assuming that the road should be gray. They then perform chromatic equalization, similar to gamma correction but with a linearized gamma function.

Ruta et al. [7] use color-based segmentation as a starting stage in traffic sign recognition. They first segment the image based on fixed thresholds (which are not listed in the paper), and then enhance the obtained colors using the formulas shown in table II.

Escalera et al. [12] present an approach for detecting red in HSI color space. The input image is first converted from RGB to HSI. For each pixel, the values of hue and saturation are re-calculated so that the range of saturated red hues is emphasized. This is done by using a lookup table described in table II. The authors assume that the values of hue and saturation are scaled to the range of 0 to 255. The resulting hue and saturation are then multiplied and the result is upper bounded by 255. Thus the response image is obtained. The authors state that the values are multiplied so that the two components can correct each other - if one component is wrong, the assumption is that the other one will not be wrong.

Fang et al. [11] classify colors based on their similarity with pre-stored hues. The idea is that the hues in which a traffic sign appears are stored in advance, and the color label is calculated as a similarity measure against all available hues, so that the most similar classification is chosen.

Paclik et al. [10] present approximative formulas for converting RGB to HSI. The desired color is then obtained by choosing an appropriate threshold for hue, while black and white are found by thresholding the saturation and intensity components.

Gao et al. [13] use the CIECAM97 color model. The images are first transformed from RGB to CIE XYZ values, and then to LCH (Lightness, Chroma, Hue) space using the CIECAM97 model. The authors state that the lightness values are similar for red and blue signs and the background, so only hue and chroma measures are used in the segmentation. The authors consider four distinct cases: average daylight viewing conditions, as well as conditions during sunny, cloudy and rainy weather. Using the acceptable ranges, sign candidates are segmented using a quad-tree approach, meaning that the image is recursively divided into quadrants until all elements are homogeneous or the predefined grain size is reached.

For another view on traffic sign detection by color, see the review paper by Nguwi and Kouzani [16], in which color-based detection methods are divided into seven categories.

III. SHAPE-BASED DETECTION

Several approaches for shape-based detection of traffic signs are recurrent in the literature. Probably the most common approach is using some form of the Hough transform. Approaches based on corner detection followed by reasoning, or on simple template matching, are also popular.

The generalized Hough transform is a technique for finding arbitrary shapes in an image. The basic idea is that, using an edge image, each pixel of the edge image votes for where the object center would be if that pixel were at the object boundary. The technique originated early in the history of computer vision, and it has been extended and modified numerous times, so there are many variants. Here we present work by Loy and Barnes, as it was intended specifically for traffic sign detection and has been used independently in several detection systems. Loy and Barnes [17] propose a general regular polygon detector and use it to detect traffic signs. The detector is based on their fast radial symmetry transform, and the overall approach is similar to the Hough transform. First, the gradient magnitude image is built from the original image. The gradient magnitude image is then thresholded so that points with low magnitudes, which are unlikely to correspond to edges, are eliminated. Each remaining pixel then votes for the possible positions of the center of a regular polygon. One pixel casts its vote at multiple locations distributed along a line which is perpendicular to the gradient of the pixel and whose distance to the pixel is equal to the expected radius of the regular polygon (see figure 4).
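A heavily simplified version of this voting step can be sketched as follows. We assume a single known radius and let each edge pixel cast one vote at the point displaced from it by that radius along its gradient direction; the full method of [17] instead votes along a line, uses negative weights, and adds the equiangular image described below.

```python
import math
from collections import Counter

def vote_centers(edge_pixels, radius):
    """Each edge pixel (x, y, gradient_angle) votes for a possible
    polygon center at distance `radius` along its gradient direction.
    Voting in both directions would handle both dark-on-light and
    light-on-dark signs; here we vote in one direction only, for brevity."""
    votes = Counter()
    for x, y, angle in edge_pixels:
        cx = round(x + radius * math.cos(angle))
        cy = round(y + radius * math.sin(angle))
        votes[(cx, cy)] += 1
    return votes

# Synthetic example: four edge pixels of a shape centered at (10, 10)
# with radius 5, whose gradients all point toward the center.
edges = [
    (15, 10, math.pi),       # right of center, gradient points left
    (5, 10, 0.0),            # left of center, gradient points right
    (10, 15, -math.pi / 2),  # below center, gradient points up
    (10, 5, math.pi / 2),    # above center, gradient points down
]
votes = vote_centers(edges, radius=5)
print(votes.most_common(1))  # -> [((10, 10), 4)]
```

The cell that accumulates the most votes is taken as the candidate center; in the full method the vote image is further combined with the equiangular image before the maxima are read off.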
Notice that there are actually two lines which satisfy these requirements, one in the direction of the gradient and the other in the opposite direction. Both can be used if we do not know in advance whether signs will be lighter or darker than the background. The length of the voting line is bounded by the expected radius of the regular polygon. The votes towards the ends of the line have negative weights, to minimize the influence of straight lines in the image which are too long to be polygon edges. The resulting vote image is labeled Or.

Fig. 4: Locations on which a pixel votes for the object center. The parts of the line which are black indicate negative votes. Image from [17].

In addition to the vote image, another image, called the equiangular image, is built. The proposed procedure favors equiangular polygons by utilizing the following property: if the gradient angles of the edge pixels of an n-sided regular polygon are multiplied by n, the resulting angles will be equal (see figure 5). For instance, consider an equiangular triangle for which we sample one gradient angle value at each side. Suppose that we obtain gradient values of 73°, 193° and 313°. The gradients are spaced at 360°/n = 120°. Then 73° × 3 = 219°, and 193° × 3 = 579°, with 579° − 360° = 219°. Similarly, 313° × 3 = 939°, and 939° − 2 × 360° = 219°. For each pixel which voted for the polygon center, a unit vector is constructed. The slope of the unit vector is made equal to the gradient angle of the pixel multiplied by the number of sides of the sought regular polygon. The pixel then again casts its vote at locations determined by the voting line, except that this time the vote takes the form of the constructed unit vector. The votes are cast in a new image called the equiangular image. Each point in this image represents a vector which is the sum of all contributing votes. The votes coming from the edges of equiangular polygons will have the same slope, so the magnitudes of the vote vectors in equiangular polygon centroids should be the largest.

Fig. 5: Multiplying the gradient angles of a triangle by 3. The resulting angles are equal. Image from [17].

Finally, the vote image and the norm of the equiangular image are combined to produce the overall response. The computational complexity of this method is O(Nkl), where l is the maximum length of the voting line, N is the number of pixels in the image and k is the number of radii being considered. The main weakness of the approach is that the radius of the sought polygon should be known in advance, which is not always easy to accomplish. This can be solved by trying out multiple radii, but it might be too expensive in terms of processing time.

Another interesting approach to finding shapes of interest is to use a corner detector and then hypothesise about the locations of regular polygons by observing the relations between corners. Paulo and Correia [18] detect triangular and rectangular signs by first applying the Harris corner detector to a region of interest, and then searching for the existence of corners in six predefined control areas of the region. The shape is determined based on the configuration of the control regions in which corners are found. The control areas are shown in figure 6.

Fig. 6: Predefined control areas from [18]. The shape is determined by the existence of corners in the control areas.

Gavrila [19] uses distance transform based template matching for shape detection. First, edges in the original image are found. Second, a distance transform (DT) image is built (see figure 7). A DT image is an image in which each pixel represents the distance to the nearest edge. To find the shape of interest, the basic idea is to match a template (for instance, a regular triangle) against the DT image. In order to find the optimal match, the template is rotated, scaled and translated. One might consider attempting to match the template with the raw edge image instead, but by matching with the DT image the resulting similarity measure is much smoother. In Gavrila's extension of this basic idea, the edges are differentiated by orientation, so that separate DT images are computed for distinct edge orientation intervals and templates are separated into parts based on the orientations of their edges. The overall match measure is a sum of match measures between DT images and templates of specific orientations. Gavrila also uses a template hierarchy, with the idea that similar templates are grouped into prototypes; once the best prototype has been found, the process finds the best template within that prototype. This saves computation costs.

IV. DETECTION BASED ON MACHINE LEARNING

In the approaches outlined above, the prior knowledge of the problem (the expected color and shape of a traffic sign) is manually encoded into the solution. However, this knowledge could also be discovered using machine learning.
Fig. 7: Building the distance transform image. From left to right: the original image, the edge image and the distance transform image. The template for which the DT image is searched is a simple triangle. Images from [19].

Research by Viola and Jones [20] presented a significant milestone in computer vision. Viola and Jones developed an algorithm capable of detecting objects very reliably and in real time. The detector is trained using a set of positive and negative examples. While originally intended for face detection, the detector has since been successfully applied by various researchers to many other object classes. Among others, traffic signs were successfully detected.

The detector of Viola and Jones is an attentional cascade of boosted Haar-like classifiers. It combines two concepts: (i) AdaBoost and (ii) Haar-like classifiers. Haar-like classifiers are built using simple rectangular features which represent differences of sums of specific pixels in an image. Each feature is paired with a threshold, and the decision of the so-built classifier is determined by comparing the value of the feature with the threshold. Four feature types used in the original paper are shown in figure 8. Viola and Jones propose a very fast method of computing such features which utilizes the so-called integral image. The value of each feature can be computed in fewer than ten array references.

Each stage of the cascade is required to achieve a very high true positive rate while tolerating a relatively high false positive rate, and all stages of the cascade follow the same numerical limitations. Each stage is trained so that the false positives of the previous stage are labeled as negatives and added to the training set. Hence, subsequent stages are trained to correct the errors of previous stages, while preserving the high true positive rates. Using the cascade enables faster processing, as obvious false positives are discarded early on.

The detection is carried out by sliding a detection window across the image. Within the window, the response of the cascade is calculated. After completing one pass over the image, the size of the detection window is increased by some factor (OpenCV defaults to 1.2, meaning that the scale of the window is increased by 20%). The window size is increased until some predefined size is reached. Increasing the detection window by a smaller factor yields better detection rates, but increases the total processing time.
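The integral image trick can be sketched as follows: after a single pass to build a cumulative-sum table, the sum over any rectangle (and hence any Haar-like feature, being a difference of such sums) is obtained from four array references. This is a generic illustration of the technique, not code from [20].

```python
def integral_image(img):
    """ii[y][x] = sum of img over all pixels (i, j) with i < y, j < x.
    The table has one extra row and column of zeros, so rectangle
    sums need no special cases at the image border."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over the w-by-h rectangle with top-left corner (x, y),
    computed from four references into the integral image."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
ii = integral_image(img)
# A two-rectangle Haar-like feature: left column minus right column
# of the top-left 2x2 block.
feature = rect_sum(ii, 0, 0, 1, 2) - rect_sum(ii, 1, 0, 1, 2)
print(rect_sum(ii, 0, 0, 2, 2), feature)  # -> 12 -2
```

Since every rectangle sum costs a constant four references regardless of its size, feature values can be evaluated at any window scale without rescaling the image, which is what makes the sliding-window cascade fast.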
Timofte et al. [15] present a complex system for traffic sign detection and recognition. For data acquisition they use a van with eight high-resolution cameras, two on each side of the van. The detection is carried out in two phases: (i) a single-view phase and (ii) a multi-view phase. In the single-view phase, the image is first thresholded to find the colors of interest. In addition, a transformation similar to the generalized Hough transform is used for finding the shapes of interest. This step is very fast and yields very few false negatives. To verify that the extracted candidates are traffic signs, the Viola-Jones detector is run on the obtained bounding boxes. For additional verification, they employ an SVM classifier which operates on normalized RGB channels, pyramids of HOGs and discriminative Haar-like features selected by AdaBoost. To recognize the class of the traffic sign, six one-vs-all SVMs are used, each corresponding to one class of traffic signs (triangular, triangular upside-down, circular with red rim, circular blue, rectangular, diamond-shaped). In the multi-view phase, the data collected during the single-view phase is first integrated into hypotheses. Every pair of detections taken from different views is considered. The pair is checked for geometrical consistency (the position of the hypothesis is backprojected to 2D and checked against the image candidates) and visual consistency (pairs of detections with the same basic shape are favored). Next, the set of all hypotheses is pruned using the minimum description length (MDL) principle. The idea is to find the smallest possible set of 3D hypotheses which matches the known camera positions and calibrations and is supported by detection evidence. For an illustration, see figure 12. In the end, the set of 2D observations forming a 3D hypothesis is classified by an SVM classifier. The majority of votes determines the final type assigned to the hypothesis, i.e. the exact type of the traffic sign.

Fig. 12: Pruning the 3D hypotheses using the MDL principle. The shorter set of hypotheses (right) is preferred over the longer one (left). Image from [15].

VII. CONCLUSION

In this paper, we have presented traffic sign detection methods which are often used as building blocks of complex detection systems. The methods were divided into color-based, shape-based and learning-based categories.

REFERENCES

[1] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes Challenge 2009 (VOC2009) Results," http://www.pascal-network.org/challenges/VOC/voc2009/workshop/index.html.
[2] Inland Transport Committee, Convention on Road Signs and Signals. Economic Commission for Europe, 1968.
[3] M. Benallal and J. Meunier, "Real-time color segmentation of road signs," in Canadian Conference on Electrical and Computer Engineering (CCECE 2003), vol. 3, pp. 1823–1826, May 2003.
[4] L. Estevez and N. Kehtarnavaz, "A real-time histographic approach to road sign recognition," in Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 95–100, Apr. 1996.
[5] S. Varun, S. Singh, R. S. Kunte, R. D. S. Samuel, and B. Philip, "A road traffic signal recognition system based on template matching employing tree classifier," in Proceedings of the International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2007). Washington, DC, USA: IEEE Computer Society, 2007, pp. 360–365.
[6] A. Broggi, P. Cerri, P. Medici, P. Porta, and G. Ghisio, "Real time road signs recognition," in IEEE Intelligent Vehicles Symposium 2007, pp. 981–986, June 2007.
[7] A. Ruta, Y. Li, and X. Liu, "Detection, tracking and recognition of traffic signs from video input," Oct. 2008, pp. 55–60.
[8] W.-J. Kuo and C.-C. Lin, "Two-stage road sign detection and recognition," in IEEE International Conference on Multimedia and Expo 2007, pp. 1427–1430, July 2007.
[9] G. Piccioli, E. D. Micheli, P. Parodi, and M. Campani, "Robust method for road sign detection and recognition," Image and Vision Computing, vol. 14, no. 3, pp. 209–223, 1996. [Online]. Available: http://www.sciencedirect.com/science/article/B6V09-3VVCMCX-4/2/0f2793e7828195ecb68735a80a9ef904
[10] P. Paclík, J. Novovičová, P. Pudil, and P. Somol, "Road sign classification using Laplace kernel classifier," Pattern Recognition Letters, vol. 21, no. 13-14, pp. 1165–1173, 2000.
[11] C.-Y. Fang, S.-W. Chen, and C.-S. Fuh, "Road-sign detection and tracking," vol. 52, no. 5, pp. 1329–1341, Sep. 2003.
[12] A. D. L. Escalera, J. M. A. Armingol, and M. Mata, "Traffic sign recognition and analysis for intelligent vehicles," Image and Vision Computing, vol. 21, pp. 247–258, 2003.
[13] X. Gao, L. Podladchikova, D. Shaposhnikov, K. Hong, and N. Shevtsova, "Recognition of traffic signs based on their colour and shape features extracted using human vision models," Journal of Visual Communication and Image Representation, vol. 17, no. 4, pp. 675–685, 2006.
[14] A. Ruta, Y. Li, and X. Liu, "Real-time traffic sign recognition from video by class-specific discriminative features," vol. 43, no. 1, pp. 416–430, 2010.
[15] R. Timofte, K. Zimmermann, and L. Van Gool, "Multi-view traffic sign detection, recognition, and 3D localisation," Snowbird, Utah, 2009, pp. 69–76.
[16] Y.-Y. Nguwi and A. Z. Kouzani, "Detection and classification of road signs in natural environments," Neural Computing and Applications, vol. 17, no. 3, pp. 265–289, 2008.
[17] G. Loy and N. Barnes, "Fast shape-based road sign detection for a driver assistance system," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2004, pp. 70–75.
[18] C. Paulo and P. Correia, "Automatic detection and classification of traffic signs," in Eighth International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS '07), June 2007.
[19] D. Gavrila, "Traffic sign recognition revisited," in DAGM-Symposium, 1999, pp. 86–93.
[20] P. Viola and M. Jones, "Robust real-time object detection," International Journal of Computer Vision, 2001.
[21] K. Brkić, A. Pinz, and S. Šegvić, "Traffic sign detection as a component of an automated traffic infrastructure inventory system," Stainz, Austria, May 2009.
[22] K. Brkić, S. Šegvić, Z. Kalafatić, I. Sikirić, and A. Pinz, "Generative modeling of spatio-temporal traffic sign trajectories," held in conjunction with CVPR 2010, San Francisco, California, Jun. 2010.
[23] S.-Y. Chen and J.-W. Hsieh, "Boosted road sign detection and recognition," in International Conference on Machine Learning and Cybernetics 2008, vol. 7, pp. 3823–3826, July 2008.
[24] X. Baro and J. Vitria, "Fast traffic sign detection on greyscale images," Recent Advances in Artificial Intelligence Research and Development, pp. 69–76, October 2004.
[25] R. Lienhart and J. Maydt, "An extended set of Haar-like features for rapid object detection," in IEEE ICIP 2002, 2002, pp. 900–903.
[26] S. Escalera and P. Radeva, "Fast greyscale road sign model matching and recognition," Recent Advances in Artificial Intelligence Research and Development, pp. 69–76, 2004.