1 Introduction
2 Literature
meteorological parameters such as barometric pressure, humidity and wind speed data achieved 97% accuracy, whilst a basic ANN implementation yields 94% accuracy. The proposed model in [9] was developed for a temperate climate. However, this study is applied in a tropical climate, where the behaviour and patterns of the meteorological parameters are different. Most of the weather forecasting methods in the literature are geo-location dependent. The current advances in ANN methodology for modelling non-linear and dynamical phenomena are the motivation to investigate the application of different ANN algorithms for hourly weather forecasting.
Fig. 1. Example ANN: a perceptron with inputs x1, x2, weights w1, w2, …, wi and output y
Adding more perceptron layers to a network topology increases the ANN's ability to recognize more classes of patterns. A network with such additional layers is known as a multilayer perceptron (MLP). An MLP adds more layers of nodes between the input and output nodes: the neurons are arranged into an input layer, an output layer and one or more hidden layers in between. The learning process for an MLP is known as backpropagation learning (BPL) [2, 10, 11]. The process repetitively calculates an error function and adjusts the weights to achieve a minimum error. The transformation of the weights follows these steps:
(1) Each learning step starts with an input signal from training dataset.
(2) Then the process determines the output values for each neuron in each network layer.
(3) Finally, based on the above step, the weights are adjusted to map the input to the output.
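The layered signal flow these steps describe can be sketched as a small forward-propagation example. The sigmoid activation, weights, inputs and layer sizes below are illustrative assumptions, not values from the study:

```python
import math

# Minimal sketch of forward propagation through a small MLP: one hidden
# layer of nodes between the input and output nodes.
def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals passed through a sigmoid activation.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

x = [0.5, -0.2]                          # input-layer signals x1, x2
hidden = [neuron(x, [0.4, 0.1], 0.0),    # hidden-layer outputs
          neuron(x, [-0.3, 0.8], 0.1)]
y = neuron(hidden, [0.7, -0.5], 0.2)     # output node y
```

Each node applies the same rule: a weighted sum of the previous layer's outputs followed by a non-linear activation.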
Figure 2 illustrates the propagation of a signal traversing each neuron from the input layer, through the hidden layers, and finally to the target. Suppose the network has L layers, including the input, hidden and output layers, where l = 0, 1, …, L indexes the layers (l = 0 is the input layer) and each layer l has N(l) nodes, indexed by i = 1, …, N(l). Each node takes the outputs of the previous layer as its inputs; y_{l,i} is the output of node i in layer l, which depends on the incoming signals x_{l,i} and parameters a, b and c. The following equation generalizes the output of each node:
After the propagation of signals is completed, the next step is to compare the output signal with the desired output (z) from the training dataset. In general, the difference is the error signal d_{l,i}. Assuming that the training dataset has P entries, the error measured for the P-th entry of the training dataset is the sum of squared errors:
E_P = \sum_{i=1}^{N(L)} (z_i - y_{L,i})^2 \quad (4)

e_{l,i} = \frac{\partial E_P}{\partial y_{l,i}} \quad (5)
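A quick numerical illustration of Eqs. (4) and (5); the desired and actual outputs below are invented for demonstration, and analytically the derivative of E_P with respect to one output is -2(z_i - y_{L,i}):

```python
# Illustrative check of the squared-error function and its derivative;
# z and y are made-up values, not data from the study.
z = [1.0, 0.0]   # desired outputs from the training dataset
y = [0.8, 0.3]   # actual output-layer signals y_{L,i}

# Eq. (4): sum of squared errors over the N(L) output nodes.
E_P = sum((zi - yi) ** 2 for zi, yi in zip(z, y))

# Eq. (5): partial derivative of E_P w.r.t. one output, here i = 0:
# dE_P/dy_0 = -2 * (z_0 - y_0)
e_0 = -2 * (z[0] - y[0])
```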
Every input parameter that is used to train the network is associated with an output pattern. The ANN forecasting models use a standard supervised-learning MLP trained with the BPL algorithm [2, 10, 11]. The entire training process in the MLP-BPL implementation is summarized in the following steps:
(1) Initialize the weights using the training dataset by mapping the input data to the desired output data.
(2) Initialize the bias by randomly selecting data from the training dataset.
(3) Compute the output of the neurons and the error, and update the weights.
(4) Update all weights and biases and repeat step 3 for all training data.
(5) Repeat steps 3 and 4 until the error is minimized.
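Steps (1)-(5) can be sketched as a complete training loop. The network size, learning rate, epoch count and toy dataset below are illustrative assumptions, not the configuration used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training dataset: 32 entries mapping 2 inputs to 1 desired output z.
X = rng.uniform(-1, 1, size=(32, 2))
Z = 0.5 * X[:, :1] - 0.3 * X[:, 1:]

# Steps 1-2: initialize weights and biases.
W1 = rng.normal(0, 0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, size=(4, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)     # hidden-layer outputs
    return h, h @ W2 + b2        # network output

lr = 0.1
errors = []
for epoch in range(500):         # steps 3-5: repeat until error is minimized
    h, y = forward(X)
    e = y - Z                    # error signal: output minus desired output
    errors.append(float(np.mean(e ** 2)))
    # Backpropagate the error to get gradients for each layer's weights.
    gW2 = h.T @ e / len(X); gb2 = e.mean(axis=0)
    dh = (e @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    # Step 3: adjust weights toward lower error.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

After training, `errors` should decrease monotonically apart from small fluctuations, mirroring the "repeat until the error is minimized" loop.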
In this study, three BPL training algorithms were applied to evaluate their performances, each having different computation and storage requirements. Table 1 summarizes the characteristics of the algorithms (Fig. 3).
3 Study Area
Chuping is a small town in Perlis, Malaysia. It has 22,000 hectares of agricultural and plantation land. The climate is tropical, characterized by sunny days, high temperature and high humidity. Throughout the year, daylight lasts on average from 7 a.m. to 7 p.m., and the average temperature is 27.5 °C. Hourly meteorological data of atmospheric pressure, dry bulb temperature, dew point, humidity, wind speed, wind direction, rainfall amount and rainfall rate have been measured and used in this study. Three years of data from January 2012 to December 2014, consisting of 26,304 hourly records, have been used. Elementary meteorological parameter characteristics for Chuping are summarized in Table 2.
468 N.Z. Mohd-Safar et al.
4 Methodology
Here, x is the value before normalization, x_max is the maximum value and x_min is the minimum value in the dataset.
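Assuming the normalization referred to here is the standard min-max scaling x' = (x - x_min) / (x_max - x_min), a minimal sketch is:

```python
# Hedged sketch of min-max normalization; the formula is the standard
# scaling to [0, 1] implied by the definitions of x, x_max and x_min.
def min_max_normalize(values):
    x_min, x_max = min(values), max(values)
    return [(x - x_min) / (x_max - x_min) for x in values]

# Illustrative hourly pressure readings in hPa (not data from the study).
pressures = [1008.2, 1009.5, 1010.1, 1007.8]
normalized = min_max_normalize(pressures)
```

After scaling, the minimum maps to 0 and the maximum to 1, putting every meteorological parameter on a comparable range before training.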
The ANN used in the experimental setup was trained using the MLP-BPL training algorithm, as shown in Fig. 4.
\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n} \left| y_t - \hat{y}_t \right| \quad (9)

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n} \left( y_t - \hat{y}_t \right)^2} \quad (10)
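Applying Eqs. (9) and (10) to a short series; the observed and forecast values below are invented for illustration, not results from the study:

```python
import math

# MAE (Eq. 9) and RMSE (Eq. 10) for a forecast series.
observed = [27.5, 28.0, 27.0, 26.5]   # y_t, e.g. temperatures in °C
forecast = [27.0, 28.5, 27.5, 26.5]   # y_hat_t

n = len(observed)
mae = sum(abs(y - yh) for y, yh in zip(observed, forecast)) / n
rmse = math.sqrt(sum((y - yh) ** 2 for y, yh in zip(observed, forecast)) / n)
```

RMSE penalizes large errors more heavily than MAE, which is why both metrics are reported when comparing the training algorithms.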
The purpose of this study was to evaluate the accuracy of the LM, BR and SCG algorithms used in weather forecasting, with a focus on the atmospheric pressure, temperature and humidity parameters. The input parameters are from past meteorological data. The results show that LM is the best forecasting algorithm. Results and comparisons are summarized in Table 4. LM yields low MAE and MSE compared to the BR and SCG algorithms. The correlation coefficient value produced by LM is greater than 0.95 for
pressure, temperature and humidity forecasts. Figure 5 shows the training, validation, testing and overall regression plots for the temperature forecast using the LM algorithm.

Fig. 5. Regression plot for temperature forecast using ANN with LM algorithm

LM offered the best accuracy, followed by BR and SCG. In terms of processing time, SCG
is faster [18] compared to LM and BR but it does not produce good convergence. BR
algorithm takes more time compared to LM and SCG during training but it converges
faster. Figures 6, 7 and 8 show the comparison between forecast and observed results
for pressure, temperature and humidity using LM algorithm.
The proposed ANN forecast model has been trained using meteorological data from a tropical area, North Malaysia. Results from this study have shown that the ANN forecast model with the LM algorithm offers significant potential for weather condition forecasting in a localized tropical region. Judged on overall performance, an ANN weather forecasting model is capable of capturing the dynamic behaviour of atmospheric pressure, temperature and humidity for one-hour forecasting. Modelling for longer forecasting time windows should be implemented and tested. Wind speed and dew point forecasting from the same dataset will be carried out for an effective weather forecast system. This study offers a potentially new approach to weather forecasting for a localized tropical climate. Forecast results from this study are very useful in environmental condition prediction, such as rain occurrence and rain intensity. Furthermore, reliable rain forecasting will contribute to effective water resources management, flood prediction, drought mitigation, ecological studies and climate change impact assessment.
Short-Term Localized Weather Forecasting 475
References
1. Ndzi, D.L., Harun, A., Ramli, F.M., Kamarudin, M.L., Zakaria, A., Shakaff, A.Y.M., Jaafar,
M.N., Zhou, S., Farook, R.S.: Wireless sensor network coverage measurement and planning
in mixed crop farming. Comput. Electron. Agric. 105, 83–94 (2014)
2. Rahul, G.K., Khurana, M.: A comparative study review of soft computing approach in
weather forecasting. Int. J. Soft Comput. Eng. (IJSCE), 2(5), 295–299 (2012)
3. Cheng, B., Titterington, D.M.: Neural networks: a review from a statistical perspective. Stat.
Sci. 9(1), 2–30 (1994)
4. Nikam, V.B., Meshram, B.B.: Modeling rainfall prediction using data mining method: a
Bayesian approach. In: 2013 Fifth International Conference on Computational Intelligence,
Modelling and Simulation, pp. 132–136 (2013)
5. Shahi, A.: An effective fuzzy C-Mean and Type-2 Fuzzy. J. Theor. Appl. Inf. Technol. 5(5),
556–567 (2009)
6. Maqsood, I., Khan, M., Abraham, A.: An ensemble of neural networks for weather
forecasting. Neural Comput. Appl. 13, 112–122 (2004)
7. Paras, S.M., Kumar, A., Chandra, M.: A feature based neural network model for weather
forecasting. Int. J. Comput. Intell. 4(3) (2009)
8. Hayati, M., Mohebi, Z.: Application of artificial neural networks for temperature forecasting. World Acad. Sci. Eng. Technol. 28(2), 275–279 (2007)
9. Hossain, M., Rekabdar, B., Louis, S.J., Dascalu, S.: Forecasting the weather of Nevada: a
deep learning approach. In: Proceedings of the International Joint Conference on Neural
Networks, vol. 2015, pp. 2–7, September 2015
10. LeCun, Y.A., Bottou, L., Orr, G.B., Müller, K.R.: Efficient backprop. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), LNCS, vol. 7700, pp. 9–48 (2012)
11. Abraham, A., Philip, N.S., Joseph, K.B.: Will We Have a Wet Summer? Soft Computing Models for Long-term Rainfall Forecasting (1992)
12. Foresee, F.D., Hagan, M.T.: Gauss-Newton approximation to Bayesian learning. In:
Proceedings of International Conference on Neural Networks (ICNN 1997) (1997)
13. Pellakuri, V., Rao, D.R., Prasanna, P.L., Santhi, M.V.B.T.: A conceptual framework for approaching predictive modeling using multivariate regression analysis vs artificial neural network. J. Theor. Appl. Inf. Technol. 77(2), 287–290 (2015)
14. Moré, J.J.: The Levenberg-Marquardt algorithm: implementation and theory. In: Lecture
Notes in Mathematics, Springer, pp. 105–116 (1978)
15. MacKay, D.J.C.: A practical Bayesian framework for backpropagation networks. Neural
Comput. 4(3), 448–472 (1992)
16. MacKay, D.J.C.: Bayesian interpolation. Neural Comput. 4(3), 415–447 (1992)
17. Shewchuk, J.R.: An introduction to the conjugate gradient method without the agonizing pain. Technical report CMU-CS-94-125, Carnegie Mellon University (1994)
18. Møller, M.F.: A scaled conjugate gradient algorithm for fast supervised learning. Neural
Netw. 6(4), 525–533 (1993)
19. Schneider, T.: Analysis of incomplete climate data: estimation of mean values and
covariance matrices and imputation of missing values. J. Clim. 14, 853–871 (2001)
20. Josse, J., Pages, J., Husson, F.: Multiple imputation in principal component analysis. Adv.
Data Anal. Classif. 5(3), 231–246 (2011)
21. Manly, B.: Statistics for Environmental Science and Management. Chapman and Hall/CRC, Boca Raton (2000)
22. Demuth, H., Beale, M., Hagan, M.: Neural Network Toolbox User's Guide (2012)
23. Hauke, J., Kossowski, T.: Comparison of values of Pearson's and Spearman's correlation coefficients on the same sets of data. Quaestiones Geographicae 30(2), 87–93 (2011)