
Short-Term Localized Weather Forecasting by Using Different Artificial Neural Network Algorithm in Tropical Climate

Noor Zuraidin Mohd-Safar¹, David Ndzi¹, Ioannis Kagalidis¹, Yanyang Yang¹, and Ammar Zakaria²

¹ School of Engineering, University of Portsmouth, Portsmouth, UK
{noorzuraidin.mohdsafar,david.ndzi,ioannis.kagalidis,linda.yang}@port.ac.uk
² School of Mechatronic Engineering, Universiti Malaysia Perlis, Arau, Malaysia
[email protected]

Abstract. This paper evaluates the performance of a localized weather forecasting model using an Artificial Neural Network (ANN) with different ANN algorithms in a tropical climate. Three ANN algorithms, namely Levenberg-Marquardt, Bayesian Regularization and Scaled Conjugate Gradient, are used in the short-term weather forecasting model. The study focuses on data from North-West Malaysia (Chuping). Meteorological data such as atmospheric pressure, temperature, dew point, humidity and wind speed are used as input parameters. One-hour-ahead forecast results for atmospheric pressure, temperature and humidity were compared and analyzed, and they show that the ANN with the Levenberg-Marquardt algorithm performs best.

Keywords: Artificial neural network · ANN · Short-term weather forecasting · Neural network · Soft computing · Tropics · Tropical climate

1 Introduction

Meteorological processes are highly non-linear and complicated to predict at high spatial resolutions. Weather forecasting provides critical information about future weather that is important for flood disaster prediction systems and disaster management. This information is also important to businesses, industry, the agricultural sector [1], government and local authorities for a wide range of reasons. Soft Computing (SC) techniques such as Artificial Neural Networks (ANN) can be used to predict the behaviour of such non-linear conditions [2]. Since weather processes are non-linear and follow an irregular trend, ANN is a better technique for analysing and identifying the structural relationship between the meteorological parameters [3]. Large volumes of data from satellites, radar, weather stations and sensors are processed continuously on a daily basis. This data is transformed into useful information that is used to forecast the weather in the next hours or days. Weather forecasting systems use complex computer algorithms that demand high performance computers and require high resolution spatial data [4]. However, this study takes advantage of localized meteorological data to


forecast localized weather conditions. Meteorological data from Chuping in North-West Malaysia is selected for this study. There are many techniques used in weather forecasting, ranging from simple observations to highly complex computerized mathematical models. The Numerical Weather Prediction (NWP) method is commonly used for weather forecasting in Malaysia [5]. NWP forecasting is suitable for large areas and not for localized forecasting [4]. Furthermore, the NWP model is adapted from non-tropical regions such as Europe and Japan [5]. This study uses atmospheric pressure, temperature, dew point, humidity and wind speed data from a single weather station. Different ANN algorithms, namely the Levenberg-Marquardt (LM), Bayesian Regularization (BR) and Scaled Conjugate Gradient (SCG) algorithms, have been used to forecast atmospheric pressure, temperature and humidity one hour ahead.

The performance of each ANN algorithm is determined by evaluating the magnitude of the error and the correlation coefficient value between observed and forecasted values. Results show that LM yields better results compared to BR and SCG.

This paper is organized into six sections. Section 2 presents related work on weather forecasting using ANN. Section 3 presents information on the study area and data availability. The methodology is discussed in Sect. 4. Results and discussion of the findings are presented in Sect. 5. Finally, Sect. 6 discusses the conclusions and future work.

2 Literature

2.1 Related Works


Weather forecasting is a challenging problem, especially for a tropical climate where the meteorological conditions are dynamic and constantly changing. Many studies have been carried out on weather forecasting using ANN [5–8]. This section summarizes some practical applications of ANN weather forecasting. [6] presents the use of ANN to provide daily forecasts of temperature, wind speed and humidity for southern Saskatchewan, Canada. The results show that empirical statistical modelling is outperformed by the proposed ANN with radial basis functions. [7] obtained weekly temperature and relative humidity forecasts using ANN time series analysis. The network model used is a multilayer perceptron (MLP) feed-forward ANN with back propagation learning (BPL). The error is less than 3% for a 15-week temperature and humidity forecast. [7] proposed that statistical parameters can be used as input parameters in an ANN model for weeks-ahead weather forecasting. A daily temperature forecasting model is presented in [8] using ANN with an additional input. Ten years of meteorological data from Kermanshah in Iran were used in that study. The results show that the best-performing MLP-BPL ANN architecture uses a sigmoid transfer function in the hidden layer, a linear function in the output layer and the SCG algorithm [8]. Recently, [9] used a deep learning ANN method to forecast air temperature for short-term prediction in north-western Nevada, United States. One year of data from 2012 to 2013 was used in the study. Hourly meteorological data such as atmospheric pressure, temperature, humidity, precipitation and wind speed were also used. The results show that a deep learning ANN, by reconstructing the input parameters and combining related

meteorological parameters such as barometric pressure, humidity and wind speed data, achieved 97% accuracy, whilst a basic ANN implementation yields 94% accuracy. The proposed model in [9] was developed for a temperate climate. However, this study is applied to a tropical climate where the behaviour and patterns of the meteorological parameters are different. Most of the weather forecasting methods in the literature are geo-location dependent. The current advances in ANN methodology for modelling non-linear and dynamical phenomena are the motivation to investigate the application of different ANN algorithms for hourly weather forecasting.

2.2 Artificial Neural Network


An ANN structure has a number of interconnected artificial neurons. It is a mathematical model that functions like a neuron of a human brain. A supervised learning ANN must be trained using a training dataset, enabling it to learn by itself the patterns and rules governing the network. The feed-forward ANN was introduced and is known as the perceptron. This model uses a single input perceptron layer with a single output. The disadvantage of this approach is that a perceptron is not able to train on and recognize many types of patterns. A single neuron consists of two parts: a weighted coefficient and a transfer function. Figure 1 shows a single neuron that consists of weighted inputs, a summing function and a transfer function.

Fig. 1. Single neuron (inputs x₁ … xᵢ with weights w₁ … wᵢ, weighted coefficient and transfer function producing output y)

For a neuron receiving n inputs, each input xᵢ (where i = 1…n) is weighted by multiplying it by a weight wᵢ. The sum of the wᵢxᵢ products gives the net activation of the neuron. This activation value is subjected to a transfer function (f) to produce the neuron's output y:

$y = f\left(\sum_{i=1}^{n} w_i x_i\right)$  (1)
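As an illustration of Eq. (1), the following minimal sketch (in Python with NumPy; the function name and example values are illustrative, not from the paper) computes the output of a single neuron with a hyperbolic tangent transfer function:

```python
import numpy as np

def neuron_output(x, w, transfer=np.tanh):
    """Single-neuron output y = f(sum_i w_i * x_i), as in Eq. (1)."""
    net = np.dot(w, x)      # weighted sum (net activation)
    return transfer(net)    # apply the transfer function f

# Example: three inputs with arbitrary weights
x = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.4])
print(neuron_output(x, w))
```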

Adding more perceptron layers to a network topology increases the ANN's ability to recognize more classes of patterns. A network with such additional layers is known as an MLP. An MLP works by adding more layers of nodes between the input and output nodes. The neurons are arranged into an input layer, an output layer and one or more hidden layers in between. The learning process for an MLP is known as BPL [2, 10, 11]. The process repetitively calculates an error function and adjusts the weights to achieve a minimum error. The transformation of the weights depends on the following steps:
(1) Each learning step starts with an input signal from the training dataset.
(2) Then the process determines the output values for each neuron in each network layer.
(3) Finally, based on the above step, the weights are selected to map the input to the output.
Figure 2 illustrates the propagation of a signal traversing each neuron from the input layer, through the hidden layers and finally to the target. Suppose the layer representation has L layers, including the input, hidden and output layers, where each layer l has N(l) nodes, with l = 0, 1, …, L (l = 0 is the input layer) and i = 1, …, N(l) indexing the nodes; the output of a node in one layer becomes an input to the next layer. The output yₗ,ᵢ depends on the incoming signals xₗ,ᵢ and parameters a, b, c. Thus the following equation is the generalization of the output from each node:

$y_{l,i} = f_{l,i}\left(x_{l-1,1}\,w_{l-1,1},\ \ldots,\ x_{l-1,N(l-1)}\,w_{l-1,N(l-1)},\ a, b, c, \ldots\right)$  (2)

After the propagation of signals is completed, the next step is to compare the output
signal with the desired output (z) from the training dataset. In general, the difference is
the error signal dl,i.

$d_{l,i} = z - y_{l,i}$  (3)

Assuming that the training dataset has P entries, the error measured for the Pth entry
of the training dataset is the sum of the squared error:

Fig. 2. ANN layer representation



$E_P = \sum_{k=1}^{N(L)} \left(z - y_{l,i}\right)^2$  (4)

where k indexes the components of z (the desired output) and yₗ,ᵢ (the predicted output). The weights for a particular node are adjusted in direct proportion to the error by propagating it back through all neurons using the gradient descent algorithm. The gradient descent algorithm finds the weights that minimize the error. The algorithm can be simplified into two steps as follows:
(1) Obtain the gradient vector.
(2) Calculate the error signal eₗ,ᵢ as the derivative of the error measure Eₚ with respect to the output of node i in layer l, in both direct and indirect paths. The ordered derivative can be expressed as in the following equation:

$e_{l,i} = \frac{\partial^{+} E_p}{\partial y_{l,i}}$  (5)

Every input parameter that is used to train the network is associated with an output pattern. The ANN forecasting models use a standard supervised-learning MLP trained with the BPL algorithm [2, 10, 11]. The entire training process in the MLP-BPL implementation is summarized in the following steps (a one-step sketch follows the list):
(1) Initialize the weights using the training dataset by mapping the input data to the desired output data.
(2) Initialize the bias by randomly selecting data from the training dataset.
(3) Compute the output of the neurons and the error, and update the weights.
(4) Update all weights and biases and repeat step 3 for all training data.
(5) Repeat steps 3 and 4 until the error is minimized.
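As a concrete illustration of one such weight update, the following minimal sketch (assumptions: a single hidden layer with a tanh transfer function, a linear output, the squared error of Eq. (4) and a plain gradient-descent step; the layer sizes and learning rate are illustrative, not the paper's configuration) performs one BPL step:

```python
import numpy as np

rng = np.random.default_rng(0)

def bpl_step(x, z, W1, W2, lr=0.01):
    """One backpropagation step for a 1-hidden-layer MLP (tanh hidden, linear output)."""
    h = np.tanh(W1 @ x)                                # forward pass: hidden-layer outputs
    y = W2 @ h                                         # network output
    d = y - z                                          # error signal, cf. Eq. (3)
    grad_W2 = np.outer(d, h)                           # gradient of 0.5*||d||^2 w.r.t. W2
    grad_W1 = np.outer((W2.T @ d) * (1 - h**2), x)     # gradient propagated back to W1
    W1 -= lr * grad_W1                                 # gradient-descent weight updates
    W2 -= lr * grad_W2
    return W1, W2, 0.5 * np.sum(d**2)

# Illustrative dimensions: 6 inputs, 10 hidden neurons, 1 output
W1 = rng.normal(scale=0.1, size=(10, 6))
W2 = rng.normal(scale=0.1, size=(1, 10))
x, z = rng.normal(size=6), np.array([0.5])
W1, W2, err = bpl_step(x, z, W1, W2)
```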
In this study, three BPL training algorithms were applied to evaluate their performance, each of them having different computation and storage requirements. Table 1 summarizes the characteristics of the algorithms (Fig. 3).

3 Study Area

Chuping is a small town in Perlis, Malaysia. It has 22,000 hectares of agricultural and plantation land. The climate is tropical, characterized by sunny days, high temperatures and high humidity. Throughout the year, daylight lasts on average from 7 a.m. to 7 p.m. The average temperature is 27.5 °C. Hourly meteorological data of atmospheric pressure, dry-bulb temperature, dew point, humidity, wind speed, wind direction, rainfall amount and rainfall rate are measured and have been used in this study. Three years of data from January 2012 to December 2014, consisting of 26,304 hourly records, have been used. Elementary meteorological parameter characteristics for Chuping are summarized in Table 2.

Table 1. ANN training algorithms

LM: A fast training algorithm for networks of moderate size, with a memory-reduction capability when the training dataset is large. It uses an approximation method to update the network weights and biases. The LM algorithm for network training is used in [12, 13].
BR: The BR training algorithm improves generalization and reduces the difficulty of determining the optimum network architecture. BR uses an objective function such as the MSE to improve generalization; the regularization technique requires expensive computation to reach the optimal level. A detailed discussion of BR is given in [14–16].
SCG: SCG is based on an optimization technique in numerical analysis called the Conjugate Gradient Method [17]. Unlike other conjugate gradient approaches, SCG was designed to avoid a time-consuming line search; it may require more iterations to converge, but the amount of computation is reduced because the line search is avoided. This technique does not require any user-specified parameters and its computation is fast and inexpensive. A detailed description of the algorithm can be found in [18].

Fig. 3. Map of Malaysia and Chuping

Table 2. Parameter analysis for Chuping

Parameter    Unit   Max     Mean    Min     Median  σ     σ²
Pressure     hPa    1017.2  1009.6  1001.2  1009.6  2.0   4.0
Temperature  °C     38.1    27.5    16.1    26.4    3.2   10.2
Dew point    °C     29.4    24.5    15.3    24.5    1.5   2.2
Humidity     %      100     83.7    29      88      12.2  148.3
Wind speed   m s⁻¹  5.1     1.1     0       1.1     0.9   0.7
σ = standard deviation, σ² = variance

4 Methodology

4.1 Data Cleaning


A common problem in meteorological studies is missing data due to insufficient sampling, faulty data acquisition or instrument measurement errors. Data cleaning methods have been used in this study to reduce noisy data. Due to the high dimensionality of the parameters, the Principal Component Analysis (PCA) algorithm is a suitable method for estimating the missing data [19]. Imputation, the process of determining the value of missing data, is used [20]; a simplified sketch of such an imputation is given below. For one-dimensional data, basic statistical measures such as the minimum, maximum, median, standard deviation and variance (Table 2) are important for interpreting parameter trends and changes [21].
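The following minimal sketch (assumptions: plain NumPy, missing values marked as NaN, a fixed number of principal components and iterations; this is a generic iterative PCA imputation, not necessarily the exact procedure of [19, 20]) illustrates the idea of PCA-based imputation:

```python
import numpy as np

def iterative_pca_impute(X, n_components=2, n_iter=50):
    """Fill missing values (NaN) by alternating PCA reconstruction and re-imputation."""
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])   # start from column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        # Rank-k reconstruction (the PCA step)
        X_hat = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        X[missing] = X_hat[missing]                          # overwrite only missing entries
    return X
```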

4.2 ANN Design


Designing an ANN model follows the standard steps from preparing the input data, through training the network, to finally using the trained network [22].
(1) ANN Preliminary implementation
In the design stage, several models were tested in order to find the optimum ANN model for each algorithm. The data was divided into training, validation and testing datasets. In order to avoid overfitting and increase generalization, an appropriate portion of the training dataset was selected and used for cross validation. During the training process, the errors in training and validation were monitored. When the error in the validation set increases, the training should be stopped because the point of best generalization has been reached. The cross validation approach with split-sample training was adopted for the training of the ANN models in this study. Three years of data with hourly observed parameters were randomly divided into training, validation and test datasets, with 70% for training, 15% for validation and 15% for testing.
(2) Data Normalization
ANNs learn faster and give better performance if the input variables are pre-processed before they are used to train the network [10]. A normalization method is used to pre-process the input and target data. Data normalization is the process of scaling the data to fall within a smaller range. The advantage of scaling the data is that all weighted neuron inputs remain within a small and predictable range. Normalization scaling between −1.0 and 1.0 is adopted, and the normalized data was generated using the following equation:

$x'_{scale} = 2\left(\frac{x - x_{min}}{x_{max} - x_{min}}\right) - 1$  (6)

where x is the value before normalization, xₘₐₓ is the maximum value and xₘᵢₙ is the minimum value in the dataset.
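A minimal sketch of Eq. (6), assuming NumPy and an illustrative array of hourly temperature values (not from the Chuping dataset):

```python
import numpy as np

def scale_to_range(x):
    """Min-max normalization of a 1-D array to [-1, 1], as in Eq. (6)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

temps = np.array([24.0, 27.5, 31.0, 29.2, 25.6])   # example hourly temperatures (°C)
print(scale_to_range(temps))                        # values now lie between -1 and 1
```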

(3) Input layer, hidden layer and output layer


In the input layer, each neuron receives a single input originating from the meteorological data. However, the hidden layers and the output layer can accept an arbitrary number of inputs, depending on the chosen interconnection of the neurons. Two hidden layers, with 20 and 10 neurons, have been used in the ANN training. Table 3 shows the Linear (LT) and Hyperbolic Tangent Sigmoid (HT) transfer functions (or activation functions) that have been used in this study. HT has been used in the first hidden layer while LT has been used in the second hidden layer.

Table 3. Transfer function

4.3 Experimental Setup


For the purposes of this study, the input vectors of hourly atmospheric pressure, temperature, dew point, humidity and wind speed have been used for training. A one-hour lag of the target vector is used as an additional input for each forecast parameter. The target vectors are the observed meteorological parameters (pressure, temperature and humidity). Each experimental setup determines the optimum outcome for a one-hour-ahead forecast.

In this paper, the weather forecasting performance for pressure (P), temperature (T) and humidity (H) is evaluated. Other parameters such as dew point (DP) and wind speed (WS) were used as additional input parameters. The LM, BR and SCG algorithms have been used in the same ANN architecture, and the accuracy of each algorithm has been analysed. In general, the input parameters, Pinput, and the output parameters, Poutput, for the ANN architecture can be described as follows:

$P_{input} = [P_{t-n},\ T_{t-n},\ DP_{t-n},\ H_{t-n},\ WS_{t-n},\ PF_{t-n}]$  (7)

$P_{output} = [PF_t]$  (8)

where PF is the meteorological parameter observed at time t and n is the hour(s) before the forecast. In this setup, n = 1 and t = 2.
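The following minimal sketch (assuming NumPy arrays of equal length holding the hourly observations; the function name and random example data are illustrative) builds the lagged input/target pairs of Eqs. (7) and (8):

```python
import numpy as np

def build_samples(P, T, DP, H, WS, PF, n=1):
    """Input/target pairs per Eqs. (7)-(8): all predictors lagged by n hours,
    plus the n-hour-lagged value of the forecast parameter PF itself."""
    X = np.column_stack([P[:-n], T[:-n], DP[:-n], H[:-n], WS[:-n], PF[:-n]])
    y = PF[n:]                                   # parameter observed n hour(s) later
    return X, y

# Illustrative hourly series (not the Chuping data)
rng = np.random.default_rng(1)
hours = 100
P,  T  = 1009 + rng.normal(size=hours), 27 + rng.normal(size=hours)
DP, H  = 24 + rng.normal(size=hours), 84 + rng.normal(size=hours)
WS     = rng.random(hours)
X, y = build_samples(P, T, DP, H, WS, PF=T, n=1)   # one-hour-ahead temperature
```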

The ANN used in the experimental setup was trained using the MLP-BPL training algorithm, as shown in Fig. 4.

Fig. 4. Weather forecasting ANN architecture

The structure of the ANN forecast model can be summarized as follows (a configuration sketch is given after the list):

• Input layer: N neurons, where N > 0.
• Hidden layers: two hidden layers with 20 and 10 neurons.
• Output layer: one output layer, where the forecast of the parameter for the next hour is obtained.
• Training functions: LM, BR and SCG algorithms.
• Transfer functions: HT and LT.
• Training set: 22,360 samples.
• Test set: 3,944 samples.
• Training iterations: 1000 epochs.
• Performance function: MSE, with a goal of 0.0001.
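As a rough counterpart to this configuration, the sketch below (assumptions: scikit-learn's MLPRegressor as a stand-in, random placeholder data and a generic gradient-based solver, since scikit-learn does not provide the LM, BR or SCG training algorithms or mixed per-layer transfer functions used in the paper) mirrors the layer sizes, the 70/15/15 split and the epoch budget:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder data with the paper's dimensions: 26,304 hourly samples, 6 inputs
rng = np.random.default_rng(0)
X, y = rng.random((26304, 6)), rng.random(26304)

# Random 70% / 15% / 15% split into training, validation and test sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, train_size=0.70, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)

# Two hidden layers (20 and 10 neurons), tanh activation, up to 1000 epochs
model = MLPRegressor(hidden_layer_sizes=(20, 10), activation="tanh",
                     max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("validation R^2:", model.score(X_val, y_val))
```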

4.4 Accuracy Performance Indices


The performance of all networks was measured using the Mean Absolute Error (MAE), the Root Mean Square Error (RMSE) and the correlation coefficient between the observed and forecasted values. In (9), (10) and (11), yₜ is the observed value, ŷₜ is the forecasted value and n is the number of observations.

$MAE = \frac{1}{n}\sum_{t=1}^{n} \left|y_t - \hat{y}_t\right|$  (9)

$RMSE = \sqrt{\frac{1}{n}\sum_{t=1}^{n} \left(y_t - \hat{y}_t\right)^2}$  (10)

The correlation coefficient is a measure of the linear dependency between two variables. If each variable has n scalar observations, then the Pearson correlation coefficient [23] is defined as:
$R = \dfrac{n\sum y_t \hat{y}_t - \left(\sum y_t\right)\left(\sum \hat{y}_t\right)}{\sqrt{n\sum y_t^2 - \left(\sum y_t\right)^2}\ \sqrt{n\sum \hat{y}_t^2 - \left(\sum \hat{y}_t\right)^2}}$  (11)

The R value is an indication of the relationship between the forecasted and observed values. If R = 1, there is an exact linear relationship. If R is close to zero, there is no linear relationship between them.
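A minimal sketch of the three indices (plain NumPy; the example values are illustrative):

```python
import numpy as np

def mae(y, y_hat):
    """Mean Absolute Error, Eq. (9)."""
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    """Root Mean Square Error, Eq. (10)."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def pearson_r(y, y_hat):
    """Pearson correlation coefficient, Eq. (11)."""
    n = len(y)
    num = n * np.sum(y * y_hat) - np.sum(y) * np.sum(y_hat)
    den = np.sqrt(n * np.sum(y**2) - np.sum(y)**2) * np.sqrt(n * np.sum(y_hat**2) - np.sum(y_hat)**2)
    return num / den

y     = np.array([27.1, 27.8, 28.4, 29.0])   # observed values (illustrative)
y_hat = np.array([27.0, 27.9, 28.1, 29.3])   # forecasted values (illustrative)
print(mae(y, y_hat), rmse(y, y_hat), pearson_r(y, y_hat))
```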

5 Results and Discussions

The purpose of this study was to evaluate the accuracy of the LM, BR and SCG algorithms used in weather forecasting, with a focus on the atmospheric pressure, temperature and humidity parameters. The input parameters are from past meteorological data. The results show that LM is the best forecasting algorithm. The results and comparison are summarized in Table 4. LM yields lower MAE and RMSE values than the BR and SCG algorithms. The correlation coefficient value produced by LM is greater than 0.95 for the pressure, temperature and humidity forecasts. Figure 5 shows the training, validation, testing and overall regression plots for the temperature forecast using the LM algorithm. LM
Table 4. Accuracy performance results

Parameter    Algorithm  MAE     RMSE    R
Pressure     LM         0.4410  0.5475  0.9618
             BR         0.4449  0.5664  0.9614
             SCG        0.4772  0.5851  0.9563
Temperature  LM         0.6369  0.9168  0.9577
             BR         0.6492  0.9270  0.9568
             SCG        0.6759  0.9541  0.9542
Humidity     LM         2.3684  3.4442  0.9592
             BR         2.4086  3.4858  0.9582
             SCG        2.6817  3.7069  0.9525
LM = Levenberg-Marquardt, BR = Bayesian Regularization, SCG = Scaled Conjugate Gradient

Fig. 5. Regression plot for temperature forecast using ANN with LM algorithm

offered the best accuracy, followed by BR and SCG. In terms of processing time, SCG is faster [18] than LM and BR, but it does not produce good convergence. The BR algorithm takes more time than LM and SCG during training, but it converges faster. Figures 6, 7 and 8 show the comparison between the forecast and observed results for pressure, temperature and humidity using the LM algorithm.

Fig. 6. Simulation result for pressure using LM algorithm



Fig. 7. Simulation result for temperature using LM algorithm

Fig. 8. Simulation result for humidity using LM algorithm

6 Conclusion and Future Works

The proposed ANN forecast model has been trained using meteorological data from a tropical area, North Malaysia. Results from this study have shown that an ANN forecast model with the LM algorithm offers significant potential for weather condition forecasting in a localized tropical region. In terms of overall performance, an ANN weather forecasting model is capable of capturing the dynamic behaviour of atmospheric pressure, temperature and humidity for one-hour-ahead forecasting. Modelling for longer forecasting time windows should be implemented and tested. Wind speed and dew point forecasting from the same dataset will be carried out for an effective weather forecast system. This study offers a potentially new approach to weather forecasting systems for a localized tropical climate. Forecast results from this study are very useful in environmental condition prediction, such as rain occurrence and rain intensity. Furthermore, when rain forecasting is reliable, it will contribute to effective water resources management, flood prediction, drought mitigation, ecological studies and climate change impact assessment.

References

1. Ndzi, D.L., Harun, A., Ramli, F.M., Kamarudin, M.L., Zakaria, A., Shakaff, A.Y.M., Jaafar,
M.N., Zhou, S., Farook, R.S.: Wireless sensor network coverage measurement and planning
in mixed crop farming. Comput. Electron. Agric. 105, 83–94 (2014)
2. Rahul, G.K., Khurana, M.: A comparative study review of soft computing approach in
weather forecasting. Int. J. Soft Comput. Eng. (IJSCE), 2(5), 295–299 (2012)
3. Cheng, B., Titterington, D.M.: Neural networks: a review from a statistical perspective. Stat.
Sci. 9(1), 2–30 (1994)
4. Nikam, V.B., Meshram, B.B.: Modeling rainfall prediction using data mining method: a
Bayesian approach. In: 2013 Fifth International Conference on Computational Intelligence,
Modelling and Simulation, pp. 132–136 (2013)
5. Shahi, A.: An effective fuzzy C-Mean and Type-2 Fuzzy. J. Theor. Appl. Inf. Technol. 5(5),
556–567 (2009)
6. Maqsood, I., Khan, M., Abraham, A.: An ensemble of neural networks for weather
forecasting. Neural Comput. Appl. 13, 112–122 (2004)
7. Paras, S.M., Kumar, A., Chandra, M.: A feature based neural network model for weather
forecasting. Int. J. Comput. Intell. 4(3) (2009)
8. Hayati, M., Mohebi, Z.: Application of artificial neural networks for temperature forecasting. World Acad. Sci. Eng. Technol. 28(2), 275–279 (2007)
9. Hossain, M., Rekabdar, B., Louis, S.J., Dascalu, S.: Forecasting the weather of Nevada: a
deep learning approach. In: Proceedings of the International Joint Conference on Neural
Networks, vol. 2015, pp. 2–7, September 2015
10. LeCun, Y.A., Bottou, L., Orr, G.B., Müller, K.R.: Efficient backprop. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), LNCS, vol. 7700, pp. 9–48 (2012)
11. Abraham, A., Philip, N.S., Joseph, K.B.: Will We Have a Wet Summer? Soft Computing Models for Long-term Rainfall Forecasting (1992)
12. Foresee, F.D., Hagan, M.T.: Gauss-Newton approximation to Bayesian learning. In:
Proceedings of International Conference on Neural Networks (ICNN 1997) (1997)
13. Pellakuri, V., Rao, D.R., Prasanna, P.L., Santhi, M.V.B.T.: A conceptual framework for approaching predictive modeling using multivariate regression analysis vs artificial neural network. J. Theor. Appl. Inf. Technol. 77(2), 287–290 (2015)
14. Moré, J.J.: The Levenberg-Marquardt algorithm: implementation and theory. In: Lecture
Notes in Mathematics, Springer, pp. 105–116 (1978)
15. MacKay, D.J.C.: A practical Bayesian framework for backpropagation networks. Neural
Comput. 4(3), 448–472 (1992)
16. MacKay, D.J.C.: Bayesian interpolation. Neural Comput. 4(3), 415–447 (1992)
17. Shewchuk, J.R.: An introduction to the conjugate gradient method without the agonizing
pain. Science 49(CS-94–125), 64 (1994)
18. Møller, M.F.: A scaled conjugate gradient algorithm for fast supervised learning. Neural
Netw. 6(4), 525–533 (1993)
19. Schneider, T.: Analysis of incomplete climate data: estimation of mean values and
covariance matrices and imputation of missing values. J. Clim. 14, 853–871 (2001)
20. Josse, J., Pages, J., Husson, F.: Multiple imputation in principal component analysis. Adv.
Data Anal. Classif. 5(3), 231–246 (2011)

21. Manly, B.: Statistics for Environmental Science and Management. Chapman and
Hall/CRCM, Boca Raton (2000)
22. Demuth, H., Beale, M., Hagan, M.: Neural Network Toolbox User's Guide (2012)
23. Hauke, J., Kossowski, T.: Comparison of values of Pearson's and Spearman's correlation coefficients on the same sets of data. Quaestiones Geographicae 30(2), 87–93 (2011)
