Research Article - (2018) Volume 7, Issue 2

Ozone Hole Area Prediction at Earth's North and South Poles by Marvel Interface

Siddheshwar Chopra1*, Dipti Yadav1 and Anu Nagpal Chopra2
1Department of Physics, AIAS, Amity University, Noida, Uttar Pradesh, India, E-mail: [email protected]
2Department of Management, Asian Business School, Noida, Uttar Pradesh, India, E-mail: [email protected]
*Corresponding Author: Siddheshwar Chopra, Department of Physics, AIAS, Amity University, Noida, Uttar Pradesh, India, Tel: +91 8448396303

Abstract

This paper explores the possibility of predicting the ozone hole area (maximum area) at the North and South Poles using an Artificial Neural Network (ANN), and then developing a forecasting network with a Graphical User Interface (GUI) named MARVEL. Two models are designed for prediction: a) ozone hole area prediction at the North Pole and b) ozone hole area prediction at the South Pole. For both models, the input parameters are year, month, date, sunspot area, sunspot number and solar mean magnetic field. More than 35 years of data are used for training, and predictions are then made from November 23, 2015 to September 30, 2016. The forecasting network (MARVEL) is developed to imbibe the properties of the ANN. It can be trained with the most recent data accessible to the user and then make future predictions for short (one day) and long term (months, years) durations. From the results, the Mean Square Error (MSE) for Model 1 and Model 2 is found to be 6.7166 DU and 0.3582 DU, respectively. It can be concluded that with thirty neurons, tangent sigmoid and pure linear as the input and output transfer functions, and one hidden layer, the forecasting network predictions are plausible and appreciably close to the actual observed values. It is to be noted that the change of ozone hole area at the poles has dynamical reasons behind it and the solar parameters are not responsible for it. This paper is an attempt to present the application of Artificial Neural Networks to connecting unrelated parameters and processes.

Keywords: Artificial Neural Network; Ozone hole area; Prediction; Earth

Introduction

Motivation to understand space weather comes from some of its similarities with atmospheric weather. The design of space weather activities has drawn on the experience of meteorological services. Meteorological processes are localized, and hence it is easier to make weather forecasts for a limited area. The space weather processes, however, take place on a planetary scale, and there are limited resources to study them and make forecasts. Additionally, space weather processes take place over a wider time scale than atmospheric weather processes. With technological advancements in both terrestrial and space borne activities, space weather forecasts are becoming more important day by day due to our dependence on space satellites for various purposes [1].

As space weather is crucial for both technology and human development, several research efforts are under way to predict it in advance. A variety of satellites, antennas and telescopes are used to study solar activity. In the past decades, the volume of collected data has grown from terabytes to petabytes. Handling and analysing such large amounts of data require specialized techniques such as Artificial Intelligence, as traditional knowledge discovery processes do not perform well on large databases. Due to the large volume of data and its complexity, artificial intelligence has become one of the best hopes for extracting knowledge and meaningful outcomes [2].

Artificial Neural Networks (ANNs) are distinguished and powerful algorithms that can be used to build empirical models for space weather forecasting. These algorithms operate effectively on large volumes of nonlinear and noisy data. The great advantage of nonlinear filtering and neural network models is their speed and accuracy of computation, even for short prediction periods, compared with deriving the same parameters from massive simulation models [1]. Moreover, imaging techniques have also been used successfully for solar feature recognition [3]. A method for automatic detection of solar flares using a Multi-Layer Perceptron (MLP) with the backpropagation training technique has been achieved [4]. In another report [5], a three-layer back-propagation network is developed, which accepts solar data (obtained from the THEO database) as input and predicts flare occurrence. Furthermore, cascaded correlation neural networks, support vector machines and radial basis function networks have been utilized for short term solar flare prediction [6]. In another work, a combination of support vector machines and the k-nearest neighbour algorithm is used for solar flare forecasting [7]. An ANN based solar flare forecasting method has been applied to solar magnetic field data [8]. Prediction of flares together with their associated Coronal Mass Ejections (CMEs) has also been accomplished by a machine learning based system [9]. An ANN based prediction network has been developed for monthly average solar irradiation at locations with similar climates [10]. Furthermore, a fast learning algorithm with enhanced generalization performance for flare forecasting has been studied [11]. In [12], the classification performance of the Radial Basis Function (RBF), Support Vector Machine (SVM) and MLP methods is compared for features extracted from solar flares. Finally, extensive research is being carried out on ANN based prediction of solar radiation [13-15].

In this paper, we have developed the following. First, two models named “Model 1” and “Model 2”: Model 1 predicts the ozone hole area (maximum possible area) at the North Pole, and Model 2 predicts the ozone hole area (maximum possible area) at the South Pole of Earth.

Second, for the first time, the input parameters for both models are taken to be related to our nearest star, the Sun (data selection details are provided in the Data Selection section).

Third, a forecasting network with a Graphical User Interface (GUI) based on ANN, named MARVEL. This forecasting network is flexible and supports two types of predictions, specific day prediction and long term prediction, depending on the requirement, and can even make both kinds of predictions simultaneously.

Both models have been tested on the developed forecasting network MARVEL, and the results obtained from it are discussed.

Methodology and Data Selection

Methodology

A Neural Network (NN) is a parallel, distributed information processing structure consisting of processing elements (each with a local memory and capable of carrying out localized information processing) interconnected by unidirectional signal channels called connections. Each processing element has a single output connection that branches into as many collateral connections as desired (each carrying the same output signal, which may be of any mathematical form). All processing within a processing element must be entirely local; that is, it must depend only on the current values of the input signals arriving at the processing element through impinging connections and on the values stored in the processing element's local memory [16].

Undoubtedly, back propagation is currently the most widely applied neural network architecture, primarily due to its ability to learn complicated multidimensional mappings. One way to look at this ability, in the words of Werbos et al. [17-19], is that back propagation goes “beyond regression”. As shown in Figure 1, the architecture comprises ‘k’ layers, and each layer consists of ‘n’ neurons or processing units numbered from one. Here, the first layer consists of three neurons (the number depends on the number of input variables). This layer, called the input layer, accepts inputs from the outside environment and distributes them, without modification, to the first hidden (intermediate) layer. The individual values are then transferred to the next hidden layer through the hidden layer transfer function. Finally, all individual values from the second hidden layer are summed and passed to the output layer through the output layer transfer function.

Figure 1: Architecture of network.

Now, if the output is correct to within the desired level of error, it is accepted; otherwise the error is passed back towards the input layer to update the values of the weights and biases. It is to be noted that there are no connections between neurons within the same layer. This entire cycle continues until all the constraints are satisfied and the output is correct. Here, the ANN is trained with the Levenberg-Marquardt (LM) algorithm, which combines the Gauss-Newton (GN) method and the steepest descent method. LM is an iterative technique that locates the minimum of a multivariate function expressed as a sum of squares of non-linear real valued functions [20,21]. The LM algorithm has become a standard technique for non-linear least-squares problems [22] and is widely adopted in various disciplines. Importantly, LM addresses the limitations of several common techniques [23]: when the current solution is far from a local minimum, the algorithm works like a gradient descent method, which is slow but guarantees convergence, and when the current solution is close to a local minimum, it works like the GN method, offering faster convergence [24]. However, it is important to note that LM is efficient mainly when the trained networks contain no more than a few hundred weights [25].
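In its standard textbook form (stated here for reference, rather than as anything specific to MARVEL), the LM weight update can be written as w(k+1) = w(k) − (J^T J + μI)^(−1) J^T e, where J is the Jacobian of the network errors with respect to the weights and biases, e is the error vector, I is the identity matrix and μ is a damping factor; a large μ makes the step behave like gradient descent, while a small μ makes it approach the Gauss-Newton step.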

Transfer/Activation function

Transfer functions are used to prevent outputs from reaching enormous values that can ‘paralyze’ the ANN structure [26]. Moreover, a suitable transfer function is needed in particular to introduce nonlinearity into the hidden layer, as this gives the network the power to capture nonlinear relationships between inputs and outputs [27]. Here, the tangent sigmoid transfer function is applied to the hidden layer. However, it should be noted that the use of sigmoid functions at the outputs limits the range of possible outputs, which is undesirable in some cases [28]. Hence, a pure linear function is selected for the output layer. This function calculates the neuron’s output by simply returning the value passed to it.

Three types of transfer/activation functions are considered here, namely tangent sigmoid, logarithmic sigmoid and pure linear. They are described below:

Tangent sigmoid transfer function: Tan-sigmoid function generates values between -1 and +1 (Figure 2). Mathematically, it can be written as:

Figure 2: Graphical representation of tangent sigmoid function.

f(x) = (e^x − e^(−x)) / (e^x + e^(−x)) = tanh(x)

Log-Sigmoid transfer function: Log-sigmoid function generates values between 0 and +1 (not inclusive) (Figure 3). Mathematical expression for this function is:

Figure 3: Graphical representation of log-sigmoid function.

f(x) = 1 / (1 + e^(−x))

Pure linear transfer function: If linear output neurons are used in the last layer of the multilayer network, the network outputs can take on any value, unlike the outputs of sigmoid functions (Figure 4). Mathematically, it is written as: f(x)=x.

Figure 4: Graphical representation of pure linear function.
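For illustration, the three transfer functions can be written as short Python functions. This is only a sketch corresponding to the tansig, logsig and purelin functions of common neural network toolboxes, not an excerpt from the MARVEL code:

```python
import numpy as np

def tansig(x):
    """Tangent sigmoid: squashes inputs into the range (-1, +1)."""
    return np.tanh(x)

def logsig(x):
    """Log-sigmoid: squashes inputs into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def purelin(x):
    """Pure linear: simply returns the value passed to it."""
    return x
```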

Data Selection

Three catalogues provide the solar input parameters used for training, testing and validation of the developed forecasting neural network. The first catalogue contains the daily total sunspot area, subdivided into northern and southern areas, retrieved from https://solarscience.msfc.nasa.gov/greenwch/daily_area.txt; for training purposes, the total sunspot area is used. The second catalogue, from http://www.sidc.be/silso/datafiles, contains the daily sunspot number, and the third catalogue, from http://wso.stanford.edu/meanfld/, contains the daily solar mean field. The ozone hole area data (maximum area considered) for Earth's northern and southern poles is taken from https://ozonewatch.gsfc.nasa.gov/meteorology/NH.html. Data from these four catalogues, taken in combination from July 1, 1980 to November 22, 2015, constitutes more than 35 years of training data.

In this study, two models are developed using the catalogues described above. Both models have six inputs and one output, the six inputs being the year, month, date, total sunspot area, sunspot number and solar mean field. In the first model the output is Earth's North Pole ozone hole area, whereas in the second model the output is Earth's South Pole ozone hole area. The developed models are tested on the Graphical User Interface (GUI) called MARVEL, which has been developed specifically for predicting ozone hole area. Once the performance measure (here, the Mean Squared Error (MSE)) produces an optimal result for a particular combination of network architecture and training algorithm, the network is used for predicting future values.
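Before training, the daily catalogues have to be merged into a single table keyed on date. The sketch below shows one way this could be done in Python with pandas; the file names and column labels are illustrative assumptions, and the actual MARVEL pipeline works from .xlsx files instead.

```python
import pandas as pd

# Hypothetical CSV exports of the four daily catalogues; the real files (see the
# URLs above) come in their own plain-text formats and need dedicated parsers.
sunspot_area = pd.read_csv("daily_sunspot_area.csv", parse_dates=["date"])    # total daily sunspot area
sunspot_num  = pd.read_csv("daily_sunspot_number.csv", parse_dates=["date"])  # daily sunspot number
mean_field   = pd.read_csv("solar_mean_field.csv", parse_dates=["date"])      # daily solar mean magnetic field
ozone_area   = pd.read_csv("ozone_hole_area_north.csv", parse_dates=["date"]) # Model 1 target

# Join all catalogues on the common date column and restrict to the training period.
df = (sunspot_area.merge(sunspot_num, on="date")
                  .merge(mean_field, on="date")
                  .merge(ozone_area, on="date"))
df = df[(df["date"] >= "1980-07-01") & (df["date"] <= "2015-11-22")]

# Six inputs (year, month, date, sunspot area, sunspot number, solar mean field) and one output.
X = df.assign(year=df["date"].dt.year,
              month=df["date"].dt.month,
              day=df["date"].dt.day)[
        ["year", "month", "day", "total_area", "sunspot_number", "mean_field"]
    ].to_numpy(dtype=float)
y = df["ozone_hole_area"].to_numpy(dtype=float)
```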

Results and Discussion

The ANN based forecasting network

The developed Graphical User Interface (GUI) MARVEL, shown in Figure 5, is based on an artificial neural network (ANN). A neural network with six inputs and one output is tested on this GUI. The training algorithm used to train the network is Levenberg-Marquardt. The forecasting network is flexible and adjusts itself to the inputs, i.e., it generates text boxes on the GUI (equal in number to the parameters entered during training) so that the user can enter new input values for prediction of any specific day. The forecasting network can make both short term and long term predictions since it is not hard trained for any particular model. This also means that it can be retrained trivially for any model at any time with the most recent data. Once the network is trained, the Mean Squared Error (MSE) can be checked on the interface of the forecasting network.

Figure 5: The ANN based forecasting network.

As seen in Figure 5, all functions are visible simultaneously. Two options are available to the user in the interface, namely Range prediction and Specific prediction. When the user chooses range prediction, only the Range prediction panel appears on the interface, with options for selecting the input file and target file for training and for selecting another input file containing the inputs for prediction. Through range predictions, the user can obtain predicted values for months and even years ahead. When the user opts for specific prediction, only the Specific prediction panel appears on the interface, with options for selecting the input file and target file for training. When the input file is selected for training, text boxes (equal in number to the training parameters) appear on the interface. Through these text boxes, the user can enter the input parameters, such as date, month, year, sunspot area, sunspot number and solar mean field, in reverse sequence, to get the predicted value for any specific day. In specific day prediction, the predicted value is displayed on the interface itself, whereas in range prediction the predictions are saved in a file. In addition, the name of that file is displayed on the interface along with the Mean Squared Error (MSE) achieved during training.

Effect of number of neurons

During the training of Model 1 and Model 2, the relation between the number of neurons and performance in terms of MSE is studied. The results are shown in Table 1, from which it is concluded that the Mean Squared Error (MSE) becomes almost constant, with little further variation for either model, beyond thirty neurons.

Number of neurons    Model 1 MSE    Model 2 MSE
1                    712.9824       27.064
10                   27.6229        0.5431
20                   13.1556        0.471
30                   6.7166         0.3582
40                   7.1456         0.3328
50                   7.3481         0.3927
60                   7.2921         0.3324
70                   6.93           0.398
80                   7.0178         0.3188
90                   7.1749         0.3347
100                  7.1014         0.379

Table 1: Mean Squared Error (MSE) of Model 1 and Model 2 for different numbers of hidden-layer neurons.

It is also noted that the training time of the neural network increases as the number of neurons increases. In order to obtain a low MSE along with a short training time, the number of neurons has been set to thirty for both models. In other words, thirty neurons offer a good trade-off between accuracy and training time. This number of neurons cannot be varied from the MARVEL interface and hence remains fixed.

Other parameters used for creating the neural networks are: the Levenberg-Marquardt (LM) training algorithm, one hidden layer, the tangent sigmoid transfer function in the hidden layer, the pure linear transfer function in the output layer, a maximum of 1000 epochs, and performance evaluated in terms of MSE. Importantly, these parameters cannot be changed directly through MARVEL and hence remain fixed.
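To make this fixed configuration concrete, the sketch below builds a six-input network with one hidden layer of thirty tangent sigmoid neurons and a pure linear output, and fits it with a Levenberg-Marquardt solver (SciPy's least_squares with method='lm'). It is only an illustrative stand-in for MARVEL's internals; the training data here is synthetic, and looping the same routine over different hidden-layer sizes would reproduce the kind of sweep summarized in Table 1.

```python
import numpy as np
from scipy.optimize import least_squares

n_in, n_hidden = 6, 30   # six inputs, thirty hidden neurons (as fixed in the paper)

def unpack(theta):
    # Split the flat parameter vector into the weights and biases of the two layers.
    i = 0
    W1 = theta[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden].reshape(1, n_hidden); i += n_hidden
    b2 = theta[i:i + 1]
    return W1, b1, W2, b2

def forward(theta, X):
    # Tangent sigmoid hidden layer followed by a pure linear output layer.
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)
    return (h @ W2.T + b2).ravel()

def residuals(theta, X, y):
    return forward(theta, X) - y

# Synthetic stand-in for the real solar-input / ozone-area training table.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, n_in))
y_train = rng.normal(size=500)

n_params = n_hidden * n_in + n_hidden + n_hidden + 1
theta0 = rng.normal(scale=0.1, size=n_params)

# method='lm' selects Levenberg-Marquardt; it requires more samples than parameters.
fit = least_squares(residuals, theta0, args=(X_train, y_train), method='lm')
print("training MSE:", np.mean(residuals(fit.x, X_train, y_train) ** 2))
```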

MODEL 1 - North Pole ozone hole area predictions

In Figure 6, the names of the files selected for training (in .xlsx format) are visible on the interface simultaneously. Predictions made by MARVEL are saved in a file named “Prediction.xlsx”. The Mean Square Error (MSE) is also shown to the user after training, so the user can gauge how much error to expect in the predictions.

Figure 6: Range prediction panel (North Pole).

Figure 7 shows the graphical plot comparing the values predicted by MARVEL with the actual values. It can be concluded that the predicted values are fairly close to the actual values.

Figure 7: Comparisons of actual and predicted values.

On selection of the specific prediction option, the Specific prediction panel appears on the interface. This panel is flexible, as the number of text boxes shown equals the number of input training parameters. After selecting the dataset files for training, the user enters the input parameters for prediction. The predicted value, along with the mean square error, is then visible on the interface. Figure 8 depicts a specific day prediction. Here, the input values for the specific day prediction are entered (starting from the bottom) as year, month, date, sunspot area, sunspot number and solar mean field, respectively. The predicted value comes out to be 305.344 against an actual value of 305.94, a relative error of 0.19%.
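For reference, the quoted relative error follows directly from |predicted − actual| / actual × 100% = |305.344 − 305.94| / 305.94 × 100% ≈ 0.19%.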

Figure 8: Specific day prediction (North Pole).

MODEL 2 - South Pole ozone hole area predictions

In this section, the developed MARVEL is tested on the South Pole (Antarctic) ozone hole. When range prediction is selected, the Range Prediction panel is activated. The input and target files for training, along with the input file for prediction, are selected in the same way as for the prediction of the ozone hole area at the North Pole. The predictions are again saved in the file shown in the GUI, along with the Mean Square Error, as shown in Figure 9.

Figure 9: Range predictions panel (South Pole).

Figure 10 shows the graphical plot of the predicted values in comparison with the actual values. It is clearly seen that the predicted values match the actual values closely.

Figure 10: Comparison of actual and predicted values.

For specific prediction, the inputs are entered (from the bottom) as year, month, day, sunspot area, sunspot number and solar mean field in the text boxes generated on the interface. For this specific day prediction, the predicted value is found to be 28.0352, with an MSE of 0.30707 (Figure 11). The predicted value is a good match for the actual value of 27.91, a relative error of 0.44%.

Figure 11: Specific day prediction (South Pole).

Conclusion and Discussion

For the first time, an approach is presented in which input parameters related to the Sun, namely sunspot area, sunspot number and solar mean field, are used for predicting the ozone hole area at the North and South Poles through an Artificial Neural Network (ANN). In this paper, we have designed two ANN models relating these parameters to the North and South Pole ozone hole areas (maximum area), and both have been successfully tested on the developed GUI named MARVEL.

The Mean Square Error (MSE) for Model 1 is 6.7166 Dobson Units (DU) and for Model 2 is 0.3582 DU. It is already known that the input parameters used for training the developed ANN are not physically related to the output parameter [29-31]. Interestingly, the ANN has nevertheless learned a generalized output pattern that makes good predictions of the ozone hole area at both poles of the Earth.

Moreover, the predictions are not only short term (daily) but also long term (months or years). From Figures 7 and 10, the comparison between predicted and actual values shows that even where the predictions deviate from the actual values at certain points, the trend of the predicted values is in good agreement with the actual trend of Earth's ozone hole area at the North and South Poles. It should be added that the change of ozone hole area at the poles may have dynamical causes for which the solar parameters are not responsible.

This paper is an attempt to present the application of Artificial Neural Networks (ANN) to connecting unrelated parameters.

Future Scope

The following aspects can be improved in future work:

Inclusion of other techniques, namely evolutionary ones such as Particle Swarm Optimization (PSO), in the MARVEL GUI.

Inclusion of more formats of datasets, apart from the “.xlsx” format.

Flexibility to modify the number of neurons from the GUI itself, or perhaps automatic determination of the optimal number of neurons.

References

Citation: Chopra S, Yadav D, Chopra AN (2018) Ozone Hole Area Prediction at Earth’s North and South Poles by Marvel Interface. J Swarm Intel Evol Comput 7: 169.

Copyright: © 2018 Chopra S, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.