Implemented ANN model
An ANN consists of three layers, namely the input layer, the hidden layer, and the output layer [30,31,32]. The input layer receives the input and passes it to the hidden layer, which generates a result after applying the appropriate activation function and forwards it to the output layer. An ANN needs to be configured according to the problem type and the desired output [33]. Appropriate configuration settings include the input variables on the input layer, the activation functions on the hidden layer, and the outputs on the output layer [34]. Algorithm 1 below gives a simple working algorithm for an ANN.
Algorithm 1: ANN basic working algorithm

1. In the first step, the input variables from the input layer are passed to the hidden layer and assigned weight values based on their importance.
2. The hidden layer consists of nodes called artificial neurons. Every input variable of layer 1 is connected to each neuron in the hidden layer, forming a network.
3. Perform the calculations of steps 4 and 5 in the hidden layer.
4. Apply the transfer function to map the input to the hidden layer by performing the following steps:
   (a) multiply each input variable by its corresponding weight;
   (b) add up the weighted products;
   (c) add the bias (offset) value.
5. Apply the activation function to the output of the transfer function to compute the result.
6. Repeat steps 3 to 5 for all hidden layers.
7. Obtain the final output at the output layer.
In the current scenario, we have three input variables and one output variable. According to the basic working algorithm of artificial neural networks, there must be a mapping mechanism that transfers the input values to the hidden-layer nodes, and the hidden-layer nodes to the output node. We accomplish this task using the transfer function shown in Equation (1), given by [35]:

s_in = x_1·w_1 + x_2·w_2 + ... + x_n·w_n + b,    (1)

where x_i denotes the i-th input variable, w_i the weight assigned to it, and b the bias (offset) value.

Once the transfer function has been used to compute the hidden-layer input, an activation function must be applied to compute the predicted output. In our proposed ANN model, we use the activation function described in Equation (2), also taken from [35]:

y = f(s_in),    (2)

where s_in is the sum of the weighted hidden-layer inputs and y is the output.
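As an illustration only, the following minimal NumPy sketch evaluates Equations (1) and (2) for a single hidden-layer neuron; the input, weight, and bias values are hypothetical, and a sigmoid is used here as one possible choice for the activation function f.

import numpy as np

def transfer(x, w, b):
    # Equation (1): weighted sum of the inputs plus the bias value
    return np.dot(x, w) + b

def sigmoid(s_in):
    # One possible activation f for Equation (2)
    return 1.0 / (1.0 + np.exp(-s_in))

# Hypothetical encoded inputs: temperature, moisture, methane
x = np.array([0.8, 0.2, 0.6])
w = np.array([0.5, -0.3, 0.7])   # hypothetical weights
b = 0.1                          # hypothetical bias

s_in = transfer(x, w, b)         # hidden-layer input sum
y = sigmoid(s_in)                # neuron output
print(s_in, y)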
In the current scenario, we use three independent variables as input to the first layer of the ANN. These variables are cotton temperature, cotton moisture, and methane content. The input variable values are then passed to the hidden layer, where an activation function converts the weighted input into an activation/output. In the current scenario, our main goal is to identify combustion, so the system must provide output in the form of yes/no. Among the different classifiers available for ANNs, we chose the sequential model [36,37]; the design of the ANN model is shown in Figure 5.
The ANN model is implemented in Python using two supporting libraries, TensorFlow and Keras. The basic working algorithm is described in Algorithm 2 below, and a consolidated, runnable sketch of the same steps is given after the algorithm.
Algorithm 2: Python algorithm for implementing ANN

3. Define the first hidden layer, taking the three input variables as input.
   classifier.add(Dense(units = 10, input_dim = 3, kernel_initializer = 'uniform', activation = 'relu'))
4. Define the second hidden layer, taking the output of the first hidden layer as input.
   classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
5. Define the output layer, taking the output of the second hidden layer as input.
   classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
6. Create the neural network using the compile() function.
   classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
7. Split the dataset into training and testing sets.
   X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42)
8. Train the ANN model on the training dataset.
   classifier.fit(X_train, y_train, batch_size = 50, epochs = 100, verbose = 1)
9. Validate the ANN model by making predictions on the test data.
   y_pred = classifier.predict(X_test)
10. Adjust the hyperparameters to obtain better results.
11. Go to step 3.
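For convenience, the steps of Algorithm 2 can be collected into a single runnable sketch, shown below. The placeholder arrays X and y merely stand in for the encoded dataset described later; the layer sizes, split ratio, and training settings are those listed in the algorithm.

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Placeholder data with the same shape as the encoded dataset (illustrative only)
X = np.random.rand(500, 3)                 # temperature, moisture, methane
y = np.random.randint(0, 2, size=500)      # SC: 1 = combustion, 0 = no combustion

# Steps 3-5: sequential classifier with two hidden layers and a single output neuron
classifier = Sequential()
classifier.add(Dense(units=10, input_dim=3, kernel_initializer='uniform', activation='relu'))
classifier.add(Dense(units=6, kernel_initializer='uniform', activation='relu'))
classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))

# Step 6: create the network
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Step 7: 70/30 train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Step 8: train the model
classifier.fit(X_train, y_train, batch_size=50, epochs=100, verbose=1)

# Step 9: predict on the test set (sigmoid outputs thresholded at 0.5)
y_pred = (classifier.predict(X_test) > 0.5).astype(int)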
The step-by-step implementation of the entire algorithm is shown in Figure 6.
The working algorithm states that, first, we load the input dataset for the ANN model configuration. The input dataset for the current scenario contains three independent predictor variables (i.e., temperature, moisture, and methane), which are encoded in the appropriate ranges [34,36,38,39], and the output/target variable (SC), which is coded as 0 or 1: yes is coded as 1, indicating the presence of combustion, while no is coded as 0, indicating the absence of combustion. After all coding was completed, the dataset was prepared in CSV format (see Table 4) and then loaded into Python for ANN model processing.
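A minimal sketch of this loading step is given below, assuming pandas is available; the file name and column names are hypothetical placeholders for those of Table 4.

import pandas as pd

# Hypothetical file and column names for the prepared CSV dataset
data = pd.read_csv('combustion_dataset.csv')

# Three encoded predictors and the 0/1-coded target (SC)
X = data[['temperature', 'moisture', 'methane']].values
y = data['SC'].values    # 1 = combustion present, 0 = no combustion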
We then added three subsequent layers to the model and used the classifier's compile() function to create the ANN model. The ANN model requires two datasets: one used for model training and, once training is complete, another used to validate the model. A common approach is to split the entire input dataset into training and testing subsets. We achieve this with Python's train_test_split() function, setting the test set ratio to 0.3, i.e., 30% of the data is used for testing and 70% for training.
After creating the model, we train the ANN model using the training dataset. Once training is complete, we pay close attention to the training accuracy and training loss. The choice of hyperparameters has a large impact on the performance of the model, and there is no rule of thumb for determining them, i.e., the number of layers, the number of neurons in each layer, and so on. Therefore, to achieve maximum accuracy, we iterate over different hyperparameter values and use a visualization library to plot the per-epoch accuracy of the ANN for each hyperparameter setting [37,40], which allows us to find suitable hyperparameters. We then train the ANN using these hyperparameter values, listed in Table 5. The important hyperparameters of the ANN are units, input dimension, kernel initializer, activation, optimizer, batch_size, and epochs.
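One way to carry out such an iteration is sketched below: candidate values for the hidden-layer sizes and batch size (illustrative grids, not the exact values of Table 5) are tried in turn, and the setting with the best validation accuracy is kept. The sketch assumes X_train, X_test, y_train, and y_test from the split above.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_classifier(units_1, units_2):
    # Same architecture as Algorithm 2, with the hidden-layer sizes as parameters
    model = Sequential()
    model.add(Dense(units=units_1, input_dim=3, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(units=units_2, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

best = None
for units_1 in (8, 10, 12):                # hypothetical candidate values
    for units_2 in (4, 6, 8):
        for batch_size in (25, 50):
            model = build_classifier(units_1, units_2)
            history = model.fit(X_train, y_train, batch_size=batch_size, epochs=100,
                                validation_data=(X_test, y_test), verbose=0)
            val_acc = history.history['val_accuracy'][-1]
            if best is None or val_acc > best[0]:
                best = (val_acc, units_1, units_2, batch_size)

print('Best validation accuracy %.3f with units=(%d, %d), batch_size=%d' % best)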
Once the resulting ANN model has been trained with the most appropriate set of hyperparameters, we validate it by making predictions on the test dataset.
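A short validation sketch is given below: the 0.5 threshold converts the sigmoid outputs into the yes/no (1/0) decision, and accuracy_score and confusion_matrix from scikit-learn summarize the result on the test set.

from sklearn.metrics import accuracy_score, confusion_matrix

y_prob = classifier.predict(X_test)             # probabilities in [0, 1]
y_pred = (y_prob > 0.5).astype(int).ravel()     # 1 = combustion, 0 = no combustion

print('Test accuracy:', accuracy_score(y_test, y_pred))
print('Confusion matrix:')
print(confusion_matrix(y_test, y_pred))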
The ANN model is trained with different numbers of epochs and batch sizes to visualize its training accuracy, training loss, and validation behavior. The training and validation accuracy and loss plots shown in Figure 7 indicate that accuracy increases with the number of epochs and decreases when the number of epochs is reduced. The training loss shows the inverse relationship (see Figure 7), i.e., increasing the number of epochs reduces the loss and vice versa. We therefore set the number of epochs to 100 to reduce the training loss. We then trained a model with a suitable number of epochs and batch size to obtain maximum accuracy. The final model is shown in Figure 8.
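Curves of the kind shown in Figure 7 can be reproduced from the History object returned by Keras fit(); the sketch below assumes the training/test split from Algorithm 2 and uses matplotlib, with a purely illustrative figure layout.

import matplotlib.pyplot as plt

history = classifier.fit(X_train, y_train, batch_size=50, epochs=100,
                         validation_data=(X_test, y_test), verbose=0)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Training/validation accuracy versus epoch
ax1.plot(history.history['accuracy'], label='training accuracy')
ax1.plot(history.history['val_accuracy'], label='validation accuracy')
ax1.set_xlabel('epoch'); ax1.set_ylabel('accuracy'); ax1.legend()

# Training/validation loss versus epoch
ax2.plot(history.history['loss'], label='training loss')
ax2.plot(history.history['val_loss'], label='validation loss')
ax2.set_xlabel('epoch'); ax2.set_ylabel('loss'); ax2.legend()

plt.tight_layout()
plt.show()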