Python-for-Machine-Learning/C3/Artificial-Neural-Networks/English
| Visual Cue | Narration |
| Show Slide: Welcome and Title Slide | Welcome to the Spoken Tutorial on Artificial Neural Networks |
| Show Slide:
Learning Objectives |
In this tutorial, we will learn about
Artificial Neural Networks and their architecture
The artificial neuron model
Building and evaluating a Multi-Layer Perceptron classifier |
| Show Slide:
System Requirements |
To record this tutorial, I am using
|
| Show Slide:
Pre-requisites |
To follow this tutorial, the learner must have basic knowledge of Python.
For pre-requisite Python tutorials, please visit this website. |
| Show Slide:
Code-Files |
The files used in this tutorial are provided in the Code files link of this tutorial.
Please download and extract the files. Make a copy and use them while practising. |
| Show Slide:
Artificial Neural Network |
An Artificial Neural Network, or ANN, is a computational model inspired by the structure of the human brain.
It consists of interconnected processing units called neurons, organized in layers. |
| Show Slide:
Artificial Neural Network |
ANNs learn patterns from data by adjusting the weights of the connections between neurons during training.
They are widely used for classification and regression tasks. |
| Show Slide:
Multi-Layer Perceptron |
A Multi-Layer Perceptron, or MLP, is a feedforward Artificial Neural Network.
It has an input layer, one or more hidden layers and an output layer. In this tutorial, we will build an MLP for classification. |
| Show Slide:
ANN Architecture arch.png |
Let’s look at the architecture of an Artificial Neural Network.
The Input Layer receives the data from the dataset. The number of input neurons matches the number of input features. Hidden Layers process these inputs through weighted connections. These layers help the network learn complex patterns. The number of hidden neurons depends on task complexity. |
| Show Slide:
ANN Architecture arch.png |
The Output Layer produces the final prediction or classification.
The number of output neurons matches the number of output classes. Weights are updated during training to optimize performance. |
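To connect this slide to code, here is an illustrative sketch using sklearn's MLPClassifier, which the tutorial uses later. Note that the input and output layer sizes are not specified here; sklearn infers them from the data at fit time.

```python
# Illustrative sketch: an MLP whose shape mirrors the slide.
# The 30 input neurons (one per feature) and the output layer size
# (one per class) are inferred from the data when fit is called;
# only the hidden layers are specified explicitly.
from sklearn.neural_network import MLPClassifier

model = MLPClassifier(hidden_layer_sizes=(100, 50))  # two hidden layers
```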
| Show Slide:
Artificial Neuron Model neuron.png |
Now, let's break this down further by understanding a single neuron.
Each neuron receives inputs along with a bias value. Each input has a weight that determines its importance. The summation function adds the weighted inputs and bias. The activation function decides the neuron’s output. The output is then passed to the next layer in the network. |
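To make the neuron model concrete, here is a minimal sketch with made-up numbers, not taken from the tutorial's notebook, using ReLU as the activation function.

```python
# Minimal sketch of one artificial neuron (illustrative values only)
import numpy as np

inputs = np.array([0.5, -1.2, 3.0])    # signals from the previous layer
weights = np.array([0.8, 0.1, -0.4])   # importance of each input
bias = 0.2

z = np.dot(weights, inputs) + bias     # summation function
output = max(0.0, z)                   # ReLU activation decides the output
print(output)                          # 0.0, since z = -0.72 is negative
```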
| Hover over the files | I have created the Artificial Neural Networks.ipynb file for the demonstration. |
| Press Ctrl+Alt+T keys
Type conda activate ml Press Enter |
Let us open the Linux terminal by pressing Ctrl, Alt and T keys together.
Activate the machine learning environment as shown. |
| To go to the Downloads folder,
Type cd Downloads Type jupyter notebook |
I have saved my code file in the Downloads folder.
Please navigate to the respective folder of your code file location. Then type, jupyter space notebook and press Enter. |
| Show Jupyter Notebook Home Page:Click on
Artificial Neural Networks.ipynb file |
We can see the Jupyter Notebook Home page has opened in the web browser.
Click the Artificial Neural Networks dot ipynb file to open it. Note that each cell in this file already has its output displayed. |
| Highlight
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns |
First, we import the necessary libraries for ANN.
Make sure to press Shift and Enter to execute the code in each cell. We will use the Breast Cancer Wisconsin dataset from the sklearn library. The dataset has 30 features describing breast tumor characteristics. The target variable is 0 for malignant tumors and 1 for benign tumors. |
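The highlighted cell shows only the general-purpose libraries; the later cells also need several sklearn imports. A plausible import cell, assuming standard sklearn module paths (the notebook's actual grouping may differ):

```python
# Assumed sklearn imports for the rest of the notebook
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, roc_curve, auc, classification_report
```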
| Highlight
data = load_breast_cancer() |
We load the dataset using the load underscore breast underscore cancer function.
Then, we create a dataframe using pd dot DataFrame for easier data handling. |
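The highlighted cell shows only the loading step. A minimal sketch of how the bcancer_df dataframe is presumably built from the loaded Bunch object, continuing from the imports above:

```python
# Assumed construction of the dataframe used in the later cells
data = load_breast_cancer()
bcancer_df = pd.DataFrame(data.data, columns=data.feature_names)
bcancer_df['target'] = data.target   # 0 = malignant, 1 = benign
```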
| Highlight
bcancer_df.tail() |
To inspect the dataset, we display the last five rows using the tail function. |
| Only narration
Highlight bcancer_df.shape |
The shape attribute returns the number of rows and columns in the dataframe. |
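Since the Breast Cancer Wisconsin dataset has 569 samples and 30 features, the expected output, assuming the target column was appended as sketched earlier, is:

```python
print(bcancer_df.shape)   # (569, 31): 30 features plus the target column
```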
| Highlight
print("Feature names:", data.feature_names) |
data dot feature underscore names displays the names of the columns describing the tumor characteristics. |
| Highlight
print("Target names:", data.target_names) |
Next, we display the class labels of the target variable. |
| Highlight
plt.figure(figsize=(12,6)) plt.show() |
We create a boxplot to compare mean radius across classes.
Malignant tumors are shown in red and benign tumors in green. |
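The highlighted cell elides the plotting call itself. A minimal sketch of the boxplot cell; the red and green coloring mentioned in the narration presumably comes from a palette argument whose exact form is not shown here:

```python
# Sketch of the boxplot cell, continuing from the earlier cells;
# class-specific coloring is omitted, as its exact form is an assumption
plt.figure(figsize=(12, 6))
sns.boxplot(x='target', y='mean radius', data=bcancer_df)
plt.title('Mean radius by class (0 = malignant, 1 = benign)')
plt.show()
```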
| Show output | Whiskers in a box plot represent the range of data within 1.5 times the interquartile range from the box edges.
Outliers are values far from the rest of the data and appear as small dots. In this plot, the malignant class has a higher mean radius with more variation. The benign class has a lower mean radius with fewer outliers. |
| Highlight
scaler = RobustScaler() df_scaled = pd.DataFrame( scaler.fit_transform(bcancer_df.iloc[:, :-1]), columns=bcancer_df.columns[:-1]) |
Now, let’s preprocess the dataset.
To handle outliers, we scale the features using RobustScaler. It scales values based on the median and the interquartile range, that is IQR, making it resistant to outliers. |
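To see what RobustScaler computes, here is a small self-contained check, assuming its default quantile range of the 25th to the 75th percentile:

```python
# RobustScaler: (x - median) / IQR, demonstrated on a tiny example
import numpy as np
from sklearn.preprocessing import RobustScaler

x = np.array([[1.0], [2.0], [3.0], [100.0]])        # 100.0 is an outlier
median = np.median(x)
iqr = np.percentile(x, 75) - np.percentile(x, 25)   # interquartile range
manual = (x - median) / iqr

print(np.allclose(RobustScaler().fit_transform(x), manual))  # True
```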
| Highlight
X = bcancer_df.drop(columns=["target"]) y = bcancer_df['target'] |
We define X as the feature set by dropping the target column.
The y variable stores the target classes for classification. |
| Highlight
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) |
Next, we split the data into training and testing sets. 70% of the samples are used for training and 30% for testing, with random underscore state set to 42 for reproducibility. |
| Highlight
mlp_relu = MLPClassifier(hidden_layer_sizes=(100, 50), activation='relu', max_iter=1000, random_state=42) mlp_relu.fit(X_train, y_train) |
We then initialize the MLP model for classification.
Using the MLPClassifier function, we define mlp underscore relu. It has two hidden layers, the first with 100 neurons and the second with 50 neurons. The model uses the Rectified Linear Unit, that is ReLU, activation function, which helps the MLP train effectively and converge faster. The model is trained for a maximum of 1000 iterations. |
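ReLU itself is simply f(x) = max(0, x). A quick illustration:

```python
# ReLU keeps positive values and zeroes out negative ones
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(np.maximum(0, x))   # [0.  0.  0.  1.5]
```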
| Highlight
y_train_pred_relu = mlp_relu.predict(X_train) |
Once trained, we predict class labels for X underscore train using our model. |
| Highlight
print("Training Accuracy (ReLU):", format(accuracy_score(y_train, y_train_pred_relu), ".3f")) |
We then calculate the training accuracy using the accuracy underscore score.
The result is formatted to three decimal places and printed. |
| Show output | The MLP model achieves a training accuracy of 92.2% using ReLU activation.
This indicates that the model has learned well from the training data. |
| Highlight
plt.plot(mlp_relu.loss_curve_, label="Training Loss", color="blue") plt.xlabel("Iterations") plt.ylabel("Loss") |
We plot the training loss curve to track the learning progress.
The x-axis represents iterations, while the y-axis shows the loss value. A decreasing loss curve indicates effective training. |
| Show output | The training loss curve shows how the model's error decreases over iterations.
Initially, the loss is high but quickly drops, showing rapid learning. After 20 iterations, the loss stabilizes, indicating model convergence. Model convergence means training has optimized weights, with minimal further gain. |
| Highlight
y_pred_relu = mlp_relu.predict(X_test) accuracy_relu = accuracy_score(y_test, y_pred_relu) |
Next, we predict the class labels on the test data.
We calculate and print the testing accuracy. |
| Show output | The model achieves a testing accuracy of 95.9%.
This indicates strong generalization to unseen data. |
| Highlight
y_probs = mlp_relu.predict_proba(X_test)[:, 1] fpr, tpr, _ = roc_curve(y_test, y_probs) roc_auc = auc(fpr, tpr) |
We further evaluate the performance using a ROC curve.
The curve shows the trade-off between the true positive rate and the false positive rate. |
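The highlighted cell computes the curve but not the plot. A plausible plotting cell, continuing from the highlighted variables; the styling choices are assumptions:

```python
# Sketch of the ROC plot; colors and labels are assumptions
plt.plot(fpr, tpr, color='blue', label=f'ROC curve (AUC = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], linestyle='--', color='grey')  # chance level
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
```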
| Show output | The Area Under the Curve, that is, the AUC value of 0.99 confirms the model’s strong classification ability. |
| Highlight
print("\nMLP with ReLU activation - Classification Report:") |
Finally, we display the classification report of the model. |
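The highlighted line prints only the heading; presumably the report itself comes from classification_report, as sketched below. Passing target_names for readable class labels is an assumption.

```python
# Assumed call that produces the report; target_names is an assumption
from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred_relu,
                            target_names=data.target_names))
```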
| Show output | The output indicates that the model performs well on both classes, with precision above 95%.
The other metrics, recall and F1-score, suggest that it correctly identifies most positive instances. Thus, the model has learnt to classify whether a tumor is malignant or benign. |
| Show Slide:
Summary |
This brings us to the end of the tutorial. Let us summarize.
In this tutorial, we have learnt about
Artificial Neural Networks and their architecture
The artificial neuron model
Building and evaluating a Multi-Layer Perceptron classifier |
| Show Slide:
Assignment |
As an assignment, please do the following
Train the MLP model on the same dataset with a different activation function and display the testing accuracy and classification report |
| Show Slide:
Assignment Solution |
After execution, we should get the accuracy and classification report as shown here. |
| Show Slide:
FOSSEE Forum |
For any general or technical questions on Python for
Machine Learning, visit the FOSSEE forum and post your question. |
| Show Slide:
Thank You |
This is Anvita Thadavoose Manjummel, a FOSSEE Summer Fellow 2025, IIT Bombay, signing off.
Thanks for joining. |