The Hello World of Deep Learning with Neural Networks

Arpan Ghoshal
3 min read · Jul 6, 2020


Hi guys! This is my first blog on Medium. I will be writing a series of articles where you will learn how to code in TensorFlow from the basics, starting with the “Hello World!” of TensorFlow.

In this article, we will go through the coding part of deep learning by creating a simple neural network. Are you familiar with neural networks? If not, here is the basic idea.

Suppose we have to detect the colour of a ball in an image. We provide a feature vector of the image as input, and the network gives the colour of the ball as output. So, what happens between the input and the output?

Neural Network with 2 hidden layers

The input information first passes through the input layer, then flows through one or more hidden layers, and finally we get the output. At every step of training, the weights are updated, which is the “learning” part of a deep neural network.
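To make this concrete, here is a minimal sketch of what a fully connected network with two hidden layers could look like in Keras. The layer sizes and the three-class output are purely illustrative assumptions, not values taken from the figure above.

import tensorflow as tf

# A toy fully connected network with 2 hidden layers.
# The layer sizes and the 3-class output are illustrative assumptions.
toy_model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=[4]),  # hidden layer 1
    tf.keras.layers.Dense(8, activation='relu'),                   # hidden layer 2
    tf.keras.layers.Dense(3, activation='softmax')                 # output layer, e.g. 3 colour classes
])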

Imports

Let’s create the simplest possible neural network: it will have 1 layer, that layer will have 1 neuron, and the input shape to it is just 1 value.

Let’s start with our imports. First, we import TensorFlow, then NumPy, which helps us represent our data as arrays easily and quickly. And lastly, Keras, the framework for defining neural networks.

import tensorflow as tf
import numpy as np
from tensorflow import keras

Define and Compile the Neural Network

As mentioned, we define our network with 1 layer, 1 neuron and 1 output. We define the model using the Keras Sequential API, with a single Dense (fully connected) layer.

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])

I recommend going through this link for a detailed explanation of the parameters and arguments used in this expression.
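If you are curious about what that single layer actually contains, a quick sanity check is model.summary(), which should report exactly 2 trainable parameters: one weight and one bias.

model.summary()
# Expect a single Dense layer with 2 trainable parameters
# (1 weight and 1 bias), since units=1 and the input is a single value.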

As you know, the “learning” part of the algorithm involves correcting the error (the loss). This is where the optimizer and the loss function come in. We now compile the model with these two arguments.

model.compile(optimizer='sgd', loss='mean_squared_error')

This link gives a mathematical and theoretical explanation of the various types of optimizers.
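As a side note, instead of the string 'sgd' you can pass an optimizer object to compile(), which lets you tune settings such as the learning rate. A small sketch, where the learning-rate value is just an example:

# Equivalent to the compile call above, but with an explicit optimizer object.
# The learning_rate value here is only an example.
sgd = tf.keras.optimizers.SGD(learning_rate=0.01)
model.compile(optimizer=sgd, loss='mean_squared_error')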

Providing Data

We finally have our model ready! It is time to feed it some data so it can learn. In this case, we use just 6 pairs of numbers as our training data.

We define the data with NumPy arrays. If you look at the relationship between xs and ys, it is y = 2x - 1; for example, when x = -1, y = -3.

xs = np.array([-1.0,  0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
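A quick way to convince yourself of that relationship is to compute 2*xs - 1 with NumPy and compare it against ys:

# Sanity check: ys should match 2*xs - 1 exactly.
print(2 * xs - 1)                    # [-3. -1.  1.  3.  5.  7.]
print(np.allclose(ys, 2 * xs - 1))   # True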

Training the Neural Network

We have our model and data. Let’s train the model so that we can feed it our own inputs and see the output.

model.fit() is the Keras call that learns the relationship between xs and ys. This is the part where the model makes a guess, measures the loss, and improves its parameters using the optimizer. The whole process repeats for the number of epochs we specify.

model.fit(xs, ys, epochs=500)
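model.fit() also returns a History object. If you re-run the training with verbose=0 to silence the per-epoch output, you can inspect how the loss shrinks over the 500 epochs, something like this:

# Train quietly and look at the loss at the start and end of training.
history = model.fit(xs, ys, epochs=500, verbose=0)
print(history.history['loss'][0])    # loss after the first epoch (large)
print(history.history['loss'][-1])   # loss after the last epoch (very close to 0)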

Now our model is trained and has learned the relationship between the xs and ys. We can use the model.predict() method to make a prediction for our own input.

print(model.predict(np.array([10.0])))

You might have thought 19, right? But it ended up being a little under. Why do you think that is?

Remember that neural networks deal with probabilities. Given the data we fed the network, it calculated that there is a very high probability that the relationship between X and Y is Y=2X-1, but with only 6 data points it can’t know for sure. As a result, the prediction for 10 is very close to 19, but not exactly 19.
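You can actually peek at what the single neuron learned. Its weight should be close to 2 and its bias close to -1, but not exactly those values:

# Inspect the learned weight and bias of the single Dense layer.
weights, bias = model.layers[0].get_weights()
print(weights)   # close to [[2.]]
print(bias)      # close to [-1.]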

As you work with neural networks, you’ll see this pattern recurring. You will almost always deal with probabilities, not certainties, and will do a little bit of coding to figure out what the result is based on the probabilities, particularly when it comes to classification.

Basic NN code
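For reference, here is the whole example from this article assembled into one runnable script:

import tensorflow as tf
import numpy as np
from tensorflow import keras

# Define a network with 1 layer, 1 neuron and a single input value.
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

# Data that follows y = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# Train and predict.
model.fit(xs, ys, epochs=500)
print(model.predict(np.array([10.0])))   # close to 19, but not exactly 19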

Happy learning ;)
