ANN - Example 3¶
import numpy as np
W = np.array([[0.9, 0.3, 0.4],
              [0.2, 0.8, 0.2],
              [0.1, 0.5, 0.6]])
I = np.array([[0.9], [0.1], [0.8]])
def mySigmoid(x):
    return 1/(1+np.exp(-x))
Xih = np.dot(W, I)
Xih
Oh = mySigmoid(Xih)
Oh
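The example stops at the hidden layer. As a minimal sketch of the remaining step, assuming a hypothetical hidden-to-output weight matrix Who (not given in the original example), the output layer is computed in exactly the same way:
Who = np.array([[0.3, 0.7, 0.5],
                [0.6, 0.5, 0.2],
                [0.8, 0.1, 0.9]])   # hypothetical weights
Xo = np.dot(Who, Oh)   # combined signal into the output layer
Oo = mySigmoid(Xo)     # apply the activation function
Oo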
Creating Class and Methods¶
A class defines the structure, data, and methods that an object will have. It is possible to have public and private variables for the methods to operate on.
- Arguments
- Initialization
- Methods
- Destruction
class Dog:
    # init method
    def __init__(self, dogName, dogAge):
        self.name = dogName
        self.age = dogAge
    # status method
    def status(self):
        print("The dog's name is: ", self.name)
        print("The dog's age is: ", self.age)
perro1 = Dog('cuadrado', 9)
perro1.status()
print(perro1.name)
print(perro1.age)
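The list above also mentions destruction. As a minimal sketch (not part of the original example), Python exposes this through the __del__ method, which runs when an object is about to be reclaimed:
class Cat:
    # init method
    def __init__(self, catName):
        self.name = catName
    # destroy method
    def __del__(self):
        print("Deleting: ", self.name)
gato1 = Cat('michi')
del gato1   # drops the last reference, triggering __del__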
The Neural Network Class¶
The next chunk of code defines the neural network's basic structure. We are going to implement and define the methods one at a time to understand them better.
class NeuralNetwork:
    # init method
    def __init__(self):
        pass
    # NN computing method
    def feedforward(self):
        pass
    # NN training method
    def backpropagation(self):
        pass
Initialization or Creation Method¶
Let’s begin with the initialization. We know we need to set the number of input, hidden, and output layer nodes; that defines the shape and size of the neural network. Thus, we’ll let them be set when a new neural network object is created, through the class's parameters. That way we retain the option to create new neural networks of different sizes with a simple call.
Good programmers, computer scientists, and mathematicians try to write general code rather than specific code. It is a good habit because it forces us to think about solving problems in a deeper and more general way, and it means our code can be used in more general scenarios.
Let us see how our code should look:
import numpy as np
class NeuralNetwork:
    # init method
    def __init__(self, inputN, hiddenN, outputN, lr):
        # creates a NN with three layers (input, hidden, output)
        # inputN  - number of input nodes
        # hiddenN - number of hidden nodes
        # outputN - number of output nodes
        # lr      - learning rate
        self.inN = inputN
        self.hiN = hiddenN
        self.outN = outputN
        self.lr = lr
        # weight matrices:
        # W11 W21 W31
        # W12 W22 W32
        # ...
        self.wih = np.random.rand(self.hiN, self.inN)
        self.who = np.random.rand(self.outN, self.hiN)
    # NN computing method
    def feedforward(self):
        pass
    # NN training method
    def backpropagation(self):
        pass
myNN=NeuralNetwork(3,3,3,0.1)
myNN.wih
At this point we are only creating an object; the myNN instance can't do anything useful yet. Still, this is a good technique when starting to code something: keep it small at the beginning (make commits), and then grow the methods.
Next, we should add more code so our NN class finishes its initialization by creating the weight matrices.
Feedforward Method and Weights Initialization¶
So the next step is to create the network of nodes and links. The most important part of the network is the link weights. They’re used to calculate the signal being fed forward and the error as it’s propagated backwards, and it is the link weights themselves that are refined in an attempt to improve the network.
For the basic NN, the weights consist of:
- A matrix that links the input and hidden layers, $W_{ih}$, of size hidden nodes by input nodes ($hn \times in$),
- and another matrix for the links between the hidden and output layers, $W_{ho}$, of size $on \times hn$ (output nodes by hidden nodes).
$$X_h = W_{ih} I$$ $$O_h = \sigma(X_h)$$ and, applying the same operations again between the hidden and output layers, $$X_o = W_{ho} O_h$$ $$O_o = \sigma(X_o)$$
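As a quick sanity check (a minimal sketch, assuming the layer sizes 3, 5, 3 used further below), the matrix shapes chain together as $(hn \times in)(in \times 1) \rightarrow (hn \times 1)$:
import numpy as np
hn, inN, on = 5, 3, 3           # hypothetical layer sizes
Wih = np.random.rand(hn, inN)   # hidden-by-input weights
Who = np.random.rand(on, hn)    # output-by-hidden weights
I = np.random.rand(inN, 1)      # column vector of inputs
Oh = np.dot(Wih, I)
print(Oh.shape)                 # (5, 1): one signal per hidden node
print(np.dot(Who, Oh).shape)    # (3, 1): one signal per output node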
import numpy as np
class NeuralNetwork:
    # init method
    def __init__(self, inputN, hiddenN, outputN, lr):
        # creates a NN with three layers (input, hidden, output)
        # inputN  - number of input nodes
        # hiddenN - number of hidden nodes
        # outputN - number of output nodes
        # lr      - learning rate
        self.inN = inputN
        self.hiN = hiddenN
        self.outN = outputN
        self.lr = lr
        # weight matrices:
        # W11 W21 W31
        # W12 W22 W32
        # ...
        np.random.seed(42)
        self.wih = np.random.rand(self.hiN, self.inN)
        self.who = np.random.rand(self.outN, self.hiN)
    # NN computing method
    def feedforward(self, inputList):
        # computing hidden output
        inputs = np.array(inputList, ndmin=2).T
        self.Xh = np.dot(self.wih, inputs)
        self.af = lambda x: 1/(1+np.exp(-x))  # sigmoid activation
        self.Oh = self.af(self.Xh)
        # computing output
        self.Xo = np.dot(self.who, self.Oh)
        self.Oo = self.af(self.Xo)
        return self.Oo
    # NN training method
    def backpropagation(self):
        pass
myNN=NeuralNetwork(3,5,3,0.3)
At this point we can inspect the class attributes by calling them:
myNN.wih
myNN.feedforward([0.3, 0.2, 0.1])
myNN.Oh
myNN.Xh
myNN.Oo
The Backpropagation and Training Method¶
The error gradient for a weight $w_{jk}$ connecting node $j$ to node $k$ is
$$ \frac{\partial E}{\partial w_{jk}} = -e_k \cdot \sigma\left(\sum_j w_{jk}\, o_j\right) \left(1-\sigma\left(\sum_j w_{jk}\, o_j\right) \right) o_j $$
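In matrix form, which is what the code below implements (with $\odot$ denoting element-wise multiplication and $\alpha$ the learning rate), the two weight updates are
$$ \Delta W_{ho} = \alpha \left(e_o \odot O_o \odot (1-O_o)\right) O_h^{T} $$ $$ \Delta W_{ih} = \alpha \left(e_h \odot O_h \odot (1-O_h)\right) I^{T} $$
where the output error $e_o = t - O_o$ is propagated back to the hidden layer as $e_h = W_{ho}^{T} e_o$.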
import numpy as np
class NeuralNetwork:
    # init method
    def __init__(self, inputN, hiddenN, outputN, lr):
        # creates a NN with three layers (input, hidden, output)
        # inputN  - number of input nodes
        # hiddenN - number of hidden nodes
        # outputN - number of output nodes
        # lr      - learning rate
        self.inN = inputN
        self.hiN = hiddenN
        self.outN = outputN
        self.lr = lr
        # weight matrices, centred around zero:
        # W11 W21 W31
        # W12 W22 W32
        # ...
        np.random.seed(40)
        self.wih = np.random.rand(self.hiN, self.inN) - 0.5
        self.who = np.random.rand(self.outN, self.hiN) - 0.5
    # NN computing method
    def feedforward(self, inputList):
        # computing hidden output
        inputs = np.array(inputList, ndmin=2).T
        self.Xh = np.dot(self.wih, inputs)
        self.af = lambda x: 1/(1+np.exp(-x))  # sigmoid activation
        self.Oh = self.af(self.Xh)
        # computing output
        self.Xo = np.dot(self.who, self.Oh)
        self.Oo = self.af(self.Xo)
        return self.Oo
    # NN training method
    def backpropagation(self, inputList, targetList):
        # data
        lr = self.lr
        inputs = np.array(inputList, ndmin=2).T
        target = np.array(targetList, ndmin=2).T
        # computing hidden layer
        Xh = np.dot(self.wih, inputs)
        af = lambda x: 1/(1+np.exp(-x))
        Oh = af(Xh)
        # computing output
        Xo = np.dot(self.who, Oh)
        Oo = af(Xo)
        # output error
        oe = target - Oo
        # error propagation back to the hidden layer
        hiddenE = np.dot(self.who.T, oe)
        # updating weights (gradient descent step)
        self.who += lr*np.dot(oe*Oo*(1-Oo), Oh.T)
        self.wih += lr*np.dot(hiddenE*Oh*(1-Oh), inputs.T)
        return self.wih, self.who
Exam¶
NN3 = NeuralNetwork(3,3,3,0.15)
NN3.wih
NN3.who
First feed¶
NN3.feedforward([0.43, 0.88, 0.95])
NN3.backpropagation([0.43, 0.88, 0.95], [0.25, 0.7, 0.1])
Second feed¶
NN3.feedforward([0.47, 0.07, 0.64])
NN3.backpropagation([0.47, 0.07, 0.64], [0.17, 0.67, 0.14])
Further feeds¶
Repeating the feed/backpropagate cycle a few more times keeps nudging the weights toward the targets; a sketch of the remaining iterations follows below.
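A minimal sketch, with hypothetical input and target values (not from the original runs):
samples = [([0.52, 0.31, 0.80], [0.30, 0.55, 0.20]),   # third feed
           ([0.11, 0.93, 0.42], [0.60, 0.25, 0.15]),   # fourth feed
           ([0.76, 0.18, 0.29], [0.40, 0.35, 0.30])]   # fifth feed
for inputList, targetList in samples:
    NN3.feedforward(inputList)
    NN3.backpropagation(inputList, targetList)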
MNIST Dataset¶
Reading the complete file:
data_file= open("mnist_train.csv", 'r')
data_list= data_file.readlines()
data_file.close()
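Each record is one comma-separated line: the first value is the digit label (0 to 9) and the remaining 784 values are the pixel intensities of the 28x28 image. A quick check:
print(len(data_list))      # number of training records
print(data_list[0][:40])   # label, then the first few pixel values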
Interpreting One Entry¶
import numpy as np
import matplotlib.pyplot as plt
all_values= data_list[2].split(',')
image_array= np.asfarray(all_values[1:]).reshape((28,28))
plt.imshow(image_array, cmap='Greys', interpolation='None')
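To confirm which digit this image represents, we can print the record's label (its first value):
print(all_values[0])   # the digit label for this record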
What Does the NN See?¶
plt.plot(np.asfarray(all_values[1:]), '.k')
Training a NN on the MNIST Dataset¶
# number of input, hidden and output nodes
input_nodes = 784
hidden_nodes = 200
output_nodes = 10
# learning rate
learning_rate = 0.1
# create instance of neural network
nn1 = NeuralNetwork(input_nodes, hidden_nodes, output_nodes, learning_rate)
# epochs is the number of times the training data set is used for training
epochs = 1
for e in range(epochs):
    # go through all records in the training data set
    for record in data_list:
        # split the record by the ',' commas
        all_values = record.split(',')
        # scale and shift the inputs
        inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
        # create the target output values (all 0.01, except the desired label which is 0.99)
        targets = np.zeros(output_nodes) + 0.01
        # all_values[0] is the target label for this record
        targets[int(all_values[0])] = 0.99
        nn1.backpropagation(inputs, targets)
all_values[0]
targets
Testing the Trained NN¶
# load the mnist test data CSV file into a list
test_data_file = open("mnist_test.csv", 'r')
test_data_list = test_data_file.readlines()
test_data_file.close()
all_values= test_data_list[55].split(',')
image_array= np.asfarray(all_values[1:]).reshape((28,28))
plt.imshow(image_array, cmap='Greys',interpolation='None')
nn1.feedforward((np.asfarray(all_values[1:])/255.0*0.99)+0.01)
nn1.who
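The single test above only eyeballs one prediction, and nn1.who just inspects the trained weights. As a minimal sketch (not in the original notebook), we can score the whole test set by taking the index of the strongest output node, np.argmax, as the predicted label:
scorecard = []
for record in test_data_list:
    all_values = record.split(',')
    correct_label = int(all_values[0])
    inputs = (np.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
    outputs = nn1.feedforward(inputs)
    predicted = np.argmax(outputs)   # highest output node wins
    scorecard.append(1 if predicted == correct_label else 0)
print("accuracy =", sum(scorecard) / len(scorecard))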