Perceptron Implementation in Python
(Logical OR Gate Example)
The perceptron learning algorithm works as follows:
- Initialize the weights for all inputs (including the bias).
- Present an input xi to the network.
- Calculate the output for the given input; say it is zi.
- Update the weights as w = w + a * (actual - predicted) * xi, where a is the learning rate (a single-step sketch is shown after this list).
- Repeat from step 2 until convergence or until the maximum number of iterations is reached.
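As a minimal sketch of this update rule (the function name perceptron_update is an illustrative assumption, not part of the original listing; the step activation matches the one used in the implementation below):

import numpy as np

def perceptron_update(w, x, target, eta=0.2):
    # Threshold the weighted sum: 1 if w.x >= 0, else 0
    predicted = 1 if np.dot(w, x) >= 0 else 0
    # Shift the weights by eta * (target - predicted) * input
    return w + eta * (target - predicted) * x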
Let's implement the perceptron for the OR gate with the following training data (the third input column is the constant bias input b):

x1  x2  b   | output
0   0   1   |   0
0   1   1   |   1
1   0   1   |   1
1   1   1   |   1
import numpy as np
import random as rd

# Step activation: 0 for negative inputs, 1 otherwise
unit_step = lambda x: 0 if x < 0 else 1

# OR-gate training data; the third input is the constant bias term
train_data = [
    (np.array([0, 0, 1]), 0),
    (np.array([0, 1, 1]), 1),
    (np.array([1, 0, 1]), 1),
    (np.array([1, 1, 1]), 1),
]

# Initialize the three weights (two inputs plus bias) randomly
w = np.random.rand(3)
errors = []
eta = 0.2  # learning rate
n = 100    # number of training iterations

for i in range(n):
    # Pick a random training example
    x, expected = rd.choice(train_data)
    result = np.dot(w, x)
    # Error is the target minus the thresholded output
    error = expected - unit_step(result)
    errors.append(error)
    # Perceptron learning rule: w = w + eta * error * x
    w += eta * error * x
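The loop above samples training examples at random for a fixed number of iterations. A common alternative, sketched below assuming w, eta, unit_step, and train_data are defined as above, is to sweep the whole training set each epoch and stop as soon as a full pass produces no errors:

# Epoch-based variant with an explicit convergence check
for epoch in range(100):
    converged = True
    for x, expected in train_data:
        error = expected - unit_step(np.dot(w, x))
        if error != 0:
            converged = False
            w += eta * error * x  # same perceptron learning rule
    if converged:
        break  # an error-free full pass: training has converged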
# Verify the learned weights on all four inputs
for x, _ in train_data:
    result = np.dot(x, w)
    z = unit_step(result)
    print("{}->{}".format(x[:2], z))
Output (once training has converged, the perceptron reproduces the OR truth table):
[0 0]->0
[0 1]->1
[1 0]->1
[1 1]->1
import matplotlib.pyplot as plt

plt.plot(errors)  # per-iteration error
plt.show()
The plot of the error shows that after roughly the 20th iteration the error drops to zero and remains there through the 100th iteration, i.e., the perceptron has converged.
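Because the learned weights define a linear decision boundary w1*x1 + w2*x2 + b = 0 in the input plane, it can also be inspected directly once training has converged; a small illustrative sketch:

# The learned weights describe the separating line w[0]*x1 + w[1]*x2 + w[2] = 0
w1, w2, b = w
print("Decision boundary: {:.2f}*x1 + {:.2f}*x2 + {:.2f} = 0".format(w1, w2, b))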