feature_1 and feature_2
predict()
The perceptron model iteratively determines $A$, $B$, and $C$ by looking at every point in the data it is trained on.
Because this is greater than zero, we predict it to be in class 1:
if result > 0:
    predict class 1 and return 1
else:
    predict class 2 and return -1
For our example, we return 1!
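The decision rule above can be sketched in Python (a minimal sketch; the function name `predict` echoes the section heading, and the weights $(1,1,1)$ and point $(2,3)$ with a bias input of 1 follow the running example):

```python
import numpy as np

def predict(weights, x):
    """Return 1 (class 1) if the weighted sum is positive, else -1 (class 2)."""
    result = np.dot(weights, x)
    return 1 if result > 0 else -1

# Running example: weights (A, B, C) = (1, 1, 1), point (2, 3) with bias input 1
weights = np.array([1.0, 1.0, 1.0])
x = np.array([2.0, 3.0, 1.0])
print(predict(weights, x))  # 1*2 + 1*3 + 1*1 = 6 > 0, so we predict class 1
```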
But we know the class (because we are using supervised learning)!
Let's assume we were wrong, so the data is actually in class 2. Since the prediction was wrong, we update the weights using this equation:
$$ \vec{w}_{new} = \vec{w}_{old} + \eta*d*\vec{x}$$
where $\eta$ is the learning rate and $d$ = actual_class_value - predicted_class_value (as long as the class labels are 1 and -1).
We predicted class 1 (class_label = 1), but the data is in class 2 (class_label = -1). So the update to the weights is:
$$update = \eta*d*\vec{x} = \eta*(-1 - 1)*(2,3,1) = (-4,-6,-2)*\eta$$
where we choose $\eta$; let's take it to be 0.01. So the update is:
$$update = (-4,-6,-2)*0.01 = (-0.04, -0.06, -0.02)$$
We add this to the guessed weights:
$$ \vec{w}_{new} = \vec{w}_{old} + update = (1,1,1) + (-0.04, -0.06, -0.02) = (0.96,0.94,0.98)$$
What if we had instead predicted class 2, matching the known class? In that case, the predicted and known classes are the same, so the update is:
$$update = \eta*d*\vec{x} = \eta*(-1 - (-1))*(2,3,1)$$
$$update = \eta*(0)*(2,3,1) = (0,0,0)$$
And there's no change to the weights because we did OK!
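The two update cases above can be checked numerically (a sketch; the variable names `eta`, `d`, and `update` are just illustrative):

```python
import numpy as np

eta = 0.01                      # learning rate we chose
x = np.array([2.0, 3.0, 1.0])   # point (2, 3) with bias input 1
w = np.array([1.0, 1.0, 1.0])   # guessed weights

# Wrong prediction: actual label -1, predicted 1, so d = -2
d = -1 - 1
update = eta * d * x            # 0.01 * (-2) * (2, 3, 1) = (-0.04, -0.06, -0.02)
print(w + update)               # matches the worked example: (0.96, 0.94, 0.98)

# Correct prediction: actual label -1, predicted -1, so d = 0 and nothing changes
d = -1 - (-1)
print(eta * d * x)              # (0, 0, 0): no change to the weights
```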
This means perceptrons don't find the "best line", just a line that separates the data.
fit()
for the number of iterations we choose:
    for each point in the data we have:
        predict the class
        update the weights
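Putting the pieces together, the training loop above might look like this (a minimal sketch: `n_iterations`, the all-ones initial weights, and the tiny toy dataset are assumptions for illustration, not from the original):

```python
import numpy as np

def predict(w, x):
    # Class 1 (label 1) if A*x1 + B*x2 + C > 0, else class 2 (label -1)
    return 1 if np.dot(w, x) > 0 else -1

def fit(X, y, eta=0.01, n_iterations=100):
    w = np.ones(X.shape[1] + 1)                    # initial guess: all ones
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append the bias input 1
    for _ in range(n_iterations):        # for the number of iterations we choose
        for x, actual in zip(Xb, y):     # for each point in the data we have
            d = actual - predict(w, x)   # 0 when correct, +/-2 when wrong
            w = w + eta * d * x          # update the weights
    return w

# Tiny linearly separable toy set: (2, 3) is class 2, (-2, -3) is class 1
X = np.array([[2.0, 3.0], [-2.0, -3.0]])
y = np.array([-1, 1])
w = fit(X, y)
print(predict(w, np.array([2.0, 3.0, 1.0])))   # now correctly predicts class 2
```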