
Commit 06ab071

committed
removing the extra superscript
1 parent 74036d5 commit 06ab071

2 files changed: 3 additions & 3 deletions

File tree

docs/equations/pymle-equations.pdf

-102 Bytes
Binary file not shown.

docs/equations/pymle-equations.tex

Lines changed: 3 additions & 3 deletions
@@ -13,7 +13,7 @@
 \title{Python Machine Learning\\ Equation Reference}
 \author{Sebastian Raschka \\ \texttt{mail@sebastianraschka.com}}
-\date{ \vspace{2cm} 05\slash 04\slash 2015 (last updated: 06\slash 19\slash 2016) \\\begin{flushleft} \vspace{2cm} \noindent\rule{10cm}{0.4pt} \\ Code Repository and Resources:: \href{https://github.com/rasbt/python-machine-learning-book}{https://github.com/rasbt/python-machine-learning-book} \vspace{2cm} \endgraf @book\{raschka2015python,\\
+\date{ \vspace{2cm} 05\slash 04\slash 2015 (last updated: 07\slash 06\slash 2016) \\\begin{flushleft} \vspace{2cm} \noindent\rule{10cm}{0.4pt} \\ Code Repository and Resources:: \href{https://github.com/rasbt/python-machine-learning-book}{https://github.com/rasbt/python-machine-learning-book} \vspace{2cm} \endgraf @book\{raschka2015python,\\
 title=\{Python Machine Learning\},\\
 author=\{Raschka, Sebastian\},\\
 year=\{2015\},\\
@@ -273,13 +273,13 @@ \section{Artificial neurons -- a brief glimpse into the early history of machine
 Let's assume that $x_{j}^{(i)}=0.5$ and we misclassify this sample as $-1$. In this case, we would increase the corresponding weight by $1$ so that the net input $x_{j}^{i} \times w_{j}^{(i)}$ will be more positive the next time we encounter this sample and thus will be more likely to be above the threshold of the unit step function to classify the sample as $+1$:
 
 \[
-\Delta w_{j}^{(i)} = (1--1)0.5 = (2)0.5 = 1
+\Delta w_{j} = (1--1)0.5 = (2)0.5 = 1
 \]
 
 The weight update is proportional to the value of $x_{j}^{(i)}$. For example, if we have another sample $x_{j}^{(i)}=2$ that is incorrectly classified as $-1$, we'd push the decision boundary by an even larger extent to classify this sample correctly the next time:
 
 \[
-\Delta w_{j}^{(i)} = (1--1)2 = (2)2 = 4.
+\Delta w_{j} = (1--1)2 = (2)2 = 4.
 \]
 
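The corrected equations apply the perceptron learning rule, $\Delta w_j = \eta \, (y^{(i)} - \hat{y}^{(i)}) \, x_j^{(i)}$ with learning rate $\eta = 1$. The two worked examples in the diff can be sketched in Python; the function name `perceptron_weight_update` is illustrative, not taken from the book's code:

```python
def perceptron_weight_update(y_true, y_pred, x_j, eta=1.0):
    """Perceptron learning rule: delta_w_j = eta * (y_true - y_pred) * x_j."""
    return eta * (y_true - y_pred) * x_j

# First example from the text: true label +1, misclassified as -1, x_j = 0.5
print(perceptron_weight_update(1, -1, 0.5))  # (1 - (-1)) * 0.5 = 1.0

# Second example: same misclassification, but x_j = 2
print(perceptron_weight_update(1, -1, 2))    # (1 - (-1)) * 2 = 4.0
```

Note that the update is proportional to $x_j^{(i)}$, which is exactly the point the surrounding text makes: larger feature values push the decision boundary further per mistake.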