
Perceptron learning rule [2 P]

Modify the convergence proof to prove the convergence of the following correction procedure: starting with an arbitrary initial weight vector $ {\bf w}(0)$, obtain $ {\bf w}(s+1)$ according to

$\displaystyle {\bf w}(s+1) = {\bf w}(s) + \eta(s)(2 t_k-1){\bf x}_k, \qquad k = (s \bmod N)+1$

not only if $ (t_k-{\bf y}({\bf x}_k)) \neq 0$, i.e. $ {\bf w}^T(s) {\bf x}_k (2 t_k-1) \leq 0$, but if and only if $ {\bf w}^T(s) {\bf x}_k (2 t_k-1)$ fails to exceed a margin $ b$, where $ \eta(s)$ is bounded by $ 0 \le \eta(s) \le \eta_b < \infty$. What happens if $ b$ is negative?
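Before attempting the proof, it may help to see the modified correction procedure in action. The following is a minimal sketch (not part of the exercise) of the margin-based rule with a constant learning rate $ \eta(s) = \eta$ within the stated bound; the function name, toy data, and stopping criterion are illustrative assumptions, not given in the exercise.

```python
import numpy as np

def train_margin_perceptron(X, t, b=1.0, eta=1.0, max_sweeps=100):
    """Sketch of the margin-based correction procedure.

    Targets t_k are in {0, 1}, so (2 t_k - 1) is the signed label
    in {-1, +1}. Samples are visited cyclically, k = (s mod N) + 1,
    and w is corrected whenever w^T x_k (2 t_k - 1) fails to exceed
    the margin b. Here eta(s) is held constant (an assumption)."""
    N, d = X.shape
    w = np.zeros(d)                        # arbitrary initial weight w(0)
    for _ in range(max_sweeps):
        corrected = False
        for k in range(N):
            s_k = 2 * t[k] - 1             # signed target in {-1, +1}
            if w @ X[k] * s_k <= b:        # margin b not exceeded
                w = w + eta * s_k * X[k]   # correction step
                corrected = True
        if not corrected:                  # every sample clears the margin
            return w
    return w

# Linearly separable toy data, bias absorbed as a constant 1 component
X = np.array([[ 1.0,  2.0, 1.0], [ 2.0,  1.0, 1.0],
              [-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0]])
t = np.array([1, 1, 0, 0])
w = train_margin_perceptron(X, t, b=1.0)
print(all((X @ w) * (2 * t - 1) > 1.0))  # True: all samples exceed b
```

Note that, unlike the standard rule, corrections here also occur for correctly classified samples whose score lies in $ (0, b]$; the sketch stops once every sample strictly exceeds the margin.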

Haeusler Stefan 2013-01-16