{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Neural Networks Introduction\n",
"* Author: Johannes Maucher\n",
"* Last Update: 20.10.2020"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Natural Neuron\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"Neurons are the basic elements for information processing. A neuron consists of a cell-body, many dendrites and an axon. The neuron receives electrical signals from other neurons via the dendrites. In the cell-body all input-signals received via the dendrites are accumulated. If the accumulated electrical signal exceeds a certain threshold, the cell-body outputs an electrical signal via it's axon. In this case the neuron is said to be activated. Otherwise, if the accumulated input at the cell-body is below the threshold, the neuron is not active, i.e. it does not send a signal to connected neurons. The point, where dendrites of neurons are connected to axons of other neurons is called synapse. The synapse consists of an electrochemical substance. The conductivity of this substance depends on it's concentration of neurotransmitters. The process of learning adapts the conductivity of synapses and, i.e. the degree of connection between neurons. A single neuron can receive inputs from 10-100000 other neurons. However, there is only one axon, but multiple dendrites of other cell can be connected to this axon. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Artificial Neuron\n",
"\n",
"\n",
"
\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"The artificial model of a neuron is shown in the picture below. At the input of each neuron the weighted sum \n",
"\n",
"$$in=\\sum\\limits_{j=0}^d w_jx_j = \\mathbf{w}\\cdot \\mathbf{x^T}=(w_0, w_1, \\ldots, w_d) \\cdot (x_0, x_1, \\ldots, x_d)^T $$ \n",
"\n",
"is calculated. The values $x_j$ are the outputs of other neurons. Each $x_j$ is weighted by a scalar $w_j$, similar as in the natural model the signal-strength from a connected neuron is damped by the conductivity of the synapse. As in the natural model, learning of an artificial network means adaptation of the weights between neurons. Also, as in the natural model, the weighted sum at the input of the neuron is fed to an **activation function g()**, which can be a simple threshold-function that outputs a `1` if the weighted sum $in=\\sum\\limits_{j=0}^d w_jx_j$ exceeds a certain threshold and a `0` otherwise. "
]
},
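{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"The following cell is a minimal sketch of this computation in Python/NumPy. The feature values, weights and the threshold at zero are made up purely for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Made-up example values; x_0 = 1 is the constant bias input\n",
"x = np.array([1.0, 0.5, -1.2, 0.3])   # inputs  x_0, x_1, x_2, x_3\n",
"w = np.array([0.4, 0.8, 0.1, -0.5])   # weights w_0, w_1, w_2, w_3\n",
"\n",
"in_ = np.dot(w, x)        # weighted sum in = w * x^T\n",
"y = 1 if in_ >= 0 else 0  # threshold activation g(in)\n",
"print(in_, y)"
]
},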
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Activation Function\n",
"\n",
"The most common activation functions are:\n",
"\n",
"* **Threshold:**\n",
"\n",
"\t$$g(in)= \\left\\lbrace \\begin{array}{ll} 1, & in \\geq 0 \\\\ 0, & else \\\\ \\end{array} \\right.$$\n",
" \n",
"* **Sigmoid:** \n",
"\n",
"\t$$g(in)=\\frac{1}{1+exp(-in)}$$\n",
" \n",
"* **Tanh:** \n",
"\n",
"\t$$g(in)=\\tanh(in)$$\n",
"\n",
"* **Identity:**\n",
"\n",
"\t$$\n",
"\tg(in)=in\n",
"\t$$\n",
" \n",
"* **ReLu:**\n",
"\n",
" $$g(in)=max\\left( 0 , in \\right)$$\n",
" \n",
"* **Softmax:**\n",
"\n",
"\t$$g(in_i,in_j)=\\frac{\\exp(in_i)}{\\sum\\limits_{j=1}^{K} \\exp(in_j)}$$\n",
" "
]
},
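{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"The following cell sketches NumPy implementations of these activation functions; the input values are arbitrary and only serve to illustrate the value ranges."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Sketch of the activation functions listed above (not an optimized implementation)\n",
"def threshold(x):\n",
"    return np.where(x >= 0, 1.0, 0.0)\n",
"\n",
"def sigmoid(x):\n",
"    return 1.0 / (1.0 + np.exp(-x))\n",
"\n",
"def relu(x):\n",
"    return np.maximum(0.0, x)\n",
"\n",
"def softmax(x):\n",
"    e = np.exp(x - np.max(x))  # shift for numerical stability\n",
"    return e / e.sum()\n",
"\n",
"ins = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])\n",
"print(threshold(ins))\n",
"print(sigmoid(ins))\n",
"print(np.tanh(ins))     # tanh\n",
"print(relu(ins))\n",
"print(softmax(ins))     # the K=5 outputs sum to 1"
]
},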
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"\n",
"\n",
"All artificial neurons calculate the sum of weighted inputs $in$. Neurons differ in the activation function, which is applied on $in$. In the sections below it will be described how to choose an appropriate activation function. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"### Bias \n",
"Among the input-signals, $x_0$ has a special meaning. In contrast to all other $x_j$ the value of this so called **bias** is constant $x_0=1$. Instead of denoting the bias input to a neuron by $w_0 \\cdot x_0 = w_0$ it can also be written as $b$. I.e. \n",
"\n",
"$$in=\\sum\\limits_{j=0}^d w_jx_j \\quad \\mbox{ is equivalent to } \\quad in=\\sum\\limits_{j=1}^d w_jx_j+b$$\n",
"\n",
"Hence the following two graphical representations are equivalent:\n",
"\n",
"
"
]
},
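{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"A short sketch with made-up numbers, verifying that the two formulations of the bias are equivalent:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Made-up weights and inputs\n",
"x = np.array([0.5, -1.2, 0.3])   # features x_1, x_2, x_3\n",
"w = np.array([0.8, 0.1, -0.5])   # weights  w_1, w_2, w_3\n",
"w0 = 0.4                         # bias weight\n",
"\n",
"in_bias_input = np.dot(np.r_[w0, w], np.r_[1.0, x])  # convention x_0 = 1\n",
"in_bias_term = np.dot(w, x) + w0                     # explicit bias b = w_0\n",
"print(np.isclose(in_bias_input, in_bias_term))       # True"
]
},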
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Artificial Neural Networks: General Notions"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"### Layers\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"In Neural Networks neurons are arranged in layers. All Neurons of a single layer are of the same type, i.e. they apply the same activation function on the weighted sum at their input (see previous section). Each Neural Network has at least one input-layer and one output-layer. The number of neurons in the input-layer is determined by the number of features (attributes) in the given Machine-Learning problem. The number of neurons in the output-layer depends on the task. E.g. for **binary-classification** and **regression** only one neuron in the output-layer is requried, for classification into $K>2$ classes the output-layer consists of $K$ neurons.\n",
"\n",
"Actually, the **input-layer** is not considered as a *real* layer, since it only takes in the values of the current feature-vector, but does not perform any processing, such as calculating an activation function of a weighted sum. The input layer is ignored when determining the number of layers in a neural-network."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"For **example** for a binary credit-worthiness classification of customers, which are modelled by the numeric features *age, annual income, equity*, $3+1=4$ neurons are required at the input (3 neurons $x_1,x_2,x_3$ for the 3 features plus the constant bias $x_0=1$) and one neuron is required at the output. For non-numeric features at the input, the number of neurons in the inut-layer is not directly given by the number of features, since each non-numeric feature must be [One-Hot encoded](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) before passing it to the Neural Network. \n",
"\n",
"Inbetween the input- and the output-layer there may be zero, one or more other layers. The number of layers in a Neural Network is an essential architectural hyperparameter. **Hyperparameters** in Neural Networks, as well as in all other Machine Learning algorithms, are parameters, which are not learned automatically in the training phase, but must be configured from outside. Finding appropriate hyperparameters for the given task and the given data is possibly the most challenging task in machine-learning."
]
},
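{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"The following cell sketches how a single non-numeric feature could be one-hot encoded with scikit-learn's `OneHotEncoder` (linked above). The feature and its values are invented for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.preprocessing import OneHotEncoder\n",
"\n",
"# Made-up non-numeric feature with three categories\n",
"marital_status = np.array([['single'], ['married'], ['divorced'], ['married']])\n",
"\n",
"encoder = OneHotEncoder()\n",
"encoded = encoder.fit_transform(marital_status).toarray()  # one column (input neuron) per category\n",
"print(encoder.categories_)\n",
"print(encoded)"
]
},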
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Feedforward- and Recurrent Neural Networks\n",
"\n",
"In **Feedforward Neural Networks (FNN)** signals are propagated only in one direction - from the input- towards the output layer. In a network with $L$ layers, the input-layer is typically indexed by 0 and the output-layer's index is $L$ (as mentioned above the input-layer is ignored in the layer-count). Then in a FNN the output of layer $j$ can be passed to the input of neurons in layer $i$, if and only if $i>j$. \n",
"\n",
"**Recurrent Neural Networks (RNN)**, in contrast to FNNs, not only have forward connections, but also backward-connections. I.e the output of neurons in layer $j$ can be passed to the input of neurons in the same layer or to neurons in layers of index $k"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"As depicted above, for linear regression only a **single neuron in the output-layer** is required. The activation function $g()$ applied for regression is the **identity function**. The loss-function, which is minimized in the training procedure is the **sum of squared error**: \n",
"\n",
"$$SSE(T,\\Theta)= \\frac{1}{2} \\sum\\limits_{t=1}^N (r_t-y_t)^2 = \\frac{1}{2} \\sum\\limits_{t=1}^N \\left( r_t-\\sum\\limits_{j=0}^d w_j x_{j,t}\\right)^2,$$\n",
"\n",
"where $\\Theta=\\lbrace w_0,w_1,\\ldots, w_d \\rbrace$ is the set of weights, which are adapted in the training process."
]
},
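{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"A minimal sketch of training an SLP for regression by batch gradient descent on the SSE loss. The training data, learning rate and number of epochs are made up for illustration (they are not the data of the following example)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Made-up training data; column 0 of X is the constant bias input x_0 = 1\n",
"X = np.array([[1.0, 0.5], [1.0, 1.0], [1.0, 1.5], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])\n",
"r = np.array([0.9, 1.5, 1.9, 2.2, 3.3, 4.1])   # target values\n",
"\n",
"w = np.zeros(2)    # weights w_0 (bias) and w_1\n",
"eta = 0.01         # learning rate (a hyperparameter)\n",
"for epoch in range(2000):\n",
"    y = X @ w                 # identity activation: y = in\n",
"    grad = -(r - y) @ X       # gradient of SSE w.r.t. the weights\n",
"    w = w - eta * grad\n",
"print(w)           # learned weights define the line y = w_0 + w_1 * x_1"
]
},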
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"**Example:** The $N=5$ training-elements given in the table of the picture below contain only a single input feature $x_1$ and the corresponding target-value $r$. From these training-elements a SLP can learn a linear function $y=w_0+w_1 x_1$, which minimizes the loss-function SSE. \n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"### SLP for binary classification\n",
"\n",
"A SLP can be applied to learn a binary classifier from a set of N labeled observations \n",
"\n",
"$$T=\\lbrace(x_{1,t},x_{2,t},\\ldots,x_{d,t}),r_t \\rbrace_{t=1}^N,$$\n",
"\n",
"where $x_{j,t}$ is the j.th feature of the t.th training-element and $r_t \\in \\lbrace 0,1 \\rbrace$ is the class-index of the t.th training-element. \n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"As depicted above, for binary classification only a **single neuron in the output-layer** is required. The activation function $g()$ applied for binary classification is either the **threshold-** or the **sigmoid-function**. The threshold-function output values are either 0 or 1, i.e. this function can provide only a *hard* classifikcation-decision, with no further information on the certainty of this decision. In contrast the value range of the sigmoid-function covers all floats between 0 and 1. It can be shown that if the weighted-sum is processed by the sigmoid-function the output is an indicator for the a-posteriori propability that the given observation belongs to class $C_1$: \n",
"\n",
"$$P(C_1|(x_{1},x_{2}, ,x_{d}))=1-P(C_0|(x_{1},x_{2},\\ldots,x_{d})).$$\n",
"\n",
"If the output value \n",
"\n",
"$$y=sigmoid(\\sum\\limits_{j=0}^d w_j x_{j,t})$$\n",
"\n",
"is larger than 0.5 the observation $(x_{1},x_{2}, \\ldots,x_{d})$ is assigned to class $C_1$, otherwise it is assigned to class $C_0$. A value close to 0.5 indicates an uncertaion decision, whereas a value close to 0 or 1 indicates a certain decision.\n",
"\n",
"In the case that the sigmoid-activation function is applied, the loss-function, which is minimized in the training procedure is the **binary cross-entropy function**: \n",
"\n",
"$$L(T,\\Theta)= \\sum\\limits_{t=1}^N r_{t} \\log y_{t}+(1-r_{t}) \\log(1-y_{t}),$$\n",
"\n",
"where $r_t$ is the target class-index and $y_t$ is the output of the sigmoid-function, for the t.th training-element. Again, $\\Theta=\\lbrace w_0,w_1,\\ldots, w_d \\rbrace$ is the set of weights, which are adapted in the training process."
]
},
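{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"A sketch of the sigmoid output and the binary cross-entropy loss for a few made-up observations. For concreteness the weights $w_0=-3, w_1=0.6, w_2=1$ from the example below are used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"# Made-up observations; column 0 is the constant bias input x_0 = 1\n",
"X = np.array([[1.0, 2.0, 1.5], [1.0, 6.0, 4.0], [1.0, 3.0, 0.5]])\n",
"r = np.array([0, 1, 0])                  # target class-indices\n",
"w = np.array([-3.0, 0.6, 1.0])           # weights from the example below\n",
"\n",
"y = sigmoid(X @ w)                       # estimate of P(C_1 | x) per observation\n",
"predicted_class = (y > 0.5).astype(int)\n",
"bce = -np.sum(r * np.log(y) + (1 - r) * np.log(1 - y))   # binary cross-entropy\n",
"print(y, predicted_class, bce)"
]
},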
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"**Example:** The $N=9$ 2-dimensional labeled training-elements given in the table of the picture below are applied to learn a SLP for binary classification. The learned model can be specified by the parameters (weights) \n",
"\n",
"$$w_0=-3, w_1=0.6, w_2=1.$$ \n",
"\n",
"These weights define a line \n",
"\n",
"$$w_0+w_1x_1+w_2x_2=0 \\Longrightarrow x_2 = -\\frac{w_1}{w_2}x_1 -\\frac{w_0}{w_2} ,$$ \n",
"\n",
"whose slope is \n",
"\n",
"$$m=-\\frac{w_1}{w_2}=-0.6$$\n",
"\n",
"and whose intersection with the $x_2$-axis is \n",
"\n",
"$$\n",
"b=-\\frac{w_0}{w_2}=3. \n",
"$$\n",
"\n",
"\n",
"\n",
"
\n",
"\n",
"\n",
"Once this model, i.e. the set of weights, is learned it can be applied for classification as follows: A new observation $\\mathbf{x'}=(x'_1,x'_2)$ is inserted into the learned equation $w_0 \\cdot 1 + w_1 \\cdot x'_1 + w_2 \\cdot x'_2$. The result of this linear equation is passed to the sigmoid-function. If sigmoid-function's output is $>0.5$ the most probable class is $C_1$, otherwise it is $C_0$."
]
},
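{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"A sketch of applying the learned weights to a new observation. The weights are taken from the example above; the new observation $\\mathbf{x'}$ is made up."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"w0, w1, w2 = -3.0, 0.6, 1.0   # learned weights from the example above\n",
"\n",
"# decision boundary w0 + w1*x1 + w2*x2 = 0, written as the line x2 = m*x1 + b\n",
"m = -w1 / w2                  # slope = -0.6\n",
"b = -w0 / w2                  # intersection with the x2-axis = 3\n",
"print(m, b)\n",
"\n",
"x_new = np.array([2.0, 2.5])  # made-up new observation (x'_1, x'_2)\n",
"y = sigmoid(w0 + w1 * x_new[0] + w2 * x_new[1])\n",
"print(y, 'C1' if y > 0.5 else 'C0')"
]
},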
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"### SLP for classification in $K>2$ classes\n",
"\n",
"A SLP can be applied to learn a non-binary classifier from a set of N labeled observations \n",
"\n",
"$$T=\\lbrace(x_{1,t},x_{2,t}, \\ldots, x_{d,t}),r_t \\rbrace_{t=1}^N,$$\n",
"\n",
"where $x_{j,t}$ is the j.th feature of the t.th training-element and $r_t \\in \\lbrace 0,1 \\rbrace$ is the class-index of the t.th training-element. \n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"As depicted above, for classification into $K>2$ classes **K neurons are required in the output-layer**. The activation function $g()$ applied for non-binary classification is usually the **softmax-function**: \n",
"\n",
"$$g(in_i,in_j)=\\frac{\\exp(in_i)}{\\sum\\limits_{j=1}^{K} \\exp(in_j)} \\quad with \\quad in_j=\\sum\\limits_{j=0}^d w_j x_{j,t}$$ \n",
"\n",
"The softmax-function outputs for for each neuron in the output-layer a value $y_k$, with the property, that \n",
"\n",
"$$\\sum\\limits_{k=1}^K y_k = 1.$$ \n",
"\n",
"Each of these outputs is an indicator for the \n",
"a-posteriori propability that the given observation belongs to class $C_i$: \n",
"\n",
"$$P(C_i|(x_{1},x_{2}, \\ldots,x_{d})).$$\n",
"\n",
"The class, whose neuron outputs the maximum value is the most likely class for the current observation at the input of the SLP. \n",
"\n",
"\n",
"In the case that the softmax-activation function is applied, the loss-function, which is minimized in the training procedure is the **cross-entropy function**: \n",
"\n",
"$$L(T,\\Theta)= \\sum\\limits_{t=1}^N \\sum\\limits_{k=1}^K r_{t,k} \\log(y_{t,k}),$$\n",
"\n",
"where $\\Theta=\\lbrace w_0,w_1,\\ldots, w_d \\rbrace$ is the set of weights, which are adapted in the training process. $r_{t,k}=1$, if the t.th training-element belongs to class $k$, otherwise it is 0. $y_{t,k}$ is the output of the k.th neuron for the t.th training-element."
]
},
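{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"A sketch of the softmax output and the cross-entropy loss for a single made-up observation and $K=3$ classes. The weight matrix (one row $(w_{k,0},\\ldots,w_{k,d})$ per output neuron) and the one-hot target are invented for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def softmax(z):\n",
"    e = np.exp(z - np.max(z))   # shift for numerical stability\n",
"    return e / e.sum()\n",
"\n",
"# Made-up weights: one row (w_k0, w_k1, w_k2) per output neuron k = 1..3\n",
"W = np.array([[0.2, 1.0, -0.5],\n",
"              [-1.0, 0.3, 0.8],\n",
"              [0.5, -0.7, 0.1]])\n",
"x = np.array([1.0, 2.0, -1.0])    # made-up input with bias x_0 = 1\n",
"\n",
"y = softmax(W @ x)                # K outputs, summing to 1\n",
"print(y, y.sum())\n",
"print('most likely class:', np.argmax(y) + 1)\n",
"\n",
"r = np.array([1, 0, 0])           # one-hot target: observation belongs to class 1\n",
"cross_entropy = -np.sum(r * np.log(y))\n",
"print(cross_entropy)"
]
},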
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"toc-hr-collapsed": true,
"toc-nb-collapsed": true
},
"source": [
"Each output neuron has its own set of weights, and each weight-set defines a (d-1)-dimensional hyperplane in the d-dimensional space. However, now these hyperplanes are not the class boundary itself, but they determine the class boundaries, which are actually of convex shape as depicted below. In the picture below, the red area indicates the inputs, who yield a maximum output at the neuron, whose weights belong to the red line, the blue area is the of inputs, whose maximum value is at the neuron, which belongs to the blue line and the green area comprises the inputs, whose maximum value is at the neuron, which belongs to the green line. \n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Summary Single Layer Perceptron\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Multi Layer Perceptron\n",
"\n",
"\n",
"### Notations and Basic Characteristics\n",
"A Multi Layer Perceptron (MLP) with $L\\geq 2$ layers is a Feedforward Neural Network (FNN), which consists of \n",
"* an input-layer (which is actually not counted as *layer*)\n",
"* an output layer \n",
"* a sequence of $L-1$ hidden layers inbetween the input- and output-layer\n",
"\n",
"Usually the number of hidden layers is 1,2 or 3. All neurons of a layer are connected to all neurons of the successive layer. A layer with this property is also called a **fully-connected layer** or a **dense layer**. \n",
"\n",
"An example of a $L=3$ layer MLP is shown in the following picture. \n",
"\n",
"
\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As in the case of SLPs, the biases in MLP can be modelled implicitily by including to all non-output-layers a constant neuron $x_0=1$, or by the explicit bias-vector $\\mathbf{b^l}$ in layer $l$. In the picture above, the latter option is applied.\n",
"\n",
"In order to provide a unified description the following notation is used:\n",
"* the number of neurons in layer $l$ is denoted by $z_l$.\n",
"* the output of the layer in depth $l$ is denoted by the vector $\\mathbf{h^l}=(h_1^l,h_2^l,\\ldots,h_{z_l}^l)$, \n",
"* $\\mathbf{x}=\\mathbf{h^0}$ is the input to the network,\n",
"* $\\mathbf{y}=\\mathbf{h^L}$ is the network's output,\n",
"* $\\mathbf{b^l}$ is the bias-vector of layer $l$,\n",
"* $W^l$ is the weight-matrix of layer $l$. It's entry $W_{ij}^l$ is the weight from the j.th neuron in layer $l-1$ to the i.th neuron in layer $l$. Hence, the weight-matrix $W^l$ has $z_l$ rows and $z_{l-1}$ columns.\n",
"\n",
"With this notation the **Forward-Pass** of the MLP in the picture above can be calculated as follows:\n",
"\n",
"**Output of first hidden-layer:**\n",
"\n",
"$$\\left( \\begin{array}{c} h_1^1 \\\\ h_2^1 \\\\ h_3^1 \\\\ h_4^1 \\end{array} \\right) = g\\left( \\left( \\begin{array}{ccc} W_{11}^1 & W_{12}^1 & W_{13}^1 \\\\ W_{21}^1 & W_{22}^1 & W_{23}^1 \\\\ W_{31}^1 & W_{32}^1 & W_{33}^1 \\\\ W_{41}^1 & W_{42}^1 & W_{43}^1 \\end{array} \\right) \\left( \\begin{array}{c} x_1 \\\\ x_2 \\\\ x_3 \\end{array} \\right) + \\left( \\begin{array}{c} b_1^1 \\\\ b_2^1 \\\\ b_3^1 \\\\ b_4^1 \\end{array} \\right) \\right)$$\n",
"\n",
"\n",
"\n",
"**Output of second hidden-layer:**\n",
"\n",
"$$\\left( \\begin{array}{c} h_1^2 \\\\ h_2^2 \\\\ h_3^2 \\end{array} \\right) = g\\left( \\left( \\begin{array}{cccc} W_{11}^2 & W_{12}^2 & W_{13}^2 & W_{14}^2\\\\ W_{21}^2 & W_{22}^2 & W_{23}^2 & W_{24}^2\\\\ W_{31}^2 & W_{32}^2 & W_{33}^2 & W_{34}^2 \\end{array} \\right) \\left( \\begin{array}{c} h^1_1 \\\\ h^1_2 \\\\ h^1_3 \\\\ h^1_4 \\end{array} \\right) + \\left( \\begin{array}{c} b_1^2 \\\\ b_2^2 \\\\ b_3^2 \\end{array} \\right) \\right)$$\n",
"\n",
"**Output of the network:**\n",
"\n",
"$$y = \\left( \\begin{array}{c} h_1^3 \\\\ \\end{array} \\right) = g\\left( \\left( \\begin{array}{ccc} W_{11}^3 & W_{12}^3 & W_{13}^3 \\end{array} \\right) \\left( \\begin{array}{c} h^2_1 \\\\ h^2_2 \\\\ h^2_3 \\end{array} \\right) + \\left( \\begin{array}{c} b_1^3 \\end{array} \\right) \\right)$$"
]
},
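{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"source": [
"A sketch of this forward pass in NumPy for the 3-4-3-1 network above. The weights and biases are initialized randomly, and the sigmoid is used as activation $g()$ in every layer purely for illustration; the notation above does not fix a particular $g()$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"\n",
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"# weight matrices W^l (z_l rows, z_{l-1} columns) and bias vectors b^l, randomly initialized\n",
"W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)\n",
"W2, b2 = rng.normal(size=(3, 4)), rng.normal(size=3)\n",
"W3, b3 = rng.normal(size=(1, 3)), rng.normal(size=1)\n",
"\n",
"x = np.array([0.5, -1.0, 2.0])   # made-up network input h^0 = x\n",
"\n",
"h1 = sigmoid(W1 @ x + b1)        # output of the first hidden layer\n",
"h2 = sigmoid(W2 @ h1 + b2)       # output of the second hidden layer\n",
"y = sigmoid(W3 @ h2 + b3)        # network output h^3\n",
"print(h1, h2, y)"
]
},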
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"As in the case of Single Layer Perceptrons the three categories \n",
"* regression, \n",
"* binary classification \n",
"* $K$-ary classification \n",
"\n",
"are distinguished. The corresponding MLP output-layer is the same as in the case of a SLP."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"In contrast to SLPs, MLPs are able to **learn non-linear** models. This difference is depicted below: The left hand side shows the linear classification-boundary, as learned by a SLP, whereas on the right-hand side the non-linear boundary, as learned by a MLP from the same training data, is plotted. \n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Early MLP Example: Autonomos Driving\n",
"The ALVINN net is a MLP with one hidden layer. It has been designed and trained for *road following* in autonomous driving. The input has been provided by a simple $30 \\times 32$ greyscale camera. As shown in the picture below, the hidden layer contains only 4 neurons. In the output-layer each of the 30 neurons belongs to one \"steering-wheel-direction\". The training data has been collected by recording videos while an expert driver steers the car. For each frame (input) the steering-wheel-direction (label) has been tracked. \n",
"\n",
"
\n",
"\n",
"After training the vehicle cruised autonomously for 90 miles on a highway at a speed of up to 70mph. The test-highway has not been included in the training cruises. "
]
}
],
"metadata": {
"anaconda-cloud": {},
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.0"
},
"nav_menu": {},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": "block",
"toc_window_display": false
},
"toc-autonumbering": true,
"toc_position": {
"height": "918px",
"left": "0px",
"right": "1793.2px",
"top": "123.9px",
"width": "321px"
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}