Code: 18NES1 Neural Networks 1
Lecturer: RNDr. Zuzana Petříčková, Ph.D. Weekly load: 2P+2C Completion: GA
Department: 14118 Credits: 5 Semester: S
Description:
The aim of the course "Neural Networks 1" is to acquaint students with the basic models of artificial neural networks, algorithms for their learning, and related machine learning techniques, and to teach students how to apply these models and methods to practical tasks.
Contents:
1. Introduction to Artificial Neural Networks. History, biological motivation, learning and machine learning. Types of tasks. Solving a machine learning task.
2. Perceptron. Mathematical model of a neuron and its geometric interpretation. Early models of neural networks: perceptrons with step activation function. Representation of logical functions using perceptrons and perceptron networks. Examples.
3. Perceptron. Learning algorithms (Hebb, Rosenblatt,...). Linear separability. Linear classification. Overview of basic transfer functions for neurons.
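The Rosenblatt learning rule mentioned above can be sketched on the logical AND function, which is linearly separable. This is a minimal illustration, not course material: the learning rate, epoch count, and zero initialization are arbitrary choices.

```python
import numpy as np

# Rosenblatt perceptron learning on the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND is linearly separable

w = np.zeros(2)
b = 0.0
eta = 0.1  # learning rate (illustrative choice)

for epoch in range(20):
    for xi, ti in zip(X, y):
        out = 1 if xi @ w + b > 0 else 0  # step activation
        w += eta * (ti - out) * xi        # update weights only on error
        b += eta * (ti - out)

pred = [(1 if xi @ w + b > 0 else 0) for xi in X]
print(pred)  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop reaches a separating hyperplane; for XOR the same loop would cycle forever.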
4. Linear neuron and the task of linear regression. Learning algorithms (least squares method, pseudoinverse, gradient method, regularization). Linear neural network, linear regression, logistic regression.
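The pseudoinverse solution of the linear regression task above can be sketched in a few lines; the toy data set is an illustrative assumption.

```python
import numpy as np

# Least-squares fit of a linear neuron y ≈ w0 + w1*x via the
# Moore-Penrose pseudoinverse.
x = np.array([0.0, 1.0, 2.0, 3.0])
t = np.array([1.0, 3.0, 5.0, 7.0])  # exactly t = 1 + 2x

X = np.column_stack([np.ones_like(x), x])  # design matrix with a bias column
w = np.linalg.pinv(X) @ t                  # minimizes ||X w - t||^2
print(w)  # ≈ [1.0, 2.0]
```

The same weights would be found by gradient descent on the squared error; the pseudoinverse gives the closed-form least-squares solution directly.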
5. Single-layer neural network. Model description, transfer and error functions, gradient learning method and its variants. Associative memories, recurrent associative memories. Types of tasks, training data.
6. Feedforward neural network. Backpropagation algorithm, derivation, variants, practical applications.
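One backpropagation pass for a small sigmoid network can be sketched and verified against a numerical gradient. The 2-2-1 architecture, squared-error loss, and random seed are illustrative assumptions.

```python
import numpy as np

# Backpropagation for a 2-2-1 sigmoid network, checked by finite differences.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(W1, W2, x, t):
    # forward pass
    h = sigmoid(W1 @ x)                  # hidden activations
    y = sigmoid(W2 @ h)                  # output
    L = 0.5 * np.sum((y - t) ** 2)
    # backward pass: chain rule, layer by layer
    d_y = (y - t) * y * (1 - y)          # delta at the output layer
    d_h = (W2.T @ d_y) * h * (1 - h)     # delta propagated to the hidden layer
    return L, np.outer(d_h, x), np.outer(d_y, h)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2))
W2 = rng.normal(size=(1, 2))
x = np.array([1.0, 0.0])
t = np.array([1.0])

L, g1, g2 = loss_and_grads(W1, W2, x, t)

# finite-difference check on one weight of W1
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
Lp, _, _ = loss_and_grads(W1p, W2, x, t)
print(abs((Lp - L) / eps - g1[0, 0]) < 1e-5)  # True: backprop matches
```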
7. Analysis of layered neural network model (learning rate and approximation capability, ability to generalize). Techniques that accelerate learning and techniques that enhance the model's generalization ability. Shallow vs. deep layered neural networks.
8. Clustering and self-organizing artificial neural networks. K-means algorithm, hierarchical clustering.
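The k-means algorithm from this lecture can be sketched as alternating assignment and update steps (Lloyd's algorithm); the two-cluster toy data and deterministic initialization are illustrative assumptions.

```python
import numpy as np

# Minimal k-means (Lloyd's algorithm) on two well-separated clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
centers = X[[0, 2]].copy()  # deterministic init: one point from each cluster

for _ in range(10):
    # assignment step: each point goes to its nearest center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # update step: each center moves to the mean of its assigned points
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(labels)  # [0 0 1 1]
```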
9. Competitive models, Kohonen maps, learning algorithms. Hybrid models (LVQ, Counter-propagation, RBF model, modular neural networks).
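The Kohonen map update can be sketched as a competitive step (find the best-matching unit) followed by a neighborhood-weighted pull toward the input; the 1-D map size, Gaussian neighborhood, and fixed rates below are illustrative assumptions rather than the course's exact schedule.

```python
import numpy as np

# One training sweep of a tiny 1-D Kohonen map (SOM).
rng = np.random.default_rng(1)
weights = rng.uniform(0, 1, size=(5, 2))   # 5 map units, 2-D inputs
data = rng.uniform(0, 1, size=(100, 2))
eta, sigma = 0.5, 1.0                      # learning rate, neighborhood width

for x in data:
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    dist = np.abs(np.arange(5) - bmu)                     # grid distance to BMU
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))           # Gaussian neighborhood
    weights += eta * h[:, None] * (x - weights)           # pull units toward x
```

In practice both `eta` and `sigma` are decayed over time so that the map first orders globally and then fine-tunes locally.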
10-11. Convolutional neural networks. Convolution operations. Architecture. Typical tasks. Transfer learning.
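The convolution operation used in CNN layers (technically a "valid" cross-correlation) can be sketched with explicit loops; the image and filter below are illustrative.

```python
import numpy as np

# "Valid" 2-D cross-correlation, as computed by a convolutional layer.
def conv2d_valid(img, kernel):
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)  # intensity rises by 1 along each row
edge = np.array([[1.0, -1.0]])       # horizontal difference filter
out = conv2d_valid(img, edge)
print(out)  # every entry is -1.0
```

The same small filter is slid over every position, which is what gives convolutional layers their weight sharing and translation equivariance.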
12. Vanilla recurrent neural networks. Processing sequential data.
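The forward pass of a vanilla (Elman-style) recurrent network over a short sequence can be sketched as a single tanh recurrence; the layer sizes and random weights are illustrative assumptions.

```python
import numpy as np

# Forward pass of a vanilla RNN: the same weights are applied at every step,
# and the hidden state h carries information across the sequence.
rng = np.random.default_rng(42)
Wxh = rng.normal(scale=0.5, size=(3, 2))  # input -> hidden
Whh = rng.normal(scale=0.5, size=(3, 3))  # hidden -> hidden (the recurrence)
b = np.zeros(3)

h = np.zeros(3)                           # initial hidden state
sequence = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
for x in sequence:
    h = np.tanh(Wxh @ x + Whh @ h + b)

print(h.shape)  # (3,)
```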
13. Probabilistic models (Hopfield network, simulated annealing, Boltzmann machine).
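The Hopfield network from this lecture can be sketched with Hebbian storage and asynchronous recall; the stored pattern and the corrupted probe are illustrative.

```python
import numpy as np

# Hebbian storage and asynchronous recall in a small Hopfield network.
p = np.array([1, -1, 1, -1, 1, -1])  # pattern to store (+/-1 coding)
W = np.outer(p, p).astype(float)     # Hebbian weights
np.fill_diagonal(W, 0.0)             # no self-connections

state = p.copy()
state[0] = -state[0]                 # flip one bit to corrupt the probe

for _ in range(3):                   # a few asynchronous update sweeps
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, p))  # True: the stored pattern is recovered
```

Each asynchronous update can only lower the network's energy, so recall settles into the stored pattern, the nearest local minimum to the probe.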

Seminar contents:
The syllabus corresponds to the structure of the lectures.
Recommended literature:
[1] R. Rojas: Neural Networks: A Systematic Introduction, Springer-Verlag, Berlin, 1996.
[2] S. Haykin: Neural Networks, Macmillan, New York, 1994.
[3] L.V. Fausett: Fundamentals of Neural Networks: Architectures, Algorithms and Applications, Prentice Hall, New Jersey, 1994.
[4] I. Goodfellow, Y. Bengio, A. Courville: Deep Learning, MIT Press, 2016.
[5] E. Volná: Neuronové sítě 1, Ostrava, 2008.
Keywords:
Artificial Neural Networks, Perceptron, Shallow Neural Network, Self-Organization, Convolutional Neural Networks
