- Turing Machine
- AI Discussion
- Classification Problem
- Types of Features
- Discriminant Functions
- Gaussian Distribution
- Bayes Theorem
- Decision Boundary or Decision Rule
- Bayes Decision Rule
- ML (Maximum Likelihood) Estimator
- First X Principal Components
- Validation of Classifiers
- Supervised Learning
- Estimated Probability
- Classifier for Probability Estimation - K-NN, Parzen Window and Kernels
- K-NN (K-Nearest Neighbour)
- Parzen Window
- Kernels
- Differences Between K-NN, Parzen Window and Kernel Classifiers
- Kn-NN (Kn-Nearest Neighbour)
- University AI - Non-Parametric Decision Rule
- ANN (Artificial Neural Network)
- MLP (Multilayer Perceptron)
- SP (Simple Perceptron)
- Learning Methods for ANN
- Gradient Descent
- Delta Rule
- Practical Insights for training ANNs
- Main Supervised Learning Tasks
- Universality of MLP
- Mixture of Experts
- Autoencoders
- Normal Uses of an Autoencoder
- Pattern Recognition and Probability Estimation using ANNs
- Lippmann's Theorem (Richard Lippmann)
- Estimate Class-Conditional PDFs using MLPs
- RBF (Radial Basis Function) Networks
- ML Estimations of Mixture Densities
- CNN (Competitive Neural Network)
- Train an ANN in an Unsupervised Manner (TODO)
- Density Estimation
- Parzen Neural Network (TODO)
- Symbolic AI (Problem Solving)
- Exploration vs. Exploitation Algorithms (TODO)
- State as a Solution