AI

Artificial Intelligence

Prerequisites:


Contents and Program:


Mind-Map:


Lectures:

Exercises & Past Exams:

Exam 2022/MM/DD

  1. Techniques used to estimate PDFs with ANNs.
  2. General questions on RBFs.
  3. Why is an RBF better than a GMM?
  4. How to compute the mean and the variance of the RBF Gaussians (constraint satisfaction)?
  5. Hill-Climbing Search.
  6. Hill-Climbing Search: 8-Queens Problem.
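
Items 5–6 above concern hill-climbing search on the 8-queens problem. As a minimal sketch (not from the original notes): a state assigns one queen per column, the objective counts attacking pairs, and steepest-descent hill climbing repeatedly moves to the best neighbouring state until no neighbour improves — which may leave it stuck in a local minimum, hence the random restarts in the usage example.

```python
import random

def conflicts(state):
    """Number of attacking queen pairs; state[c] = row of the queen in column c."""
    n = len(state)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            # Same row, or same diagonal (row distance equals column distance).
            if state[i] == state[j] or abs(state[i] - state[j]) == j - i:
                count += 1
    return count

def hill_climb(n=8, seed=None):
    """Steepest-descent hill climbing: move to the best neighbour until stuck."""
    rng = random.Random(seed)
    state = [rng.randrange(n) for _ in range(n)]
    while True:
        current = conflicts(state)
        if current == 0:
            return state                      # global minimum: a solution
        best, best_state = current, None
        # Neighbours: move a single queen to another row in its column.
        for col in range(n):
            for row in range(n):
                if row != state[col]:
                    neighbour = state.copy()
                    neighbour[col] = row
                    c = conflicts(neighbour)
                    if c < best:
                        best, best_state = c, neighbour
        if best_state is None:
            return state                      # local minimum: may still have conflicts
        state = best_state

# Usage: plain hill climbing often gets stuck, so restart from random states.
for s in range(100):
    solution = hill_climb(8, seed=s)
    if conflicts(solution) == 0:
        break
```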

Original File:

Exam 2022/09/09

  • The input (i.e., the activation) of a generic hidden unit in an artificial neural network is always given by the sum of weighted outputs from the previous layer

    • False
  • A linear discriminant function can never minimize the probability of error

    • False
  • The product between the scaled likelihood and the evidence is a particular type of pdf

    • True
  • Any given GMM can be realized exactly by an equivalent RBF

    • True
  • The kn-Nearest Neighbour technique always converges asymptotically to the true pdf underlying the data sample

    • Only if the conditions for convergence are respected (kn → ∞ and kn/n → 0 as n → ∞)
  • Mixtures of experts do not estimate PDFs

    • It depends
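
The kn-NN point above can be made concrete. As a minimal 1-D sketch (not from the original notes): the estimate is p(x) ≈ k / (n·V), where V is the volume (here, the interval length) spanned by the k nearest neighbours of x; choosing kn = √n satisfies the convergence conditions kn → ∞ and kn/n → 0.

```python
import numpy as np

def knn_density(x, sample, k):
    """kn-NN density estimate at x: p(x) ~ k / (n * V),
    with V the length of the interval reaching the k-th nearest neighbour."""
    n = len(sample)
    dists = np.sort(np.abs(sample - x))
    r_k = dists[k - 1]          # distance to the k-th nearest neighbour
    volume = 2.0 * r_k          # 1-D "volume": the interval [x - r_k, x + r_k]
    return k / (n * volume)

# Usage: estimate the standard normal density at 0 from a sample.
rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=10_000)
k = int(np.sqrt(len(sample)))   # kn = sqrt(n) meets kn -> inf, kn/n -> 0
est = knn_density(0.0, sample, k)
# True N(0,1) density at 0 is 1/sqrt(2*pi) ~ 0.3989; est should be close.
```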


All My Notes

For the best reading experience with these and all other notes, and to EDIT them if you wish, do as follows:

  1. Install Obsidian, or another Markdown editor.
  2. Go to the GitHub link of this or another note.
  3. Download the whole repo, or just the ‘content/’ folder if you know git.
  4. Extract just the ‘content/’ folder from the repo zip file.
  5. Open Obsidian >> Manage Vaults >> Open folder as vault >> and select the ‘content/’ folder you just extracted.

PLEASE NOTE:

  • These notes were not revised by the professors, so take all of them with a grain of salt.
  • However, since they are written in Markdown, you can EDIT them once downloaded; please do so.
  • If you edit and “upgrade” them, please pass the new version on to the other students and the professors.

Here are all the links to my notes: