Course Outline

DAY 1 - ARTIFICIAL NEURAL NETWORKS

Introduction and ANN Structure.

  • Biological neurons versus artificial neurons.
  • Structural model of an ANN.
  • Activation functions employed in ANNs.
  • Common classes of network architectures.
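As a preview of this block, the structural model of a single artificial neuron, a weighted sum of inputs passed through an activation function, can be sketched in a few lines of Python (the weights, bias, and inputs below are purely illustrative):

```python
import math

def sigmoid(z):
    # Logistic activation: maps any real input into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Structural model of a neuron: weighted sum plus bias, then activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Illustrative values only.
output = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```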

Mathematical Foundations and Learning Mechanisms.

  • Review of vector and matrix algebra.
  • State-space concepts.
  • Optimization principles.
  • Error-correction learning.
  • Memory-based learning.
  • Hebbian learning.
  • Competitive learning.
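Of these mechanisms, Hebbian learning is the simplest to sketch: a weight grows when its input and the neuron's output are active together (Δw_i = η·x_i·y). A minimal illustration (the learning rate and input pattern are arbitrary):

```python
def hebbian_update(weights, x, eta=0.1):
    # Output is the weighted sum; Hebb's rule: dw_i = eta * x_i * y.
    y = sum(w * xi for w, xi in zip(weights, x))
    return [w + eta * xi * y for w, xi in zip(weights, x)]

# Repeated presentation of one pattern strengthens the correlated weights.
w = [0.5, 0.0]
for _ in range(10):
    w = hebbian_update(w, [1.0, 1.0])
```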

Single Layer Perceptrons.

  • Structure and learning processes of perceptrons.
  • Introduction to pattern classifiers and Bayes' classifiers.
  • Application of perceptrons as pattern classifiers.
  • Perceptron convergence criteria.
  • Limitations inherent to perceptrons.
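The perceptron's error-correction rule can be sketched on a small linearly separable problem such as logical AND (the learning rate and epoch count are arbitrary; the perceptron convergence theorem guarantees a solution is reached here):

```python
def perceptron_train(samples, epochs=20, eta=1.0):
    # Rosenblatt's rule: adjust weights only on a misclassification.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
    return w, b

# Logical AND is linearly separable; XOR, famously, is not.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
```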

Feedforward ANNs.

  • Structures of multi-layer feedforward networks.
  • The backpropagation algorithm.
  • Training and convergence in backpropagation.
  • Functional approximation using backpropagation.
  • Practical considerations and design issues in backpropagation learning.
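A compact sketch of backpropagation for a 2-2-1 sigmoid network trained on XOR with squared error (the network size, learning rate, and iteration count are illustrative choices, not prescriptions from the course):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training data: 2 inputs, 1 target output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One hidden layer of 2 units; random initial weights.
W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)

eta = 0.5
losses = []
for _ in range(2000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    losses.append(float(np.mean((Y - T) ** 2)))
    # Backward pass: chain rule through output and hidden layers.
    dY = (Y - T) * Y * (1 - Y)        # delta at output layer
    dH = dY @ W2.T * H * (1 - H)      # delta at hidden layer
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(0)
```

Tracking the loss over iterations makes the convergence behaviour of the algorithm visible.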

Radial Basis Function (RBF) Networks.

  • Pattern separability and interpolation.
  • Regularization theory.
  • Integration of regularization with RBF networks.
  • Design and training of RBF networks.
  • Approximation capabilities of RBF networks.
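Exact RBF interpolation illustrates the design idea: one Gaussian basis function per training point and a linear solve for the output weights (the centers, width, and target function here are illustrative):

```python
import numpy as np

# Interpolate a 1-D function at six centers.
centers = np.linspace(0, 1, 6)
targets = np.sin(2 * np.pi * centers)

def phi(x, c, width=0.2):
    # Gaussian radial basis function centered at c.
    return np.exp(-((x - c) ** 2) / (2 * width ** 2))

# Interpolation matrix: one basis function per training point.
Phi = phi(centers[:, None], centers[None, :])
weights = np.linalg.solve(Phi, targets)

def rbf_net(x):
    # Network output: weighted sum of basis-function responses.
    return phi(x, centers) @ weights
```

By construction the network reproduces the targets exactly at the centers; regularization trades this exactness for smoother behaviour between them.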

Competitive Learning and Self-Organizing ANNs.

  • General clustering procedures.
  • Learning Vector Quantization (LVQ).
  • Algorithms and architectures for competitive learning.
  • Self-organizing feature maps.
  • Properties of feature maps.
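Winner-take-all competitive learning can be sketched as moving the nearest prototype toward each input, an online flavour of k-means clustering (the synthetic clusters and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated synthetic clusters around (0, 0) and (3, 3).
data = np.vstack([rng.normal(0, 0.1, (50, 2)),
                  rng.normal(3, 0.1, (50, 2))])
rng.shuffle(data)

# Two prototype vectors; only the winning prototype is updated.
protos = np.array([[1.0, 1.0], [2.0, 2.0]])
eta = 0.1
for x in data:
    winner = np.argmin(((protos - x) ** 2).sum(axis=1))
    protos[winner] += eta * (x - protos[winner])  # move winner toward input
```

A self-organizing map extends this by also updating the winner's neighbours on a grid, which is what gives the feature map its topology-preserving properties.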

Fuzzy Neural Networks.

  • Neuro-fuzzy systems.
  • Foundations of fuzzy sets and logic.
  • Design of fuzzy systems.
  • Design of fuzzy ANNs.
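The building block of a fuzzy system is the membership function; below is a triangular example with Zadeh's min/max operators for fuzzy AND/OR (the "cold"/"warm" sets and their breakpoints are invented for illustration):

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peak of 1 at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Degrees of membership of a 12-degree temperature in two fuzzy sets.
cold = tri(12.0, 0, 10, 20)
warm = tri(12.0, 10, 20, 30)
both = min(cold, warm)  # fuzzy AND via the min operator
```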

Applications.

  • Discussion of various Neural Network application examples, highlighting their advantages and challenges.

DAY 2 - MACHINE LEARNING

  • The PAC Learning Framework
    • Guarantees for finite hypothesis sets – consistent case
    • Guarantees for finite hypothesis sets – inconsistent case
    • General Considerations
      • Deterministic vs. stochastic scenarios
      • Bayes error and noise
      • Estimation and approximation errors
      • Model selection
  • Rademacher Complexity and VC Dimension
  • Bias-Variance Tradeoff
  • Regularization
  • Overfitting
  • Validation techniques
  • Support Vector Machines
  • Kriging (Gaussian Process regression)
  • PCA and Kernel PCA
  • Self-Organizing Maps (SOM)
  • Kernel-induced vector spaces
    • Mercer Kernels and kernel-induced similarity metrics
  • Reinforcement Learning
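Among these topics, PCA lends itself to a compact numerical sketch: diagonalize the sample covariance matrix and project onto the leading eigenvector (the synthetic data and its variance structure are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: most variance along the first axis.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
Xc = X - X.mean(axis=0)

# Eigendecomposition of the sample covariance matrix.
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # returned in ascending order
order = np.argsort(eigvals)[::-1]           # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the first principal component.
scores = Xc @ eigvecs[:, :1]
```

Kernel PCA follows the same recipe after replacing inner products with a Mercer kernel, which is where the kernel-induced vector spaces above come in.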

DAY 3 - DEEP LEARNING

This module will be taught in conjunction with the topics covered on Day 1 and Day 2.

  • Logistic and Softmax Regression
  • Sparse Autoencoders
  • Vectorization, PCA, and Whitening
  • Self-Taught Learning
  • Deep Networks
  • Linear Decoders
  • Convolution and Pooling
  • Sparse Coding
  • Independent Component Analysis
  • Canonical Correlation Analysis
  • Demos and Applications
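As a small taste of this module, the softmax regression forward pass maps linear scores to a probability distribution over classes (the layer sizes and random weights below are illustrative):

```python
import numpy as np

def softmax(z):
    # Subtract the row-wise max for numerical stability, then normalize.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # 4 input features, 3 classes
b = np.zeros(3)

x = rng.normal(size=(1, 4))   # one input example
probs = softmax(x @ W + b)    # probabilities summing to 1 across classes
```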

Requirements

A solid understanding of mathematics is essential.

Familiarity with fundamental statistics is required.

While basic programming skills are not strictly mandatory, they are highly recommended.

21 Hours
