ML at AIMS        Machine Learning Made Easy
                                       AIMS Cameroon

                                                      (Photos)
                                          Jan 25 to Feb 13, 2021
            Tue, Thu, Fri, & Sat 2-4:30pm (Cam) & 8-10:30am (Tor)
       20 hrs theory + 10 hrs practical = 2.5 hrs/day × 4 days/week × 3 weeks
   Tutorial exercises and other material: typically 7:30-9 (Cam) & 1:30-3 (Tor), once or twice a week.
The students will be taking another course at the same time. There will be one assignment per week, the last being a group assignment (teams of 4-5), plus a bi-weekly 10-minute quiz session.
Jeff Edmonds
Dept. EE and Computer Science
York University
Toronto Canada
Email: jeff@cse.yorku.ca
Jeff teaches Theoretical Computer Science at all levels.
Links: slides, Discord Instructions, GitHub, Schedule, Zoom Class, Recording, Zoom Tutorials, Lab document
Computers can now drive cars and find cancer in x-rays. For better or worse, this will change the world (and the job market). Strangely, designing these algorithms is done neither by telling the computer what to do nor by understanding what the computer does. The computers learn on their own from lots and lots of data and lots of trial and error. This learning process is more analogous to how brains evolved over billions of years of learning. The machine itself is a neural network, which models both the brain and silicon AND-OR-NOT circuits, both of which are great for computing.

The distinguishing feature of neural networks is that what they compute is determined by weights, and small changes in these weights give small changes in the result of the computation. The process of finding an optimal setting of these weights is analogous to finding the bottom of a valley. "Gradient Descent" achieves this by using the local slope of the hill (the derivatives) to direct the travel down the hill, i.e., by making small changes to the weights. There is also some theory: if we find a machine that gives the correct answers on randomly chosen training data without simply memorizing it, then we can prove that, with high probability, the same machine will also work well on never-before-seen instances.
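To make gradient descent concrete, here is a minimal sketch in Python (not part of the course materials): a single weight w is fit to the toy model y = w*x; the data, learning rate, and step count are made up for illustration.

    import random

    # Toy training data for y = 3x plus a little noise.
    training_data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in range(10)]

    w = 0.0                  # start somewhere on the error surface
    learning_rate = 0.001
    for step in range(1000):
        # Squared error E(w) = sum (w*x - y)^2 has slope dE/dw = sum 2*(w*x - y)*x.
        slope = sum(2 * (w * x - y) * x for x, y in training_data)
        w -= learning_rate * slope   # a small step downhill
    print(w)                 # ends near 3.0: the bottom of the valley

Each iteration nudges w a little in the downhill direction given by the slope, exactly the "small changes to the weights" described above.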

Practical Machine Learning: (Possible Teachers)
Laily Ajellu
Pedram Ahadinejad
Ariel Freeman Fawcett
Chester Wyke
Sarah Vollmer
Tyler Thomson

Theory of Machine Learning: slides New
Magic, Overview, Training Data, Machine, Error Surface, Learning, Gradient Descent, Generalizing, Singularity
    86 min: Invited Talk (April 2019)
    94 & 99 min: Two classes (Feb 2020)
    More classes (Feb 2019):
      32 min: Overview
      33 min: Generalizing from Training Data
      16 min: Linear Regression, Neural Networks 1
      14 min: Neural Networks 2
      30 min: Matrix Multiplication
      16 min: Error
      3 min: Compression Example
      14 min: Gradient Descent, Steepest Direction
      1 hr 40 min: Review
More Advanced Topics
    Practical Considerations
    Back Propagation
    Convolutional, Recurrent
    Generative Adversarial Networks
    Reinforcement Learning, Markov Chains
    Bayesian Inference
    Decision Trees, Clustering
    Maximum Likelihood
    Dimension Reduction
    Generalizing from Training Data
    VC-Dimension, Sigmoid, Singularity


If we run out of material: More Topics
Building a Computer from Sand
           From silicon, to AND gates, to circuits, to computers, to self-driving cars.
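           As a toy illustration (not part of the course materials), here is
           a one-bit half adder built from AND/OR/NOT gates in Python:

               def AND(a, b): return a & b
               def OR(a, b):  return a | b
               def NOT(a):    return 1 - a

               def half_adder(a, b):
                   # XOR (exactly one of a, b) built from AND/OR/NOT.
                   s = OR(AND(a, NOT(b)), AND(NOT(a), b))  # sum bit
                   c = AND(a, b)                           # carry bit
                   return s, c

               print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10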
39 min: Asymptotic Analysis of Time Complexity
           We classify algorithms based on whether they run in polynomial or
           exponential time.
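           For a feel of the divide (an illustrative sketch, numbers made up),
           brute-force subset-sum tries all 2^n subsets, so it becomes
           infeasible long before a polynomial-time task like sorting does:

               from itertools import combinations

               def subset_sum(nums, target):  # exponential: checks 2^n subsets
                   return any(sum(c) == target
                              for r in range(len(nums) + 1)
                              for c in combinations(nums, r))

               # n = 60 already means 2^60 subsets to check, while sorting
               # 60 numbers is instant.
               print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # True (8 + 7)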
Loop Invariants for Iterative Algorithms
           Jeff strongly believes that this is the most important topic in
           Algorithms. Instead of worrying about the entire computation, only
           worry about one step at a time - make progress while maintaining the
           loop invariant.
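           A classic small example of this style (illustrative, not from the
           course notes) is binary search, with the invariant "if the key is
           present, it lies in a[lo:hi]":

               def binary_search(a, key):
                   lo, hi = 0, len(a)
                   while lo < hi:            # invariant holds here...
                       mid = (lo + hi) // 2
                       if a[mid] < key:
                           lo = mid + 1      # ...and progress is made
                       else:
                           hi = mid
                   return lo < len(a) and a[lo] == key

               print(binary_search([1, 3, 5, 7, 9], 7))  # True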
Recursive Algorithms
           Again, do not try to understand the entire computation. Trust your
           friends to solve smaller instances of your problem and use these to
           solve your own instance.
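           For instance (an illustrative sketch), merge sort hands each half
           of the list to a "friend" and only combines their answers:

               def merge_sort(a):
                   if len(a) <= 1:
                       return a                # base case
                   mid = len(a) // 2
                   left = merge_sort(a[:mid])  # a friend sorts a smaller instance
                   right = merge_sort(a[mid:]) # another friend sorts the rest
                   out, i, j = [], 0, 0        # merge the friends' answers
                   while i < len(left) and j < len(right):
                       if left[i] <= right[j]:
                           out.append(left[i]); i += 1
                       else:
                           out.append(right[j]); j += 1
                   return out + left[i:] + right[j:]

               print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]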
Pointers & Balanced Trees
           One data structure points at another, forming linked lists and trees.
           They are often a challenge for new programmers in C or Java.
           A great data structure for storing objects is a binary tree. If it is
           balanced, then its depth with n nodes is only log(n). Hence, operations
           are quick.
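           A minimal sketch of pointer-linked nodes forming a binary search
           tree (names are illustrative):

               class Node:
                   def __init__(self, key):
                       self.key = key
                       self.left = None    # pointer to smaller keys
                       self.right = None   # pointer to larger keys

               def insert(root, key):
                   if root is None:
                       return Node(key)
                   if key < root.key:
                       root.left = insert(root.left, key)
                   else:
                       root.right = insert(root.right, key)
                   return root

               root = None
               for k in [5, 2, 8, 1]:
                   root = insert(root, k)
               # If the tree stays balanced, each insert follows only
               # about log2(n) pointers.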
Probabilities & Causality
           It is important to have a good understanding of the basics of
           probability theory. Topics include randomized algorithms and
           calculating probabilities differently to handle causality.
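           As a tiny randomized-algorithm sketch (illustrative only), a Monte
           Carlo estimate of pi: a random point in the unit square lands in
           the quarter circle with probability pi/4.

               import random

               trials = 100_000
               hits = sum(1 for _ in range(trials)
                          if random.random() ** 2 + random.random() ** 2 <= 1.0)
               print(4 * hits / trials)  # roughly 3.14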
First Order Logic (Quantifiers)
           Before a student can understand or prove anything in mathematics, it
           is essential to first be able to represent it in first-order
           logic. Hence, Jeff reviews it in each of his courses.
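           As a small illustration (not from the course notes), quantifier
           order can be checked with Python's any/all; note that
           "for all x there exists y" is weaker than "there exists y for all x":

               loves = {("Ann", "Bob"), ("Bob", "Cam"), ("Cam", "Ann")}
               people = {"Ann", "Bob", "Cam"}
               # ∀x ∃y Loves(x,y): everyone loves someone.
               print(all(any((x, y) in loves for y in people) for x in people))  # True
               # ∃y ∀x Loves(x,y): someone is loved by everyone (stronger).
               print(any(all((x, y) in loves for x in people) for y in people))  # False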
Common Knowledge: Mud on forehead

Request: Jeff tends to talk too fast. Please help him go
pole pole (Swahili for "slowly").