Lectures: Thursday 15-16, LT5 (Sciences)
In this course we will study the mathematical foundations of machine learning, with an emphasis on the interplay between approximation theory, statistics, and numerical optimization. We will begin with statistical learning theory, including the concepts of empirical risk minimization, regularization, and VC dimension. We will then study popular machine learning models, including deep neural networks, and analyse the underlying optimization methods. While the course is theoretical in nature, you are encouraged to experiment with Python and machine learning packages. Regular lecture notes will be published on these pages.
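As a first taste of empirical risk minimization, the following is a minimal sketch (plain NumPy; illustrative only, not course material) that fits an affine model to synthetic data by minimizing the empirical squared-error risk with gradient descent:

```python
import numpy as np

# Synthetic supervised-learning data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(100)

# Hypothesis class: affine functions f(x) = w*x + b.
w, b = 0.0, 0.0
lr = 0.5  # gradient-descent step size

for _ in range(200):
    resid = w * X[:, 0] + b - y
    # Gradients of the empirical risk (1/n) * sum_i (f(x_i) - y_i)^2.
    grad_w = 2.0 * np.mean(resid * X[:, 0])
    grad_b = 2.0 * np.mean(resid)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should end up close to the true slope 2 and intercept 1
```

The empirical risk minimizer here is just the least-squares fit; gradient descent is overkill for this problem, but the same loop structure carries over to the models studied later in the course.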
Intended Learning Outcomes
Upon completion of this module you should be able to:
Describe the problem of supervised learning from the point of view of function approximation, optimization, and statistics.
Identify the most suitable optimization and modelling approach for a given machine learning problem.
Analyse the performance of various optimization algorithms from the point of view of computational complexity (both space and time) and statistical accuracy.
Implement a simple neural network architecture and apply it to a pattern recognition task.
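To illustrate the last outcome, here is a minimal sketch of a one-hidden-layer network trained by full-batch gradient descent on a toy pattern-recognition task (XOR). This is plain NumPy and purely illustrative; the architecture, loss, and hyperparameters are choices made for this example, not a prescription for the course:

```python
import numpy as np

# Toy pattern-recognition task: learn XOR with a one-hidden-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass for the mean cross-entropy loss; for a sigmoid
    # output, the gradient w.r.t. the output pre-activation is (out - y)/n.
    d_out = (out - y) / len(X)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print((out > 0.5).astype(int).ravel())  # expected to match the targets 0, 1, 1, 0
```

The backward pass is just the chain rule applied layer by layer; in the lectures this will reappear as the backpropagation algorithm, analysed together with the optimization methods used to train such models.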
Brief lecture notes will be published regularly, usually in the days after each lecture. They will be available on the dedicated Lectures page.
Weekly problem sheets can be found on the Exercises page. Assessed work counts for 15% of your final mark. Of this, at most 2% may be earned each week (starting in the second week) by handing in one of the three indicated exercises, which will be marked with a score of 0, 1, or 2. Please let me know if any of the problems are unclear or contain typos.
Homework solutions must be placed in the drop-off box (near the front office) by 12:00 on Thursdays. No late work will be accepted. Please write your name, the date, and the module code (MA3K1) at the top of the page. If you collaborate with other students, please include their names.
Solutions typeset using LaTeX are preferred. Ideally, each problem should require at most one side of one page. Longer submissions will not be penalized, but try to be concise. If you find you need more space, write out a complete solution first and then rewrite it with conciseness in mind.
The following references are not required for the course, but they can provide additional information and perspective for those interested.
- Felipe Cucker and Ding Xuan Zhou. Learning Theory: An Approximation Theory Viewpoint. Cambridge University Press, 2007
- Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer, 2013
- Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2001
- Amir Beck. First-Order Methods in Optimization. SIAM, 2017
- Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2018
- Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014