Syllabus for Statistical Machine Learning
- 5 credits
- Course code: 1RT700
- Education cycle: Second cycle
Main field(s) of study and in-depth level:
Image Analysis and Machine Learning A1N,
Computer Science A1N,
Data Science A1N
Explanation of codes
The code indicates the education cycle and in-depth level of the course in relation to other courses within the same main field of study according to the requirements for general degrees:
- G1N: has only upper-secondary level entry requirements
- G1F: has less than 60 credits in first-cycle course/s as entry requirements
- G1E: contains specially designed degree project for Higher Education Diploma
- G2F: has at least 60 credits in first-cycle course/s as entry requirements
- G2E: has at least 60 credits in first-cycle course/s as entry requirements, contains degree project for Bachelor of Arts/Bachelor of Science
- GXX: in-depth level of the course cannot be classified
- A1N: has only first-cycle course/s as entry requirements
- A1F: has second-cycle course/s as entry requirements
- A1E: contains degree project for Master of Arts/Master of Science (60 credits)
- A2E: contains degree project for Master of Arts/Master of Science (120 credits)
- AXX: in-depth level of the course cannot be classified
- Grading system: Fail (U), Pass (3), Pass with credit (4), Pass with distinction (5)
- Established: 2016-03-08
- Established by: The Faculty Board of Science and Technology
- Revised: 2017-05-02
- Revised by: The Faculty Board of Science and Technology
- Applies from: Autumn 2017
- Entry requirements: 120 credits including Probability and Statistics, Linear Algebra II, Single Variable Calculus, and a course in introductory programming.
- Responsible department: Department of Information Technology
Learning outcomes
Students who pass the course should be able to:
- Structure and divide statistical learning problems into tractable sub-problems, formulate a mathematical solution to the problems and implement this solution using statistical software.
- Use and develop linear and nonlinear models for classification and regression.
- Describe the limitations of linear models and understand how these limitations can be handled using nonlinear models.
- Explain how the quality of a model can be evaluated, and how model selection and tuning can be done using cross-validation.
- Explain the trade-off between bias and variance.
- Describe the difference between parametric and nonparametric models.
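The model-selection and bias-variance outcomes above can be made concrete with a short sketch. The following is an illustrative example, not part of the course materials; the data and degree range are hypothetical. It uses k-fold cross-validation to choose a polynomial degree: low degrees underfit (high bias), high degrees overfit (high variance).

```python
import numpy as np

# Hypothetical data: a nonlinear ground truth with additive noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(0, 0.2, 60)

def cv_mse(degree, k=5):
    """Mean squared validation error of a degree-`degree` polynomial,
    estimated by k-fold cross-validation."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)          # hold out one fold
        coef = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coef, x[fold])
        errs.append(np.mean((y[fold] - pred) ** 2))
    return np.mean(errs)

# Pick the degree with the lowest cross-validated error.
scores = {d: cv_mse(d) for d in range(1, 10)}
best = min(scores, key=scores.get)
print("chosen degree:", best)
```

A straight line (degree 1) cannot follow sin(3x), so its validation error exceeds that of the selected model; very high degrees chase the noise instead.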
This is an introductory course in statistical machine learning, focusing on classification and regression. Topics include linear regression; regularization (ridge regression and LASSO); classification via logistic regression; linear discriminant analysis; classification and regression trees; boosting; neural networks and deep learning. Practical considerations include cross-validation, model selection, the bias-variance trade-off, and applying the methods to real data.
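As a small taste of the regularization topic, here is a self-contained sketch (my own illustration, not the course's code; the data are synthetic) of ridge regression in closed form, showing how the L2 penalty shrinks the least-squares coefficients.

```python
import numpy as np

# Synthetic regression problem with known coefficients.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(0, 0.1, 50)

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)    # lam = 0 recovers ordinary least squares
beta_reg = ridge(X, y, 10.0)   # a positive penalty shrinks the estimate
print(np.linalg.norm(beta_reg) < np.linalg.norm(beta_ols))
```

The norm of the ridge solution is non-increasing in the penalty, which is the shrinkage behavior that trades a little bias for lower variance.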
Instruction
Lectures, problem-solving sessions (both with and without computer), laboratory work, and feedback on written assignments.
Assessment
Written exam combined with oral and written presentation of assignments.
Syllabus versions
- Latest syllabus (applies from Autumn 2023)
- Previous syllabus (applies from Autumn 2020)
- Previous syllabus (applies from Spring 2019)
- Previous syllabus (applies from Autumn 2017)
- Previous syllabus (applies from Autumn 2016)
Reading list
Applies from: Autumn 2017
Some titles may be available electronically through the University library.
James, Gareth; Witten, Daniela; Hastie, Trevor; Tibshirani, Robert. An Introduction to Statistical Learning: with Applications in R. New York, NY: Springer, 2013.