Section 2
Support Vector Machines
7. Introducing: Support Vector Machines
07:44 (Preview)
8. Support Vector Machines to Maximise Decision Margins 📂
25:06
9. A Code Walkthrough for SVMs 📂
32:55
10. Overlapping Classes and Kernel SVMs 📂
21:06
11. Experimenting with Overlapping Class Distributions 📂
25:33
12. Using Kernel SVMs for Non-Linear Predictions 📂
11:36
13. Support Vector Machines in the Wild 📂
17:16
14. Solving Regression Problems with SVMs
22:37
15. Comparing Least-Squares with SVM Regression 📂
56:07
16. Conclusion, Certificate, and What Next?
04:39
7. Introducing: Support Vector Machines
Support Vector Machines

Welcome to the second section of the course. In this section we shall be introducing a highly effective machine learning algorithm: the Support Vector Machine (SVM). Applicable to both classification and regression problems, these robust algorithms also provide a novel explanation of the decision process via their support vectors.
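To make the idea concrete before the lessons begin, here is a minimal sketch of fitting a linear SVM classifier and inspecting its support vectors. It assumes scikit-learn is available; the lessons in this section may use a different library or their own implementation, so treat this as an illustration rather than the course code.

```python
# Minimal sketch: fit a linear SVM classifier and inspect its support vectors.
# Assumes scikit-learn; the course material may use a different toolkit.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters of points, one per class.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

# A linear SVM finds the maximum-margin separating hyperplane.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# Only the support vectors (points on or inside the margin) define the
# decision surface, which is what makes the model's decisions explainable.
print("Support vectors per class:", clf.n_support_)
print("Support vectors:\n", clf.support_vectors_)
```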

We shall cover:

  • Maximising the margin between data classes (the decision surface margin).
  • The role of support vectors when applying SVMs to data problems.
  • Non-parametric forms of SVMs via the kernel trick (sketched in code after this list).
  • Overlapping class distributions in noisy data.
  • SVM regression: sparse regression obtained by allowing an error margin around the fit.
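The sketch below roughly illustrates the last three topics, again using scikit-learn as an assumed library: an RBF-kernel SVM for a non-linear, noisy class boundary, and epsilon-insensitive SVM regression. The hyperparameter values are illustrative, not the settings used in the lessons.

```python
# Rough sketch of kernel SVM classification and SVM regression (scikit-learn).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC, SVR

# Kernel trick: an RBF kernel separates classes that are not linearly
# separable in the input space; C controls how much class overlap
# (misclassification) is tolerated for noisy data.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("Kernel SVM training accuracy:", clf.score(X, y))

# SVM regression: points inside the epsilon tube around the fit incur no
# loss, so only points outside it become support vectors -- a sparse fit.
X_reg = np.linspace(0, 6, 100).reshape(-1, 1)
y_reg = np.sin(X_reg).ravel() + 0.1 * np.random.default_rng(0).normal(size=100)
reg = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_reg, y_reg)
print("Fraction of points kept as support vectors:",
      len(reg.support_vectors_) / len(X_reg))
```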
Next Lesson
8. Support Vector Machines to Maximise Decision Margins