10. Give Yourself a Gradient Boost
Gradient Boosting

Welcome to the final section of our course 'Machine Thinking', in which we introduce our last professional ML tool: Gradient Boosted Trees. A close relative of the Random Forest, gradient boosting builds its ensemble sequentially, each new tree correcting the errors of the ensemble so far, and it uses far simpler weak learners: trees stripped down to the stump. The result is a lightweight, fast, and effective algorithm.
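
To make that concrete before the worked lessons, here is a minimal sketch of the idea, assuming a toy 1-D regression task and illustrative hyperparameters (the data and settings below are placeholders, not the course's own example): each stump is fit to the residuals of the ensemble built so far.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy 1-D regression data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=200)

n_stumps, learning_rate = 100, 0.1
prediction = np.full_like(y, y.mean())  # start the ensemble at the mean
stumps = []

for _ in range(n_stumps):
    residuals = y - prediction                      # what the ensemble still gets wrong
    stump = DecisionTreeRegressor(max_depth=1)      # a weak learner: one split, two leaves
    stump.fit(X, residuals)
    prediction += learning_rate * stump.predict(X)  # nudge predictions toward the target
    stumps.append(stump)

print("training MSE:", np.mean((y - prediction) ** 2))
```

The learning rate shrinks each stump's contribution, so the ensemble improves gradually rather than overcommitting to any single correction.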

We take a deep dive into a popular library for this technique: XGBoost, short for 'eXtreme Gradient Boosting'. It is possibly the most widely deployed machine learning algorithm of its kind, known for its predictive power, rich functionality, and scope for fine-tuning.
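
As a first taste of the API we explore in this section, here is a sketch of training a model through XGBoost's scikit-learn interface; the synthetic dataset and parameter values are assumptions for illustration only.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Synthetic stand-in data; later lessons use the course's own prototype dataset
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBRegressor(
    n_estimators=200,   # number of boosted trees
    max_depth=3,        # shallow trees keep each learner "weak"
    learning_rate=0.1,  # shrinkage applied to each tree's contribution
)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```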

Shining a Light on the Stumps

In this section we explore gradient boosting and XGBoost through the following topics:

  • Introduction to the technique of Gradient Boosted Trees.
  • Key mathematical construction for intuition.
  • Worked data example: coding the algorithm from scratch.
  • XGBoost in the wild: how does it fare on our prototype data?
  • XGBoost functionality and cross validation (see the sketch after this list).
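
On that last point, the following sketch shows the shape of cross validation with XGBoost's native xgb.cv API; the data, parameters, and metric are assumptions for illustration, not the settings used in the lessons.

```python
import xgboost as xgb
from sklearn.datasets import make_regression

# Synthetic stand-in for the course's prototype data
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

params = {"max_depth": 3, "eta": 0.1, "objective": "reg:squarederror"}
cv_results = xgb.cv(
    params,
    dtrain,
    num_boost_round=200,
    nfold=5,
    metrics="rmse",
    early_stopping_rounds=10,  # stop adding trees once validation RMSE stalls
    seed=0,
)
print(cv_results.tail())  # per-round train/test RMSE averaged across folds
```
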
Next Lesson
11. Auto-Correction in a Forest of Stumps