Random Forest averages the predictions of many decision trees, each trained on a bootstrap sample and considering a random subset of features at each split, to produce a strong, low‑variance model.

Key Parameters

  • n_estimators (number of trees), max_depth (depth cap), max_features (features considered per split)
  • min_samples_split and min_samples_leaf for regularization
  • class_weight for class imbalance (a configuration sketch follows this list)
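
A minimal configuration sketch assuming a scikit-learn workflow; the synthetic dataset and the specific parameter values below are illustrative, not recommendations.

    # Illustrative settings for the parameters above (scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=20,
                               weights=[0.8, 0.2], random_state=0)

    clf = RandomForestClassifier(
        n_estimators=500,         # more trees: lower variance, higher cost
        max_depth=None,           # grow trees fully; set a cap to regularize
        max_features="sqrt",      # features considered at each split
        min_samples_leaf=2,       # regularization via minimum leaf size
        class_weight="balanced",  # reweight classes for imbalance
        n_jobs=-1,
        random_state=0,
    )
    clf.fit(X, y)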

Validation

Use out‑of‑bag (OOB) estimates for a quick read on generalization, and a holdout set or cross‑validation for final metrics. Keep preprocessing within folds to avoid leakage.
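
A sketch of both checks, assuming the X and y from the previous snippet; the median imputer stands in for whatever preprocessing your data actually needs.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.impute import SimpleImputer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # OOB accuracy: each tree is scored on the samples left out of its bootstrap.
    oob_model = RandomForestClassifier(n_estimators=500, oob_score=True,
                                       n_jobs=-1, random_state=0)
    oob_model.fit(X, y)
    print("OOB accuracy:", oob_model.oob_score_)

    # Cross-validation with preprocessing kept inside each fold via a Pipeline,
    # so imputation statistics are learned from the training split only.
    pipe = make_pipeline(SimpleImputer(strategy="median"),
                         RandomForestClassifier(n_estimators=500, n_jobs=-1,
                                                random_state=0))
    scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print("CV ROC AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))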

Explainability

Permutation importances and partial dependence plots help communicate drivers; impurity importances provide a quick sanity check.
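
A sketch of these three views, assuming the fitted clf and the X and y from the earlier snippets; the train/test split and feature indices are illustrative.

    from sklearn.inspection import PartialDependenceDisplay, permutation_importance
    from sklearn.model_selection import train_test_split

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf.fit(X_train, y_train)

    # Permutation importance: drop in held-out score when one feature is shuffled.
    perm = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
    top = perm.importances_mean.argsort()[::-1][:5]
    print("Top features by permutation importance:", top)

    # Impurity-based importances: fast sanity check, but biased toward
    # high-cardinality features.
    print("Impurity importances:", clf.feature_importances_[top])

    # Partial dependence: average predicted response as one feature varies.
    PartialDependenceDisplay.from_estimator(clf, X_test, features=list(top[:2]))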

When to Use

As a strong baseline on tabular problems, or when you need good performance with reasonable explainability and minimal tuning.
