6. Random Forests

# 6.1 Introduction

Decision trees leave you with a difficult decision. A deep tree with lots of leaves will overfit because each prediction is coming from historical data from only the few houses at its leaf. But a shallow tree with few leaves will perform poorly because it fails to capture as many distinctions in the raw data.

Even today's most sophisticated modeling techniques face this tension between underfitting and overfitting. But many models have clever ideas that can lead to better performance. We'll look at the random forest as an example.

The random forest uses many trees, and it makes a prediction by averaging the predictions of each component tree. It generally has much better predictive accuracy than a single decision tree and it works well with default parameters. If you keep modeling, you can learn more models with even better performance, but many of those are sensitive to getting the right parameters.
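To make the averaging idea concrete, here is a minimal sketch (not scikit-learn's actual implementation; the real algorithm also typically randomizes which features are considered at each split): fit several decision trees on bootstrap samples of the training data and average their predictions. The toy data and variable names below are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

# Illustrative toy data (not the Melbourne housing data used below)
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

rng = np.random.RandomState(0)
trees = []
for _ in range(10):
    # Bootstrap sample: draw rows with replacement
    idx = rng.randint(0, len(X), size=len(X))
    trees.append(DecisionTreeRegressor(random_state=0).fit(X[idx], y[idx]))

# The "forest" prediction is the average of the individual tree predictions
forest_pred = np.mean([tree.predict(X) for tree in trees], axis=0)
print(forest_pred[:5])
```

scikit-learn's RandomForestRegressor automates this bagging-and-averaging procedure, which is what the example below uses.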

# 6.2 Example

You've already seen the code to load the data a few times. At the end of data-loading, we have the following variables:

  • train_X
  • val_X
  • train_y
  • val_y
```python
import pandas as pd

# Load data
melbourne_file_path = '../input/melbourne-housing-snapshot/melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
# Filter rows with missing values
melbourne_data = melbourne_data.dropna(axis=0)
# Choose target and features
y = melbourne_data.Price
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'BuildingArea', 'YearBuilt', 'Lattitude', 'Longtitude']
X = melbourne_data[melbourne_features]

from sklearn.model_selection import train_test_split

# split data into training and validation data, for both features and target
# The split is based on a random number generator. Supplying a numeric value to
# the random_state argument guarantees we get the same split every time we
# run this script.
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)
```

We build a random forest model similarly to how we built a decision tree in scikit-learn, this time using the RandomForestRegressor class instead of DecisionTreeRegressor.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

forest_model = RandomForestRegressor(random_state=1)
forest_model.fit(train_X, train_y)
melb_preds = forest_model.predict(val_X)
print(mean_absolute_error(val_y, melb_preds))
```

# 6.3 Conclusion

There is likely room for further improvement, but this is a big improvement over the best decision tree error of 250,000. There are parameters that let you change the performance of the Random Forest, much as we changed the maximum depth of the single decision tree. But one of the best features of Random Forest models is that they generally work reasonably well even without this tuning.
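As a rough sketch of what that tuning could look like, the loop below tries a few values of n_estimators and max_leaf_nodes, mirroring the max_leaf_nodes search from the previous lesson. It assumes the train_X, val_X, train_y, and val_y variables defined above; the candidate values are illustrative, not recommendations from the lesson.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Candidate values are illustrative only
for n_estimators in [50, 100, 200]:
    for max_leaf_nodes in [None, 100, 500]:
        model = RandomForestRegressor(n_estimators=n_estimators,
                                      max_leaf_nodes=max_leaf_nodes,
                                      random_state=1)
        model.fit(train_X, train_y)
        mae = mean_absolute_error(val_y, model.predict(val_X))
        print(n_estimators, max_leaf_nodes, round(mae))
```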

You'll soon learn the XGBoost model, which provides better performance when tuned well (though it requires some skill to get the model parameters right).
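As a preview only (a minimal sketch, not part of this lesson): XGBoost's scikit-learn wrapper follows the same fit/predict pattern. This assumes the xgboost package is installed and the train/validation variables from above; the parameter values are illustrative.

```python
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error

# Parameter values are illustrative; tuning them properly is a later topic
xgb_model = XGBRegressor(n_estimators=500, learning_rate=0.05, random_state=1)
xgb_model.fit(train_X, train_y)
print(mean_absolute_error(val_y, xgb_model.predict(val_X)))
```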

# 6.4 Exercise: Random Forests

Try using a Random Forest model yourself and see how much it improves your model.

# Recap

Here's the code you've written so far.

```python
# Code you have previously used to load data
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor


# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'

home_data = pd.read_csv(iowa_file_path)
# Create target object and call it y
y = home_data.SalePrice
# Create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]

# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)

# Specify Model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit Model
iowa_model.fit(train_X, train_y)

# Make validation predictions and calculate mean absolute error
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE when not specifying max_leaf_nodes: {:,.0f}".format(val_mae))

# Using best value for max_leaf_nodes
iowa_model = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)
iowa_model.fit(train_X, train_y)
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE for best value of max_leaf_nodes: {:,.0f}".format(val_mae))


# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex6 import *
print("\nSetup complete")
```

Data science isn't always this easy. But replacing the decision tree with a Random Forest is going to be an easy win.

# Step 1: Use a Random Forest

```python
from sklearn.ensemble import RandomForestRegressor

# Define the model. Set random_state to 1
rf_model = RandomForestRegressor(random_state=1)

# fit your model
rf_model.fit(train_X, train_y)

# Calculate the mean absolute error of your Random Forest model on the validation data
rf_val_mae = mean_absolute_error(val_y, rf_model.predict(val_X))

print("Validation MAE for Random Forest Model: {}".format(rf_val_mae))

step_1.check()
```

So far, you have followed specific instructions at each step of your project. This helped you learn key ideas and build your first model, but now you know enough to try things on your own.

Machine Learning competitions are a great way to try your own ideas and learn more as you independently navigate a machine learning project.
