7. Exercise Machine Learning Competitions

# 7.1 Introduction

Machine learning competitions are a great way to improve your data science skills and measure your progress.

In this exercise, you will create and submit predictions for a Kaggle competition. You can then improve your model (e.g., by adding features) and see how you stack up against others taking this micro-course.

The steps in this notebook are:

  1. Build a Random Forest model with all of your data (X and y)
  2. Read in the "test" data, which doesn't include values for the target. Predict home values in the test data with your Random Forest model.
  3. Submit those predictions to the competition and see your score.
  4. Optionally, come back to see if you can improve your model by adding features or changing your model. Then you can resubmit to see how that stacks up on the competition leaderboard.

# 7.2 Recap

Here's the code you've written so far. Start by running it again.

```python
# Code you have previously used to load data
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
    os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")  
    os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv") 
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex7 import *

# Path of the file to read. We changed the directory structure to simplify submitting to a competition
iowa_file_path = '../input/train.csv'

home_data = pd.read_csv(iowa_file_path)
# Create target object and call it y
y = home_data.SalePrice
# Create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]

# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)

# Specify Model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit Model
iowa_model.fit(train_X, train_y)

# Make validation predictions and calculate mean absolute error
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE when not specifying max_leaf_nodes: {:,.0f}".format(val_mae))

# Using best value for max_leaf_nodes
iowa_model = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)
iowa_model.fit(train_X, train_y)
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE for best value of max_leaf_nodes: {:,.0f}".format(val_mae))

# Define the model. Set random_state to 1
rf_model = RandomForestRegressor(random_state=1)
rf_model.fit(train_X, train_y)
rf_val_predictions = rf_model.predict(val_X)
rf_val_mae = mean_absolute_error(rf_val_predictions, val_y)

print("Validation MAE for Random Forest Model: {:,.0f}".format(rf_val_mae))

# 7.3 Creating a Model For the Competition

Build a Random Forest model and train it on all of X and y.

```python
# To improve accuracy, create a new Random Forest model which you will train on all training data
rf_model_on_full_data = RandomForestRegressor()

# fit rf_model_on_full_data on all data from the training data
rf_model_on_full_data.fit(X, y)
```
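Since this final model is trained on all of X and y, there is no held-out validation set left to score it against. If you want a rough error estimate anyway, here is a minimal cross-validation sketch using scikit-learn's cross_val_score (not part of the original exercise; the fold count and scoring choice are illustrative):

```python
from sklearn.model_selection import cross_val_score

# 5-fold cross-validation on the full training data; scores come back as
# negative MAE, so negate them to report a positive error
cv_mae = -cross_val_score(rf_model_on_full_data, X, y,
                          cv=5, scoring='neg_mean_absolute_error')
print("Cross-validated MAE: {:,.0f}".format(cv_mae.mean()))
```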

# 7.4 Make Predictions

Read the "test" data file and apply your model to make predictions.

```python
# path to file you will use for predictions
test_data_path = '../input/test.csv'

# read test data file using pandas
test_data = pd.read_csv(test_data_path)

# create test_X which comes from test_data but includes only the columns you used for prediction.
# The list of columns is stored in a variable called features
test_X = test_data[features]

# make predictions which we will submit. 
test_preds = rf_model_on_full_data.predict(test_X)

# The lines below show how to save your predictions in the format used for competition scoring

output = pd.DataFrame({'Id': test_data.Id,
                       'SalePrice': test_preds})
output.to_csv('submission.csv', index=False)
```

Before submitting, run a check to make sure your test_preds have the right format.
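In the Kaggle notebook the built-in checking code handles this, but as a rough manual sanity check (a sketch, assuming the competition expects one row per test house with Id and SalePrice columns) you could reload the file you just wrote and verify its structure:

```python
# Reload the saved submission and confirm its basic structure
submission = pd.read_csv('submission.csv')
assert list(submission.columns) == ['Id', 'SalePrice']
assert len(submission) == len(test_data)
assert submission['SalePrice'].notnull().all()
print(submission.head())
```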

# 7.5 Test Your Work

To test your results, you'll need to join the competition (if you haven't already). So open a new window by clicking on this link. Then click on the Join Competition button.

Next, follow the instructions below:

  1. Begin by clicking on the blue COMMIT button in the top right corner of this window. This will generate a pop-up window.
  2. After your code has finished running, click on the blue Open Version button in the top right of the pop-up window. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions.
  3. Click on the Output tab on the left of the screen. Then, click on the Submit to Competition button to submit your results to the leaderboard.

You have now successfully submitted to the competition!

  4. If you want to keep working to improve your performance, select the blue Edit button in the top right of the screen. Then you can change your model and repeat the process. There's a lot of room to improve your model, and you will climb up the leaderboard as you work.

# 7.6 Continuing Your Progress

There are many ways to improve your model, and experimenting is a great way to learn at this point.

The best way to improve your model is to add features. Look at the list of columns and think about what might affect home prices. Some features will cause errors because of issues like missing values or non-numeric data types.
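As a starting point, here is a small sketch of one way to look for additional candidate columns: numeric columns in the training and test data that have no missing values (the selection criteria are just an illustrative choice, not the exercise's prescribed approach):

```python
# Numeric columns (other than Id and the target) with no missing values
# in either the training data or the test data
candidate_cols = [col for col in home_data.columns
                  if home_data[col].dtype in ['int64', 'float64']
                  and col not in ['Id', 'SalePrice']
                  and home_data[col].isnull().sum() == 0
                  and test_data[col].isnull().sum() == 0]
print(candidate_cols)
```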

The Intermediate Machine Learning micro-course will teach you how to handle these types of features. You will also learn to use xgboost, a technique that often gives even better accuracy than Random Forest.
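For a quick preview, here is a minimal xgboost sketch on the same features and train/validation split used above (the hyperparameters are illustrative; the Intermediate Machine Learning course covers how to tune them properly):

```python
from xgboost import XGBRegressor

# Gradient-boosted trees, dropped in as a replacement for RandomForestRegressor
xgb_model = XGBRegressor(n_estimators=500, learning_rate=0.05, random_state=1)
xgb_model.fit(train_X, train_y)
xgb_val_mae = mean_absolute_error(xgb_model.predict(val_X), val_y)
print("Validation MAE for XGBoost model: {:,.0f}".format(xgb_val_mae))
```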

# 7.7 Other Micro-Courses

The Pandas micro-course will give you the data manipulation skills to quickly go from conceptual idea to implementation in your data science projects.

You are also ready for the Deep Learning micro-course, where you will build models with better-than-human level performance at computer vision tasks.
