Cross Validation to Avoid Overfitting in Machine Learning
Cross validation is a technique used to assess how well the results of a machine learning model generalize to new, unseen data. Because the training error of a model tends to underestimate its test error, cross validation provides a mechanism to estimate the test MSE from the current dataset, without the need to collect new data to test the model.
To perform cross validation, we set aside a portion of the data that is not used to train the model. The goal of cross validation is to estimate the test error of the model by holding out a subset of the dataset and using those observations as test data. This approach gives a more accurate estimate of the test error.
Validation Set Approach
The classical method for training and testing on a dataset is called the Validation Set approach. We used this approach in both the Multivariate Linear Regression example and the Classifier Forecasting example. It consists of splitting the dataset into a training set and a test set; commonly, around 80% of the data is used to train the model and the remaining 20% serves as the test set.
For time series data, the split is made in chronological order, so that, for example, the first two thirds of the training data represent the earliest two thirds of the historical data. One drawback of this method is that the measured performance of the model can vary significantly depending on how the data is split between the training and test sets.
Likewise, if we have a limited amount of data, there is a risk of high bias, because the model misses the information contained in the observations that were not used for training. If the amount of data is large and the training and test sets follow the same distribution, this approach is acceptable.
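As a minimal sketch of the validation set approach, assuming scikit-learn is available, the split can be done with `train_test_split`. The data (`X`, `y`) below is hypothetical and stands in for a real dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical illustrative data; replace with your own dataset
rng = np.random.RandomState(42)
X = rng.rand(200, 3)
y = X @ np.array([1.5, -2.0, 1.0]) + rng.normal(scale=0.1, size=200)

# Hold out 20% of the observations as the test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LinearRegression().fit(X_train, y_train)
test_mse = mean_squared_error(y_test, model.predict(X_test))
print(f"Test MSE: {test_mse:.4f}")
```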
The second approach to addressing overfitting is to train and test the model using a method called K-Fold Cross Validation.
K-Fold Cross Validation
K-Fold Cross Validation is a more sophisticated approach that generally yields a less biased estimate of the test error than the validation set approach. The method consists of the following steps (a code sketch implementing them follows the formula below):
- Divide the n observations of the dataset into k mutually exclusive subsets of equal, or close to equal, size, known as “folds”.
- Fit the model using k-1 folds as the training set and the remaining (kth) fold as the test set. After each iteration finishes, record the error of the model.
- Repeat this process k times, using a different fold as the test set each time and the remaining k-1 folds as the training set.
- Once all the iterations have finished, take the mean of the k recorded errors. This is the cross-validation estimate of the model's Mean Squared Error.
The K-Fold cross validation estimate of the test error is given by:

$$\mathrm{CV}_{(k)} = \frac{1}{k}\sum_{i=1}^{k}\mathrm{MSE}_i$$

where $\mathrm{MSE}_i$ is the mean squared error computed on the $i$-th held-out fold.
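The following is a minimal sketch of these steps using scikit-learn's `KFold`, with the same hypothetical (`X`, `y`) setup as above. The final average of the per-fold errors corresponds to the $\mathrm{CV}_{(k)}$ formula:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical illustrative data; replace with your own dataset
rng = np.random.RandomState(42)
X = rng.rand(200, 3)
y = X @ np.array([1.5, -2.0, 1.0]) + rng.normal(scale=0.1, size=200)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
fold_mses = []
for train_idx, test_idx in kf.split(X):
    # Fit on k-1 folds, evaluate on the held-out fold
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    fold_mses.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))

# CV(k): the mean of the k per-fold errors
print(f"Per-fold MSE: {np.round(fold_mses, 4)}")
print(f"CV estimate of test MSE: {np.mean(fold_mses):.4f}")
```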
An important consideration in this approach is the choice of the number of folds. Each fold needs to contain enough data points to provide a fair estimate of the model's performance, which argues against a very large k. On the other hand, k should not be too small (such as 2), so that enough models are trained to assess performance reliably. In practice, k = 5 or k = 10 are common choices.
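To illustrate this trade-off, the sketch below (again using the hypothetical `X`, `y` setup from above) compares the cross-validation MSE for a few values of k using `cross_val_score`:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression

# Hypothetical illustrative data; replace with your own dataset
rng = np.random.RandomState(42)
X = rng.rand(200, 3)
y = X @ np.array([1.5, -2.0, 1.0]) + rng.normal(scale=0.1, size=200)

# Compare the CV estimate of the test MSE for different fold counts
for k in (2, 5, 10):
    scores = cross_val_score(
        LinearRegression(), X, y, scoring="neg_mean_squared_error", cv=k
    )
    print(f"k={k:2d}: CV MSE = {-scores.mean():.4f}")
```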
(Figure: K-Fold Cross Validation method)