- Machine Learning with Python
- What is Machine Learning?
- Data Preprocessing in Data Science and Machine Learning
- Feature Selection in Machine Learning
- Train-Test Datasets in Machine Learning
- Evaluate Model Performance - Loss Function
- Model Selection in Machine Learning
- Bias Variance Trade Off
- Supervised Learning Models
- Multiple Linear Regression
- Logistic Regression
- Logistic Regression in Python using scikit-learn Package
- Decision Trees in Machine Learning
- Random Forest Algorithm in Python
- Support Vector Machine Algorithm Explained
- Multivariate Linear Regression in Python with scikit-learn Library
- Classifier Model in Machine Learning Using Python
- Cross Validation to Avoid Overfitting in Machine Learning
- K-Fold Cross Validation Example Using Python scikit-learn
- Unsupervised Learning Models
- K-Means Algorithm Python Example
- Neural Networks Overview
Train-Test Datasets in Machine Learning
Once we have cleaned the data and selected the features for the model, the next step is to generate the training and test datasets. We divide the data into two sets: a training set and a testing set. The model is built on the training set and then evaluated on the testing set to see how well it performs. There are many ways to split the data; for example, we can split it randomly.
Splitting the data into training and testing sets is required for Supervised Learning problems. Unsupervised Learning models do not require a train and test split.
Classification and Regression problems supervise, or "train", a model with specific data in order to predict the target variable y. Training consists of choosing a set of relevant features (the independent variables) and pairing them with the response y (the labelled data), which holds the observed values of the target variable.
In this phase, the algorithm learns from the data and determines the influence of each feature on the response y. We can then make predictions for out-of-sample, or unseen, data based on what was learned during training.
This process has two main stages: training and testing the model. In the training phase, as described above, we fit the model to the data; afterwards, we use the test data to assess the model's performance.
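To make these two stages concrete, here is a minimal sketch in Python using scikit-learn; the small feature matrix, the labels, and the choice of LogisticRegression are illustrative assumptions rather than part of any particular dataset from this course.

```python
# Minimal sketch of the training and prediction phases.
# The toy feature matrix X, the labels y, and LogisticRegression
# are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Labelled data: features (independent variables) and observed response y
X = np.array([[1.0, 2.0], [2.0, 1.5], [3.0, 3.5],
              [4.0, 5.0], [5.0, 4.5], [6.0, 6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Training phase: the algorithm learns the influence of each feature on y
model = LogisticRegression()
model.fit(X, y)

# Prediction phase: apply the trained model to out-of-sample (unseen) data
X_new = np.array([[1.5, 2.5], [5.5, 5.0]])
print(model.predict(X_new))  # predicted classes for the unseen observations
```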
The training dataset includes both the features and the target variable, while only the features of the testing dataset are fed to the trained model to obtain predictions of the target variable; these predictions are then compared with the held-out target values. The training dataset usually represents 70%-80% of the total data, and the test dataset is the remaining portion, which is set aside to measure the model's accuracy.
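In practice the split itself is usually done with a helper such as scikit-learn's train_test_split. The sketch below is one way to hold out 30% of the observations for testing; the bundled Iris dataset and the DecisionTreeClassifier are stand-ins for whatever data and model you are actually working with.

```python
# Sketch of a random 70/30 train-test split with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out 30% of the observations for testing; rows are shuffled randomly
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Fit the model on the training set only
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out test set by comparing predictions with the true labels
y_pred = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, y_pred))
```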