# Supervised Learning Models

As we pointed out earlier, both **classification** and **regression** models are in the field of Supervised Learning. These models are characterized by having a group of features or independent variables and a target variable, which is the variable that the model aims to predict.

This target variable is called the labelled data and is the main property of **Supervised Learning** models, because it acts as the orientation for constructing the model in the training phase and for evaluating model performance.

In a **classification** problem the target variable to predict (*y*) is a categorical variable that can take a finite set of possible choices, *K*. On the other hand, in a **regression** problem, the target variable *y* is a real value rather than a categorical one.

The **classification** problem has the goal of estimating the membership of a set of features in a particular group. A common **classification** problem in the financial sector is to determine the price direction for the next day based on N days of asset price history.
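The price-direction problem above can be sketched as a binary classifier. This is a minimal illustration, not the course's method: the synthetic price series, the window length `N`, and the choice of Logistic Regression are all assumptions for the sake of the example, and it assumes NumPy and scikit-learn are installed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: simulate a daily price series (in practice, load real prices)
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 500))

N = 5  # use the last N daily returns as the feature vector (illustrative choice)
returns = np.diff(prices) / prices[:-1]
X = np.array([returns[i:i + N] for i in range(len(returns) - N)])
y = (returns[N:] > 0).astype(int)  # categorical target: 1 = up day, 0 = down day

# Chronological train/test split, then fit the classifier
split = int(0.8 * len(X))
model = LogisticRegression().fit(X[:split], y[:split])
accuracy = model.score(X[split:], y[split:])  # fraction of correct direction calls
print(f"direction accuracy: {accuracy:.2f}")
```

Note that the target takes only two values here (up or down), which is what makes this a classification rather than a regression task.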

The **regression** problem involves estimating a real-valued response from a set of features or predictors as the independent variables. In the financial field, an example is to estimate tomorrow's asset price based on the historical prices (or other features) of the asset. The **regression** problem would estimate the real value of the price, not just its direction.
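The same setup, recast as regression, predicts the price level itself. Again a hedged sketch under the same assumptions (synthetic prices, window length `N`, plain Linear Regression); only the target changes from a direction label to a real value.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: simulate a drifting daily price series
rng = np.random.default_rng(1)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, 500))

N = 5  # predict tomorrow's price from the last N prices (illustrative choice)
X = np.array([prices[i:i + N] for i in range(len(prices) - N)])
y = prices[N:]  # real-valued target: the next day's price

# Chronological train/test split, then fit the regressor
split = int(0.8 * len(X))
model = LinearRegression().fit(X[:split], y[:split])
preds = model.predict(X[split:])  # real-valued price estimates, not up/down labels
```

The contrast with the classification sketch is entirely in `y`: a continuous price instead of a finite set of categories.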

Common classification algorithms include Logistic Regression, Naïve Bayes Classifiers, Support Vector Machines, Decision Trees, and Deep Convolutional Neural Networks. Common regression techniques include Linear Regression, Support Vector Regression, and Random Forest.

In the next lessons, we will explain relevant concepts of the most popular algorithms used for Supervised Learning. These algorithms are the following:

- Multiple Linear Regression
- Logistic Regression
- Decision Tree-Random Forest
- Support Vector Machine
- Linear Discriminant Analysis

