
Thursday, January 19, 2017

Summary of some machine learning concepts and methods

Concepts
Cost function: It is used to measure the accuracy of a predictive model. It averages the discrepancies between the outputs the model predicts from the inputs x (the features, or explanatory variables) and the actual outputs y — for example, the mean of the squared errors. (Week 1 of Andrew Ng’s class)
Regularization: It is a term added to the cost function to put a penalty on large parameter values in a model. It is used to prevent overfitting. (Week 3)
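
To make the last two entries concrete, here is a minimal NumPy sketch of my own (not code from the course; the names regularized_cost and lam are made up) showing the squared-error cost with an L2 penalty added:

import numpy as np

def regularized_cost(theta, X, y, lam):
    # Squared-error cost plus an L2 penalty on all parameters except the intercept.
    m = len(y)                                        # number of training examples
    residuals = X @ theta - y                         # prediction errors
    mse_term = (residuals @ residuals) / (2 * m)
    penalty = lam / (2 * m) * np.sum(theta[1:] ** 2)  # theta[0] is not penalized
    return mse_term + penalty

Setting lam to zero recovers the plain, unregularized cost.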
Gradient descent: It is a method to find a local minimum of a function with respect to some parameters by repeatedly stepping in the direction of the negative gradient. It is used in machine learning to find the best parameters in the model. (Week 1)
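
A minimal sketch (my own illustration, with made-up names) of batch gradient descent for linear regression:

import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=5000):
    # Repeatedly step downhill on the squared-error cost; alpha is the learning rate.
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        gradient = X.T @ (X @ theta - y) / m  # derivative of the cost w.r.t. theta
        theta -= alpha * gradient
    return theta

# Usage: recover y = 1 + 2x from noiseless data.
X = np.c_[np.ones(5), np.arange(5.0)]         # prepend a column of ones for the intercept
y = 1 + 2 * np.arange(5.0)
print(gradient_descent(X, y))                 # prints approximately [1. 2.]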
Decision boundary: The line (or higher-dimensional surface) that separates the different classes. In the logistic regression model it is the set of points where z, the input to the sigmoid function, equals zero. (Week 3)
F1 Score: It combines precision and recall into a single number (their harmonic mean); in the course it is used to measure the performance of an anomaly detection algorithm on skewed data. (Week 9)
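
A sketch of computing F1 from counts, assuming binary labels where 1 marks the rare (anomalous) class; the code and names are my own:

import numpy as np

def f1_score(y_true, y_pred):
    # F1 is the harmonic mean of precision and recall.
    tp = np.sum((y_pred == 1) & (y_true == 1))       # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))       # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))       # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged points, how many are real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real anomalies, how many we caught
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0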
Random initialization: It is a method to initialize the parameters that are then refined by further optimization. Some models can initialize all parameters to zero, but that does not work for a neural network: with identical weights, every hidden unit would compute the same function and receive the same update. (Week 5)
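
A sketch of symmetry-breaking initialization for one layer's weight matrix (the function name and the value of epsilon are my own choices):

import numpy as np

def init_weights(n_in, n_out, epsilon=0.12):
    # Small random values break the symmetry that all-zero weights would create.
    return np.random.uniform(-epsilon, epsilon, size=(n_out, n_in + 1))  # +1 for the bias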
Gaussian Kernel: It is one kind of similarity-based transformation of the explanatory variables in an SVM; it measures how close an example is to a landmark point. (Week 7)
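
The kernel itself is just a similarity function; a sketch with my own naming:

import numpy as np

def gaussian_kernel(x1, x2, sigma=1.0):
    # Returns 1.0 when the two points coincide and decays toward 0 as they move apart.
    diff = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return np.exp(-(diff @ diff) / (2 * sigma ** 2))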
Cross Validation: A held-out data set (or repeated data split) used to choose hyperparameters, such as the best value of the regularization parameter in the cost function. (Week 6)
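
A sketch of that selection loop, assuming caller-supplied fit and cost functions (both hypothetical):

import numpy as np

def pick_lambda(train, cv, lambdas, fit, cost):
    # Train with each candidate regularization strength, then keep the one
    # with the lowest UNregularized error on the held-out cross-validation set.
    (X_tr, y_tr), (X_cv, y_cv) = train, cv
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        theta = fit(X_tr, y_tr, lam)
        err = cost(theta, X_cv, y_cv)   # evaluate without the penalty term
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam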
Bias-variance tradeoff: Models with high bias are not complex enough for the data and tend to underfit, while models with high variance overfit to the training data. (Week 6)
Feedforward: It is the algorithm that propagates the inputs through the layers to calculate the output of a neural network. (Week 5)
Backpropagation: It is the algorithm that computes the gradient of the cost function with respect to every weight of a neural network, so that gradient descent can derive the best parameters. (Week 5)
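
A compact sketch of both steps for a network with one hidden layer (my own illustration: bias terms are omitted for brevity, and the simple output error A2 - y assumes a cross-entropy cost with sigmoid outputs):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    # Feedforward: X is (m, n), W1 is (n, h), W2 is (h, 1).
    A1 = sigmoid(X @ W1)    # hidden-layer activations, (m, h)
    A2 = sigmoid(A1 @ W2)   # network output, (m, 1)
    return A1, A2

def backprop_step(X, y, W1, W2, alpha=0.5):
    # One gradient-descent step; y must be an (m, 1) column of 0/1 labels.
    m = len(X)
    A1, A2 = forward(X, W1, W2)
    delta2 = A2 - y                            # error at the output layer
    delta1 = (delta2 @ W2.T) * A1 * (1 - A1)   # error pushed back through W2
    W2 -= alpha * A1.T @ delta2 / m            # gradient of the cost w.r.t. W2
    W1 -= alpha * X.T @ delta1 / m             # gradient of the cost w.r.t. W1
    return W1, W2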
Sigmoid function: The function g(z) = 1 / (1 + e^(-z)), which squashes any real input into (0, 1); it is used in the logistic regression model to do classification. (Week 3)
Feature mapping: It is a method to create more explanatory variables by raising the existing ones to powers or forming interaction terms. (Week 3)
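
A sketch in the spirit of the course's polynomial expansion of two features (my own code; it expects two 1-D NumPy arrays):

import numpy as np

def map_features(x1, x2, degree=6):
    # Build every monomial x1^i * x2^j with i + j <= degree, plus a bias column.
    cols = [np.ones_like(x1, dtype=float)]
    for total in range(1, degree + 1):
        for j in range(total + 1):
            cols.append(x1 ** (total - j) * x2 ** j)
    return np.column_stack(cols)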
Feature normalization: It is a method to transform explanatory variables by making them have the same range. When features differ by orders of magnitude, performing feature scaling (normalization) first can make gradient descent converge much more quickly. (Week 2)
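
A sketch of z-score normalization (my own names); note that the same mu and sigma learned on the training set must be applied to any new data:

import numpy as np

def normalize_features(X):
    # Subtract each column's mean and divide by its standard deviation.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma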
                           
Methods
Linear regression: The simplest regression model; it predicts a continuous output as a linear function of the features.
Logistic regression: A regression model that predicts a probability and can therefore be used for classification. It applies the sigmoid function to a linear combination of the features.
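
A sketch of the prediction step (my own code), which also shows the decision boundary from the Concepts list: a point is classified as 1 exactly when z = X @ theta is nonnegative:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, X):
    # Probability >= 0.5 is equivalent to z >= 0, so the boundary is z = 0.
    return (sigmoid(X @ theta) >= 0.5).astype(int)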
Neural Network: A complex model that uses many interconnected units to make predictions. The learning process needs more computational resources than a regression model.
Support Vector Machine: A classifier that can be seen as an updated version of logistic regression. It uses a different cost function than logistic regression, and may use kernels (transformations of the explanatory variables).
K-means Clustering: An unsupervised method that partitions the data into k groups by assigning each point to the nearest centroid and updating the centroids until they are stable.
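
A sketch of the loop (my own code; for simplicity it does not handle the edge case of a cluster losing all of its points):

import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # start from k random points
    for _ in range(iters):
        # Distance from every point to every centroid, shape (n_points, k).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                         # assign to nearest centroid
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):             # stop once stable
            break
        centroids = new_centroids
    return centroids, labels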
Principal Component Analysis: A feature dimension reduction method that keeps most of the explanatory power (variance) of the original dataset.
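
A sketch using the SVD (my own code); the data should be mean-centered, and ideally feature-normalized, first:

import numpy as np

def pca(X, k):
    X = X - X.mean(axis=0)                          # PCA assumes zero-mean features
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:k]                             # directions of greatest variance
    retained = np.sum(S[:k] ** 2) / np.sum(S ** 2)  # fraction of variance kept
    return X @ components.T, retained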
Anomaly Detection: An algorithm for very unbalanced datasets. We fit a probability distribution to the (mostly normal) data and flag a specific point as anomalous when its probability is too low, much like a hypothesis test in statistics.
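
A sketch of the Gaussian version (my own code; in practice the threshold epsilon would be chosen using the F1 score on a labeled cross-validation set):

import numpy as np

def fit_gaussian(X):
    # Estimate an independent Gaussian for each feature from the training data.
    return X.mean(axis=0), X.var(axis=0)

def is_anomaly(x, mu, var, epsilon=1e-3):
    # Flag a point whose probability under the fitted model falls below epsilon.
    p = np.prod(np.exp(-((x - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var))
    return p < epsilon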
Collaborative filtering: A recommendation algorithm that jointly optimizes the item features X and the user parameters Θ for prediction.
Multi-class Classification: For supervised learning, several logistic regression models or SVMs can be combined (one-vs-all) to build a multi-class classifier. For unsupervised learning, we can use K-means clustering.
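
A sketch of the one-vs-all prediction step, assuming a hypothetical list of already-trained binary scorers, one per class:

import numpy as np

def one_vs_all_predict(X, classifiers):
    # Each classifier returns a score (e.g. a probability) per example;
    # pick the class whose classifier is most confident.
    scores = np.column_stack([clf(X) for clf in classifiers])  # shape (m, n_classes)
    return scores.argmax(axis=1)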



