Monday, November 9, 2020

Hyperparameters for each ML algorithm

In machine learning, a hyperparameter (sometimes called a tuning or training parameter) is any parameter whose value is set before the learning process begins, whereas model parameters are values computed from the data during training.
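For example, in scikit-learn's LogisticRegression the regularization strength C is a hyperparameter we choose up front, while the coefficients and intercept are model parameters learned during fit. A minimal sketch, using the iris dataset as a stand-in:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# C is a hyperparameter: chosen before training starts
clf = LogisticRegression(C=1000.0, random_state=0, max_iter=1000)

# coef_ and intercept_ are model parameters: computed during training
clf.fit(X, y)
print(clf.coef_, clf.intercept_)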


K-Nearest Neighbors               : n_neighbors (K), leaf_size, weights, and metric

Decision Trees and Random Forests : n_estimators (Random Forests), max_depth, min_samples_split, min_samples_leaf, and criterion

AdaBoost and Gradient Boosting    : n_estimators, learning_rate, and base_estimator (AdaBoost) / loss (Gradient Boosting)

Support Vector Machines           : C, kernel, and gamma


Specifically, the focus here is on the hyperparameters that tend to have the greatest effect on the bias-variance tradeoff.

When tuning, it is important to keep the bias-variance tradeoff in mind, as well as the tradeoff between computational cost and scoring metrics. Ideally, we want a model with both low bias and low variance, to limit overall error.
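As a concrete example, here is a minimal GridSearchCV sketch (assuming a RandomForestClassifier on the iris data as a stand-in) that searches over some of the hyperparameters listed above:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Candidate values for a few of the hyperparameters listed above
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [3, 5, None],
    'min_samples_split': [2, 5],
}

# 5-fold cross-validated search over every combination in the grid
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring='accuracy')
search.fit(X, y)
print(search.best_params_, search.best_score_)

GridSearchCV simply fits one model per hyperparameter combination and keeps the combination with the best cross-validated score.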

Source: "The Hyperparameter Cheat Sheet: A quick guide to hyperparameter tuning utilizing Scikit Learn's GridSearchCV, and the bias/variance trade-off" by J.P. Rinfret
https://medium.com/swlh/the-hyperparameter-cheat-sheet-770f1fed32ff

=====================================================================


Source: "Model Parameters and Hyperparameters in Machine Learning: What is the Difference?"
https://towardsdatascience.com/model-parameters-and-hyperparameters-in-machine-learning-what-is-the-difference-702d30970f6


Examples of hyperparameters used in the scikit-learn package

1. Perceptron Classifier

from sklearn.linear_model import Perceptron

Perceptron(max_iter=40, eta0=0.1, random_state=0)  # max_iter replaces the older n_iter


2. Train/Test Split

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)


3. Logistic Regression Classifier

from sklearn.linear_model import LogisticRegression

LogisticRegression(C=1000.0, random_state=0)


4. KNN (k-Nearest Neighbors) Classifier

from sklearn.neighbors import KNeighborsClassifier

KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')


5. Support Vector Machine Classifier

from sklearn.svm import SVC

SVC(kernel='linear', C=1.0, random_state=0)


6. Decision Tree Classifier

from sklearn.tree import DecisionTreeClassifier

DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)

7. Lasso Regression

from sklearn.linear_model import Lasso

Lasso(alpha=0.1)

8. Principal Component Analysis

from sklearn.decomposition import PCA

PCA(n_components=4)
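Tying a few of these together, here is a minimal end-to-end sketch (again assuming the iris dataset as a stand-in): split the data with chosen split hyperparameters, fit a classifier with chosen model hyperparameters, and score it on the held-out set.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Hyperparameters: test_size for the split; n_neighbors, p, and metric for KNN
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))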

