Things on this page are fragmentary and immature notes/thoughts of the author. Please read with your own judgement!
It is strongly suggested that you load data into a pandas DataFrame and handle categorical variables by specifying a dtype of "category" for those variables:

```python
df.cat_var = df.cat_var.astype("category")
```

This is the easiest way to handle categorical variables in LightGBM. For more details, please refer to Handle Categorical Variables in LightGBM.
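A minimal self-contained sketch of this conversion (the column names and values below are made up for illustration):

```python
import pandas as pd

# Toy frame; column names and values are illustrative.
df = pd.DataFrame({
    "cat_var": ["a", "b", "a", "c"],
    "num_var": [1.0, 2.0, 3.0, 4.0],
})

# Mark the categorical column so that LightGBM can auto-detect it
# when the frame is later passed to lgb.Dataset.
df["cat_var"] = df["cat_var"].astype("category")

print(df.dtypes)
```

With the dtype set, LightGBM picks the column up as categorical automatically when you build a `Dataset` from the frame, without you listing column names by hand.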
The sklearn wrapper of LightGBM lags behind the development of sklearn. Be aware of the latest supported sklearn version when you use the wrapper. It is suggested that you use the original (native) API of LightGBM to avoid version issues.
It is suggested that you always specify a validation dataset when you train a model using the function train.

LightGBM supports distributed training on multiple machines (without Spark).
https://github.com/microsoft/LightGBM/tree/master/examples/parallel_learning
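For orientation, a socket-based distributed run is driven by LightGBM config parameters such as tree_learner and num_machines. The fragment below is only a sketch; the machine count, port, and file name are placeholders, and the linked examples are the authoritative setup:

```text
# train.conf on each machine (values are placeholders)
tree_learner = data
num_machines = 2
local_listen_port = 12400
machine_list_filename = mlist.txt
```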
Hyperparameter Tuning
Optuna is a good framework for tuning hyperparameters.
GPU
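If your LightGBM build has GPU support, the GPU is selected via the device_type parameter; a sketch (the other values are illustrative):

```text
# Requires a GPU-enabled LightGBM build.
device_type = gpu
objective = regression
```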
References
Handle Categorical Variables in LightGBM