Parameter | Range
---|---
Maximum number of boosting iterations | between 50 and 200
Maximum depth of a tree | 2 or 4
Learning rate | 0.1 or 0.3
Subsample ratio of the training instances | 0.2, 0.4, or 0.6
Subsample ratio of columns to construct each tree | 0.2, 0.5, or 0.7
Source: Author's calculations.
Note: This table reports the main hyperparameters considered in the XGBoost models used to estimate the benchmark welfare. The values shown are the candidate options considered in the selection of optimal hyperparameters.
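For illustration, a search over this grid could be implemented as below. This is a minimal sketch, not the authors' code: the intermediate values for the number of boosting iterations (the table gives only the 50–200 range), the 5-fold cross-validation, the squared-error objective, and the placeholder data `X` and `y` are all assumptions.

```python
# Minimal sketch of a grid search over the hyperparameter ranges in the
# table above, assuming a Python/XGBoost workflow with scikit-learn.
# X and y are placeholders for the feature matrix and the benchmark
# welfare outcome; neither is specified in the source.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

param_grid = {
    # The table reports a 50-200 range; the intermediate grid points
    # below are an assumption for illustration.
    "n_estimators": [50, 100, 150, 200],  # maximum number of boosting iterations
    "max_depth": [2, 4],                  # maximum depth of a tree
    "learning_rate": [0.1, 0.3],          # learning rate
    "subsample": [0.2, 0.4, 0.6],         # subsample ratio of the training instances
    "colsample_bytree": [0.2, 0.5, 0.7],  # subsample ratio of columns per tree
}

search = GridSearchCV(
    estimator=XGBRegressor(objective="reg:squarederror"),
    param_grid=param_grid,
    cv=5,  # 5-fold cross-validation is an assumption
    scoring="neg_mean_squared_error",
)
# search.fit(X, y)          # fit on the placeholder data
# print(search.best_params_)  # inspect the selected hyperparameters
```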