A machine learning algorithm based on gradient boosting forests that merges the power of tree ensembles with neural network architectures.
```bash
pip install deepgboost
```

Optional plotting support:

```bash
pip install deepgboost[plotting]
```

To install from source with development dependencies:

```bash
git clone https://github.com/delgadopanadero/deepgboost.git
cd deepgboost
pip install -e .
```

A minimal quick-start example on the scikit-learn diabetes dataset:

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from deepgboost import DeepGBoostRegressor
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = DeepGBoostRegressor(
    n_trees=10,
    n_layers=15,
    max_depth=4,
    learning_rate=0.1,
).fit(X_train, y_train)
predictions = model.predict(X_test)
```

Detailed usage examples are available in the examples/ directory:
- quickstart.ipynb — full tour of the API (regression, classification, callbacks, feature importances)
- classifier.ipynb — binary and multiclass classification walkthrough
- regressor.ipynb — regression walkthrough
- serialization.ipynb — saving and loading trained models with pickle (a minimal sketch follows this list)
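As a quick illustration of the serialization workflow, and assuming the estimator follows the usual scikit-learn pickling conventions (which the notebook suggests, though the details live there), a fitted model can be round-tripped like this:

```python
import pickle

# Persist the fitted estimator to disk (uses `model` from the quick-start above)
with open("deepgboost_model.pkl", "wb") as f:
    pickle.dump(model, f)

# Reload it later and predict as usual
with open("deepgboost_model.pkl", "rb") as f:
    restored = pickle.load(f)

predictions = restored.predict(X_test)
```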
DeepGBoost implements the Distributed Gradient Boosting Forest (DGBF), a novel tree ensemble algorithm introduced in:
Delgado-Panadero, Á., Benítez-Andrades, J. A., & García-Ordás, M. T. (2023). A generalized decision tree ensemble based on the NeuralNetworks architecture: Distributed Gradient Boosting Forest (DGBF). Applied Intelligence, 53, 22991–23003. https://doi.org/10.1007/s10489-023-04735-w
Classical tree ensemble methods — RandomForest (bagging) and GradientBoosting (boosting) — are powerful for tabular data but cannot perform hierarchical representation learning as Neural Networks do. DGBF addresses this by mathematically combining both bagging and boosting into a unified formulation that defines a graph-structured tree ensemble with distributed representation learning, without requiring back-propagation or parametric models.
The core idea is to distribute the gradient descent of each boosting step across the individual trees of a RandomForest layer, so that each tree learns an independent gradient component:

$$
\hat{F}(x) = \sum_{l=1}^{L} \frac{1}{T} \sum_{t=1}^{T} f_{l,t}(x)
$$

where $L$ is the number of boosting layers and $T$ is the number of trees per layer. This structure is a direct analogue of a Dense Neural Network: each RandomForest layer corresponds to a network layer, with distributed gradients replacing back-propagation.
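To make the data flow concrete, here is a toy sketch of such a training loop for squared-error regression. It is not the library's implementation: the name `fit_dgbf_sketch` is invented, and the per-tree bootstrap sampling and output forwarding are my reading of Fig. 1, not verified against the source code.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_dgbf_sketch(X, y, n_layers=5, n_trees=10, max_depth=4, lr=0.1, seed=0):
    """Toy DGBF-style training loop (squared-error regression)."""
    rng = np.random.default_rng(seed)
    pred = np.full(len(y), np.mean(y))   # boosting starts from the mean
    X_aug = X                            # features seen by the current layer
    layers = []
    for _ in range(n_layers):
        residual = y - pred              # negative gradient of squared error
        trees = []
        for _ in range(n_trees):
            idx = rng.integers(0, len(y), len(y))  # bootstrap sample per tree
            tree = DecisionTreeRegressor(max_depth=max_depth)
            trees.append(tree.fit(X_aug[idx], residual[idx]))
        outputs = np.column_stack([t.predict(X_aug) for t in trees])
        pred = pred + lr * outputs.mean(axis=1)  # average the distributed gradients
        X_aug = np.hstack([X, outputs])          # forward tree outputs to the next layer
        layers.append(trees)
    return layers
```

The real library additionally handles classification losses, callbacks, and feature importances; this sketch only illustrates the layer-by-layer structure.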
Fig. 1 — NeuralNetwork vs DGBF architecture: In NN (left), each neuron's output feeds into the next layer via back-propagation. In DGBF (right), the distributed gradients of all trees from each layer are forwarded to every tree of the following layer.
Both RandomForest and GradientBoosting emerge naturally as special cases of DGBF: RandomForest is recovered with a single layer (L = 1) and GradientBoosting with a single tree per layer (T = 1).
Fig. 2 — RandomForest & GradientBoosting as DGBF special cases: RandomForest (left) and GradientBoosting (right) represented as particular graph architectures of DGBF.
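If the constructor parameters from the quick-start behave as their names suggest, the two special cases can be expressed directly. Exact equivalence with scikit-learn's estimators would also depend on subsampling and shrinkage defaults, which I have not verified, so treat this as a conceptual sketch:

```python
from deepgboost import DeepGBoostRegressor

# Single layer (L = 1): the ensemble degenerates into a RandomForest
rf_like = DeepGBoostRegressor(n_trees=100, n_layers=1, max_depth=4, learning_rate=1.0)

# Single tree per layer (T = 1): the ensemble degenerates into GradientBoosting
gbdt_like = DeepGBoostRegressor(n_trees=1, n_layers=100, max_depth=4, learning_rate=0.1)
```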
DGBF was evaluated against RandomForest (RF) and GradientBoosting (GBDT) on 9 regression datasets from the UCI Machine Learning Repository (Parkinson, Wine, Concrete, Obesity, NavalVessel, Temperature, Cargo2000, BikeSales, Superconduct), using 200 randomized simulations per dataset with an 80/20 train-test split.
> [!NOTE]
> **Winner: DeepGBoost 🏆** DGBF surpasses the mean R² score of both GradientBoosting and RandomForest in 7 out of 9 datasets.
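For intuition, a single one of the 200 randomized simulations amounts to something like the sketch below. It uses a scikit-learn dataset as a stand-in for the UCI loaders (which live in the benchmark code), and default RF/GBDT hyperparameters rather than the benchmark's actual configuration:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from deepgboost import DeepGBoostRegressor

X, y = load_diabetes(return_X_y=True)
# 80/20 split; each simulation draws a fresh random split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)

models = {
    "DGBF": DeepGBoostRegressor(n_trees=10, n_layers=15, max_depth=4, learning_rate=0.1),
    "RF": RandomForestRegressor(),
    "GBDT": GradientBoostingRegressor(),
}
for name, m in models.items():
    print(name, r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te)))
```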
To reproduce the benchmark, run the experiment script from the benchmark/ directory:

```bash
cd benchmark
python run_experiments.py
```

The script reads its configuration from benchmark/config.json, where you can adjust the models, hyperparameters, datasets, and experiment settings (e.g. the number of bootstrap runs). Results are saved to benchmark/results/.
Contributions are welcome. See CONTRIBUTING.md for development setup, code style, and pull request guidelines.
If you use DeepGBoost in your research, please cite using the metadata in CITATION.cff or the BibTeX entry provided by GitHub ("Cite this repository" button).
