Module Ensemble.AdaBoostRegressor

type tag = [ `AdaBoostRegressor ]
type t = [ `AdaBoostRegressor | `BaseEnsemble | `BaseEstimator | `BaseWeightBoosting | `MetaEstimatorMixin | `Object | `RegressorMixin ] Obj.t
val of_pyobject : Py.Object.t -> t
val to_pyobject : [> tag ] Obj.t -> Py.Object.t
val as_meta_estimator : t -> [ `MetaEstimatorMixin ] Obj.t
val as_regressor : t -> [ `RegressorMixin ] Obj.t
val as_estimator : t -> [ `BaseEstimator ] Obj.t
val as_weight_boosting : t -> [ `BaseWeightBoosting ] Obj.t
val as_ensemble : t -> [ `BaseEnsemble ] Obj.t
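
The as_* accessors widen a value of type t to one of its base interfaces without copying the underlying Python object, so a fitted model can be handed to code written against the generic mixin types. A minimal sketch, assuming the module is reached as Sklearn.Ensemble.AdaBoostRegressor:

  (* Widen the phantom tag of a model so it can be used wherever only the
     generic regressor interface is required. *)
  let () =
    let open Sklearn.Ensemble in
    let model = AdaBoostRegressor.create ~n_estimators:50 () in
    let _generic_regressor = AdaBoostRegressor.as_regressor model in
    ()
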
val create : ?base_estimator:[> `BaseEstimator ] Np.Obj.t -> ?n_estimators:int -> ?learning_rate:float -> ?loss:[ `Linear | `Square | `Exponential ] -> ?random_state:int -> unit -> t

An AdaBoost regressor.

An AdaBoost [1] regressor is a meta-estimator that begins by fitting a regressor on the original dataset and then fits additional copies of the regressor on the same dataset but where the weights of instances are adjusted according to the error of the current prediction. As such, subsequent regressors focus more on difficult cases.

This class implements the algorithm known as AdaBoost.R2 [2].

Read more in the :ref:`User Guide <adaboost>`.

.. versionadded:: 0.14

Parameters
----------
base_estimator : object, default=None
    The base estimator from which the boosted ensemble is built. If ``None``, then the base estimator is ``DecisionTreeRegressor(max_depth=3)``.

n_estimators : int, default=50
    The maximum number of estimators at which boosting is terminated. In case of perfect fit, the learning procedure is stopped early.

learning_rate : float, default=1.
    Learning rate shrinks the contribution of each regressor by ``learning_rate``. There is a trade-off between ``learning_rate`` and ``n_estimators``.

loss : {'linear', 'square', 'exponential'}, default='linear'
    The loss function to use when updating the weights after each boosting iteration.

random_state : int or RandomState, default=None
    Controls the random seed given to each `base_estimator` at each boosting iteration. Thus, it is only used when `base_estimator` exposes a `random_state`. In addition, it controls the bootstrap of the weights used to train the `base_estimator` at each boosting iteration. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`.
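
A minimal construction sketch in OCaml, using only the create signature above (hyperparameter values are illustrative; the Sklearn.Ensemble path is assumed from this page's module name):

  (* Construct an AdaBoost regressor with explicit hyperparameters.
     All argument names come from the create signature documented above. *)
  let model =
    Sklearn.Ensemble.AdaBoostRegressor.create
      ~n_estimators:100
      ~learning_rate:0.5
      ~loss:`Square        (* one of `Linear, `Square or `Exponential *)
      ~random_state:0
      ()
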

Attributes
----------
base_estimator_ : estimator
    The base estimator from which the ensemble is grown.

estimators_ : list of regressors
    The collection of fitted sub-estimators.

estimator_weights_ : ndarray of floats
    Weights for each estimator in the boosted ensemble.

estimator_errors_ : ndarray of floats
    Regression error for each estimator in the boosted ensemble.

feature_importances_ : ndarray of shape (n_features,)
    The impurity-based feature importances, if supported by the ``base_estimator`` (when based on decision trees).

    Warning: impurity-based feature importances can be misleading for high-cardinality features (many unique values). See :func:`sklearn.inspection.permutation_importance` as an alternative.

Examples
--------
>>> from sklearn.ensemble import AdaBoostRegressor
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=4, n_informative=2,
...                        random_state=0, shuffle=False)
>>> regr = AdaBoostRegressor(random_state=0, n_estimators=100)
>>> regr.fit(X, y)
AdaBoostRegressor(n_estimators=100, random_state=0)
>>> regr.predict([[0, 0, 0, 0]])
array([4.7972...])
>>> regr.score(X, y)
0.9771...
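
For reference, a rough OCaml equivalent of the Python example above, written against the fit, predict and score signatures documented below. The Np.matrixf and Np.vectorf constructors are assumed helpers for building float arrays and may need to be replaced by whatever constructors your Np bindings actually provide:

  (* Sketch: fit on a tiny hand-made dataset and report the training R^2.
     Np.matrixf / Np.vectorf are assumed float-array constructors. *)
  let () =
    let open Sklearn.Ensemble in
    let x = Np.matrixf [| [| 0.; 0.; 0.; 0. |]; [| 1.; 1.; 1.; 1. |]; [| 2.; 2.; 2.; 2. |] |] in
    let y = Np.vectorf [| 0.; 2.; 4. |] in
    let regr = AdaBoostRegressor.create ~random_state:0 ~n_estimators:100 () in
    let regr = AdaBoostRegressor.fit ~x ~y regr in
    let _predictions = AdaBoostRegressor.predict ~x regr in
    Printf.printf "training R^2: %f\n" (AdaBoostRegressor.score ~x ~y regr)
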

See Also
--------
AdaBoostClassifier, GradientBoostingRegressor, sklearn.tree.DecisionTreeRegressor

References
----------
.. [1] Y. Freund, R. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting", 1995.

.. [2] H. Drucker, "Improving Regressors using Boosting Techniques", 1997.

val get_item : index:Py.Object.t -> [> tag ] Obj.t -> Py.Object.t

Return the index'th estimator in the ensemble.

val iter : [> tag ] Obj.t -> Dict.t Seq.t

Return iterator over estimators in the ensemble.

val fit : ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> t

Build a boosted regressor from the training set (X, y).

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR.

y : array-like of shape (n_samples,)
    The target values (real numbers).

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights. If None, the sample weights are initialized to 1 / n_samples.

Returns
-------
self : object
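
Per-sample weights are passed through the optional sample_weight argument; a short sketch (again assuming an Np.vectorf float-vector constructor):

  (* Give the third training sample twice the weight of the others.
     Np.vectorf is an assumed float-vector constructor. *)
  let fit_weighted regr ~x ~y =
    let sample_weight = Np.vectorf [| 1.; 1.; 2. |] in
    Sklearn.Ensemble.AdaBoostRegressor.fit ~sample_weight ~x ~y regr
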

val get_params : ?deep:bool -> [> tag ] Obj.t -> Dict.t

Get parameters for this estimator.

Parameters
----------
deep : bool, default=True
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
-------
params : mapping of string to any
    Parameter names mapped to their values.

val predict : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t

Predict regression value for X.

The predicted regression value of an input sample is computed as the weighted median prediction of the regressors in the ensemble.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR.

Returns
-------
y : ndarray of shape (n_samples,)
    The predicted regression values.

val score : ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> float

Return the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters
----------
X : array-like of shape (n_samples, n_features)
    Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)
    True values for X.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Returns
-------
score : float
    R^2 of self.predict(X) w.r.t. y.

Notes
-----
The R^2 score used when calling ``score`` on a regressor uses ``multioutput='uniform_average'`` from version 0.23 onwards to keep consistent with the default value of :func:`~sklearn.metrics.r2_score`. This influences the ``score`` method of all the multioutput regressors (except for :class:`~sklearn.multioutput.MultiOutputRegressor`).

val set_params : ?params:(string * Py.Object.t) list -> [> tag ] Obj.t -> t

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form ``<component>__<parameter>`` so that it's possible to update each component of a nested object.

Parameters
----------
**params : dict
    Estimator parameters.

Returns
-------
self : object
    Estimator instance.
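
Parameter values are plain Py.Object.t values, so pyml's converters (Py.Int.of_int, Py.Float.of_float, ...) can be used to build the association list. A sketch:

  (* Update hyperparameters on an existing estimator instead of re-creating it. *)
  let retune regr =
    Sklearn.Ensemble.AdaBoostRegressor.set_params
      ~params:[ "n_estimators", Py.Int.of_int 200;
                "learning_rate", Py.Float.of_float 0.1 ]
      regr
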

val staged_predict : x:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> [> `ArrayLike ] Np.Obj.t Seq.t

Return staged predictions for X.

The predicted regression value of an input sample is computed as the weighted median prediction of the regressors in the ensemble.

This generator method yields the ensemble prediction after each iteration of boosting and therefore allows monitoring, such as to determine the prediction on a test set after each boost.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The training input samples.

Yields
------
y : generator of ndarray of shape (n_samples,)
    The predicted regression values.
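
Since the result is an ordinary Seq.t, the per-iteration predictions can be consumed lazily with the standard library. For example, counting how many staged predictions the fitted ensemble yields:

  (* Walk the staged predictions lazily; each element is the ensemble's
     prediction for x after one more boosting iteration. *)
  let count_stages regr ~x =
    Sklearn.Ensemble.AdaBoostRegressor.staged_predict ~x regr
    |> Seq.fold_left (fun n _preds -> n + 1) 0
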

val staged_score : ?sample_weight:[> `ArrayLike ] Np.Obj.t -> x:[> `ArrayLike ] Np.Obj.t -> y:[> `ArrayLike ] Np.Obj.t -> [> tag ] Obj.t -> Py.Object.t

Return staged scores for X, y.

This generator method yields the ensemble score after each iteration of boosting and therefore allows monitoring, such as to determine the score on a test set after each boost.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The training input samples. Sparse matrix can be CSC, CSR, COO, DOK, or LIL. COO, DOK, and LIL are converted to CSR.

y : array-like of shape (n_samples,)
    Labels for X.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.

Yields
------
z : float

val base_estimator_ : t -> [ `BaseEstimator | `Object ] Np.Obj.t

Attribute base_estimator_: get value or raise Not_found if None.

val base_estimator_opt : t -> [ `BaseEstimator | `Object ] Np.Obj.t option

Attribute base_estimator_: get value as an option.

val estimators_ : t -> Py.Object.t

Attribute estimators_: get value or raise Not_found if None.

val estimators_opt : t -> Py.Object.t option

Attribute estimators_: get value as an option.

val estimator_weights_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute estimator_weights_: get value or raise Not_found if None.

val estimator_weights_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute estimator_weights_: get value as an option.

val estimator_errors_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute estimator_errors_: get value or raise Not_found if None.

val estimator_errors_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute estimator_errors_: get value as an option.

val feature_importances_ : t -> [> `ArrayLike ] Np.Obj.t

Attribute feature_importances_: get value or raise Not_found if None.

val feature_importances_opt : t -> [> `ArrayLike ] Np.Obj.t option

Attribute feature_importances_: get value as an option.
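
The _opt accessor is the safer entry point when the attribute may be absent, for example before fitting or when the base estimator does not expose impurity-based importances:

  (* Handle a possibly-missing feature_importances_ attribute without
     depending on the Not_found exception raised by the plain accessor. *)
  let report_importances regr =
    match Sklearn.Ensemble.AdaBoostRegressor.feature_importances_opt regr with
    | None -> print_endline "feature_importances_ is not available"
    | Some _importances -> print_endline "feature_importances_ is available"
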

val warning : t -> Py.Object.t

Attribute Warning: get value or raise Not_found if None.

val warning_opt : t -> Py.Object.t option

Attribute Warning: get value as an option.

val to_string : t -> string

Return a human-readable string representation of the object.

val show : t -> string

Return a human-readable string representation of the object.

val pp : Format.formatter -> t -> unit

Pretty-print the object to a formatter.
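
The formatter hook plugs directly into the standard Format printers, for example:

  (* Pretty-print a freshly created model via the provided formatter. *)
  let () =
    let open Sklearn.Ensemble in
    let regr = AdaBoostRegressor.create ~n_estimators:100 ~random_state:0 () in
    Format.printf "%a@." AdaBoostRegressor.pp regr
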
