CSTreeClassifier#
- class empulse.models.CSTreeClassifier(*, tp_cost=0.0, tn_cost=0.0, fn_cost=0.0, fp_cost=0.0, loss=None, criterion='cost', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, class_weight=None, ccp_alpha=0.0, monotonic_cst=None)[source]#
Cost-sensitive decision tree classifier.
- Parameters:
- tp_cost : float or array-like, shape=(n_samples,), default=0.0
Cost of true positives. If float, then all true positives have the same cost. If array-like, then it is the cost of each true positive classification. Is overwritten if another tp_cost is passed to the fit method.
Note: it is not recommended to pass instance-dependent costs to the __init__ method. Instead, pass them to the fit method.
- fp_cost : float or array-like, shape=(n_samples,), default=0.0
Cost of false positives. If float, then all false positives have the same cost. If array-like, then it is the cost of each false positive classification. Is overwritten if another fp_cost is passed to the fit method.
Note: it is not recommended to pass instance-dependent costs to the __init__ method. Instead, pass them to the fit method.
- tn_cost : float or array-like, shape=(n_samples,), default=0.0
Cost of true negatives. If float, then all true negatives have the same cost. If array-like, then it is the cost of each true negative classification. Is overwritten if another tn_cost is passed to the fit method.
Note: it is not recommended to pass instance-dependent costs to the __init__ method. Instead, pass them to the fit method.
- fn_cost : float or array-like, shape=(n_samples,), default=0.0
Cost of false negatives. If float, then all false negatives have the same cost. If array-like, then it is the cost of each false negative classification. Is overwritten if another fn_cost is passed to the fit method.
Note: it is not recommended to pass instance-dependent costs to the __init__ method. Instead, pass them to the fit method.
- loss : Metric or None, default=None
The metric to measure the quality of a split. If None, the cost impurity is used.
- criterion : {"cost", "gini", "log_loss", "entropy"}, default="cost"
The function to measure the quality of a split, i.e. how the split-quality measure is weighted:
If "cost": the metric is used as-is, without extra weighting.
If "gini": the Gini impurity is used to weight the metric.
If "log_loss" or "entropy": the Shannon information gain is used to weight the metric.
- splitter{“best”, “random”}, default=”best”
The strategy used to choose the split at each node. Supported strategies are “best” to choose the best split and “random” to choose the best random split.
- max_depth : int or None, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
- min_samples_split : int or float, default=2
The minimum number of samples required to split an internal node:
If int, then consider min_samples_split as the minimum number.
If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
- min_samples_leaf : int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
If int, then consider min_samples_leaf as the minimum number.
If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
Changed in version 0.18: Added float values for fractions.
- min_weight_fraction_leaf : float, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
- max_features : int, float or {"sqrt", "log2"}, default=None
The number of features to consider when looking for the best split:
If int, then consider max_features features at each split.
If float, then max_features is a fraction and max(1, int(max_features * n_features_in_)) features are considered at each split.
If “sqrt”, then max_features=sqrt(n_features).
If “log2”, then max_features=log2(n_features).
If None, then max_features=n_features.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
- random_state : int, RandomState instance or None, default=None
Controls the randomness of the estimator. The features are always randomly permuted at each split, even if splitter is set to "best". When max_features < n_features, the algorithm will select max_features features at random at each split before finding the best split among them. The best found split may vary across different runs, even if max_features=n_features: that is the case if the improvement of the criterion is identical for several splits and one split has to be selected at random. To obtain deterministic behaviour during fitting, random_state has to be fixed to an integer. See the Sklearn Glossary for details.
- max_leaf_nodes : int, default=None
Grow a tree with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, the number of leaf nodes is unlimited.
- min_impurity_decrease : float, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where
N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum if sample_weight is passed.
- class_weight : dict, list of dict or "balanced", default=None
Weights associated with classes in the form {class_label: weight}. If None, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y.
Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1: 1}, {2: 5}, {3: 1}, {4: 1}].
The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
For multi-output, the weights of each column of y will be multiplied.
Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
- ccp_alpha : non-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed. See Minimal Cost-Complexity Pruning for details, and Post pruning decision trees with cost complexity pruning for an example of such pruning.
- monotonic_cst : array-like of int of shape (n_features), default=None
Indicates the monotonicity constraint to enforce on each feature:
1: monotonic increase
0: no constraint
-1: monotonic decrease
If monotonic_cst is None, no constraints are applied.
- Monotonicity constraints are not supported for:
multiclass classifications (i.e. when n_classes > 2),
multioutput classifications (i.e. when n_outputs_ > 1),
classifications trained on data with missing values.
The constraints hold over the probability of the positive class.
Read more in the Sklearn User Guide.
- Attributes:
- estimator_ : DecisionTreeClassifier
The underlying DecisionTreeClassifier estimator.
- classes_ : ndarray of shape (n_classes,) or list of ndarray
The classes labels (single output problem), or a list of arrays of class labels (multi-output problem).
- feature_importances_ : ndarray of shape (n_features,)
Return the feature importances.
- max_features_ : int
Return the inferred value of max_features.
- n_classes_ : int or list of int
Return the number of classes.
- n_features_in_ : int
Number of features seen during fit.
- feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
- n_outputs_ : int
The number of outputs when fit is performed.
- tree_ : Tree instance
The underlying Tree object.
References
[1] Correa Bahnsen, A., Aouada, D., & Ottersten, B. "Example-Dependent Cost-Sensitive Decision Trees", Expert Systems with Applications, 42(19), 6609–6619, 2015, http://doi.org/10.1016/j.eswa.2015.04.042
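A minimal usage sketch based on the constructor and fit signatures documented above; the synthetic data and the concrete cost values are illustrative, not taken from the empulse documentation:

```python
import numpy as np
from sklearn.datasets import make_classification
from empulse.models import CSTreeClassifier

# Synthetic binary classification problem (illustrative only).
X, y = make_classification(n_samples=1000, random_state=42)

# Constant, class-dependent costs: a missed positive (false negative) is
# assumed to be ten times as expensive as a false alarm (false positive).
clf = CSTreeClassifier(fn_cost=10.0, fp_cost=1.0, max_depth=5, random_state=42)
clf.fit(X, y)

y_pred = clf.predict(X)
y_proba = clf.predict_proba(X)
```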
- apply(X, check_input=True)[source]#
Return the index of the leaf that each sample is predicted as.
- Parameters:
- X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, it will be converted to dtype=np.float32 and, if a sparse matrix is provided, to a sparse csr_matrix.
- check_input : bool, default=True
Allows bypassing several input checks. Don't use this parameter unless you know what you're doing.
- Returns:
- X_leaves : ndarray of shape (n_samples,)
For each datapoint x in X, return the index of the leaf x ends up in. Leaves are numbered within [0; self.tree_.node_count), possibly with gaps in the numbering.
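A short sketch of apply, reusing the fitted clf and feature matrix X from the usage example above (illustrative):

```python
import numpy as np

# Index of the leaf each sample ends up in, shape (n_samples,).
leaf_ids = clf.apply(X)

# Number of distinct leaves actually reached by the samples.
n_leaves_used = len(np.unique(leaf_ids))
```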
- cost_complexity_pruning_path(X, y, sample_weight=None)[source]#
Compute the pruning path during Minimal Cost-Complexity Pruning.
See Minimal Cost-Complexity Pruning for details on the pruning process.
- Parameters:
- X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, it will be converted to dtype=np.float32 and, if a sparse matrix is provided, to a sparse csc_matrix.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels) as integers or strings.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. Splits are also ignored if they would result in any single class carrying a negative weight in either child node.
- Returns:
- ccp_path : Bunch
Dictionary-like object, with the following attributes.
- ccp_alphas : ndarray
Effective alphas of subtree during pruning.
- impurities : ndarray
Sum of the impurities of the subtree leaves for the corresponding alpha value in ccp_alphas.
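A sketch of the pruning workflow, reusing X, y and the cost settings from the usage example above; the choice of alpha is illustrative:

```python
# Compute the effective alphas and the corresponding total leaf impurities.
path = clf.cost_complexity_pruning_path(X, y)
for alpha, impurity in zip(path.ccp_alphas, path.impurities):
    print(f"ccp_alpha={alpha:.4f}  total leaf impurity={impurity:.4f}")

# Refit a pruned tree with one of the effective alphas (here the median one).
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]
pruned = CSTreeClassifier(fn_cost=10.0, fp_cost=1.0, ccp_alpha=alpha, random_state=42)
pruned.fit(X, y)
```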
- decision_path(X, check_input=True)[source]#
Return the decision path in the tree.
- Parameters:
- X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, it will be converted to dtype=np.float32 and, if a sparse matrix is provided, to a sparse csr_matrix.
- check_input : bool, default=True
Allows bypassing several input checks. Don't use this parameter unless you know what you're doing.
- Returns:
- indicator : sparse matrix of shape (n_samples, n_nodes)
Return a node indicator CSR matrix where non-zero elements indicate that the sample goes through the corresponding nodes.
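A sketch of decision_path, reusing the fitted clf and X from the usage example above; the indexing shown is standard scipy CSR handling:

```python
# Node indicator matrix: entry (i, j) is non-zero if sample i passes through node j.
indicator = clf.decision_path(X)

# Nodes visited by the first sample, read directly from the CSR structure.
first_sample_nodes = indicator.indices[indicator.indptr[0]:indicator.indptr[1]]
print(first_sample_nodes)
```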
- property feature_importances_#
Return the feature importances.
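A sketch of ranking features by impurity-based importance, assuming the fitted clf from the usage example above:

```python
import numpy as np

importances = clf.feature_importances_      # shape (n_features,)
top = np.argsort(importances)[::-1][:5]     # indices of the five most important features
for i in top:
    print(f"feature {i}: importance {importances[i]:.3f}")
```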
- fit(X, y, *, tp_cost=Parameter.UNCHANGED, tn_cost=Parameter.UNCHANGED, fn_cost=Parameter.UNCHANGED, fp_cost=Parameter.UNCHANGED, **loss_params)[source]#
Build an example-dependent cost-sensitive decision tree from the training set.
- Parameters:
- Xarray-like of shape (n_samples, n_features)
The input samples.
- yarray-like of shape (n_samples,)
Ground truth (correct) labels.
- tp_cost : float or array-like, shape=(n_samples,), default=$UNCHANGED$
Cost of true positives. If float, then all true positives have the same cost. If array-like, then it is the cost of each true positive classification.
- fp_cost : float or array-like, shape=(n_samples,), default=$UNCHANGED$
Cost of false positives. If float, then all false positives have the same cost. If array-like, then it is the cost of each false positive classification.
- tn_cost : float or array-like, shape=(n_samples,), default=$UNCHANGED$
Cost of true negatives. If float, then all true negatives have the same cost. If array-like, then it is the cost of each true negative classification.
- fn_cost : float or array-like, shape=(n_samples,), default=$UNCHANGED$
Cost of false negatives. If float, then all false negatives have the same cost. If array-like, then it is the cost of each false negative classification.
- loss_params : dict
Additional keyword arguments to pass to the loss function if using a custom loss function.
- Returns:
- self : object
Returns self.
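A sketch of passing instance-dependent costs to fit (the recommended place for them, per the notes above); the per-sample cost arrays are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
fn_cost = rng.uniform(1.0, 20.0, size=len(y))   # per-sample cost of missing a positive
fp_cost = np.full(len(y), 1.0)                  # constant cost of a false alarm

clf = CSTreeClassifier(max_depth=5, random_state=42)
clf.fit(X, y, fn_cost=fn_cost, fp_cost=fp_cost)
```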
- get_metadata_routing()#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)#
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- property max_features_#
Return the inferred value of max_features.
- property n_classes_#
Return the number of classes.
- property n_outputs_#
The number of outputs when fit is performed.
- predict(X, check_input=True)[source]#
Predict class value for X.
- Parameters:
- X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, it will be converted to dtype=np.float32 and, if a sparse matrix is provided, to a sparse csr_matrix.
- check_input : bool, default=True
Allows bypassing several input checks. Don't use this parameter unless you know what you're doing.
- Returns:
- y : array-like of shape (n_samples,)
The predicted classes.
- predict_log_proba(X)[source]#
Predict class log-probabilities of the input samples X.
- Parameters:
- X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, it will be converted to dtype=np.float32 and, if a sparse matrix is provided, to a sparse csr_matrix.
- Returns:
- proba : ndarray of shape (n_samples, n_classes)
The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
- predict_proba(X, check_input=True)[source]#
Predict class probabilities of the input samples X.
The predicted class probability is the fraction of samples of the same class in a leaf.
- Parameters:
- X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, it will be converted to dtype=np.float32 and, if a sparse matrix is provided, to a sparse csr_matrix.
- check_input : bool, default=True
Allows bypassing several input checks. Don't use this parameter unless you know what you're doing.
- Returns:
- proba : ndarray of shape (n_samples, n_classes)
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
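A sketch of reading the probability output, assuming the fitted clf and X from the usage example above:

```python
proba = clf.predict_proba(X)     # shape (n_samples, n_classes)
print(clf.classes_)              # column order of proba
positive_proba = proba[:, -1]    # probability of the last class in classes_ (the positive class for {0, 1} labels)
```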
- score(X, y, sample_weight=None)#
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Test samples.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- score : float
Mean accuracy of self.predict(X) w.r.t. y.
- set_fit_request(*, fn_cost='$UNCHANGED$', fp_cost='$UNCHANGED$', tn_cost='$UNCHANGED$', tp_cost='$UNCHANGED$')#
Request metadata passed to the
fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: this method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- fn_cost : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for fn_cost parameter in fit.
- fp_cost : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for fp_cost parameter in fit.
- tn_cost : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for tn_cost parameter in fit.
- tp_cost : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for tp_cost parameter in fit.
- Returns:
- self : object
The updated object.
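A sketch of routing per-sample costs through a Pipeline via set_fit_request; it assumes a scikit-learn version with metadata routing support (>= 1.4) and reuses the illustrative fn_cost and fp_cost arrays from the fit example above:

```python
import sklearn
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

sklearn.set_config(enable_metadata_routing=True)

# Request the cost metadata so that the pipeline forwards it to fit.
clf = CSTreeClassifier(max_depth=5).set_fit_request(fn_cost=True, fp_cost=True)
pipe = make_pipeline(StandardScaler(), clf)

# fn_cost and fp_cost are routed to CSTreeClassifier.fit.
pipe.fit(X, y, fn_cost=fn_cost, fp_cost=fp_cost)
```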
- set_params(**params)#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as
Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
- set_predict_proba_request(*, check_input='$UNCHANGED$')#
Request metadata passed to the
predict_proba method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to predict_proba if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to predict_proba.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: this method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- check_input : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for check_input parameter in predict_proba.
- Returns:
- self : object
The updated object.
- set_predict_request(*, check_input='$UNCHANGED$')#
Request metadata passed to the
predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to predict.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: this method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- check_input : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for check_input parameter in predict.
- Returns:
- self : object
The updated object.
- set_score_request(*, sample_weight='$UNCHANGED$')#
Request metadata passed to the
score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note: this method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
- Returns:
- self : object
The updated object.
- property tree_#
The underlying Tree object.
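A sketch of inspecting the fitted tree structure; since the estimator wraps a DecisionTreeClassifier, tree_ is assumed to follow scikit-learn's Tree interface (node_count, max_depth):

```python
tree = clf.tree_            # underlying sklearn Tree object (assumed interface)
print(tree.node_count)      # total number of nodes in the fitted tree
print(tree.max_depth)       # depth of the fitted tree
```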