TpFpFn
- class TpFpFn[source]
Abstract class that calculates the TP, FP, and FN counts and runs an evaluating function on the result.
- Parameters
- iou_thres: float, default: 0.5
IoU below this threshold will be ignored.
- confidence_thres: float, default: 0.5
Confidence below this threshold will be ignored.
- evaluating_function: Union[Callable, str], default: 'recall'
Will be run on each class result, i.e. func(tp, fp, fn).
- averaging_method: str, default: 'per_class'
Determines which averaging method to apply. Possible values are: 'per_class': returns a numpy array with the score for each class (sorted by class name). 'binary': returns the score for the positive class; should be used only in binary classification cases. 'micro': returns the micro-averaged score. 'macro': returns the mean of the per-class scores. 'weighted': returns a weighted mean of the per-class scores, based on the class sizes in y_true.
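To make the evaluating_function and averaging_method behavior concrete, here is a minimal sketch. The func(tp, fp, fn) signature follows the docstring above; the class names, counts, and the recall implementation are illustrative assumptions, not taken from the library source.

```python
# Hypothetical per-class (tp, fp, fn) counts -- illustrative only.
counts = {"cat": (8, 2, 2), "dog": (1, 0, 4)}

def recall(tp: int, fp: int, fn: int) -> float:
    """A custom evaluating function: func(tp, fp, fn) -> score."""
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

# 'per_class': one score per class, sorted by class name.
per_class = [recall(*counts[name]) for name in sorted(counts)]  # [0.8, 0.2]

# 'macro': unweighted mean of the per-class scores.
macro = sum(per_class) / len(per_class)  # 0.5

# 'micro': sum the raw counts over all classes, then score once.
tp, fp, fn = (sum(c[i] for c in counts.values()) for i in range(3))
micro = recall(tp, fp, fn)  # 9 / 15 = 0.6
```

Note that micro-averaging weights each ground-truth object equally, so the large "cat" class dominates, while macro-averaging weights each class equally.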
- __init__(*args, iou_thres: float = 0.5, confidence_thres: float = 0.5, evaluating_function: Union[Callable, str] = 'recall', averaging_method='per_class', **kwargs)[source]
- __new__(*args, **kwargs)
Attributes

Methods
- Attaches the current metric to the provided engine.
- Gets a single result from group_class_detection_label and returns a matrix of IoUs.
- Helper method to compute the metric's value and put it into the engine.
- Computes the metric value.
- Detaches the current metric from the engine; no metric computation is done during the run.
- Gets the detections object of a single image and returns the confidence for each detection.
- Gets the detections object of a single image and returns the area for each detection.
- Gets the labels object of a single image and returns the area for each label.
- Groups detections and labels in a dict of the format {class_id: {'detected': [...], 'ground_truth': [...]}}.
- Checks whether the current metric is attached to the provided engine.
- Helper method to update the metric's computation.
- Resets the metric state.
- Helper method to start data gathering for the metric's computation.
- Updates the metric with a batch of samples.
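The reset/update/compute methods above follow the standard metric lifecycle: reset the state, accumulate per-batch counts with update, then produce a final score with compute. A schematic sketch using a stand-in class (TpFpFn itself is abstract, and the engine attachment and detection-matching logic are omitted here):

```python
class DummyMetric:
    """Stand-in illustrating the reset/update/compute lifecycle."""

    def reset(self):
        # Clear accumulated state before a new run.
        self._tp = self._fp = self._fn = 0

    def update(self, batch):
        # Accumulate per-batch (tp, fp, fn) counts.
        tp, fp, fn = batch
        self._tp += tp
        self._fp += fp
        self._fn += fn

    def compute(self):
        # Apply the evaluating function (recall here) to the totals.
        total = self._tp + self._fn
        return self._tp / total if total > 0 else 0.0

metric = DummyMetric()
metric.reset()
for batch in [(5, 1, 2), (4, 1, 3)]:  # hypothetical per-batch counts
    metric.update(batch)
result = metric.compute()  # 9 / 14
```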