Some metrics need additional parameters to work the way we expect, for example the averaging scheme for a multi-class F1 score. Such parameters can be passed in through `metrics_params`, a list of dictionaries holding the parameters for the declared metrics, in the same order. For classification metrics, PyTorch Tabular also needs to know whether a metric expects the predicted class (like Accuracy) or the predicted probability (like ROC AUC). This is specified through `metrics_prob_input`, a list of `True`/`False` values, one for each custom metric you define. `metrics`, `metrics_params`, and `metrics_prob_input` must all have the same length.
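A minimal sketch of how the three lists line up, one entry per metric. The specific metric names, parameter values, and probability flags below are illustrative assumptions, not prescribed values; check the metric's own documentation for the parameters it accepts and the input it expects.

```python
# One entry per metric, all three lists aligned by position.
metrics = ["f1_score", "accuracy"]

# f1_score needs an averaging scheme for multi-class targets (assumed
# "macro" here); accuracy takes no extra parameters, so an empty dict.
metrics_params = [{"average": "macro"}, {}]

# Whether each metric consumes predicted probabilities (True) or
# predicted classes (False) -- values here are illustrative.
metrics_prob_input = [True, False]

# The lengths must match, or the config is inconsistent.
assert len(metrics) == len(metrics_params) == len(metrics_prob_input)
```

These lists would then be passed to the model config alongside the rest of your model settings.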