Visualization

learning_curves

  ludwig.visualize.learning_curves(
    train_stats_per_model,
    field,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show how model measures change over training and validation data epochs.

For each model and for each output feature and measure of the model, it produces a line plot showing how that measure changed over the course of the epochs of training on the training and validation sets.

:param train_stats_per_model: List containing train statistics per model.
:param field: Prediction field containing ground truth.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
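A minimal usage sketch follows; the statistics file path, the output feature name ('class'), and the model name are illustrative placeholders, not part of the API:

  import json
  import ludwig.visualize

  # Load the training statistics saved by a training run
  # (the path below is hypothetical - use your own results directory).
  with open('results/experiment_run/training_statistics.json') as f:
      train_stats = json.load(f)

  ludwig.visualize.learning_curves(
      [train_stats],              # one entry per model
      'class',                    # hypothetical output feature name
      model_names=['my_model'],
      output_directory='./viz',
      file_format='png'
  )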


compare_performance

  ludwig.visualize.compare_performance(
    test_stats_per_model,
    field,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Produces a model comparison bar plot visualization for each overall metric.

For each model (in the aligned lists of test_statistics and model_names) it produces bars in a bar plot, one for each overall metric available in the test_statistics file for the specified field.

:param test_stats_per_model: List containing test statistics per model.
:param field: Prediction field containing ground truth.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
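A hedged sketch comparing two trained models; the test-statistics paths, the field name, and the model names are placeholders:

  import json
  import ludwig.visualize

  # Load the test statistics of two models (hypothetical paths).
  test_stats = []
  for path in ['results/model_a/test_statistics.json',
               'results/model_b/test_statistics.json']:
      with open(path) as f:
          test_stats.append(json.load(f))

  ludwig.visualize.compare_performance(
      test_stats,
      'class',                                # hypothetical output field
      model_names=['model_a', 'model_b'],
      output_directory='./viz'
  )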


compare_classifiers_performance_from_prob

  ludwig.visualize.compare_classifiers_performance_from_prob(
    probabilities_per_model,
    ground_truth,
    top_n_classes,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Produces a model comparison bar plot visualization from probabilities.

For each model it produces bars in a bar plot, one for each overall metric computed on the fly from the probabilities of predictions for the specified field.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param top_n_classes: List containing the number of classes to plot.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
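As a sketch, assuming the prediction probabilities and the numerically encoded ground truth have already been saved as NumPy arrays (the .npy file names below are hypothetical):

  import numpy as np
  import ludwig.visualize

  probs_a = np.load('probs_model_a.npy')        # hypothetical files
  probs_b = np.load('probs_model_b.npy')
  ground_truth = np.load('ground_truth.npy')

  ludwig.visualize.compare_classifiers_performance_from_prob(
      [probs_a, probs_b],
      ground_truth,
      top_n_classes=[5],        # plot metrics for the 5 most frequent classes
      labels_limit=0,
      model_names=['model_a', 'model_b'],
      output_directory='./viz'
  )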


compare_classifiers_performance_from_pred

  ludwig.visualize.compare_classifiers_performance_from_pred(
    predictions_per_model,
    ground_truth,
    metadata,
    field,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Produces a model comparison bar plot visualization from predictions.

For each model it produces bars in a bar plot, one for each overall metric computed on the fly from the predictions for the specified field.

:param predictions_per_model: List containing the model predictions for the specified field.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param metadata: Model's input metadata.
:param field: Field containing ground truth.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
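A similar sketch using hard predictions plus the training metadata; all file names and the field name are placeholders:

  import json
  import numpy as np
  import ludwig.visualize

  preds_a = np.load('predictions_model_a.npy')   # hypothetical files
  preds_b = np.load('predictions_model_b.npy')
  ground_truth = np.load('ground_truth.npy')
  with open('train_set_metadata.json') as f:     # hypothetical metadata file
      metadata = json.load(f)

  ludwig.visualize.compare_classifiers_performance_from_pred(
      [preds_a, preds_b],
      ground_truth,
      metadata,
      'class',                  # hypothetical field name
      labels_limit=0,
      model_names=['model_a', 'model_b']
  )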


compare_classifiers_performance_subset

  ludwig.visualize.compare_classifiers_performance_subset(
    probabilities_per_model,
    ground_truth,
    top_n_classes,
    labels_limit,
    subset,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Produces a model comparison bar plot visualization from a subset of the training set.

For each model it produces bars in a bar plot, one for each overall metric computed on the fly from the probabilities of predictions for the specified field, considering only a subset of the full training set. The way the subset is obtained is using the top_n_classes and subset parameters.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param top_n_classes: List containing the number of classes to plot.
:param labels_limit: Maximum numbers of labels.
:param subset: Type of the subset filtering.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
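A sketch restricting the comparison to the most frequent classes; the 'ground_truth' value for subset is assumed to behave as described for confidence_thresholding_data_vs_acc_subset further below, and the file names are placeholders:

  import numpy as np
  import ludwig.visualize

  probs = np.load('probs_model_a.npy')          # hypothetical files
  ground_truth = np.load('ground_truth.npy')

  # Keep only datapoints whose ground truth class is among the 10 most frequent.
  ludwig.visualize.compare_classifiers_performance_subset(
      [probs],
      ground_truth,
      top_n_classes=[10],
      labels_limit=0,
      subset='ground_truth',
      model_names=['model_a']
  )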


compare_classifiers_performance_changing_k

  ludwig.visualize.compare_classifiers_performance_changing_k(
    probabilities_per_model,
    ground_truth,
    top_k,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Produce a line plot that shows the Hits@K measure while k goes from 1 to top_k.

For each model it produces a line plot that shows the Hits@K measure (that counts a prediction as correct if the model produces it among the first k) while changing k from 1 to top_k for the specified field.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param top_k: Number of elements in the ranklist to consider.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
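A brief sketch (placeholder file names) plotting Hits@K for k from 1 to 10:

  import numpy as np
  import ludwig.visualize

  probs = np.load('probs_model_a.npy')          # hypothetical files
  ground_truth = np.load('ground_truth.npy')

  ludwig.visualize.compare_classifiers_performance_changing_k(
      [probs],
      ground_truth,
      top_k=10,
      labels_limit=0,
      model_names=['model_a'],
      output_directory='./viz'
  )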


compare_classifiers_multiclass_multimetric

  ludwig.visualize.compare_classifiers_multiclass_multimetric(
    test_stats_per_model,
    metadata,
    field,
    top_n_classes,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show the precision, recall and F1 of the model for the specified field.

For each model it produces four plots that show the precision, recall and F1 of the model on several classes for the specified field.

:param test_stats_per_model: List containing test statistics per model.
:param metadata: Model's input metadata.
:param field: Prediction field containing ground truth.
:param top_n_classes: List containing the number of classes to plot.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
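A sketch of a single-model call; the statistics and metadata paths and the field name are placeholders:

  import json
  import ludwig.visualize

  with open('test_statistics.json') as f:        # hypothetical paths
      test_stats = json.load(f)
  with open('train_set_metadata.json') as f:
      metadata = json.load(f)

  ludwig.visualize.compare_classifiers_multiclass_multimetric(
      [test_stats],
      metadata,
      'class',                  # hypothetical output field
      top_n_classes=[10],
      model_names=['model_a']
  )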


compare_classifiers_predictions

  ludwig.visualize.compare_classifiers_predictions(
    predictions_per_model,
    ground_truth,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show a comparison of two models' predictions for the specified field.

:param predictions_per_model: List containing the model predictions.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None


confidence_thresholding_2thresholds_2d

  ludwig.visualize.confidence_thresholding_2thresholds_2d(
    probabilities_per_model,
    ground_truths,
    threshold_fields,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show confidence threshold data vs. accuracy for two field thresholds.

The first plot shows several semi-transparent lines. They summarize the 3d surfaces displayed by confidence_thresholding_2thresholds_3d that have thresholds on the confidence of the predictions of the two threshold_fields as x and y axes and either the data coverage percentage or the accuracy as z axis. Each line represents a slice of the data coverage surface projected onto the accuracy surface.

:param probabilities_per_model: List of model probabilities.
:param ground_truths: List of NumPy Arrays containing computed model ground truth data for target prediction fields based on the model metadata.
:param threshold_fields: List of fields for 2d threshold.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: Name of the model to use as label.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
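A sketch for a single model with two output fields, assuming the probabilities list holds the two fields' prediction probabilities in the same order as threshold_fields (all file and field names are hypothetical):

  import numpy as np
  import ludwig.visualize

  probs_field1 = np.load('probs_field1.npy')           # hypothetical files
  probs_field2 = np.load('probs_field2.npy')
  gt_field1 = np.load('ground_truth_field1.npy')
  gt_field2 = np.load('ground_truth_field2.npy')

  ludwig.visualize.confidence_thresholding_2thresholds_2d(
      [probs_field1, probs_field2],
      [gt_field1, gt_field2],
      threshold_fields=['field1', 'field2'],    # hypothetical field names
      labels_limit=0,
      model_names=['my_model']
  )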


confidence_thresholding_2thresholds_3d

  ludwig.visualize.confidence_thresholding_2thresholds_3d(
    probabilities_per_model,
    ground_truths,
    threshold_fields,
    labels_limit,
    output_directory=None,
    file_format='pdf'
  )

Show 3d confidence threshold data vs. accuracy for two field thresholds.

The plot shows the 3d surfaces displayed by confidence_thresholding_2thresholds_3d that have thresholds on the confidence of the predictions of the two threshold_fields as x and y axes and either the data coverage percentage or the accuracy as z axis.

:param probabilities_per_model: List of model probabilities.
:param ground_truths: List of NumPy Arrays containing computed model ground truth data for target prediction fields based on the model metadata.
:param threshold_fields: List of fields for 2d threshold.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None


confidence_thresholding

  ludwig.visualize.confidence_thresholding(
    probabilities_per_model,
    ground_truth,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show model accuracy and data coverage while increasing the threshold.

For each model it produces a pair of lines indicating the accuracy of the model and the data coverage while increasing a threshold (x axis) on the probabilities of predictions for the specified field.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
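A sketch comparing two models' accuracy/coverage trade-off (placeholder file names):

  import numpy as np
  import ludwig.visualize

  probs_a = np.load('probs_model_a.npy')        # hypothetical files
  probs_b = np.load('probs_model_b.npy')
  ground_truth = np.load('ground_truth.npy')

  ludwig.visualize.confidence_thresholding(
      [probs_a, probs_b],
      ground_truth,
      labels_limit=0,
      model_names=['model_a', 'model_b'],
      output_directory='./viz'
  )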


confidence_thresholding_data_vs_acc

  ludwig.visualize.confidence_thresholding_data_vs_acc(
    probabilities_per_model,
    ground_truth,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show a model comparison of confidence threshold data vs. accuracy.

For each model it produces a line indicating the accuracy of the model and the data coverage while increasing a threshold on the probabilities of predictions for the specified field. The difference with confidence_thresholding is that it uses two axes instead of three, not visualizing the threshold and having coverage as x axis instead of the threshold.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None


confidence_thresholding_data_vs_acc_subset

  ludwig.visualize.confidence_thresholding_data_vs_acc_subset(
    probabilities_per_model,
    ground_truth,
    top_n_classes,
    labels_limit,
    subset,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show a model comparison of confidence threshold data vs. accuracy on a subset of the data.

For each model it produces a line indicating the accuracy of the model and the data coverage while increasing a threshold on the probabilities of predictions for the specified field, considering only a subset of the full training set. The way the subset is obtained is using the top_n_classes and subset parameters. The difference with confidence_thresholding is that it uses two axes instead of three, not visualizing the threshold and having coverage as x axis instead of the threshold.

If the value of subset is ground_truth, then only datapoints where the ground truth class is within the top n most frequent ones will be considered as test set, and the percentage of datapoints that have been kept from the original set will be displayed. If the value of subset is predictions, then only datapoints where the model predicts a class that is within the top n most frequent ones will be considered as test set, and the percentage of datapoints that have been kept from the original set will be displayed for each model.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param top_n_classes: List containing the number of classes to plot.
:param labels_limit: Maximum numbers of labels.
:param subset: Type of the subset filtering.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None


binary_threshold_vs_metric

  ludwig.visualize.binary_threshold_vs_metric(
    probabilities_per_model,
    ground_truth,
    metrics,
    positive_label=1,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show the confidence of the model against a metric for the specified field.

For each metric specified in metrics (options are f1, precision, recall, accuracy), this visualization produces a line chart plotting a threshold on the confidence of the model against the metric for the specified field. If field is a category feature, positive_label indicates which class is to be considered the positive class and all the others will be considered negative. It needs to be an integer; to figure out the association between classes and integers check the ground_truth_metadata JSON file.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: List of NumPy Arrays containing computed model ground truth data for target prediction fields based on the model metadata.
:param metrics: Metrics to display (f1, precision, recall, accuracy).
:param positive_label: Label of the positive class.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
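A sketch plotting f1 and accuracy against the confidence threshold, treating the class encoded as 1 as the positive one (file names are placeholders):

  import numpy as np
  import ludwig.visualize

  probs = np.load('probs_model_a.npy')          # hypothetical files
  ground_truth = np.load('ground_truth.npy')

  ludwig.visualize.binary_threshold_vs_metric(
      [probs],
      ground_truth,
      metrics=['f1', 'accuracy'],
      positive_label=1,
      model_names=['model_a']
  )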


roc_curves

  ludwig.visualize.roc_curves(
    probabilities_per_model,
    ground_truth,
    positive_label=1,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show the ROC curves for the specified models' output field.

This visualization produces a line chart plotting the ROC curves for the specified field. If field is a category feature, positive_label indicates which class is to be considered the positive class and all the others will be considered negative. It needs to be an integer; to figure out the association between classes and integers check the ground_truth_metadata JSON file.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: List of NumPy Arrays containing computed model ground truth data for target prediction fields based on the model metadata.
:param positive_label: Label of the positive class.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
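A sketch of a two-model call (placeholder file names):

  import numpy as np
  import ludwig.visualize

  probs_a = np.load('probs_model_a.npy')        # hypothetical files
  probs_b = np.load('probs_model_b.npy')
  ground_truth = np.load('ground_truth.npy')

  ludwig.visualize.roc_curves(
      [probs_a, probs_b],
      ground_truth,
      positive_label=1,
      model_names=['model_a', 'model_b'],
      output_directory='./viz',
      file_format='png'
  )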


roc_curves_from_test_statistics

  ludwig.visualize.roc_curves_from_test_statistics(
    test_stats_per_model,
    field,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show the ROC curves for the specified models' output binary field.

This visualization uses the field, test_statistics and model_names parameters. field needs to be a binary feature. This visualization produces a line chart plotting the ROC curves for the specified field.

:param test_stats_per_model: List containing test statistics per model.
:param field: Prediction field containing ground truth.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None


calibration_1_vs_all

  ludwig.visualize.calibration_1_vs_all(
    probabilities_per_model,
    ground_truth,
    top_n_classes,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show models' probabilities of predictions for the specified field.

For each class, or for each of the k most frequent classes if top_k is specified, it produces two plots computed on the fly from the probabilities of predictions for the specified field.

The first plot is a calibration curve that shows the calibration of the predictions considering the current class to be the true one and all others to be a false one, drawing one line for each model (in the aligned lists of probabilities and model_names).

The second plot shows the distributions of the predictions considering the current class to be the true one and all others to be a false one, drawing the distribution for each model (in the aligned lists of probabilities and model_names).

:param probabilities_per_model: List of model probabilities.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param top_n_classes: List containing the number of classes to plot.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
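A sketch producing calibration plots for the 5 most frequent classes of a single model (placeholder file names):

  import numpy as np
  import ludwig.visualize

  probs = np.load('probs_model_a.npy')          # hypothetical files
  ground_truth = np.load('ground_truth.npy')

  ludwig.visualize.calibration_1_vs_all(
      [probs],
      ground_truth,
      top_n_classes=[5],
      labels_limit=0,
      model_names=['model_a']
  )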


calibration_multiclass

  ludwig.visualize.calibration_multiclass(
    probabilities_per_model,
    ground_truth,
    labels_limit,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show models' probabilities of predictions for each class of the specified field.

:param probabilities_per_model: List of model probabilities.
:param ground_truth: NumPy Array containing computed model ground truth data for target prediction field based on the model metadata.
:param labels_limit: Maximum numbers of labels. If labels in the dataset are higher than this number, they are replaced by a single "rare" label.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None


confusion_matrix

  ludwig.visualize.confusion_matrix(
    test_stats_per_model,
    metadata,
    field,
    top_n_classes,
    normalize,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show the confusion matrix of the models' predictions for each field.

For each model (in the aligned lists of test_statistics and model_names) it produces a heatmap of the confusion matrix in the predictions for each field that has a confusion matrix in test_statistics. The value of top_n_classes limits the heatmap to the n most frequent classes.

:param test_stats_per_model: List containing test statistics per model.
:param metadata: Model's input metadata.
:param field: Prediction field containing ground truth.
:param top_n_classes: List containing the number of classes to plot.
:param normalize: Flag to normalize rows in the confusion matrix.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
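A sketch plotting a normalized confusion matrix limited to the 10 most frequent classes (paths and field name are placeholders):

  import json
  import ludwig.visualize

  with open('test_statistics.json') as f:        # hypothetical paths
      test_stats = json.load(f)
  with open('train_set_metadata.json') as f:
      metadata = json.load(f)

  ludwig.visualize.confusion_matrix(
      [test_stats],
      metadata,
      'class',                  # hypothetical output field
      top_n_classes=[10],
      normalize=True,
      model_names=['model_a'],
      output_directory='./viz'
  )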


frequency_vs_f1

  ludwig.visualize.frequency_vs_f1(
    test_stats_per_model,
    metadata,
    field,
    top_n_classes,
    model_names=None,
    output_directory=None,
    file_format='pdf'
  )

Show prediction statistics for the specified field for each model.

For each model (in the aligned lists of test_statistics and model_names), it produces two plots of statistics of predictions for the specified field.

The first plot is a line plot with one x axis representing the different classes and two vertical axes colored orange and blue respectively. The orange one is the frequency of the class, and an orange line is plotted to show the trend. The blue one is the F1 score for that class, and a blue line is plotted to show the trend. The classes on the x axis are sorted by F1 score. The second plot has the same structure as the first one, but the axes are flipped and the classes on the x axis are sorted by frequency.

:param test_stats_per_model: List containing test statistics per model.
:param metadata: Model's input metadata.
:param field: Prediction field containing ground truth.
:param top_n_classes: List containing the number of classes to plot.
:param model_names: List of the names of the models to use as labels.
:param output_directory: Directory where to save plots. If not specified, plots will be displayed in a window.
:param file_format: File format of output plots - pdf or png.
:return: None
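A sketch of a single-model call (paths and field name are placeholders):

  import json
  import ludwig.visualize

  with open('test_statistics.json') as f:        # hypothetical paths
      test_stats = json.load(f)
  with open('train_set_metadata.json') as f:
      metadata = json.load(f)

  ludwig.visualize.frequency_vs_f1(
      [test_stats],
      metadata,
      'class',                  # hypothetical output field
      top_n_classes=[10],
      model_names=['model_a']
  )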