Metrics


Definition of the metrics that can be used in training models

Core metric

This is where the function that converts scikit-learn metrics to fastai metrics is defined. You should skip this section unless you want to know all about the internals of fastai.

flatten_check[source]

flatten_check(inp, targ)

Check that inp and targ have the same number of elements and flatten them.

x1,x2 = torch.randn(5,4),torch.randn(20)
x1,x2 = flatten_check(x1,x2)
test_eq(x1.shape, [20])
test_eq(x2.shape, [20])
x1,x2 = torch.randn(5,4),torch.randn(21)
test_fail(lambda: flatten_check(x1,x2))

class AccumMetric[source]

AccumMetric(func, dim_argmax=None, activation='no', thresh=None, to_np=False, invert_arg=False, flatten=True, **kwargs) :: Metric

Stores predictions and targets on CPU in accumulate to perform final calculations with func.

func is only applied to the accumulated predictions/targets when the value attribute is accessed (so at the end of a validation/training phase, when used with Learner and its Recorder). The signature of func should be inp,targ (where inp are the predictions of the model and targ the corresponding labels).

For single-label classification problems, predictions need to be transformed with a softmax and then an argmax before being compared to the targets. Since a softmax doesn’t change the order of the numbers, we can just apply the argmax. Pass along dim_argmax to have this done by AccumMetric (usually -1 will work pretty well). If you need to pass the probabilities rather than the predictions to your metric, use activation=ActivationType.Softmax.

For multi-label classification problems, or if your targets are one-hot encoded, predictions may need to pass through a sigmoid (if it wasn’t included in your model) and then be compared to a given threshold (to decide between 0 and 1). This is done by AccumMetric if you pass activation=ActivationType.Sigmoid and/or a value for thresh.

If you want to use a metric function from sklearn.metrics, you will need to convert predictions and labels to numpy arrays with to_np=True. Also, scikit-learn metrics adopt the convention y_true, y_preds, which is the opposite of fastai's, so you will need to pass invert_arg=True to make AccumMetric do the inversion for you.
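For example, an sklearn function could be wrapped directly this way (skm_to_fastai, defined below, does this for you); a minimal sketch, where the commented Learner line is hypothetical and assumes dls and model exist:

import sklearn.metrics as skm
from fastai.metrics import AccumMetric

# Hedged sketch: dim_argmax=-1 turns per-class scores into predicted class indices,
# to_np converts the accumulated tensors to numpy arrays, and invert_arg swaps the
# arguments to sklearn's (y_true, y_pred) order.
f1 = AccumMetric(skm.f1_score, dim_argmax=-1, to_np=True, invert_arg=True)
# learn = Learner(dls, model, metrics=f1)   # `dls` and `model` are assumed to exist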

@delegates()
class TstLearner(Learner):
    def __init__(self,dls=None,model=None,**kwargs): self.pred,self.xb,self.yb = None,None,None
def _l2_mean(x,y): return torch.sqrt((x.float()-y.float()).pow(2).mean())

#Go through a fake cycle with various batch sizes and compute the value of met
def compute_val(met, x1, x2):
    met.reset()
    vals = [0,6,15,20]
    learn = TstLearner()
    for i in range(3):
        learn.pred,learn.yb = x1[vals[i]:vals[i+1]],(x2[vals[i]:vals[i+1]],)
        met.accumulate(learn)
    return met.value
x1,x2 = torch.randn(20,5),torch.randn(20,5)
tst = AccumMetric(_l2_mean)
test_close(compute_val(tst, x1, x2), _l2_mean(x1, x2))
test_eq(torch.cat(tst.preds), x1.view(-1))
test_eq(torch.cat(tst.targs), x2.view(-1))

#test argmax
x1,x2 = torch.randn(20,5),torch.randint(0, 5, (20,))
tst = AccumMetric(_l2_mean, dim_argmax=-1)
test_close(compute_val(tst, x1, x2), _l2_mean(x1.argmax(dim=-1), x2))

#test thresh
x1,x2 = torch.randn(20,5),torch.randint(0, 2, (20,5)).bool()
tst = AccumMetric(_l2_mean, thresh=0.5)
test_close(compute_val(tst, x1, x2), _l2_mean((x1 >= 0.5), x2))

#test sigmoid
x1,x2 = torch.randn(20,5),torch.randn(20,5)
tst = AccumMetric(_l2_mean, activation=ActivationType.Sigmoid)
test_close(compute_val(tst, x1, x2), _l2_mean(torch.sigmoid(x1), x2))

#test to_np
x1,x2 = torch.randn(20,5),torch.randn(20,5)
tst = AccumMetric(lambda x,y: isinstance(x, np.ndarray) and isinstance(y, np.ndarray), to_np=True)
assert compute_val(tst, x1, x2)

#test invert_arg
x1,x2 = torch.randn(20,5),torch.randn(20,5)
tst = AccumMetric(lambda x,y: torch.sqrt(x.pow(2).mean()))
test_close(compute_val(tst, x1, x2), torch.sqrt(x1.pow(2).mean()))
tst = AccumMetric(lambda x,y: torch.sqrt(x.pow(2).mean()), invert_arg=True)
test_close(compute_val(tst, x1, x2), torch.sqrt(x2.pow(2).mean()))

skm_to_fastai[source]

skm_to_fastai(func, is_class=True, thresh=None, axis=-1, activation=None, **kwargs)

Convert func from sklearn.metrics to a fastai metric

This is the quickest way to use a scikit-learn metric in a fastai training loop. is_class indicates if you are in a classification problem or not. In this case:

  • leaving thresh as None indicates it’s a single-label classification problem, and predictions will pass through an argmax over axis before being compared to the targets
  • setting a value for thresh indicates it’s a multi-label classification problem, and predictions will pass through a sigmoid (which can be deactivated with activation=ActivationType.No) and be compared to thresh before being compared to the targets

If is_class=False, it indicates you are in a regression problem, and predictions are compared to the targets without being modified. In all cases, kwargs are extra keyword arguments passed to func.

tst_single = skm_to_fastai(skm.precision_score)
x1,x2 = torch.randn(20,2),torch.randint(0, 2, (20,))
test_close(compute_val(tst_single, x1, x2), skm.precision_score(x2, x1.argmax(dim=-1)))

tst_multi = skm_to_fastai(skm.precision_score, thresh=0.2)
x1,x2 = torch.randn(20),torch.randint(0, 2, (20,))
test_close(compute_val(tst_multi, x1, x2), skm.precision_score(x2, torch.sigmoid(x1) >= 0.2))

tst_multi = skm_to_fastai(skm.precision_score, thresh=0.2, activation=ActivationType.No)
x1,x2 = torch.randn(20),torch.randint(0, 2, (20,))
test_close(compute_val(tst_multi, x1, x2), skm.precision_score(x2, x1 >= 0.2))

tst_reg = skm_to_fastai(skm.r2_score, is_class=False)
x1,x2 = torch.randn(20,5),torch.randn(20,5)
test_close(compute_val(tst_reg, x1, x2), skm.r2_score(x2.view(-1), x1.view(-1)))

test_close(tst_reg(x1, x2), skm.r2_score(x2.view(-1), x1.view(-1)))
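In practice, the converted metric is passed to a Learner like any other fastai metric; a minimal sketch, where dls is a hypothetical binary image-classification DataLoaders:

import sklearn.metrics as skm
from fastai.vision.all import *

# Hypothetical usage: `dls` is assumed to be a binary image-classification DataLoaders.
f1 = skm_to_fastai(skm.f1_score)
learn = cnn_learner(dls, resnet18, metrics=[accuracy, f1])
# learn.fine_tune(1)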

optim_metric[source]

optim_metric(f, argname, bounds, tol=0.01, do_neg=True, get_x=False)

Replace metric f with a version that optimizes argument argname

Single-label classification

Warning: All functions defined in this section are intended for single-label classification and targets that are not one-hot encoded. For multi-label problems or one-hot encoded targets, use the version suffixed with multi.

Warning: Many metrics in fastai are thin wrappers around sklearn functionality. However, sklearn metrics can handle Python lists of strings, amongst other things, whereas fastai metrics work with PyTorch and thus require tensors. The arguments passed to metrics are after all transformations, such as categories being converted to indices, have occurred. This means that when you pass a label to a metric, for instance, you must pass indices, not strings. Strings can be converted to indices with vocab.map_obj.
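For example, string labels must be mapped to integer indices before being passed to a fastai metric; a minimal sketch with a hypothetical vocab:

import torch
from fastai.metrics import accuracy

vocab = ['cat', 'dog', 'fish']                           # hypothetical class vocabulary
labels = ['dog', 'cat', 'fish']                          # raw string labels
targ = torch.tensor([vocab.index(o) for o in labels])    # -> tensor([1, 0, 2])
preds = torch.randn(3, len(vocab))                       # one score per class
accuracy(preds, targ)                                    # works with indices; raw strings would fail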

accuracy[source]

accuracy(inp, targ, axis=-1)

Compute accuracy with targ when pred is bs * n_classes

def change_targ(targ, n, c):
    idx = torch.randperm(len(targ))[:n]
    res = targ.clone()
    for i in idx: res[i] = (res[i]+random.randint(1,c-1))%c
    return res

x = torch.randn(4,5)
y = x.argmax(dim=1)
test_eq(accuracy(x,y), 1)
y1 = change_targ(y, 2, 5)
test_eq(accuracy(x,y1), 0.5)
test_eq(accuracy(x.unsqueeze(1).expand(4,2,5), torch.stack([y,y1], dim=1)), 0.75)

error_rate[source]

error_rate(inp, targ, axis=-1)

1 - accuracy

x = torch.randn(4,5)
y = x.argmax(dim=1)
test_eq(error_rate(x,y), 0)
y1 = change_targ(y, 2, 5)
test_eq(error_rate(x,y1), 0.5)
test_eq(error_rate(x.unsqueeze(1).expand(4,2,5), torch.stack([y,y1], dim=1)), 0.25)

top_k_accuracy[source]

top_k_accuracy(inp, targ, k=5, axis=-1)

Computes the Top-k accuracy (targ is in the top k predictions of inp)

x = torch.randn(6,5)
y = torch.arange(0,6)
test_eq(top_k_accuracy(x[:5],y[:5]), 1)
test_eq(top_k_accuracy(x, y), 5/6)

APScoreBinary[source]

APScoreBinary(axis=-1, average='macro', pos_label=1, sample_weight=None)

Average Precision for single-label binary classification problems

See the scikit-learn documentation for more details.

BalancedAccuracy[source]

BalancedAccuracy(axis=-1, sample_weight=None, adjusted=False)

Balanced Accuracy for single-label binary classification problems

See the scikit-learn documentation for more details.

BrierScore[source]

BrierScore(axis=-1, sample_weight=None, pos_label=None)

Brier score for single-label classification problems

See the scikit-learn documentation for more details.

CohenKappa[source]

CohenKappa(axis=-1, labels=None, weights=None, sample_weight=None)

Cohen kappa for single-label classification problems

See the scikit-learn documentation for more details.

F1Score[source]

F1Score(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)

F1 score for single-label classification problems

See the scikit-learn documentation for more details.

FBeta[source]

FBeta(beta, axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)

FBeta score with beta for single-label classification problems

See the scikit-learn documentation for more details.

HammingLoss[source]

HammingLoss(axis=-1, sample_weight=None)

Hamming loss for single-label classification problems

See the scikit-learn documentation for more details.

Jaccard[source]

Jaccard(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)

Jaccard score for single-label classification problems

See the scikit-learn documentation for more details.

Precision[source]

Precision(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)

Precision for single-label classification problems

See the scikit-learn documentation for more details.

Recall[source]

Recall(axis=-1, labels=None, pos_label=1, average='binary', sample_weight=None)

Recall for single-label classification problems

See the scikit-learn documentation for more details.

RocAuc[source]

RocAuc(axis=-1, average='macro', sample_weight=None, max_fpr=None, multi_class='ovr')

Area Under the Receiver Operating Characteristic Curve for single-label multiclass classification problems

See the scikit-learn documentation for more details.

RocAucBinary[source]

RocAucBinary(axis=-1, average='macro', sample_weight=None, max_fpr=None, multi_class='raise')

Area Under the Receiver Operating Characteristic Curve for single-label binary classification problems

See the scikit-learn documentation for more details.

MatthewsCorrCoef[source]

MatthewsCorrCoef(sample_weight=None, **kwargs)

Matthews correlation coefficient for single-label classification problems

See the scikit-learn documentation for more details.

class Perplexity[source]

Perplexity() :: AvgLoss

Perplexity (exponential of cross-entropy loss) for Language Models

x1,x2 = torch.randn(20,5),torch.randint(0, 5, (20,))
tst = perplexity
tst.reset()
vals = [0,6,15,20]
learn = TstLearner()
for i in range(3):
    learn.yb = (x2[vals[i]:vals[i+1]],)
    learn.loss = F.cross_entropy(x1[vals[i]:vals[i+1]],x2[vals[i]:vals[i+1]])
    tst.accumulate(learn)
test_close(tst.value, torch.exp(F.cross_entropy(x1,x2)))
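In practice, Perplexity is typically passed as a metric to a language-model Learner; a minimal sketch, where dls is a hypothetical language-model DataLoaders:

from fastai.text.all import *

# Hypothetical usage: `dls` is assumed to be a language-model DataLoaders.
learn = language_model_learner(dls, AWD_LSTM, metrics=[accuracy, Perplexity()])
# learn.fit_one_cycle(1)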

Multi-label classification

accuracy_multi[source]

accuracy_multi(inp, targ, thresh=0.5, sigmoid=True)

Compute accuracy when inp and targ are the same size.

def change_1h_targ(targ, n):
    idx = torch.randperm(targ.numel())[:n]
    res = targ.clone().view(-1)
    for i in idx: res[i] = 1-res[i]
    return res.view(targ.shape)

x = torch.randn(4,5)
y = (torch.sigmoid(x) >= 0.5).byte()
test_eq(accuracy_multi(x,y), 1)
test_eq(accuracy_multi(x,1-y), 0)
y1 = change_1h_targ(y, 5)
test_eq(accuracy_multi(x,y1), 0.75)

#Different thresh
y = (torch.sigmoid(x) >= 0.2).byte()
test_eq(accuracy_multi(x,y, thresh=0.2), 1)
test_eq(accuracy_multi(x,1-y, thresh=0.2), 0)
y1 = change_1h_targ(y, 5)
test_eq(accuracy_multi(x,y1, thresh=0.2), 0.75)

#No sigmoid
y = (x >= 0.5).byte()
test_eq(accuracy_multi(x,y, sigmoid=False), 1)
test_eq(accuracy_multi(x,1-y, sigmoid=False), 0)
y1 = change_1h_targ(y, 5)
test_eq(accuracy_multi(x,y1, sigmoid=False), 0.75)

APScoreMulti[source]

APScoreMulti(sigmoid=True, average='macro', pos_label=1, sample_weight=None)

Average Precision for multi-label classification problems

See the scikit-learn documentation for more details.

BrierScoreMulti[source]

BrierScoreMulti(thresh=0.5, sigmoid=True, sample_weight=None, pos_label=None)

Brier score for multi-label classification problems

See the scikit-learn documentation for more details.

F1ScoreMulti[source]

F1ScoreMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)

F1 score for multi-label classification problems

See the scikit-learn documentation for more details.

FBetaMulti[source]

FBetaMulti(beta, thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)

FBeta score with beta for multi-label classification problems

See the scikit-learn documentation for more details.

HammingLossMulti[source]

HammingLossMulti(thresh=0.5, sigmoid=True, labels=None, sample_weight=None)

Hamming loss for multi-label classification problems

See the scikit-learn documentation for more details.

JaccardMulti[source]

JaccardMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)

Jaccard score for multi-label classification problems

See the scikit-learn documentation for more details.

MatthewsCorrCoefMulti[source]

MatthewsCorrCoefMulti(thresh=0.5, sigmoid=True, sample_weight=None)

Matthews correlation coefficient for multi-label classification problems

See the scikit-learn documentation for more details.

PrecisionMulti[source]

PrecisionMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)

Precision for multi-label classification problems

See the scikit-learn documentation for more details.

RecallMulti[source]

RecallMulti(thresh=0.5, sigmoid=True, labels=None, pos_label=1, average='macro', sample_weight=None)

Recall for multi-label classification problems

See the scikit-learn documentation for more details.

RocAucMulti[source]

RocAucMulti(sigmoid=True, average='macro', sample_weight=None, max_fpr=None)

Area Under the Receiver Operating Characteristic Curve for multi-label binary classification problems

roc_auc_metric = RocAucMulti(sigmoid=False)
x,y = torch.tensor([np.arange(start=0, stop=0.2, step=0.04)]*20), torch.tensor([0, 0, 1, 1]).repeat(5)
assert compute_val(roc_auc_metric, x, y) == 0.5

See the scikit-learn documentation for more details.

Regression

mse[source]

mse(inp, targ)

Mean squared error between inp and targ.

x1,x2 = torch.randn(4,5),torch.randn(4,5)
test_close(mse(x1,x2), (x1-x2).pow(2).mean())

rmse[source]

rmse(preds, targs)

Root mean squared error

x1,x2 = torch.randn(20,5),torch.randn(20,5)
test_eq(compute_val(rmse, x1, x2), torch.sqrt(F.mse_loss(x1,x2)))

mae[source]

mae(inp, targ)

Mean absolute error between inp and targ.

x1,x2 = torch.randn(4,5),torch.randn(4,5)
test_eq(mae(x1,x2), torch.abs(x1-x2).mean())

msle[source]

msle(inp, targ)

Mean squared logarithmic error between inp and targ.

x1,x2 = torch.randn(4,5),torch.randn(4,5)
x1,x2 = torch.relu(x1),torch.relu(x2)
test_close(msle(x1,x2), (torch.log(x1+1)-torch.log(x2+1)).pow(2).mean())

exp_rmspe[source]

exp_rmspe(preds, targs)

Root mean square percentage error of the exponential of predictions and targets

x1,x2 = torch.randn(20,5),torch.randn(20,5)
test_eq(compute_val(exp_rmspe, x1, x2), torch.sqrt((((torch.exp(x2) - torch.exp(x1))/torch.exp(x2))**2).mean()))

ExplainedVariance[source]

ExplainedVariance(sample_weight=None)

Explained variance between predictions and targets

See the scikit-learn documentation for more details.

R2Score[source]

R2Score(sample_weight=None)

R2 score between predictions and targets

See the scikit-learn documentation for more details.

PearsonCorrCoef[source]

PearsonCorrCoef(dim_argmax=None, activation='no', thresh=None, to_np=False, invert_arg=False, flatten=True)

Pearson correlation coefficient for regression problems

See the scipy documentation for more details.

x = torch.randint(-999, 999,(20,))
y = torch.randint(-999, 999,(20,))
test_eq(compute_val(PearsonCorrCoef(), x, y), scs.pearsonr(x.view(-1), y.view(-1))[0])

SpearmanCorrCoef[source]

SpearmanCorrCoef(dim_argmax=None, axis=0, nan_policy='propagate', activation='no', thresh=None, to_np=False, invert_arg=False, flatten=True)

Spearman correlation coefficient for regression problems

See the scipy documentation for more details.

x = torch.randint(-999, 999,(20,))
y = torch.randint(-999, 999,(20,))
test_eq(compute_val(SpearmanCorrCoef(), x, y), scs.spearmanr(x.view(-1), y.view(-1))[0])

Segmentation

foreground_acc[source]

foreground_acc(inp, targ, bkg_idx=0, axis=1)

Computes non-background accuracy for multiclass segmentation

x = torch.randn(4,5,3,3)
y = x.argmax(dim=1)[:,None]
test_eq(foreground_acc(x,y), 1)
y[0] = 0 #the 0s are ignored so we get the same value
test_eq(foreground_acc(x,y), 1)

class Dice[source]

Dice(axis=1) :: Metric

Dice coefficient metric for binary target in segmentation

x1 = torch.randn(20,2,3,3)
x2 = torch.randint(0, 2, (20, 3, 3))
pred = x1.argmax(1)
inter = (pred*x2).float().sum().item()
union = (pred+x2).float().sum().item()
test_eq(compute_val(Dice(), x1, x2), 2*inter/union)

class DiceMulti[source]

DiceMulti(axis=1) :: Metric

Averaged Dice metric (Macro F1) for multiclass target in segmentation

The DiceMulti method implements the “Averaged F1: arithmetic mean over harmonic means” described in this publication: https://arxiv.org/pdf/1911.03347.pdf

x1a = torch.ones(20,1,1,1)
x1b = torch.clone(x1a)*0.5
x1c = torch.clone(x1a)*0.3
x1 = torch.cat((x1a,x1b,x1c),dim=1)  # Prediction: 20xClass0
x2 = torch.zeros(20,1,1)             # Target: 20xClass0
test_eq(compute_val(DiceMulti(), x1, x2), 1.)

x2 = torch.ones(20,1,1)              # Target: 20xClass1
test_eq(compute_val(DiceMulti(), x1, x2), 0.)

x2a = torch.zeros(10,1,1)
x2b = torch.ones(5,1,1)
x2c = torch.ones(5,1,1) * 2
x2 = torch.cat((x2a,x2b,x2c),dim=0)  # Target: 10xClass0, 5xClass1, 5xClass2
dice1 = (2*10)/(2*10+10)             # Dice: 2*TP/(2*TP+FP+FN)
dice2 = 0
dice3 = 0
test_eq(compute_val(DiceMulti(), x1, x2), (dice1+dice2+dice3)/3)

class JaccardCoeff[source]

JaccardCoeff(axis=1) :: Dice

Implementation of the Jaccard coefficient that is lighter in RAM

x1 = torch.randn(20,2,3,3)
x2 = torch.randint(0, 2, (20, 3, 3))
pred = x1.argmax(1)
inter = (pred*x2).float().sum().item()
union = (pred+x2).float().sum().item()
test_eq(compute_val(JaccardCoeff(), x1, x2), inter/(union-inter))
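In practice, these segmentation metrics are passed to a Learner like any other; a minimal sketch, where dls is a hypothetical binary segmentation DataLoaders:

from fastai.vision.all import *

# Hypothetical usage: `dls` is assumed to be a binary segmentation DataLoaders.
learn = unet_learner(dls, resnet34, metrics=[foreground_acc, Dice(), JaccardCoeff()])
# learn.fine_tune(1)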

NLP

class CorpusBLEUMetric[source]

CorpusBLEUMetric(vocab_sz=5000, axis=-1) :: Metric

Blueprint for defining a metric

def create_vcb_emb(pred, targ):
    # create vocab "embedding" for predictions
    vcb_sz = max(torch.unique(torch.cat([pred, targ])))+1
    pred_emb=torch.zeros(pred.size()[0], pred.size()[1] ,vcb_sz)
    for i,v in enumerate(pred):
        pred_emb[i].scatter_(1, v.view(len(v),1),1)
    return pred_emb

def compute_bleu_val(met, x1, x2):
    met.reset()
    learn = TstLearner()
    learn.training=False
    for i in range(len(x1)):
        learn.pred,learn.yb = x1, (x2,)
        met.accumulate(learn)
    return met.value

targ = torch.tensor([[1,2,3,4,5,6,1,7,8]])
pred = torch.tensor([[1,9,3,4,5,6,1,10,8]])
pred_emb = create_vcb_emb(pred, targ)
test_close(compute_bleu_val(CorpusBLEUMetric(), pred_emb, targ), 0.48549)

targ = torch.tensor([[1,2,3,4,5,6,1,7,8],[1,2,3,4,5,6,1,7,8]])
pred = torch.tensor([[1,9,3,4,5,6,1,10,8],[1,9,3,4,5,6,1,10,8]])
pred_emb = create_vcb_emb(pred, targ)
test_close(compute_bleu_val(CorpusBLEUMetric(), pred_emb, targ), 0.48549)

The BLEU metric was introduced in this article to come up with a way to evaluate the performance of translation models. It’s based on the precision of n-grams in your prediction compared to your target. See the fastai NLP course BLEU notebook for a more detailed description of BLEU.

The smoothing used in the precision calculation is the same as in SacreBLEU, which in turn is “method 3” from the Chen & Cherry, 2014 paper.
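When used for training, the metric needs the size of the target vocabulary; a minimal sketch, where dls and model are hypothetical sequence-to-sequence DataLoaders and model:

from fastai.text.all import *

# Hypothetical usage: `dls` and `model` are assumed to exist; vocab_sz is assumed to
# match the size of the target vocabulary.
bleu = CorpusBLEUMetric(vocab_sz=len(dls.vocab))
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), metrics=bleu)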

class LossMetric[source]

LossMetric(attr, nm=None) :: AvgMetric

Create a metric from loss_func.attr named nm

LossMetrics[source]

LossMetrics(attrs, nms=None)

List of LossMetric for each of attrs and nms

class CombineL1L2(Module):
    def forward(self, out, targ):
        self.l1 = F.l1_loss(out, targ)
        self.l2 = F.mse_loss(out, targ)
        return self.l1+self.l2

learn = synth_learner(metrics=LossMetrics('l1,l2'))
learn.loss_func = CombineL1L2()
learn.fit(2)
epoch  train_loss  valid_loss  l1        l2         time
0      16.638266   14.523010   3.337674  11.185337  00:00
1      14.520439   10.179483   2.722284  7.457200   00:00
