---
title: Callback
keywords: fastai
sidebar: home_sidebar
summary: "Miscellaneous callbacks for timeseriesAI."
description: "Miscellaneous callbacks for timeseriesAI."
nb_path: "nbs/060_callback.core.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

Events

A callback can implement actions on the following events (a minimal example is shown after the list):

  • before_fit: called before doing anything, ideal for initial setup.
  • before_epoch: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
  • before_train: called at the beginning of the training part of an epoch.
  • before_batch: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).
  • after_pred: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
  • after_loss: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
  • before_backward: called after the loss has been computed, but only in training mode (i.e. when the backward pass will be used).
  • after_backward: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).
  • after_step: called after the step and before the gradients are zeroed.
  • after_batch: called at the end of a batch, for any clean-up before the next one.
  • after_train: called at the end of the training phase of an epoch.
  • before_validate: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
  • after_validate: called at the end of the validation part of an epoch.
  • after_epoch: called at the end of an epoch, for any clean-up before the next one.
  • after_fit: called at the end of training, for final clean-up.
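
For instance, here is a minimal sketch of a callback that hooks into a few of these events. The class name and printed messages are purely illustrative; any subset of the events above can be implemented.

{% raw %}
from fastai.callback.core import Callback

class PrintEventsCallback(Callback):
    "Illustrative sketch: print a message when selected events fire."
    def before_fit(self):   print(f"before_fit: about to train for {self.n_epoch} epochs")
    def before_epoch(self): print(f"before_epoch: starting epoch {self.epoch}")
    def after_epoch(self):  print(f"after_epoch: finished epoch {self.epoch}")
    def after_fit(self):    print("after_fit: training finished")
{% endraw %}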

Learner attributes

When writing a callback, the following attributes of Learner are available (an example that reads a few of them is shown at the end of this section):

  • model: the model used for training/validation
  • data: the underlying DataLoaders
  • loss_func: the loss function used
  • opt: the optimizer used to update the model parameters
  • opt_func: the function used to create the optimizer
  • cbs: the list containing all Callbacks
  • dl: current DataLoader used for iteration
  • x/xb: last input drawn from self.dl (potentially modified by callbacks). xb is always a tuple (potentially with one element) and x is detuplified. You can only assign to xb.
  • y/yb: last target drawn from self.dl (potentially modified by callbacks). yb is always a tuple (potentially with one element) and y is detuplified. You can only assign to yb.
  • pred: last predictions from self.model (potentially modified by callbacks)
  • loss: last computed loss (potentially modified by callbacks)
  • n_epoch: the number of epochs in this training
  • n_iter: the number of iterations in the current self.dl
  • epoch: the current epoch index (from 0 to n_epoch-1)
  • iter: the current iteration index in self.dl (from 0 to n_iter-1)

The following attributes are added by TrainEvalCallback and should be available unless you went out of your way to remove that callback:

  • train_iter: the number of training iterations done since the beginning of this training
  • pct_train: from 0. to 1., the percentage of training iterations completed
  • training: flag to indicate if we're in training mode or not

The following attribute is added by Recorder and should be available unless you went out of your way to remove that callback:

  • smooth_loss: an exponentially-averaged version of the training loss
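
As an illustration, here is a hedged sketch of a callback that reads a few of these attributes at the end of every epoch (the class name and the message format are illustrative assumptions):

{% raw %}
from fastai.callback.core import Callback

class EpochSummaryCallback(Callback):
    "Illustrative sketch: read Learner attributes at the end of each epoch."
    def after_epoch(self):
        print(f"epoch {self.epoch + 1}/{self.n_epoch} - "
              f"{100 * self.pct_train:.0f}% of training iterations done - "
              f"smooth train loss: {self.smooth_loss:.4f}")
{% endraw %}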

Gambler's loss: noisy labels

{% raw %}

class GamblersCallback[source]

GamblersCallback(after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, before_step=None, after_cancel_step=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None) :: Callback

A callback to use metrics with gambler's loss

{% endraw %} {% raw %}
{% endraw %} {% raw %}
from tsai.data.all import *
from tsai.models.InceptionTime import *
from tsai.models.layers import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=[64, 128])
loss_func = gambler_loss()
learn = Learner(dls, InceptionTime(dls.vars, dls.c + 1), loss_func=loss_func, cbs=GamblersCallback, metrics=[accuracy])
learn.fit_one_cycle(1)
| epoch | train_loss | valid_loss | accuracy | time  |
|-------|------------|------------|----------|-------|
| 0     | 1.395603   | 1.523850   | 0.266667 | 00:05 |
{% endraw %}

Transform scheduler

{% raw %}

class TransformScheduler[source]

TransformScheduler(schedule_func:callable, show_plot:bool=False) :: Callback

A callback to schedule batch transforms during training based on a schedule function (sched_lin, sched_exp, sched_cos (default), etc.)

{% endraw %} {% raw %}
{% endraw %} {% raw %}
TransformScheduler(SchedCos(1, 0))
TransformScheduler(<fastai.callback.schedule._Annealer object at 0x7ffc9d32bdd8>)
{% endraw %} {% raw %}
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3, 0.4, 0.3], [SchedLin(1.,1.), SchedCos(1.,0.), SchedLin(0.,.0), ])
plt.plot(p, [f(o) for o in p]);
{% endraw %} {% raw %}
p = torch.linspace(0.,1,100)
f = combine_scheds([0.3, 0.7], [SchedCos(0.,1.), SchedCos(1.,0.)])
plt.plot(p, [f(o) for o in p]);
{% endraw %}
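
The scheduler is attached to the Learner as a regular callback. Below is a hedged sketch: it reuses the NATOPS dls from the GamblersCallback example, assumes the DataLoaders were created with batch_tfms whose effect the scheduler can adjust, and the chosen schedule is purely illustrative.

{% raw %}
# Illustrative sketch: follow a cosine schedule from 1. to 0. over training.
# Assumes dls was built with batch_tfms (e.g. TSMagScale) for the scheduler to act on.
tfm_sched = TransformScheduler(SchedCos(1., 0.))
learn = Learner(dls, InceptionTime(dls.vars, dls.c), cbs=tfm_sched, metrics=accuracy)
learn.fit_one_cycle(1)
{% endraw %}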

ShowGraph

{% raw %}

class ShowGraph[source]

ShowGraph(plot_metrics:bool=True, final_losses:bool=False) :: Callback

(Modified) Update a graph of training and validation loss

{% endraw %} {% raw %}
{% endraw %}
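
A hedged usage sketch (dls and model stand for any DataLoaders/model pair defined as in the examples above):

{% raw %}
# Illustrative sketch: update a graph of losses (and metrics) while training.
learn = Learner(dls, model, metrics=accuracy, cbs=ShowGraph())
learn.fit_one_cycle(5)
{% endraw %}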

SaveModel

{% raw %}

class SaveModel[source]

SaveModel(monitor='valid_loss', comp=None, min_delta=0.0, fname='model', every_epoch=False, at_end=False, with_opt=False, reset_on_fit=True, verbose=False) :: TrackerCallback

A TrackerCallback that saves the best version of the model during training and loads it at the end, with a verbose option.

{% endraw %} {% raw %}
{% endraw %}
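
A hedged usage sketch (the file name is an illustrative choice; dls and model stand for any DataLoaders/model pair defined as in the examples above):

{% raw %}
# Illustrative sketch: track valid_loss, keep the best weights seen during
# training under 'best_model', and reload them when training finishes.
learn = Learner(dls, model, metrics=accuracy,
                cbs=SaveModel(monitor='valid_loss', fname='best_model', verbose=True))
learn.fit_one_cycle(10)
{% endraw %}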

Uncertainty-based data augmentation

{% raw %}

class UBDAug[source]

UBDAug(batch_tfms:list, N:int=2, C:int=4, S:int=1) :: Callback

A callback to implement the uncertainty-based data augmentation.

{% endraw %} {% raw %}
{% endraw %} {% raw %}
from tsai.data.all import *
from tsai.models.all import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=[TSStandardize()])
model = create_model(InceptionTime, dls=dls)
TS_tfms = [TSMagScale(.75, p=.5), TSMagWarp(.1, p=0.5),  TSWindowWarp(.25, p=.5), 
           TSSmooth(p=0.5), TSRandomResizedCrop(.1, p=.5), 
           TSRandomCropPad(.3, p=0.5), 
           TSMagAddNoise(.5, p=.5)]

ubda_cb = UBDAug(TS_tfms, N=2, C=4, S=2)
learn = Learner(dls, model, cbs=ubda_cb, metrics=accuracy)
learn.fit_one_cycle(1)
| epoch | train_loss | valid_loss | accuracy | time  |
|-------|------------|------------|----------|-------|
| 0     | 1.852533   | 1.804390   | 0.166667 | 00:11 |
{% endraw %}

Weight per sample loss

The following shows an example of how the per-sample weights can be calculated. This particular method for imbalanced regression was published in:

Yang, Y., Zha, K., Chen, Y. C., Wang, H., & Katabi, D. (2021). Delving into Deep Imbalanced Regression. arXiv preprint arXiv:2102.09554.
(https://arxiv.org/pdf/2102.09554.pdf)

{% raw %}

get_lds_kernel_window[source]

get_lds_kernel_window(lds_kernel='gaussian', lds_ks=9, lds_sigma=1)

Function to determine the label distribution smoothing kernel window

  • lds_kernel (str): LDS kernel type.
  • lds_ks (int): LDS kernel size (should be an odd number).
  • lds_sigma (float): LDS gaussian/laplace kernel sigma.

{% endraw %} {% raw %}

prepare_LDS_weights[source]

prepare_LDS_weights(labels, n_bins=None, label_range=None, reweight='inv', lds_kernel='gaussian', lds_ks=9, lds_sigma=1, max_rel_weight=None, show_plot=True)

{% endraw %} {% raw %}
{% endraw %} {% raw %}
labels = np.concatenate([np.random.normal(-20, 1, 10), np.random.normal(0, 2, 100), np.random.normal(12, 2, 300)], -1)
labels[(-1<labels) & (labels<1)] = 0   # This is done to create some 'gaps' for demo purposes
labels[(10<labels) & (labels<12)] = 0  # This is done to create some 'gaps' for demo purposes

n_bins = 50
label_range=None
reweight = 'inv'
lds_kernel='gaussian'
lds_ks=5
lds_sigma=2

weights_per_sample = prepare_LDS_weights(labels, n_bins, label_range=label_range, reweight=reweight, 
                                         lds_kernel=lds_kernel, lds_ks=lds_ks, lds_sigma=lds_sigma, show_plot=True)

n_bins = 50
label_range=None
reweight = 'sqrt_inv'
lds_kernel='gaussian'
lds_ks=5
lds_sigma=2

weights_per_sample = prepare_LDS_weights(labels, n_bins, label_range=label_range, reweight=reweight, 
                                         lds_kernel=lds_kernel, lds_ks=lds_ks, lds_sigma=lds_sigma, show_plot=True)

n_bins = None
label_range=None
reweight = 'sqrt_inv'
lds_kernel='triang'
lds_ks=9
lds_sigma=1

weights_per_sample = prepare_LDS_weights(labels, n_bins, label_range=label_range, reweight=reweight, 
                                         lds_kernel=lds_kernel, lds_ks=lds_ks, lds_sigma=lds_sigma, show_plot=True)
{% endraw %}

This callback allows you to pass a different weight to each individual sample.

{% raw %}

class WeightedPerSampleLoss[source]

WeightedPerSampleLoss(instance_weights) :: Callback

Basic class handling tweaks of the training loop by changing a Learner in various events

{% endraw %} {% raw %}
{% endraw %}
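
Below is a hedged sketch of how the weights computed above might be used. It assumes dls and model form a regression DataLoaders/model pair built elsewhere, that fastai's MSELossFlat is the base loss, and that weights_per_sample is ordered like the samples in the dataset.

{% raw %}
# Illustrative sketch: weight each sample's contribution to the loss with the
# LDS-based weights computed above (assumed to be aligned with the dataset order).
learn = Learner(dls, model, loss_func=MSELossFlat(),
                cbs=WeightedPerSampleLoss(weights_per_sample))
learn.fit_one_cycle(5)
{% endraw %}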

BatchSubsampler

{% raw %}

class BatchSubsampler[source]

BatchSubsampler(sample_pct:Optional[float]=None, step_pct:Optional[float]=None, same_seq_len:bool=True, update_y:bool=False) :: Callback

Callback that selects a percentage of samples and/or sequence steps with replacement from each training batch

Args:

  • sample_pct: percentage of random samples (or instances) that will be drawn. If 1. the output batch will contain the same number of samples as the input batch.
  • step_pct: percentage of random sequence steps that will be drawn. If 1. the output batch will contain the same number of sequence steps as the input batch. If used with models that don't use a pooling layer, this must be set to 1 to keep the same dimensions. With CNNs, this value may be different.
  • same_seq_len: if True, it ensures that the output has the same shape as the input, even if the step_pct chosen is < 1. Defaults to True.
  • update_y: used with step_pct. If True, it applies the same random indices to y. It can only be used with sequential targets.

{% endraw %} {% raw %}
{% endraw %}
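
A hedged usage sketch (the percentage is illustrative; dls and model are assumed to be defined as in the examples above):

{% raw %}
# Illustrative sketch: draw 50% of the sequence steps (with replacement) from
# each training batch; the number of samples per batch is left unchanged.
learn = Learner(dls, model, metrics=accuracy, cbs=BatchSubsampler(step_pct=.5))
learn.fit_one_cycle(1)
{% endraw %}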

BatchLossFilter

{% raw %}

class BatchLossFilter[source]

BatchLossFilter(loss_perc=1.0, schedule_func:Optional[callable]=None) :: Callback

Callback that selects the hardest samples in every batch representing a percentage of the total loss

{% endraw %} {% raw %}
{% endraw %}
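
A hedged usage sketch (the percentage is illustrative; dls and model are assumed to be defined as in the examples above):

{% raw %}
# Illustrative sketch: backpropagate only through the hardest samples that
# account for 80% of each batch's total loss.
learn = Learner(dls, model, metrics=accuracy, cbs=BatchLossFilter(loss_perc=.8))
learn.fit_one_cycle(1)
{% endraw %}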

RandomWeightLossWrapper

{% raw %}

class RandomWeightLossWrapper[source]

RandomWeightLossWrapper(after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, before_step=None, after_cancel_step=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None) :: Callback

Basic class handling tweaks of the training loop by changing a Learner in various events

{% endraw %} {% raw %}
{% endraw %}

SamplerWithReplacement

{% raw %}

class SamplerWithReplacement[source]

SamplerWithReplacement(after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, before_step=None, after_cancel_step=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None) :: Callback

Callback that modifies the sampler to select a percentage of samples and/or sequence steps with replacement from each training batch

{% endraw %} {% raw %}
{% endraw %}

BatchMasker

{% raw %}

class BatchMasker[source]

BatchMasker(r:float=0.15, lm:int=3, stateful:bool=True, sync:bool=False, subsequence_mask:bool=True, variable_mask:bool=False, future_mask:bool=False, schedule_func:Optional[callable]=None) :: Callback

Callback that applies a random mask to each sample in a training batch

Args:

  • r: probability of masking.
  • subsequence_mask: apply a mask to random subsequences.
  • lm: average mask length when using stateful (geometric) masking.
  • stateful: a geometric distribution is applied so that the average mask length is lm.
  • sync: all variables have the same masking.
  • variable_mask: apply a mask to random variables. Only applicable to multivariate time series.
  • future_mask: used to train a forecasting model.
  • schedule_func: if a scheduler is passed, it will modify the probability of masking during training.

{% endraw %} {% raw %}
{% endraw %}
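
A hedged usage sketch (values are illustrative; dls and model are assumed to be defined as in the examples above, e.g. for self-supervised pretraining):

{% raw %}
# Illustrative sketch: mask ~15% of each sample using stateful (geometric)
# subsequence masks with an average length of 3 steps.
learn = Learner(dls, model, cbs=BatchMasker(r=.15, lm=3, stateful=True))
learn.fit_one_cycle(1)
{% endraw %}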
