models package

Submodules

models.common module

Common definitions for the models module.

class models.common.CommonModelMixin(*args: Any, **kwargs: Any)

Bases: LightningModule

Common model attributes.

dl_classification_mode

Classification mode for the dataloader instances.

Type:

utils.types.ClassificationMode

eval_classification_mode

Classification mode for the evaluation process.

Type:

utils.types.ClassificationMode

dice_metrics

A collection of dice score variants.

Type:

dict[str, torchmetrics.collections.MetricCollection | torchmetrics.metric.Metric]

other_metrics

A collection of other metrics (recall, precision, jaccard).

Type:

dict[str, torchmetrics.collections.MetricCollection]

model

The internal model used.

Type:

torch.nn.modules.module.Module

model_type

The architecture of the model, if appropriate.

Type:

utils.types.ModelType

de_transform

The inverse of the augmentation transforms applied to the samples by the dataloaders.

Type:

torchvision.transforms.v2._container.Compose | utils.types.InverseNormalize

dl_classification_mode: ClassificationMode

Classification mode for the dataloader instances.

eval_classification_mode: ClassificationMode

Classification mode for the evaluation process.

dice_metrics: dict[str, MetricCollection | Metric]

A collection of dice score variants.

other_metrics: dict[str, MetricCollection]

A collection of other metrics (recall, precision, jaccard).

hausdorff_metrics: dict[str, MetricCollection]

Hausdorff distance metrics only.

infarct_metrics: dict[str, MetricCollection]

A collection of infarct-related clinical heuristics.

model: Module

The internal model used.

model_type: ModelType

The architecture of the model, if appropriate.

de_transform: Compose | InverseNormalize

The inverse of the augmentation transforms applied to the samples by the dataloaders.

classes: int

Number of output classes.

optimizer: type[Optimizer] | Optimizer | str

Which optimizer to use.

optimizer_kwargs: dict[str, Any]

Optimizer parameters.

total_epochs: int

Total number of training epochs.

scheduler: type[LRScheduler] | LRScheduler | str

Learning rate scheduler.

scheduler_kwargs: dict[str, Any]

Scheduler parameters.

show_r2_plots: bool = False

Whether to show R^2 plots for infarct metrics.
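
A minimal sketch of the attribute contract above (a hypothetical subclass; the concrete values are illustrative assumptions, not project defaults):

from models.common import CommonModelMixin
from utils.types import ClassificationMode


class MySegModel(CommonModelMixin):
    """Hypothetical subclass showing which attributes must be populated."""

    def __init__(self):
        super().__init__()
        # Attributes documented above; the values here are illustrative only.
        self.classes = 4
        self.total_epochs = 50
        self.optimizer = "adamw"
        self.optimizer_kwargs = {}
        self.scheduler = "gradual_warmup_scheduler"
        self.scheduler_kwargs = {}
        self.dl_classification_mode = ClassificationMode.MULTICLASS_MODE
        self.eval_classification_mode = ClassificationMode.MULTILABEL_MODE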

setup(stage: str) None

Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.

Parameters:

stage – either 'fit', 'validate', 'test', or 'predict'

Example:

class LitModel(...):
    def __init__(self):
        super().__init__()
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this: state set here is not replicated across processes
        self.something = something_else

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)

on_train_start()

Called at the beginning of training after sanity check.

log_metrics(prefix: Literal['train', 'val', 'test']) None

Implement shared metric logging epoch end here.

Note: This is to prevent circular imports with the logging module.

on_train_end() None

Called at the end of training before logger experiment is closed.

on_train_epoch_end() None

Called in the training loop at the very end of the epoch.

To access all batch outputs at the end of the epoch, you can cache step outputs as an attribute of the LightningModule and access them in this hook:

class MyLightningModule(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.training_step_outputs = []

    def training_step(self, batch, batch_idx):
        loss = ...
        self.training_step_outputs.append(loss)
        return loss

    def on_train_epoch_end(self):
        # do something with all training_step outputs, for example:
        epoch_mean = torch.stack(self.training_step_outputs).mean()
        self.log("training_epoch_mean", epoch_mean)
        # free up the memory
        self.training_step_outputs.clear()

on_validation_epoch_end() None

Called in the validation loop at the very end of the epoch.

on_test_epoch_end() None

Called in the test loop at the very end of the epoch.

models.common.ENCODER_OUTPUT_SHAPES = {
    'resnet101': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'resnet152': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'resnet18': [(64, 112, 112), (64, 56, 56), (128, 28, 28), (256, 14, 14), (512, 7, 7)],
    'resnet34': [(64, 112, 112), (64, 56, 56), (128, 28, 28), (256, 14, 14), (512, 7, 7)],
    'resnet50': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'resnext101_32x16d': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'resnext101_32x32d': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'resnext101_32x48d': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'resnext101_32x4d': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'resnext101_32x8d': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'resnext50_32x4d': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'se_resnet101': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'se_resnet152': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'se_resnet50': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'se_resnext101_32x4d': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'se_resnext50_32x4d': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'senet154': [(128, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'tscse_resnet101': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'tscse_resnet152': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'tscse_resnet50': [(64, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
    'tscsenet154': [(128, 112, 112), (256, 56, 56), (512, 28, 28), (1024, 14, 14), (2048, 7, 7)],
}

Output shapes for the different ResNet models. The output shapes are used to calculate the number of output channels for each 1D temporal convolutional block.
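
For example, the per-stage channel counts used to size each 1D temporal convolutional block can be read straight off this mapping (a small sketch; the variable names are illustrative):

from models.common import ENCODER_OUTPUT_SHAPES

# Channel count of each encoder stage for resnet34, from the table above.
stage_channels = [c for c, _h, _w in ENCODER_OUTPUT_SHAPES["resnet34"]]
print(stage_channels)  # [64, 64, 128, 256, 512]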

models.default_unet module

Contains the LightningModule wrapper for the default U-Net implementation.

class models.default_unet.LightningUnetWrapper(batch_size: int, metric: Metric | None = None, num_frames: int = 30, loss: Module | str | None = None, model_type: ModelType = ModelType.UNET, encoder_name: str = 'resnet34', encoder_depth: int = 5, encoder_weights: str | None = 'imagenet', in_channels: int = 90, classes: int = 4, weights_from_ckpt_path: str | None = None, optimizer: Optimizer | str = 'adamw', optimizer_kwargs: dict[str, Any] | None = None, scheduler: LRScheduler | str = 'gradual_warmup_scheduler', scheduler_kwargs: dict[str, Any] | None = None, multiplier: int = 2, total_epochs: int = 50, alpha: float = 1.0, _beta: float = 0.0, learning_rate: float = 0.0001, dl_classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, eval_classification_mode: ClassificationMode = ClassificationMode.MULTILABEL_MODE, loading_mode: LoadingMode = LoadingMode.RGB, dump_memory_snapshot: bool = False, dummy_predict: DummyPredictMode = DummyPredictMode.NONE, metric_mode: MetricMode = MetricMode.INCLUDE_EMPTY_CLASS, metric_div_zero: float = 1.0)

Bases: CommonModelMixin

LightningModule wrapper for U-Net model.
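
A minimal instantiation sketch (the batch size is an assumption; every other argument keeps its documented default, e.g. encoder_name='resnet34', in_channels=90, classes=4):

from models.default_unet import LightningUnetWrapper

# Only batch_size is required by the signature above; the model is then
# trained with a standard Lightning Trainer (not shown here).
model = LightningUnetWrapper(batch_size=2)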

on_train_start()

Called at the beginning of training after sanity check.

forward(x: Tensor) Tensor

Same as torch.nn.Module.forward().

Parameters:

x – Input tensor.

Returns:

The model’s output.
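
Continuing the instantiation sketch above, a quick shape check (the batch size and the 224x224 spatial size are assumptions; in_channels=90 is the documented default):

import torch

x = torch.randn(2, 90, 224, 224)  # (batch, in_channels, H, W); spatial size assumed
y_hat = model(x)                  # expected: (batch, classes, H, W) logits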

log_metrics(prefix) None

Implement shared metric logging epoch end here.

Note: This is to prevent circular imports with the logging module.

training_step(batch: tuple[Tensor, Tensor, str], batch_idx: int) Tensor

Forward pass for the model with dataloader batches.

Parameters:
  • batch – Batch of frames, masks, and filenames.

  • batch_idx – Index of the batch in the epoch.

Returns:

Training loss.

Return type:

torch.Tensor

Raises:

AssertionError – If the prediction and ground truth mask shapes differ.

validation_step(batch: tuple[Tensor, Tensor, str], batch_idx: int)

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest, like accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

test_step(batch: tuple[Tensor, Tensor, str], batch_idx: int) None

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one test dataloader:
def test_step(self, batch, batch_idx): ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

predict_step(batch: tuple[Tensor, Tensor, str | list[str]], batch_idx: int, dataloader_idx: int = 0)

Forward pass for the model for one minibatch of a prediction epoch.

Parameters:
  • batch – Batch of frames, masks, and filenames.

  • batch_idx – Index of the batch in the epoch.

  • dataloader_idx – Index of the dataloader.

Returns:

Mask predictions, original images, and filename.

Return type:

tuple[torch.Tensor, torch.Tensor, str]

configure_optimizers()

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple. Optimization with multiple optimizers only works in the manual optimization mode.

Returns:

Any of these 6 options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified in 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning.
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

# The ReduceLROnPlateau scheduler requires a monitor
def configure_optimizers(self):
    optimizer = Adam(...)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": ReduceLROnPlateau(optimizer, ...),
            "monitor": "metric_to_track",
            "frequency": "indicates how often the metric is updated",
            # If "monitor" references validation metrics, then "frequency" should be set to a
            # multiple of "trainer.check_val_every_n_epoch".
        },
    }


# In the case of two optimizers, only one using the ReduceLROnPlateau scheduler
def configure_optimizers(self):
    optimizer1 = Adam(...)
    optimizer2 = SGD(...)
    scheduler1 = ReduceLROnPlateau(optimizer1, ...)
    scheduler2 = LambdaLR(optimizer2, ...)
    return (
        {
            "optimizer": optimizer1,
            "lr_scheduler": {
                "scheduler": scheduler1,
                "monitor": "metric_to_track",
            },
        },
        {"optimizer": optimizer2, "lr_scheduler": scheduler2},
    )

Metrics can be made available to monitor by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.

Note

Some things to know:

  • Lightning calls .backward() and .step() automatically in case of automatic optimization.

  • If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default “epoch”) in the scheduler configuration, Lightning will call the scheduler’s .step() method automatically in case of automatic optimization.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizer.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, you will have to switch to ‘manual optimization’ mode and step them yourself.

  • If you need to control how often the optimizer steps, override the optimizer_step() hook.

models.transunet module

TransU-Net model customisation.

Based on the implementation at https://github.com/Beckschen/TransUNet

class models.transunet.ResNetV2(*args: Any, **kwargs: Any)

Bases: ResNetV2

Implementation of the pre-activation (v2) ResNet model with parameterised in_channels.

class models.transunet.Embeddings(*args: Any, **kwargs: Any)

Bases: Embeddings

Patch and position embeddings with parameterised in_channels.

class models.transunet.Transformer(*args: Any, **kwargs: Any)

Bases: Transformer

Transformer model with parameterised in_channels.

class models.transunet.TransUnet(*args: Any, **kwargs: Any)

Bases: VisionTransformer

TransU-Net model.

models.two_plus_one module

2+1D U-Net model.

class models.two_plus_one.TemporalConvolutionalType(*values)

Bases: Enum

1D Temporal Convolutional Layer type.

ORIGINAL = 1

Original 1D convolutional operation with significant use of reshape.

DILATED = 2

Modified 1D convolutional operation to replace stride with dilation.

TEMPORAL_3D = 3

Uses a 3D convolutional operation to reduce calls to reshape.

get_class()

Get the class of the convolutional layer for instantiation.

models.two_plus_one.get_temporal_conv_type(query: str) TemporalConvolutionalType

Get the temporal convolutional type from a string input.

Parameters:

query – The temporal convolutional type.

Raises:

KeyError – If the type is not an implemented type.
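
A hedged usage sketch for the enum and this helper (the accepted query strings and the class returned by get_class() are assumptions, not confirmed by the signatures above):

from models.two_plus_one import TemporalConvolutionalType, get_temporal_conv_type

conv_type = get_temporal_conv_type("dilated")  # assumption: lower-case member names are accepted
conv_cls = conv_type.get_class()               # assumption: DILATED maps to the DilatedOneD class
print(conv_type, conv_cls)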

class models.two_plus_one.OneD(in_channels: int, out_channels: int, num_frames: int, flat: bool = False, activation: str | Type[Module] | None = None)

Bases: Module

1D Temporal Convolutional Block.

forward(x: Tensor) Tensor

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class models.two_plus_one.DilatedOneD(in_channels: int, out_channels: int, num_frames: int, sequence_length: int, flat: bool = False, activation: str | type[Module] | None = None)

Bases: Module

1D Temporal Convolutional Block with dilations.

forward(x: Tensor) Tensor

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class models.two_plus_one.Temporal3DConv(in_channels: int, out_channels: int, num_frames: int, flat: bool = False, activation: str | type[Module] | None = None)

Bases: Module

1D Temporal Convolution for 5D Tensor input.

forward(x: Tensor) Tensor

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

models.two_plus_one.compress_2(stacked_outputs: Tensor, block: OneD) Tensor

Apply the OneD temporal convolution on the stacked outputs.

Parameters:
  • stacked_outputs – 5D tensor of shape (num_frames, batch_size, num_channels, h, w).

  • block – 1D temporal convolutional block.

Returns:

4D tensor of shape (batch_size, num_channels, h, w).
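
A small sketch following the documented shape contract (the channel count, frame count, and spatial size are assumptions); compress_dilated below follows the same contract with a DilatedOneD block:

import torch

from models.two_plus_one import OneD, compress_2

block = OneD(in_channels=64, out_channels=64, num_frames=30)
stacked = torch.randn(30, 2, 64, 56, 56)  # (num_frames, batch_size, num_channels, h, w)
out = compress_2(stacked, block)          # documented result: (2, 64, 56, 56)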

models.two_plus_one.compress_dilated(stacked_outputs: Tensor, block: DilatedOneD) Tensor

Apply the DilatedOneD temporal convolution on the stacked outputs.

Parameters:
  • stacked_outputs – 5D tensor of shape (num_frames, batch_size, num_channels, h, w).

  • block – 1D temporal convolutional block.

Returns:

4D tensor of shape (batch_size, num_channels, h, w).

class models.two_plus_one.TwoPlusOneUnet(*args, **kwargs)

Bases: SegmentationModel

2+1D U-Net model.

initialize() None

Initialize the model.

This method initializes the decoder and the segmentation head. It also initializes the 1D temporal convolutional blocks with the correct number of output channels for each layer of the encoder.

forward(x: Tensor) Tensor

Forward pass of the model.

Parameters:

x – 5D tensor of shape (batch_size, num_frames, channels, height, width).

Returns:

4D tensor of shape (batch_size, classes, height, width).
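
A shape-check sketch based on the documented contract (construction of the model and all concrete sizes are assumptions):

import torch

# model: an instantiated TwoPlusOneUnet (constructor arguments omitted here)
x = torch.randn(2, 30, 3, 224, 224)  # (batch_size, num_frames, channels, height, width)
y = model(x)                         # documented result: (batch_size, classes, height, width)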

predict(x: Tensor) Tensor

Inference method.

Parameters:

x – 4D torch tensor with shape (batch_size, channels, height, width)

Returns:

4D torch tensor with shape (batch_size, classes, height, width).

class models.two_plus_one.TwoPlusOneUnetLightning(batch_size: int, metric: Metric | None = None, loss: Module | str | None = None, model_type: ModelType = ModelType.UNET, encoder_name: str = 'resnet34', encoder_depth: int = 5, encoder_weights: str | None = 'imagenet', in_channels: int = 3, classes: int = 1, num_frames: Literal[5, 10, 15, 20, 30] = 5, weights_from_ckpt_path: str | None = None, temporal_conv_type: TemporalConvolutionalType = TemporalConvolutionalType.ORIGINAL, optimizer: Optimizer | str = 'adamw', optimizer_kwargs: dict[str, Any] | None = None, scheduler: LRScheduler | str = 'gradual_warmup_scheduler', scheduler_kwargs: dict[str, Any] | None = None, multiplier: int = 2, total_epochs: int = 50, alpha: float = 1.0, _beta: float = 0.0, learning_rate: float = 0.0001, dl_classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, eval_classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, loading_mode: LoadingMode = LoadingMode.RGB, dump_memory_snapshot: bool = False, flat_conv: bool = False, unet_activation: str | None = None, metric_mode: MetricMode = MetricMode.INCLUDE_EMPTY_CLASS, metric_div_zero: float = 1.0)

Bases: CommonModelMixin

A LightningModule wrapper for the modified 2+1 U-Net architecture.
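
A minimal instantiation sketch using only parameters from the documented signature (the chosen values are assumptions):

from models.two_plus_one import TemporalConvolutionalType, TwoPlusOneUnetLightning

model = TwoPlusOneUnetLightning(
    batch_size=2,
    num_frames=30,  # must be one of the documented Literal[5, 10, 15, 20, 30] values
    temporal_conv_type=TemporalConvolutionalType.DILATED,
)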

forward(x: Tensor) Tensor

Same as torch.nn.Module.forward().

Parameters:

x – Input tensor.

Returns:

The model’s output.

log_metrics(prefix: Literal['train', 'val', 'test']) None

Implement shared metric logging epoch end here.

Note: This is to prevent circular imports with the logging module.

training_step(batch: tuple[Tensor, Tensor, str], batch_idx: int) Tensor

Forward pass for the model with dataloader batches.

Parameters:
  • batch – Batch of frames, masks, and filenames.

  • batch_idx – Index of the batch in the epoch.

Returns:

Training loss.

Raises:

AssertionError – If the prediction and ground truth mask shapes differ.

validation_step(batch: tuple[Tensor, Tensor, str], batch_idx: int)

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest, like accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

test_step(batch: tuple[Tensor, Tensor, str], batch_idx: int)

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one test dataloader:
def test_step(self, batch, batch_idx): ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

predict_step(batch: tuple[Tensor, Tensor, str | list[str]], batch_idx: int, dataloader_idx: int = 0) tuple[Tensor, Tensor, str | list[str]]

Forward pass for the model for one minibatch of a prediction epoch.

Parameters:
  • batch – Batch of frames, masks, and filenames.

  • batch_idx – Index of the batch in the epoch.

  • dataloader_idx – Index of the dataloader.

Returns:

Mask predictions, original images, and filename.

configure_optimizers()

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple. Optimization with multiple optimizers only works in the manual optimization mode.

Returns:

Any of these 6 options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified in 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning.
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

# The ReduceLROnPlateau scheduler requires a monitor
def configure_optimizers(self):
    optimizer = Adam(...)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": ReduceLROnPlateau(optimizer, ...),
            "monitor": "metric_to_track",
            "frequency": "indicates how often the metric is updated",
            # If "monitor" references validation metrics, then "frequency" should be set to a
            # multiple of "trainer.check_val_every_n_epoch".
        },
    }


# In the case of two optimizers, only one using the ReduceLROnPlateau scheduler
def configure_optimizers(self):
    optimizer1 = Adam(...)
    optimizer2 = SGD(...)
    scheduler1 = ReduceLROnPlateau(optimizer1, ...)
    scheduler2 = LambdaLR(optimizer2, ...)
    return (
        {
            "optimizer": optimizer1,
            "lr_scheduler": {
                "scheduler": scheduler1,
                "monitor": "metric_to_track",
            },
        },
        {"optimizer": optimizer2, "lr_scheduler": scheduler2},
    )

Metrics can be made available to monitor by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.

Note

Some things to know:

  • Lightning calls .backward() and .step() automatically in case of automatic optimization.

  • If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default “epoch”) in the scheduler configuration, Lightning will call the scheduler’s .step() method automatically in case of automatic optimization.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizer.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, you will have to switch to ‘manual optimization’ mode and step them yourself.

  • If you need to control how often the optimizer steps, override the optimizer_step() hook.

models.two_stream module

Two Stream U-Net model with LGE and Cine inputs.

class models.two_stream.TwoStreamUnet(*args, **kwargs)

Bases: SegmentationModel

Two Stream U-Net model with LGE and Cine inputs.

initialize()

Initialise the model’s decoder, segmentation head, and classification head.

forward(lge: Tensor, cine: Tensor) Tensor

Forward pass for the Two Stream U-Net model.

Parameters:
  • lge – Late gadolinium enhanced image tensor.

  • cine – Cine image tensor.
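
A hedged sketch of the two-stream call (all tensor shapes are assumptions; construction of the model is omitted):

import torch

# model: an instantiated TwoStreamUnet (constructor arguments omitted here)
lge = torch.randn(2, 3, 224, 224)    # assumed LGE image batch
cine = torch.randn(2, 90, 224, 224)  # assumed cine frames stacked along the channel axis
out = model(lge, cine)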

class models.two_stream.TwoStreamUnetLightning(batch_size: int, metric: Metric | None = None, loss: Module | str | None = None, model_type: ModelType = ModelType.UNET, encoder_name: str = 'resnet34', encoder_depth: int = 5, encoder_weights: str | None = 'imagenet', in_channels: int = 3, classes: int = 1, num_frames: int = 30, weights_from_ckpt_path: str | None = None, optimizer: Optimizer | str = 'adamw', optimizer_kwargs: dict[str, Any] | None = None, scheduler: LRScheduler | str = 'gradual_warmup_scheduler', scheduler_kwargs: dict[str, Any] | None = None, multiplier: int = 2, total_epochs: int = 50, alpha: float = 1.0, _beta: float = 0.0, learning_rate: float = 0.0001, dl_classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, eval_classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, loading_mode: LoadingMode = LoadingMode.RGB, dump_memory_snapshot: bool = False, metric_mode: MetricMode = MetricMode.INCLUDE_EMPTY_CLASS, metric_div_zero: float = 1.0)

Bases: CommonModelMixin

Two stream U-Net for LGE & cine CMR.
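
A minimal instantiation sketch based on the documented signature (only batch_size is required; the value here is an assumption):

from models.two_stream import TwoStreamUnetLightning

# Every other argument keeps its documented default (resnet34 encoder,
# in_channels=3, num_frames=30, ...).
model = TwoStreamUnetLightning(batch_size=2)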

log_metrics(prefix) None

Implement shared metric logging epoch end here.

Note: This is to prevent circular imports with the logging module.

training_step(batch: tuple[Tensor, Tensor, Tensor, str], batch_idx: int) Tensor

Forward pass for the model with dataloader batches.

Parameters:
  • batch – Batch of LGE images, cine frames, masks, and filenames.

  • batch_idx – Index of the batch in the epoch.

Returns:

Training loss.

Return type:

torch.Tensor

Raises:

AssertionError – If the prediction and ground truth mask shapes differ.

validation_step(batch: tuple[Tensor, Tensor, Tensor, str], batch_idx: int)

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest, like accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

test_step(batch: tuple[Tensor, Tensor, Tensor, str], batch_idx: int)

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one test dataloader:
def test_step(self, batch, batch_idx): ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

predict_step(batch: tuple[Tensor, Tensor, Tensor, str | list[str]], batch_idx: int, dataloader_idx: int = 0)

Step function called during predict(). By default, it calls forward(). Override to add any processing logic.

The predict_step() is used to scale inference across multiple devices.

To prevent an OOM error, it is possible to use the BasePredictionWriter callback to write the predictions to disk or a database after each batch or at the end of the epoch.

The BasePredictionWriter should be used while using a spawn-based accelerator. This happens for Trainer(strategy="ddp_spawn") or training on 8 TPU cores with Trainer(accelerator="tpu", devices=8), as predictions won’t be returned.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

Predicted output (optional).

Example

class MyModel(LightningModule):

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        return self(batch)

dm = ...
model = MyModel()
trainer = Trainer(accelerator="gpu", devices=2)
predictions = trainer.predict(model, dm)

configure_optimizers()

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple. Optimization with multiple optimizers only works in the manual optimization mode.

Returns:

Any of these 6 options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified in 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning.
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

# The ReduceLROnPlateau scheduler requires a monitor
def configure_optimizers(self):
    optimizer = Adam(...)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": ReduceLROnPlateau(optimizer, ...),
            "monitor": "metric_to_track",
            "frequency": "indicates how often the metric is updated",
            # If "monitor" references validation metrics, then "frequency" should be set to a
            # multiple of "trainer.check_val_every_n_epoch".
        },
    }


# In the case of two optimizers, only one using the ReduceLROnPlateau scheduler
def configure_optimizers(self):
    optimizer1 = Adam(...)
    optimizer2 = SGD(...)
    scheduler1 = ReduceLROnPlateau(optimizer1, ...)
    scheduler2 = LambdaLR(optimizer2, ...)
    return (
        {
            "optimizer": optimizer1,
            "lr_scheduler": {
                "scheduler": scheduler1,
                "monitor": "metric_to_track",
            },
        },
        {"optimizer": optimizer2, "lr_scheduler": scheduler2},
    )

Metrics can be made available to monitor by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.

Note

Some things to know:

  • Lightning calls .backward() and .step() automatically in case of automatic optimization.

  • If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default “epoch”) in the scheduler configuration, Lightning will call the scheduler’s .step() method automatically in case of automatic optimization.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizer.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, you will have to switch to ‘manual optimization’ mode and step them yourself.

  • If you need to control how often the optimizer steps, override the optimizer_step() hook.

Module contents

Model architectures and implementations for the project.