scripts package
Submodules
scripts.attention_unet module
Attention-based U-Net on residual frame information.
- class scripts.attention_unet.ResidualTwoPlusOneDataModule(data_dir: str = 'data/train_val/', test_dir: str = 'data/test/', indices_dir: str = 'data/indices/', batch_size: int = 2, frames: int = 5, image_size: int | Tuple[int, int] = (224, 224), select_frame_method: Literal['consecutive', 'specific'] = 'specific', classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, residual_mode: ResidualMode = ResidualMode.SUBTRACT_NEXT_FRAME, num_workers: int = 8, loading_mode: LoadingMode = LoadingMode.RGB, combine_train_val: bool = False, augment: bool = False, dummy_predict: DummyPredictMode = DummyPredictMode.NONE, histogram_equalize: bool = False)
Bases: LightningDataModule
Datamodule for the Residual TwoPlusOne dataset.
- setup(stage)
Set up datamodule components.
- train_dataloader()
An iterable or collection of iterables specifying training samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- val_dataloader()
An iterable or collection of iterables specifying validation samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.
- test_dataloader()
An iterable or collection of iterables specifying test samples.
For more information about multiple dataloaders, see the Lightning documentation.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a test dataset and a test_step(), you don’t need to implement this method.
- predict_dataloader()
An iterable or collection of iterables specifying prediction samples.
For more information about multiple dataloaders, see the Lightning documentation.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- Returns:
A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.
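As a usage sketch, the datamodule above plugs into a Lightning Trainer directly. This is illustrative only: MyResidualModel is a hypothetical stand-in for whichever LightningModule this project pairs with the datamodule.

from lightning.pytorch import Trainer

from scripts.attention_unet import ResidualTwoPlusOneDataModule

# MyResidualModel is a hypothetical LightningModule, not part of this package.
model = MyResidualModel()
datamodule = ResidualTwoPlusOneDataModule(batch_size=2, frames=5)
trainer = Trainer(max_epochs=50)
# fit() triggers setup("fit") and then train_dataloader()/val_dataloader().
trainer.fit(model=model, datamodule=datamodule)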
- class scripts.attention_unet.ResidualAttentionCLI(*args, **kwargs)
Bases: I2RInternshipCommonCLI
CLI class for Residual Attention task.
- before_instantiate_classes() → None
Run some code before instantiating the classes.
Sets the torch multiprocessing mode depending on the optical flow method.
- add_arguments_to_parser(parser: LightningArgumentParser)
Add extra arguments to CLI parser.
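For reference, a sketch of the kind of hook before_instantiate_classes() describes. The exact multiprocessing strategy chosen per optical-flow method is an assumption, and ExampleResidualCLI is an illustrative stand-in, not this module's class:

import torch.multiprocessing as mp

from lightning.pytorch.cli import LightningCLI


class ExampleResidualCLI(LightningCLI):
    # Illustrative stand-in: ResidualAttentionCLI's actual logic depends on
    # its optical-flow configuration, which is not shown here.
    def before_instantiate_classes(self) -> None:
        # GPU-based optical flow generally requires the "spawn" start method;
        # CPU-only pipelines can keep the platform default.
        mp.set_start_method("spawn", force=True)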
scripts.cine module
Cine Baseline model training script.
- class scripts.cine.CineBaselineDataModule(frames: int = 30, data_dir: str = 'data/train_val/', test_dir: str = 'data/test/', indices_dir: str = 'data/indices/', batch_size: int = 8, image_size: int | Tuple[int, int] = (224, 224), classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, num_workers: int = 8, loading_mode: LoadingMode = LoadingMode.RGB, combine_train_val: bool = False, augment: bool = False, dummy_predict: DummyPredictMode = DummyPredictMode.NONE, select_frame_method: Literal['consecutive', 'specific'] = 'consecutive')
Bases: LightningDataModule
DataModule for the Cine baseline implementation.
- setup(stage)
Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage – either 'fit', 'validate', 'test', or 'predict'
Example:
class LitModel(...):
    def __init__(self):
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this
        self.something = else

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)
- train_dataloader()
Get the training dataloader.
- val_dataloader()
An iterable or collection of iterables specifying validation samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.
- test_dataloader()
An iterable or collection of iterables specifying test samples.
For more information about multiple dataloaders, see the Lightning documentation.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a test dataset and a test_step(), you don’t need to implement this method.
- predict_dataloader()
An iterable or collection of iterables specifying prediction samples.
For more information about multiple dataloaders, see the Lightning documentation.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- Returns:
A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.
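Because predict_dataloader() returns a torch.utils.data.DataLoader (or a sequence of them), prediction goes through Trainer.predict. A sketch, where MyCineModel and the checkpoint path are hypothetical:

from lightning.pytorch import Trainer

from scripts.cine import CineBaselineDataModule

dm = CineBaselineDataModule(batch_size=8)
# MyCineModel is a hypothetical LightningModule; load_from_checkpoint is the
# standard Lightning way to restore trained weights.
model = MyCineModel.load_from_checkpoint("checkpoints/cine.ckpt")
trainer = Trainer(devices=1)
# predict() calls dm.predict_dataloader() under the hood.
predictions = trainer.predict(model=model, datamodule=dm)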
- class scripts.cine.CineCLI(*args, **kwargs)
Bases: I2RInternshipCommonCLI
CLI class for cine CMR task.
- add_arguments_to_parser(parser: LightningArgumentParser) → None
Set the default arguments and add the arguments to the parser.
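As a sketch of what an add_arguments_to_parser override of this kind usually does (the option names below are illustrative assumptions, not the actual arguments this CLI registers):

from lightning.pytorch.cli import LightningArgumentParser, LightningCLI


class ExampleCLI(LightningCLI):
    # Illustrative stand-in for the CLI subclasses in this package.
    def add_arguments_to_parser(self, parser: LightningArgumentParser) -> None:
        # Fix a default on an existing option.
        parser.set_defaults({"trainer.max_epochs": 50})
        # Keep two components' settings in sync: whatever the user passes for
        # the datamodule batch size is forwarded to the model's init.
        parser.link_arguments("data.batch_size", "model.init_args.batch_size")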
scripts.four_stream module
Four stream feature fusion attention-based U-Net on residual frame information.
- class scripts.four_stream.FourStreamDataModule(data_dir: str = 'data/train_val', test_dir: str = 'data/test', indices_dir: str = 'data/indices', batch_size: int = 2, frames: int = 5, image_size: int | Tuple[int, int] = (224, 224), select_frame_method: Literal['consecutive', 'specific'] = 'specific', classification_mode: ClassificationMode = ClassificationMode.BINARY_CLASS_3_MODE, residual_mode: ResidualMode = ResidualMode.SUBTRACT_NEXT_FRAME, num_workers: int = 8, loading_mode: LoadingMode = LoadingMode.GREYSCALE, combine_train_val: bool = False, augment: bool = False, dummy_predict: DummyPredictMode = DummyPredictMode.NONE, dummy_text: bool = False)
Bases: LightningDataModule
Lightning datamodule for the four stream task.
- setup(stage: str)
Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage – either 'fit', 'validate', 'test', or 'predict'
Example:
class LitModel(...):
    def __init__(self):
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this
        self.something = else

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)
- train_dataloader()
An iterable or collection of iterables specifying training samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- val_dataloader()
An iterable or collection of iterables specifying validation samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.
- test_dataloader()
An iterable or collection of iterables specifying test samples.
For more information about multiple dataloaders, see the Lightning documentation.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a test dataset and a test_step(), you don’t need to implement this method.
- predict_dataloader()
An iterable or collection of iterables specifying prediction samples.
For more information about multiple dataloaders, see the Lightning documentation.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- Returns:
A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.
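The dataloader hooks above are built once and cached by default; the reload_dataloaders_every_n_epochs setting mentioned in their docstrings is the knob that changes this. A minimal sketch (whether per-epoch rebuilding helps this particular datamodule is an assumption):

from lightning.pytorch import Trainer

# Rebuild the train/val dataloaders every epoch instead of reusing the cached
# ones, e.g. when frame selection or augmentation state changes per epoch.
trainer = Trainer(reload_dataloaders_every_n_epochs=1)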
- class scripts.four_stream.FourStreamAttentionCLI(*args, **kwargs)
Bases: I2RInternshipCommonCLI
CLI class for 4-stream task.
- add_arguments_to_parser(parser: LightningArgumentParser)
Add extra arguments to CLI parser.
scripts.lge module
LGE Baseline model training script.
- class scripts.lge.LGEBaselineDataModule(data_dir: str = 'data/train_val/', test_dir: str = 'data/test/', indices_dir: str = 'data/indices/', batch_size: int = 8, image_size: int | Tuple[int, int] = (224, 224), classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, num_workers: int = 8, loading_mode: LoadingMode = LoadingMode.RGB, combine_train_val: bool = False, augment: bool = False, dummy_predict: DummyPredictMode = DummyPredictMode.NONE)
Bases: LightningDataModule
LGE MRI image data module.
- setup(stage)
Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage – either 'fit', 'validate', 'test', or 'predict'
Example:
class LitModel(...):
    def __init__(self):
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this
        self.something = else

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)
- train_dataloader()
An iterable or collection of iterables specifying training samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- val_dataloader()
An iterable or collection of iterables specifying validation samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.
- test_dataloader()
An iterable or collection of iterables specifying test samples.
For more information about multiple dataloaders, see the Lightning documentation.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a test dataset and a test_step(), you don’t need to implement this method.
- predict_dataloader()
An iterable or collection of iterables specifying prediction samples.
For more information about multiple dataloaders, see the Lightning documentation.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- Returns:
A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.
- class scripts.lge.LGECLI(*args, **kwargs)
Bases: I2RInternshipCommonCLI
CLI for LGE MRI model training.
- add_arguments_to_parser(parser: LightningArgumentParser) → None
Set the default arguments and add the arguments to the parser.
scripts.three_stream module
Three stream feature fusion attention-based U-Net on residual frame information.
- class scripts.three_stream.ThreeStreamDataModule(data_dir: str = 'data/train_val', test_dir: str = 'data/test', indices_dir: str = 'data/indices', batch_size: int = 2, frames: int = 5, image_size: int | Tuple[int, int] = (224, 224), select_frame_method: Literal['consecutive', 'specific'] = 'specific', classification_mode: ClassificationMode = ClassificationMode.BINARY_CLASS_3_MODE, residual_mode: ResidualMode = ResidualMode.SUBTRACT_NEXT_FRAME, num_workers: int = 8, loading_mode: LoadingMode = LoadingMode.GREYSCALE, combine_train_val: bool = False, augment: bool = False, dummy_predict: DummyPredictMode = DummyPredictMode.NONE, dummy_text: bool = False)
Bases: LightningDataModule
Lightning datamodule for the three stream task.
- setup(stage: str)
Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage – either 'fit', 'validate', 'test', or 'predict'
Example:
class LitModel(...):
    def __init__(self):
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this
        self.something = else

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)
- train_dataloader()
An iterable or collection of iterables specifying training samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- val_dataloader()
An iterable or collection of iterables specifying validation samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.
- test_dataloader()
An iterable or collection of iterables specifying test samples.
For more information about multiple dataloaders, see the Lightning documentation.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a test dataset and a test_step(), you don’t need to implement this method.
- predict_dataloader()
An iterable or collection of iterables specifying prediction samples.
For more information about multiple dataloaders, see the Lightning documentation.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- Returns:
A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.
- class scripts.three_stream.ThreeStreamAttentionCLI(*args, **kwargs)
Bases: I2RInternshipCommonCLI
CLI class for 3-stream task.
- add_arguments_to_parser(parser: LightningArgumentParser)
Add extra arguments to CLI parser.
scripts.tscsenet module
Temporal, Spatial Squeeze, and Excitation model.
- class scripts.tscsenet.TSCSECLI(*args, **kwargs)
Bases: I2RInternshipCommonCLI
CLI class for cine CMR TSCSE-UNet task.
- add_arguments_to_parser(parser: LightningArgumentParser)
Set the default arguments and add the arguments to the parser.
scripts.two_plus_one module
Two-plus-one architecture training script.
- class scripts.two_plus_one.TwoPlusOneDataModule(data_dir: str = 'data/train_val/', test_dir: str = 'data/test/', indices_dir: str = 'data/indices/', batch_size: int = 4, frames: int = 5, image_size: int | Tuple[int, int] = (224, 224), select_frame_method: Literal['consecutive', 'specific'] = 'specific', classification_mode: ClassificationMode = ClassificationMode.MULTICLASS_MODE, num_workers: int = 8, loading_mode: LoadingMode = LoadingMode.RGB, combine_train_val: bool = False, augment: bool = False)
Bases: LightningDataModule
Datamodule for the TwoPlusOne dataset for PyTorch Lightning compatibility.
- setup(stage)
Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage – either 'fit', 'validate', 'test', or 'predict'
Example:
class LitModel(...):
    def __init__(self):
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this
        self.something = else

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)
- train_dataloader()
An iterable or collection of iterables specifying training samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- val_dataloader()
An iterable or collection of iterables specifying validation samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.
- test_dataloader()
An iterable or collection of iterables specifying test samples.
For more information about multiple dataloaders, see the Lightning documentation.
For data processing use the following pattern:
- download in prepare_data()
- process and split in setup()
However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a test dataset and a test_step(), you don’t need to implement this method.
- predict_dataloader()
An iterable or collection of iterables specifying prediction samples.
For more information about multiple dataloaders, see the Lightning documentation.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- Returns:
A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.
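The recurring phrase "an iterable or collection of iterables" in the dataloader hooks means a hook may return several dataloaders at once. A generic sketch with toy tensors (none of this package's real datasets are used):

import torch
from lightning.pytorch import LightningDataModule
from torch.utils.data import DataLoader, TensorDataset


class MultiValDataModule(LightningDataModule):
    # Illustrative only: validation runs over two loaders, and Lightning tags
    # logged metrics with dataloader_idx 0 and 1.
    def val_dataloader(self):
        ds_a = TensorDataset(torch.randn(8, 1, 224, 224))
        ds_b = TensorDataset(torch.randn(8, 1, 224, 224))
        return [DataLoader(ds_a, batch_size=4), DataLoader(ds_b, batch_size=4)]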
- class scripts.two_plus_one.TwoPlusOneCLI(*args, **kwargs)
Bases: I2RInternshipCommonCLI
CLI class for cine CMR 2+1 task.
- add_arguments_to_parser(parser: LightningArgumentParser)
Set the default arguments and add the arguments to the parser.
scripts.urr_attention_unet module
Attention-based U-Net on residual frame information with uncertainty.
Module contents
Training/Validation/Test scripts for various tasks.