Core Models#

Core PyTorch and PyTorch Lightning models.

PyTorch Model#

TextClassificationModel#

Core PyTorch nn.Module combining all components.

class torchTextClassifiers.model.model.TextClassificationModel(classification_head, token_embedder=None, sentence_embedder=None, categorical_variable_net=None)[source]#

Bases: Module

Architecture:

The model combines four main components (a data-flow sketch follows this list):

  1. TokenEmbedder: Maps each token to a dense vector (with optional self-attention)

  2. SentenceEmbedder: Aggregates token vectors into a sentence representation

  3. CategoricalVariableNet (optional): Handles categorical features

  4. ClassificationHead: Produces class logits
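
A hedged sketch of the data flow through these components (shapes are illustrative; the actual forward implementation may differ):

# input_ids        (batch, seq_len)
#   -> token_embedder           -> (batch, seq_len, embedding_dim)
#   -> sentence_embedder        -> (batch, embedding_dim)
# categorical_vars (batch, n_cat)
#   -> categorical_variable_net -> (batch, cat_dim)
# concat(text, categorical)     -> classification_head -> (batch, num_classes)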

__init__(classification_head, token_embedder=None, sentence_embedder=None, categorical_variable_net=None)[source]#

Constructor for the TextClassificationModel class.

Parameters:
  • classification_head (ClassificationHead) – The classification head module.

  • token_embedder (Optional[TokenEmbedder]) – The token embedding module. If not provided, the input text is assumed to be already embedded (as tensors) and is passed directly to the classification head (see the head-only sketch after this parameter list).

  • sentence_embedder (Optional[SentenceEmbedder]) – The sentence embedding module that aggregates token vectors into a single sentence representation.

  • categorical_variable_net (Optional[CategoricalVariableNet]) – The categorical variable network module. If not provided, assumes no categorical variables are used.
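
A minimal construction sketch of the head-only configuration mentioned for token_embedder; the model then expects pre-computed embeddings of size input_dim (dimensions here are illustrative, not prescribed by the library):

from torchTextClassifiers.model import TextClassificationModel
from torchTextClassifiers.model.components import ClassificationHead

# Head-only model: no token/sentence embedders, so text inputs are
# assumed to be already-embedded tensors.
head_only_model = TextClassificationModel(
    classification_head=ClassificationHead(input_dim=128, num_classes=5),
)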

forward(input_ids, attention_mask, categorical_vars, return_label_attention_matrix=False, **kwargs)[source]#

Memory-efficient forward pass implementation.

Parameters (all tensors are the output of the dataset collate_fn):
  • input_ids (torch.Tensor[Long]) – Tokenized and padded text, shape (batch_size, seq_len).

  • attention_mask (torch.Tensor[int]) – Attention mask indicating non-pad tokens, shape (batch_size, seq_len).

  • categorical_vars (torch.Tensor[Long]) – Additional categorical features, shape (batch_size, num_categorical_features).

  • return_label_attention_matrix (bool) – If True, returns a dict with logits and label_attention_matrix.

Returns:

  • If return_label_attention_matrix is False: torch.Tensor of shape (batch_size, num_classes) containing raw logits (not softmaxed)

  • If return_label_attention_matrix is True: dict with keys:
    • "logits": torch.Tensor of shape (batch_size, num_classes)

    • "label_attention_matrix": torch.Tensor of shape (batch_size, num_classes, seq_len)

Return type:

Union[torch.Tensor, dict[str, torch.Tensor]]

Example:

import torch

from torchTextClassifiers.model import TextClassificationModel
from torchTextClassifiers.model.components import (
    TokenEmbedder, TokenEmbedderConfig,
    SentenceEmbedder, SentenceEmbedderConfig,
    CategoricalVariableNet,
    ClassificationHead,
)

# Create components
token_embedder = TokenEmbedder(TokenEmbedderConfig(
    vocab_size=5000,
    embedding_dim=128,
    padding_idx=0,
))
sentence_embedder = SentenceEmbedder(SentenceEmbedderConfig(aggregation_method="mean"))

cat_net = CategoricalVariableNet(
    categorical_vocabulary_sizes=[10, 20],
    categorical_embedding_dims=[8, 16],
)

classification_head = ClassificationHead(
    input_dim=128 + 24,  # text_dim + cat_dim
    num_classes=5,
)

# Combine into model
model = TextClassificationModel(
    token_embedder=token_embedder,
    sentence_embedder=sentence_embedder,
    categorical_variable_net=cat_net,
    classification_head=classification_head,
)

# Dummy batch (illustrative: batch_size=32, seq_len=20)
input_ids = torch.randint(1, 5000, (32, 20))
attention_mask = torch.ones(32, 20, dtype=torch.long)
categorical_data = torch.stack(
    [torch.randint(0, 10, (32,)), torch.randint(0, 20, (32,))], dim=1
)

# Forward pass
logits = model(input_ids, attention_mask, categorical_data)
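
Requesting the label attention matrix switches the return value to a dict, per the return contract documented above:

# Dict return mode
out = model(
    input_ids, attention_mask, categorical_data,
    return_label_attention_matrix=True,
)
out["logits"]                  # (batch_size, num_classes)
out["label_attention_matrix"]  # (batch_size, num_classes, seq_len)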

PyTorch Lightning Module#

TextClassificationModule#

PyTorch Lightning LightningModule for automated training.

class torchTextClassifiers.model.lightning.TextClassificationModule(model, loss, optimizer, optimizer_params, scheduler, scheduler_params, scheduler_interval='epoch', **kwargs)[source]#

Bases: LightningModule

PyTorch Lightning module wrapping TextClassificationModel.

Features:

  • Automated training/validation/test steps

  • Metrics tracking (accuracy)

  • Optimizer and scheduler management

  • Logging integration

  • PyTorch Lightning callbacks support

__init__(model, loss, optimizer, optimizer_params, scheduler, scheduler_params, scheduler_interval='epoch', **kwargs)[source]#

Initialize TextClassificationModule.

Parameters:
  • model (Module) – The underlying TextClassificationModel.

  • loss – Loss function (e.g. nn.CrossEntropyLoss()).

  • optimizer – Optimizer class (e.g. torch.optim.Adam).

  • optimizer_params – Keyword arguments passed to the optimizer (e.g. {"lr": 1e-3}).

  • scheduler – Learning-rate scheduler class (e.g. torch.optim.lr_scheduler.StepLR).

  • scheduler_params – Keyword arguments passed to the scheduler.

  • scheduler_interval – Interval at which the scheduler steps: "epoch" (default) or "step".

forward(batch)[source]#

Perform forward-pass.

Parameters:

batch (List[torch.LongTensor]) – Batch to perform forward-pass on.

Return type:

Tensor

Returns (torch.Tensor): Prediction.

step(batch)[source]#

Shared step logic: runs the forward pass and computes the loss for a batch.
Return type:

tuple[Tensor, Tensor | list[Tensor]]

training_step(batch, batch_idx)[source]#

Compute and return the training loss and any additional metrics, e.g. for the progress bar or logger.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx (int) – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Return type:

Tensor

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.

  • None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:

def __init__(self):
    super().__init__()
    self.automatic_optimization = False


# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    ...
    opt1.step()
    # do training_step with decoder
    ...
    opt2.step()

Note

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.
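
For example, enabling accumulation on the Trainer (standard Lightning API):

from pytorch_lightning import Trainer

# Gradients are accumulated over 4 batches; each returned loss is
# divided by 4 internally before backward.
trainer = Trainer(accumulate_grad_batches=4)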

validation_step(batch, batch_idx)[source]#

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx (int) – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    x, y = batch

    # implement your own
    out = self(x)

    if dataloader_idx == 0:
        loss = self.loss0(out, y)
    else:
        loss = self.loss1(out, y)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs separately for each dataloader
    self.log_dict({f"val_loss_{dataloader_idx}": loss, f"val_acc_{dataloader_idx}": acc})

Note

If you don’t need to validate you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

test_step(batch, batch_idx)[source]#

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

Parameters:
  • batch – The output of your data iterable, normally a DataLoader.

  • batch_idx (int) – The index of this batch.

  • dataloader_idx – The index of the dataloader that produced this batch. (only if multiple dataloaders used)

Returns:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one test dataloader:
def test_step(self, batch, batch_idx): ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    x, y = batch

    # implement your own
    out = self(x)

    if dataloader_idx == 0:
        loss = self.loss0(out, y)
    else:
        loss = self.loss1(out, y)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs separately for each dataloader
    self.log_dict({f"test_loss_{dataloader_idx}": loss, f"test_acc_{dataloader_idx}": acc})

Note

If you don’t need to test you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

predict_step(batch, batch_idx=0, dataloader_idx=0)[source]#

Prediction step.

Parameters:
  • batch (List[torch.LongTensor]) – Prediction batch.

  • batch_idx (int) – Batch index.

  • dataloader_idx (int) – Dataloader index.

Returns (torch.Tensor): Predictions.

configure_optimizers()[source]#

Configure the optimizer and learning-rate scheduler for PyTorch Lightning.

Returns: Optimizer and scheduler configuration for PyTorch Lightning.
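
A hedged sketch of the usual Lightning pattern this method follows, assuming the constructor arguments are stored under attributes of the same names (optimizer, optimizer_params, scheduler, scheduler_params, scheduler_interval); the actual implementation may differ:

def configure_optimizers(self):
    # Instantiate the optimizer and scheduler from the classes and
    # params given to __init__ (attribute names are assumptions).
    optimizer = self.optimizer(self.parameters(), **self.optimizer_params)
    scheduler = self.scheduler(optimizer, **self.scheduler_params)
    return (
        [optimizer],
        [{"scheduler": scheduler, "interval": self.scheduler_interval}],
    )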

Example:

from torchTextClassifiers.model import (
    TextClassificationModel,
    TextClassificationModule
)
import torch.nn as nn
import torch.optim as optim
from pytorch_lightning import Trainer

# Create PyTorch model
model = TextClassificationModel(...)

# Wrap in Lightning module
lightning_module = TextClassificationModule(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam,
    optimizer_params={"lr": 1e-3},
    scheduler=optim.lr_scheduler.StepLR,
    scheduler_params={"step_size": 10, "gamma": 0.1}
)

# Train with Lightning Trainer
trainer = Trainer(
    max_epochs=20,
    accelerator="auto",
    devices=1
)

trainer.fit(
    lightning_module,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader
)

# Test
trainer.test(lightning_module, dataloaders=test_dataloader)
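
Prediction runs through predict_step via the standard Lightning API:

# Predict (invokes predict_step on each batch)
predictions = trainer.predict(lightning_module, dataloaders=test_dataloader)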

Training Steps#

The TextClassificationModule implements standard training/validation/test steps:

Training Step:

def training_step(self, batch, batch_idx):
    input_ids, attention_mask, cat_features, labels = batch
    logits = self.model(input_ids, attention_mask, cat_features)
    loss = self.loss(logits, labels)
    acc = self.compute_accuracy(logits, labels)

    self.log("train_loss", loss)
    self.log("train_acc", acc)

    return loss

Validation Step:

def validation_step(self, batch, batch_idx):
    input_ids, attention_mask, cat_features, labels = batch
    logits = self.model(input_ids, attention_mask, cat_features)
    loss = self.loss(logits, labels)
    acc = self.compute_accuracy(logits, labels)

    self.log("val_loss", loss)
    self.log("val_acc", acc)

Custom Training#

For custom training loops, use the PyTorch model directly:

from torchTextClassifiers.model import TextClassificationModel
import torch.nn as nn
import torch.optim as optim

model = TextClassificationModel(...)
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

num_epochs = 10  # illustrative
model.train()

# Custom training loop
for epoch in range(num_epochs):
    for batch in dataloader:
        input_ids, attention_mask, cat_features, labels = batch

        # Forward pass
        logits = model(input_ids, attention_mask, cat_features)
        loss = loss_fn(logits, labels)

        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"Epoch {epoch}, Loss: {loss.item()}")

See Also#