Model Components#

Modular torch.nn.Module components for building custom architectures.

Text Embedding#

TextEmbedder#

Embeds text tokens with optional self-attention.

class torchTextClassifiers.model.components.text_embedder.TextEmbedder(text_embedder_config)[source]#

Bases: Module

__init__(text_embedder_config)[source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

init_weights()[source]#
forward(input_ids, attention_mask)[source]#

Converts input token IDs to their corresponding embeddings.

Return type:

Tensor

TextEmbedderConfig#

Configuration for TextEmbedder.

class torchTextClassifiers.model.components.text_embedder.TextEmbedderConfig(vocab_size, embedding_dim, padding_idx, attention_config=None)[source]#

Bases: object

vocab_size: int#
embedding_dim: int#
padding_idx: int#
attention_config: Optional[AttentionConfig] = None#
__init__(vocab_size, embedding_dim, padding_idx, attention_config=None)#

Example:

from torchTextClassifiers.model.components import TextEmbedder, TextEmbedderConfig

# Simple text embedder
config = TextEmbedderConfig(
    vocab_size=5000,
    embedding_dim=128,
    padding_idx=0,
    attention_config=None
)
embedder = TextEmbedder(config)

# With self-attention
from torchTextClassifiers.model.components import AttentionConfig

attention_config = AttentionConfig(
    n_layers=2,
    n_head=4,
    n_kv_head=4
)
config = TextEmbedderConfig(
    vocab_size=5000,
    embedding_dim=128,
    padding_idx=0,
    attention_config=attention_config
)
embedder = TextEmbedder(config)
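
The forward pass takes token IDs and an attention mask and returns a tensor of embeddings. A minimal sketch of the call, assuming padded integer token IDs (the shapes and mask convention are illustrative, not prescribed by the API):

import torch

# Batch of 2 sequences padded to length 6; 0 is the padding index
input_ids = torch.tensor([[12, 7, 431, 9, 0, 0],
                          [88, 3, 19, 250, 61, 5]])
attention_mask = (input_ids != 0).long()

embeddings = embedder(input_ids, attention_mask)  # returns a Tensor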

Categorical Features#

CategoricalVariableNet#

Handles categorical features alongside text.

class torchTextClassifiers.model.components.categorical_var_net.CategoricalVariableNet(categorical_vocabulary_sizes, categorical_embedding_dims=None, text_embedding_dim=None)[source]#

Bases: Module

__init__(categorical_vocabulary_sizes, categorical_embedding_dims=None, text_embedding_dim=None)[source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(categorical_vars_tensor)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type:

Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

CategoricalForwardType#

Enum for categorical feature combination strategies.

class torchTextClassifiers.model.components.categorical_var_net.CategoricalForwardType(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Bases: Enum

SUM_TO_TEXT#

Sum categorical embeddings, concatenate with text.

AVERAGE_AND_CONCAT#

Average categorical embeddings, concatenate with text.

CONCATENATE_ALL#

Concatenate all embeddings (text + each categorical).

SUM_TO_TEXT = 'EMBEDDING_SUM_TO_TEXT'#
AVERAGE_AND_CONCAT = 'EMBEDDING_AVERAGE_AND_CONCAT'#
CONCATENATE_ALL = 'EMBEDDING_CONCATENATE_ALL'#

Example:

import torch
from torchTextClassifiers.model.components import CategoricalVariableNet

# 3 categorical variables with different vocab sizes
cat_net = CategoricalVariableNet(
    categorical_vocabulary_sizes=[10, 5, 20],
    categorical_embedding_dims=[8, 4, 16]
)

# Forward pass: one column per categorical variable
categorical_data = torch.tensor([[3, 1, 7],
                                 [9, 0, 12]])
cat_embeddings = cat_net(categorical_data)

Classification Head#

ClassificationHead#

Linear classification layer(s).

class torchTextClassifiers.model.components.classification_head.ClassificationHead(input_dim=None, num_classes=None, net=None)[source]#

Bases: Module

__init__(input_dim=None, num_classes=None, net=None)[source]#

Classification head for text classification tasks. It is an nn.Module that can be either a simple linear layer or a custom neural network module.

Parameters:
  • input_dim (int, optional) – Dimension of the input features. Required if net is not provided.

  • num_classes (int, optional) – Number of output classes. Required if net is not provided.

  • net (nn.Module, optional) – Custom neural network module to be used as the classification head. If provided, input_dim and num_classes are inferred from this module. Should be either an nn.Linear or an nn.Sequential whose first and last layers are nn.Linear.

forward(x)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type:

Tensor


Example:

from torchTextClassifiers.model.components import ClassificationHead

# Simple linear classifier
head = ClassificationHead(
    input_dim=128,
    num_classes=5
)

# Custom classifier built from an nn.Sequential
import torch.nn as nn

custom_head_module = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(64, 5)
)

head = ClassificationHead(net=custom_head_module)
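
Either head maps pooled features to class logits via its forward method. A minimal sketch of the call, assuming a batch of 128-dimensional feature vectors (the batch size is illustrative):

import torch

features = torch.randn(4, 128)   # (batch_size, input_dim)
logits = head(features)          # (batch_size, num_classes)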

Attention Mechanism#

AttentionConfig#

Configuration for transformer-style self-attention.

class torchTextClassifiers.model.components.attention.AttentionConfig(n_layers, n_head, n_kv_head, sequence_len=None, positional_encoding=True, aggregation_method='mean')[source]#

Bases: object

Attributes

n_layers: int#

Number of transformer blocks.

n_head: int#

Number of attention heads.

n_kv_head: int#

Number of key/value heads.

sequence_len: Optional[int] = None#

Maximum sequence length (default: None).

positional_encoding: bool = True#

Whether to apply positional encoding (default: True).

aggregation_method: str = 'mean'#

How token representations are aggregated (default: 'mean').
__init__(n_layers, n_head, n_kv_head, sequence_len=None, positional_encoding=True, aggregation_method='mean')#

Block#

Single transformer block with self-attention + MLP.

class torchTextClassifiers.model.components.attention.Block(config, layer_idx)[source]#

Bases: Module

__init__(config, layer_idx)[source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, cos_sin)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.


SelfAttentionLayer#

Multi-head self-attention layer.

class torchTextClassifiers.model.components.attention.SelfAttentionLayer(config, layer_idx)[source]#

Bases: Module

__init__(config, layer_idx)[source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, cos_sin=None)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.


MLP#

Feed-forward network.

class torchTextClassifiers.model.components.attention.MLP(config)[source]#

Bases: Module

__init__(config)[source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]#

Define the computation performed at every call.

Should be overridden by all subclasses.


Example:

from torchTextClassifiers.model.components import AttentionConfig, Block

# Configure attention
config = AttentionConfig(
    n_layers=3,
    n_head=4,
    n_kv_head=4
)

# Create the first transformer block
block = Block(config, layer_idx=0)

# Forward pass (cos_sin: precomputed rotary embeddings)
output = block(embeddings, cos_sin)
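
The lower-level pieces of a Block can also be instantiated on their own. A minimal sketch based on the documented constructors (the import path follows the module path shown in the class entries above; layer_idx=0 is illustrative):

from torchTextClassifiers.model.components.attention import SelfAttentionLayer, MLP

attn = SelfAttentionLayer(config, layer_idx=0)   # multi-head self-attention
mlp = MLP(config)                                # feed-forward network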

Composing Components#

Components can be composed to create custom architectures:

import torch
import torch.nn as nn
from torchTextClassifiers.model.components import (
    TextEmbedder, CategoricalVariableNet, ClassificationHead
)

class CustomModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.text_embedder = TextEmbedder(text_config)
        self.cat_net = CategoricalVariableNet(...)
        self.head = ClassificationHead(...)

    def forward(self, input_ids, attention_mask, categorical_data):
        text_features = self.text_embedder(input_ids, attention_mask)
        cat_features = self.cat_net(categorical_data)
        combined = torch.cat([text_features, cat_features], dim=1)
        return self.head(combined)
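
When filling in the elided constructor arguments, make sure the classification head's input_dim matches the size of combined after concatenation: the text feature dimension plus whatever dimension the CategoricalVariableNet produces for your configuration.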

See Also#