Configuration Classes#
Configuration dataclasses for model and training setup.
ModelConfig#
Configuration for model architecture.
- class torchTextClassifiers.torchTextClassifiers.ModelConfig(embedding_dim, categorical_vocabulary_sizes=None, categorical_embedding_dims=None, num_classes=None, attention_config=None)[source]#
Bases: object
Base configuration class for text classifiers.
Attributes
- categorical_vocabulary_sizes: List[int] | None#
Vocabulary sizes for categorical variables (optional).
- categorical_embedding_dims: List[int] | int | None#
Embedding dimensions for categorical variables (optional).
- attention_config: AttentionConfig | None#
Configuration for attention mechanism (optional).
- __init__(embedding_dim, categorical_vocabulary_sizes=None, categorical_embedding_dims=None, num_classes=None, attention_config=None)#
Example#
from torchTextClassifiers import ModelConfig
from torchTextClassifiers.model.components import AttentionConfig

# Simple configuration
config = ModelConfig(
    embedding_dim=128,
    num_classes=3
)

# With categorical features
config = ModelConfig(
    embedding_dim=128,
    num_classes=5,
    categorical_vocabulary_sizes=[10, 20, 5],  # 3 categorical variables
    categorical_embedding_dims=[8, 16, 4]      # Their embedding dimensions
)

# With attention
attention_config = AttentionConfig(
    n_embd=128,
    n_head=4,
    n_layer=2,
    dropout=0.1
)
config = ModelConfig(
    embedding_dim=128,
    num_classes=2,
    attention_config=attention_config
)
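Because categorical_embedding_dims is typed as List[int] | int | None, a single integer is also accepted. The sketch below assumes that a lone int is applied uniformly to every categorical variable; that behavior is an assumption based on the type annotation, not something stated above.

# Sketch: a single int for categorical_embedding_dims, assumed to be
# shared by all categorical variables (per the List[int] | int annotation)
config = ModelConfig(
    embedding_dim=128,
    num_classes=5,
    categorical_vocabulary_sizes=[10, 20, 5],
    categorical_embedding_dims=16
)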
TrainingConfig#
Configuration for training process.
- class torchTextClassifiers.torchTextClassifiers.TrainingConfig(num_epochs, batch_size, lr, loss=<factory>, optimizer=<class 'torch.optim.adam.Adam'>, scheduler=None, accelerator='auto', num_workers=12, patience_early_stopping=3, dataloader_params=None, trainer_params=None, optimizer_params=None, scheduler_params=None)[source]#
Bases: object
Attributes
- loss: torch.nn.Module#
Loss function (default: CrossEntropyLoss).
- optimizer: Type[torch.optim.Optimizer]#
Optimizer class (default: Adam).
- scheduler: Type[torch.optim.lr_scheduler._LRScheduler] | None#
Learning rate scheduler class (optional).
- __init__(num_epochs, batch_size, lr, loss=<factory>, optimizer=<class 'torch.optim.adam.Adam'>, scheduler=None, accelerator='auto', num_workers=12, patience_early_stopping=3, dataloader_params=None, trainer_params=None, optimizer_params=None, scheduler_params=None)#
Example#
from torchTextClassifiers import TrainingConfig
import torch
import torch.nn as nn
import torch.optim as optim

# Basic configuration
config = TrainingConfig(
    num_epochs=20,
    batch_size=32,
    lr=1e-3
)

# Advanced configuration
config = TrainingConfig(
    num_epochs=50,
    batch_size=64,
    lr=5e-4,
    loss=nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 1.5])),
    optimizer=optim.AdamW,
    scheduler=optim.lr_scheduler.CosineAnnealingLR,
    accelerator="gpu",
    patience_early_stopping=10,
    optimizer_params={"weight_decay": 0.01},
    scheduler_params={"T_max": 50}
)
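The signature above also exposes num_workers, dataloader_params, and trainer_params. The sketch below only illustrates passing them; the specific keys shown are assumptions (presumably forwarded to the PyTorch DataLoader and the training loop or Trainer) and are not documented here.

# Sketch with the extra parameter dictionaries from the signature above;
# the dictionary keys are assumptions, not documented behavior
config = TrainingConfig(
    num_epochs=20,
    batch_size=32,
    lr=1e-3,
    num_workers=4,                            # passed to the DataLoader
    dataloader_params={"pin_memory": True},   # assumed DataLoader kwargs
    trainer_params={"gradient_clip_val": 1.0} # assumed trainer kwargs
)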
See Also#
torchTextClassifiers Wrapper - Using configurations with the wrapper
Model Components - AttentionConfig for attention mechanism
Core Models - How configurations affect the model