hookeai.model_architectures.hybrid_base_model.train.training.train_model

train_model(n_max_epochs, dataset, model_init_args, lr_init, opt_algorithm='adam', lr_scheduler_type=None, lr_scheduler_kwargs={}, loss_nature='features_out', loss_type='mse', loss_kwargs={}, batch_size=1, is_sampler_shuffle=False, is_early_stopping=False, early_stopping_kwargs={}, model_load_state=None, save_every=None, dataset_file_path=None, device_type='cpu', seed=None, is_verbose=False)[source]

Training of the hybrid model.

Parameters:
  • n_max_epochs (int) – Maximum number of training epochs.

  • dataset (torch.utils.data.Dataset) – Time series data set. Each sample is stored as a dictionary where each feature (key, str) maps to a torch.Tensor (2d) of shape (sequence_length, n_features) (see the sketch after the parameter list).

  • model_init_args (dict) – Recurrent constitutive model class initialization parameters (check class RecurrentConstitutiveModel).

  • lr_init (float) – Initial value of the optimizer learning rate. Used as a constant learning rate if no learning rate scheduler is specified (lr_scheduler_type=None).

  • opt_algorithm ({'adam',}, default='adam') –

    Optimization algorithm:

    'adam' : Adam (torch.optim.Adam)

  • lr_scheduler_type ({'steplr', 'explr', 'linlr'}, default=None) –

    Type of learning rate scheduler:

    'steplr' : Step-based decay (torch.optim.lr_scheduler.StepLR)

    'explr' : Exponential decay (torch.optim.lr_scheduler.ExponentialLR)

    'linlr' : Linear decay (torch.optim.lr_scheduler.LinearLR)

  • lr_scheduler_kwargs (dict, default={}) – Arguments of torch.optim.lr_scheduler.LRScheduler initializer.

  • loss_nature ({'features_out',}, default='features_out') –

    Loss nature:

    'features_out' : Based on output features

  • loss_type ({'mse',}, default='mse') –

    Loss function type:

    'mse' : MSE (torch.nn.MSELoss)

  • loss_kwargs (dict, default={}) – Arguments of the loss function (torch.nn.modules.loss._Loss) initializer.

  • batch_size (int, default=1) – Number of samples loaded per batch.

  • is_sampler_shuffle (bool, default=False) – If True, shuffles data set samples at every epoch.

  • is_early_stopping (bool, default=False) – If True, the training process is halted when the early stopping criterion is triggered.

  • early_stopping_kwargs (dict, default={}) – Early stopping criterion parameters, where each key (str) is a parameter name and each value is the corresponding parameter value.

  • model_load_state ({'default', 'init', int, 'best', 'last'}, default='default') –

    Available model state to be loaded from the model directory. Options:

    'default' : Model default state file

    'init' : Model initial state

    int : Model state of given training epoch

    'best' : Model state of best performance

    'last' : Model state of latest training epoch

  • save_every (int, default=None) – Save the model state every save_every epochs. If None, only the last epoch and best performance states are saved.

  • dataset_file_path (str, default=None) – Time series data set file path, if such a file exists. Only used for output purposes.

  • device_type ({'cpu', 'cuda'}, default='cpu') – Type of device on which torch.Tensor is allocated.

  • seed (int, default=None) – Seed used to initialize the random number generators of Python and other libraries (e.g., NumPy, PyTorch) for all devices to preserve reproducibility. Also sets the workers' seed in PyTorch data loaders.

  • is_verbose (bool, default=False) – If True, enable verbose output.
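
A minimal sketch of a data set compatible with the dataset parameter above (not part of this API). The feature names 'features_in' and 'features_out' are placeholders; the actual feature keys depend on the hybrid model and on the chosen loss nature.

    import torch
    from torch.utils.data import Dataset

    class TimeSeriesDictDataset(Dataset):
        # Toy time series data set matching the documented sample layout:
        # each sample is a dictionary mapping a feature name (str) to a 2d
        # torch.Tensor of shape (sequence_length, n_features).
        def __init__(self, n_samples=8, sequence_length=100, n_features=6):
            self._samples = [
                {'features_in': torch.randn(sequence_length, n_features),
                 'features_out': torch.randn(sequence_length, n_features)}
                for _ in range(n_samples)]

        def __len__(self):
            return len(self._samples)

        def __getitem__(self, idx):
            return self._samples[idx]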

Returns:

  • model (torch.nn.Module) – Recurrent neural network model.

  • best_loss (float) – Best loss during training process.

  • best_training_epoch (int) – Training epoch corresponding to best loss during training process.
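
Example:

A minimal usage sketch, assuming the data set sketch above and that model_init_args contains valid RecurrentConstitutiveModel initialization parameters (the empty dict below is a placeholder). The scheduler arguments step_size and gamma are the standard torch.optim.lr_scheduler.StepLR initializer parameters.

    from hookeai.model_architectures.hybrid_base_model.train.training import (
        train_model)

    # Placeholder initialization parameters; check RecurrentConstitutiveModel
    # for the actual required keys.
    model_init_args = {}

    model, best_loss, best_training_epoch = train_model(
        n_max_epochs=200,
        dataset=TimeSeriesDictDataset(),  # see the data set sketch above
        model_init_args=model_init_args,
        lr_init=1.0e-3,
        opt_algorithm='adam',
        lr_scheduler_type='steplr',
        lr_scheduler_kwargs={'step_size': 50, 'gamma': 0.5},
        loss_nature='features_out',
        loss_type='mse',
        batch_size=4,
        is_sampler_shuffle=True,
        is_early_stopping=False,
        device_type='cpu',
        seed=0,
        is_verbose=True)

    print(f'Best loss: {best_loss:.6e} (epoch {best_training_epoch})')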