hookeai.material_model_finder.train.training.train_model

train_model(n_max_epochs, specimen_data, specimen_material_state, model_init_args, lr_init, opt_algorithm='adam', lr_scheduler_type=None, lr_scheduler_kwargs={}, is_explicit_model_parameters=False, is_params_stopping=True, params_stopping_kwargs={}, loss_scaling_factor=None, loss_time_weights=None, save_every=None, device_type='cpu', seed=None, is_verbose=False)[source]

Training of a recurrent constitutive model.

Parameters:
  • n_max_epochs (int) – Maximum number of training epochs.

  • specimen_data (SpecimenNumericalData) – Specimen numerical data translated from experimental results.

  • specimen_material_state (StructureMaterialState) – FETorch structure material state.

  • model_init_args (dict) – Material model finder class initialization parameters (check class MaterialModelFinder).

  • lr_init (float) – Initial value of the optimizer learning rate. Used as a constant learning rate if no learning rate scheduler is specified (lr_scheduler_type=None).

  • opt_algorithm ({'adam',}, default='adam') –

    Optimization algorithm:

    'adam' : Adam (torch.optim.Adam)

  • lr_scheduler_type ({'steplr', 'explr', 'linlr'}, default=None) –

    Type of learning rate scheduler:

    'steplr' : Step-based decay (torch.optim.lr_scheduler.StepLR)

    'explr' : Exponential decay (torch.optim.lr_scheduler.ExponentialLR)

    'linlr' : Linear decay (torch.optim.lr_scheduler.LinearLR)

  • lr_scheduler_kwargs (dict, default={}) – Keyword arguments of the selected torch.optim.lr_scheduler scheduler initializer.

  • is_explicit_model_parameters (bool, default=False) – If True, activates the explicit handling of model parameters. This includes enforcing available bounds on the parameters during the training procedure and storing the model parameters history for post-processing.

  • is_params_stopping (bool, default=True) – If True, the training process is halted when the parameters convergence criterion is triggered.

  • params_stopping_kwargs (dict, default={}) – Keyword arguments of the parameters convergence stopping criterion.

  • loss_scaling_factor (torch.Tensor(0d), default=None) – Loss scaling factor. If provided, the loss is pre-multiplied by this factor.

  • loss_time_weights (torch.Tensor(1d), default=None) – Loss time weights stored as a torch.Tensor(1d) of shape (n_time,). If provided, each discrete-time loss contribution is pre-multiplied by the corresponding weight. If None, all time weights are set to 1.0.

  • save_every (int, default=None) – Save the model every save_every epochs. If None, only the last epoch and best performance states are saved.

  • device_type ({'cpu', 'cuda'}, default='cpu') – Type of device on which torch.Tensor is allocated.

  • seed (int, default=None) – Seed used to initialize the random number generators of Python and other libraries (e.g., NumPy, PyTorch) for all devices to preserve reproducibility. Also sets the workers seed in PyTorch data loaders.

  • is_verbose (bool, default=False) – If True, enable verbose output.
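For reference, the three lr_scheduler_type options decay the learning rate according to the closed-form schedules of the corresponding torch.optim.lr_scheduler classes. The sketch below reproduces those schedules in plain Python (the function name scheduled_lr and the epoch-wise evaluation are illustrative, not part of this API; the keyword names step_size, gamma, start_factor, end_factor, and total_iters mirror the PyTorch scheduler initializers passed via lr_scheduler_kwargs):

```python
def scheduled_lr(lr_init, epoch, lr_scheduler_type=None, **kwargs):
    """Learning rate at a given epoch for each supported scheduler type."""
    if lr_scheduler_type is None:
        # No scheduler: constant learning rate equal to lr_init
        return lr_init
    elif lr_scheduler_type == 'steplr':
        # StepLR: multiply lr_init by gamma once every step_size epochs
        return lr_init*kwargs['gamma']**(epoch // kwargs['step_size'])
    elif lr_scheduler_type == 'explr':
        # ExponentialLR: multiply lr_init by gamma every epoch
        return lr_init*kwargs['gamma']**epoch
    elif lr_scheduler_type == 'linlr':
        # LinearLR: interpolate start_factor -> end_factor over total_iters
        start = kwargs.get('start_factor', 1.0/3)
        end = kwargs.get('end_factor', 1.0)
        total = kwargs.get('total_iters', 5)
        frac = min(epoch, total)/total
        return lr_init*(start + (end - start)*frac)
    raise ValueError(f'Unknown scheduler type: {lr_scheduler_type}')
```

For instance, lr_scheduler_type='steplr' with lr_scheduler_kwargs={'step_size': 5, 'gamma': 0.5} halves the learning rate every 5 epochs.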

Returns:

  • model (torch.nn.Module) – Recurrent neural network model.

  • best_loss (float) – Best loss achieved during the training process.

  • best_training_epoch (int) – Training epoch corresponding to the best loss during the training process.
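The interplay of loss_time_weights and loss_scaling_factor can be sketched as follows. This is a plain-Python illustration assuming the per-time-step losses have already been computed and that contributions are summed over time (the actual aggregation and the use of torch.Tensor objects are internal to the implementation; the function weighted_loss is hypothetical):

```python
def weighted_loss(time_losses, loss_time_weights=None, loss_scaling_factor=None):
    """Combine discrete-time loss contributions into a single scalar loss."""
    n_time = len(time_losses)
    # Default behavior (loss_time_weights=None): unit weight for every time
    if loss_time_weights is None:
        loss_time_weights = [1.0]*n_time
    # Each discrete-time contribution is pre-multiplied by its weight
    loss = sum(w*l for w, l in zip(loss_time_weights, time_losses))
    # Optional scaling: pre-multiply the loss by the scaling factor
    if loss_scaling_factor is not None:
        loss = loss_scaling_factor*loss
    return loss
```

With both arguments left at None, this reduces to a plain sum of the per-time-step losses.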