API Reference

Package

Training Loop

Trainer

- Creates a trainer object.
- Creates an evaluator object.
- A set of methods that defines the training logic.
- Base class of Handler.
- A set of callback functions to perform device-specific operations.
- A base class for collections of device-specific callback functions.
- A collection of callback functions for the devices that PyTorch supports by default.
Extensions Manager

- Manages the extensions and the current status.
- Manages extensions and the current status in an Ignite training loop.
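To make the manager/extension relationship concrete, here is a minimal pure-Python sketch of the pattern an extensions manager implements: it keeps registered extensions and, after each iteration, invokes those whose trigger fires. All names here (`ManagerSketch`, `extend`, `run_iteration`) are illustrative assumptions, not the package's actual API.

```python
# Conceptual sketch only: a manager that drives extensions on a trigger
# schedule. The real manager does much more (snapshots, observations, epochs).

class ManagerSketch:
    def __init__(self):
        self.extensions = []   # list of (trigger_fn, extension_fn) pairs
        self.iteration = 0
        self.observation = {}  # values reported during the current iteration

    def extend(self, extension, trigger):
        # Register an extension with the trigger deciding when it runs.
        self.extensions.append((trigger, extension))

    def run_iteration(self):
        self.iteration += 1
        self.observation = {}
        for trigger, extension in self.extensions:
            if trigger(self.iteration):
                extension(self)   # extensions receive the manager

calls = []
manager = ManagerSketch()
manager.extend(lambda m: calls.append(m.iteration),
               trigger=lambda it: it % 3 == 0)   # fire every 3 iterations
for _ in range(7):
    manager.run_iteration()
print(calls)  # [3, 6]
```

The point of the design is that training code only calls `run_iteration`; all periodic side effects (logging, evaluation, snapshots) live in extensions selected by their triggers.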
Extensions

- Decorator to make a given function into an extension.
- Base class of extensions.
- Extension and options.
- Extension that traces the best value of a specific key in the observation.
- An extension to evaluate models on a validation set.
- An extension to output the accumulated results to a log file.
- Extension that traces the maximum value of a specific key in the observation.
- Calculates the micro-average ratio.
- Extension that traces the minimum value of a specific key in the observation.
- Returns an extension to record the learning rate.
- Returns an extension to continuously record a value.
- An extension to report parameter statistics.
- An extension to output plots.
- An extension to print the accumulated results.
- An extension to print a progress bar and recent training status.
- Writes the profile results to a file.
- Returns a trainer extension to take snapshots of the trainer.
- An extension to communicate with Slack.
- An extension to communicate with Slack using an Incoming Webhook.
- An extension to plot statistics for variables.
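The first entry above describes a decorator that promotes a plain function into an extension. A minimal sketch of how such a decorator can work, by attaching scheduling metadata to the function, is shown below. The names `make_extension`, `trigger`, and `priority` are illustrative assumptions, not necessarily the package's actual signature.

```python
# Conceptual sketch only: a "make_extension"-style decorator that tags a
# callable with metadata an extensions manager can later read.

def make_extension(trigger=(1, "epoch"), priority=100):
    """Decorator that attaches extension metadata to a function."""
    def decorator(fn):
        fn.trigger = trigger      # how often the manager should invoke it
        fn.priority = priority    # order relative to other extensions
        return fn
    return decorator

@make_extension(trigger=(10, "iteration"))
def print_status(manager):
    # An extension receives the manager and can inspect its observation.
    print("status:", getattr(manager, "observation", {}))

print(print_status.trigger)   # (10, 'iteration')
print(print_status.priority)  # 100
```

This keeps user code as ordinary functions while still letting the manager schedule and order them.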
Triggers

- Trigger for early stopping.
- Trigger based on a fixed interval.
- Trigger invoked at specified point(s) of iterations or epochs.
- Trigger invoked when a specific value becomes the best.
- Trigger invoked when a specific value becomes the maximum.
- Trigger invoked when a specific value becomes the minimum.
- Trigger based on the starting point of the iteration.
- Trigger based on a fixed time interval.
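A trigger is essentially a predicate over the training position. The interval-based variant can be sketched as follows; this is a conceptual illustration under assumed names and semantics, not the package's implementation.

```python
# Conceptual sketch of an interval trigger: fires every `period` units,
# where the unit is either an iteration count or an epoch count.

class IntervalTriggerSketch:
    def __init__(self, period, unit):
        assert unit in ("iteration", "epoch")
        self.period = period
        self.unit = unit

    def __call__(self, iteration, epoch):
        count = epoch if self.unit == "epoch" else iteration
        return count > 0 and count % self.period == 0

trigger = IntervalTriggerSketch(5, "iteration")
fired = [i for i in range(1, 16) if trigger(i, epoch=0)]
print(fired)  # [5, 10, 15]
```

Value-based triggers (best/maximum/minimum) follow the same callable protocol but compare a monitored observation key against the best value seen so far instead of counting steps.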
Reporting

- Object to which observed values are reported.
- Reports observed values with the current reporter object.
- Returns a report scope with the current reporter.
Logging

- Returns a child logger to be used by applications.
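"Child logger" here refers to the standard library's dot-separated logger hierarchy, where per-application loggers inherit handlers and levels from a parent. The helper presumably wraps this stdlib pattern; the code below is plain `logging`, not the package's function.

```python
# Stdlib child-logger pattern: "myapp.training" inherits configuration
# from the "myapp" logger unless overridden.
import logging

parent = logging.getLogger("myapp")
child = parent.getChild("training")  # same as logging.getLogger("myapp.training")

print(child.name)              # myapp.training
print(child.parent is parent)  # True
```

Configuring handlers on the parent once is then enough for every child a library or application creates.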
Profiler

- Context manager to automatically report execution times.
Distributed Training

- Module for distributed data parallelism.
- Initializes torch.distributed environments with values taken from OpenMPI.
Check Pointing

Lazy Modules

- Module to check the shape of a tensor.
- Checks the shape and type of a tensor.
- Linear module with lazy weight initialization.
- Conv1d module with lazy weight initialization.
- Conv2d module with lazy weight initialization.
- Conv3d module with lazy weight initialization.
- BatchNorm1d module with lazy weight initialization.
- BatchNorm2d module with lazy weight initialization.
- BatchNorm3d module with lazy weight initialization.
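"Lazy weight initialization" means the layer defers allocating its weights until the first input reveals the input size. The pure-Python sketch below illustrates only that deferral idea; the real lazy modules wrap `torch.nn` layers and tensors, and the class name here is hypothetical.

```python
# Conceptual sketch of lazy initialization: the weight shape is unknown
# until the first forward pass infers it from the input.
import random

class LazyLinearSketch:
    def __init__(self, out_features):
        self.out_features = out_features
        self.weight = None   # shape unknown until first call

    def __call__(self, x):   # x: list of floats (one sample)
        if self.weight is None:
            in_features = len(x)   # inferred from the first input
            self.weight = [[random.gauss(0, 0.1) for _ in range(in_features)]
                           for _ in range(self.out_features)]
        return [sum(w * v for w, v in zip(row, x)) for row in self.weight]

layer = LazyLinearSketch(out_features=2)
print(layer.weight is None)   # True: nothing allocated yet
out = layer([1.0, 2.0, 3.0])
print(len(layer.weight[0]))   # 3: inferred from the input
print(len(out))               # 2
```

This removes the need to hand-compute input sizes (e.g. the flattened size after a convolution stack) when defining a model.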
ONNX

Export

- Exports a model into an ONNX graph.
- Exports a model and its I/O tensors in protobuf format.

Annotation

- Adds annotation parameters to the target function.
- Applies annotations to the target function.
- Adds an anchor node to the scoped modules.
Datasets

- Dataset that caches the loaded samples in shared memory.
- An abstract class that represents a tabular dataset.
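The caching dataset's core idea is load-once-reuse-thereafter. The sketch below shows only that in-process caching idea with a hypothetical class name; the real dataset stores samples in shared memory so the cache is also reused across data-loader worker processes.

```python
# Conceptual sketch: wrap an expensive per-index loader with a cache so
# each sample is loaded at most once. (Simplification: a sample that is
# legitimately None would be re-loaded; the real cache tracks validity.)

class CachedDatasetSketch:
    def __init__(self, loader, size):
        self._loader = loader
        self._cache = [None] * size

    def __getitem__(self, i):
        if self._cache[i] is None:
            self._cache[i] = self._loader(i)   # expensive load, done once
        return self._cache[i]

loads = []
def loader(i):
    loads.append(i)     # record every real load for demonstration
    return i * 10

ds = CachedDatasetSketch(loader, size=4)
print(ds[2], ds[2])  # 20 20
print(loads)         # [2]  -- the sample was loaded only once
```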
Config
NumPy/CuPy Compatibility

- Creates a torch.Tensor from a numpy.ndarray or cupy.ndarray.
- Creates a numpy.ndarray or cupy.ndarray from a torch.Tensor.
- Returns the module of the ndarray implementation (numpy or cupy) for the given obj.
- Returns the NumPy dtype for the given PyTorch dtype.
- Returns the PyTorch dtype for the given NumPy dtype.
- Context manager that selects a given stream.
- Makes CuPy use the PyTorch memory pool.
- Makes CuPy use its default memory pool.