pytorch_pfn_extras.nn.parallel.distributed.DistributedDataParallel#
- class pytorch_pfn_extras.nn.parallel.distributed.DistributedDataParallel(module, broadcast_buffers=True, negotiate_grads=True, process_group=None, reduce_function=None, broadcast_function=None, **kwargs)#
Bases: Module
Module for distributed data parallelism.
This class synchronizes the gradients and the buffers after backward computations; a minimal usage sketch follows the parameter list below.
- Parameters:
module (Module) – torch.nn.Module object to be trained
broadcast_buffers (bool) – Boolean flag to broadcast buffers after backward computations. Broadcasting buffers may be helpful when the module includes BatchNormalization. However, it will degrade training throughput. (default: True)
negotiate_grads (bool) – Boolean flag to choose gradients to be sent before all-reduce. This flag is necessary when the computation graph of the module is dynamic. (default: True)
process_group (Optional[ProcessGroup]) – Process group used for broadcasting and reducing. (default: torch.distributed.group.WORLD)
reduce_function (Optional[Callable[[Sequence[Tensor], Optional[ProcessGroup]], None]]) – All-reduce function
broadcast_function (Optional[Callable[[Sequence[Tensor], Optional[ProcessGroup]], None]]) – Broadcast function
kwargs (Any) – Additional keyword arguments are accepted for compatibility with torch.nn.parallel.DistributedDataParallel; they are ignored, and a warning is emitted when any of them is set.
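Example (a minimal sketch, assuming the process was launched with torchrun or an equivalent launcher so the default process group can be initialized; MyModel and loader are hypothetical placeholders):
import torch
import torch.distributed as dist
from pytorch_pfn_extras.nn.parallel.distributed import DistributedDataParallel

dist.init_process_group(backend="nccl")  # assumes rank/world-size env vars are set

model = MyModel().cuda()                 # hypothetical user-defined module
ddp_model = DistributedDataParallel(model, broadcast_buffers=True)
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

for x, t in loader:                      # hypothetical data loader
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(ddp_model(x.cuda()), t.cuda())
    loss.backward()                      # gradients (and buffers) are synchronized here
    optimizer.step()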
Methods
__init__(module[, broadcast_buffers, ...]) – This module receives keyword arguments for compatibility with torch.nn.parallel.DistributedDataParallel.
add_module(name, module) – Adds a child module to the current module.
apply(fn) – Applies fn recursively to every submodule (as returned by .children()) as well as self.
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Returns an iterator over module buffers.
children() – Returns an iterator over immediate children modules.
compile(*args, **kwargs) – Compile this Module's forward using torch.compile().
cpu() – Moves all model parameters and buffers to the CPU.
cuda([device]) – Moves all model parameters and buffers to the GPU.
double() – Casts all floating point parameters and buffers to double datatype.
eval() – Sets the module in evaluation mode.
extra_repr() – Set the extra representation of the module.
float() – Casts all floating point parameters and buffers to float datatype.
forward(*args, **kwargs) – Defines the computation performed at every call.
get_buffer(target) – Returns the buffer given by target if it exists, otherwise throws an error.
get_extra_state() – Returns any extra state to include in the module's state_dict.
get_parameter(target) – Returns the parameter given by target if it exists, otherwise throws an error.
get_submodule(target) – Returns the submodule given by target if it exists, otherwise throws an error.
half() – Casts all floating point parameters and buffers to half datatype.
ipu([device]) – Moves all model parameters and buffers to the IPU.
load_state_dict(state_dict[, strict]) – Copies parameters and buffers from state_dict into this module and its descendants.
modules() – Returns an iterator over all modules in the network.
named_buffers([prefix, recurse, ...]) – Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse, ...]) – Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
no_sync() – A context manager to disable synchronization after backward.
parameters([recurse]) – Returns an iterator over module parameters.
register_backward_hook(hook) – Registers a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Adds a buffer to the module.
register_comm_hook(hook) – Registers a hook function.
register_forward_hook(hook, *[, prepend, ...]) – Registers a forward hook on the module.
register_forward_pre_hook(hook, *[, ...]) – Registers a forward pre-hook on the module.
register_full_backward_hook(hook[, prepend]) – Registers a backward hook on the module.
register_full_backward_pre_hook(hook[, prepend]) – Registers a backward pre-hook on the module.
register_load_state_dict_post_hook(hook) – Registers a post hook to be run after module's load_state_dict is called.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Adds a parameter to the module.
register_state_dict_pre_hook(hook) – These hooks will be called with arguments self, prefix, and keep_vars before calling state_dict on self.
requires_grad_([requires_grad]) – Change if autograd should record operations on parameters in this module.
set_extra_state(state) – This function is called from load_state_dict() to handle any extra state found within the state_dict.
share_memory() – See torch.Tensor.share_memory_().
state_dict() – Returns a dictionary containing references to the whole state of the module.
to(*args, **kwargs) – Moves and/or casts the parameters and buffers.
to_empty(*, device[, recurse]) – Moves the parameters and buffers to the specified device without copying storage.
train([mode]) – Sets the module in training mode.
type(dst_type) – Casts all parameters and buffers to dst_type.
xpu([device]) – Moves all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Resets gradients of all model parameters.
Attributes
T_destination – alias of TypeVar('T_destination', bound=Mapping[str, Tensor])
call_super_init
dump_patches
training
- T_destination#
alias of TypeVar('T_destination', bound=Mapping[str, Tensor])
- __init__(module, broadcast_buffers=True, negotiate_grads=True, process_group=None, reduce_function=None, broadcast_function=None, **kwargs)#
Keyword arguments are accepted for compatibility with torch.nn.parallel.DistributedDataParallel; they are ignored, and a warning is emitted when any of them is set.
- Parameters:
module (Module) –
broadcast_buffers (bool) –
negotiate_grads (bool) –
process_group (Optional[ProcessGroup]) –
reduce_function (Optional[Callable[[Sequence[Tensor], Optional[ProcessGroup]], None]]) –
broadcast_function (Optional[Callable[[Sequence[Tensor], Optional[ProcessGroup]], None]]) –
kwargs (Any) –
- Return type:
None
- forward(*args, **kwargs)#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of calling forward() directly, since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters:
args (Any) –
kwargs (Any) –
- Return type:
Any
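In practice this means invoking the wrapper instance, never forward() directly, so that the registered hooks (including the gradient synchronization logic) run. A minimal illustration, with ddp_model and batch as hypothetical names:
output = ddp_model(batch)           # correct: runs registered hooks
output = ddp_model.forward(batch)   # avoid: silently skips registered hooks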
- load_state_dict(state_dict, strict=True, *args)#
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Warning
If assign is True the optimizer must be created after the call to load_state_dict.
- Parameters:
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
assign (bool, optional) – whether to assign items in the state dictionary to their corresponding keys in the module instead of copying them in place into the module's current parameters and buffers. When False, the properties of the tensors in the current module are preserved, while when True, the properties of the Tensors in the state dict are preserved. Default: False
args (Any) –
- Returns:
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys
- Return type:
NamedTuple with missing_keys and unexpected_keys fields
Note
If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
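Example (a hedged sketch; the checkpoint path is a hypothetical placeholder):
ckpt = torch.load("checkpoint.pt")            # hypothetical checkpoint file
result = ddp_model.load_state_dict(ckpt, strict=False)
print(result.missing_keys)     # keys expected by the module but absent from ckpt
print(result.unexpected_keys)  # keys present in ckpt but unknown to the module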
- no_sync()#
A context manager to disable synchronization after backward.
- Return type:
Generator[None, None, None]
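A common use is gradient accumulation: skip synchronization on the intermediate backward passes and let the final one reduce the accumulated gradients. A hedged sketch, with micro_batches, loss_fn, ddp_model, and optimizer as hypothetical names:
optimizer.zero_grad()
for i, (x, t) in enumerate(micro_batches):
    if i < len(micro_batches) - 1:
        with ddp_model.no_sync():            # backward without all-reduce
            loss_fn(ddp_model(x), t).backward()
    else:
        loss_fn(ddp_model(x), t).backward()  # final backward synchronizes
optimizer.step()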
- register_comm_hook(hook)#
Registers a hook function. This module will invoke the hook before starting the synchronization.
- Parameters:
hook (Callable[[DistributedDataParallel], None]) – Callable object that will be invoked before synchronization.
- Return type:
RemovableHandle
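For instance, a hook can inspect or adjust the local gradients just before they are synchronized. A hedged sketch; the gradient scaling is purely illustrative:
def scale_grads(module):
    # Invoked with the DistributedDataParallel instance before synchronization.
    for param in module.parameters():
        if param.grad is not None:
            param.grad.mul_(0.5)   # illustrative only

handle = ddp_model.register_comm_hook(scale_grads)
# ... training ...
handle.remove()  # the returned RemovableHandle detaches the hook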
- state_dict()#
Returns a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note
The returned object is a shallow copy. It contains references to the module's parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning
Please avoid the use of argument destination as it is not designed for end-users.
- Parameters:
destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.
keep_vars (bool, optional) – by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.
- Returns:
a dictionary containing a whole state of the module
- Return type:
dict
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
- training: bool#