pytorch_pfn_extras.distributed.initialize_ompi_environment

pytorch_pfn_extras.distributed.initialize_ompi_environment(*, backend='gloo', init_method='tcp', world_size=1, rank=0, local_rank=0, addr='localhost', port='1234', timeout=1800)

Initialize the torch.distributed environment with values taken from OpenMPI environment variables.

Parameters:
  • backend (str) – The backend to be used, only "gloo" and "nccl" are supported. Defaults to "gloo".

  • init_method (str) – Initialization method used by torch, only "tcp" and "env" are supported. Defaults to "tcp".

  • world_size (int) – The total world size to be used in case it is not specified in MPI env vars. Defaults to 1.

  • rank (int) – The process rank to be used in case it is not specified in MPI env vars. Defaults to 0.

  • local_rank (int) – The process local rank to be used in case it is not specified in MPI env vars. Defaults to 0.

  • addr (str) – The address of the master process of torch.distributed. Defaults to "localhost".

  • port (str) – The port of the master process of torch.distributed. Defaults to "1234".

  • timeout (int) – Timeout seconds for torch.distributed collective communication. Defaults to 1800.

Return type:

Tuple[int, int, int]
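The world_size, rank, and local_rank arguments act as fallbacks: each is used only when the corresponding MPI environment variable is absent. The sketch below illustrates that fallback logic with a hypothetical resolve_ompi_values helper; the OMPI_COMM_WORLD_* variable names are the ones OpenMPI exports per process, and it is assumed (not confirmed by this page) that these are the variables the function inspects.

```python
import os

# OpenMPI exports one of these per launched process; assumed here to be
# the variables consulted before falling back to the keyword defaults.
_OMPI_VARS = {
    "world_size": "OMPI_COMM_WORLD_SIZE",
    "rank": "OMPI_COMM_WORLD_RANK",
    "local_rank": "OMPI_COMM_WORLD_LOCAL_RANK",
}


def resolve_ompi_values(world_size=1, rank=0, local_rank=0):
    """Return (world_size, rank, local_rank), preferring the MPI
    environment variables over the supplied defaults."""
    env = os.environ
    world_size = int(env.get(_OMPI_VARS["world_size"], world_size))
    rank = int(env.get(_OMPI_VARS["rank"], rank))
    local_rank = int(env.get(_OMPI_VARS["local_rank"], local_rank))
    return world_size, rank, local_rank
```

In a typical training script launched with mpirun, one would call initialize_ompi_environment(backend="nccl", init_method="tcp", addr=..., port=...) once at startup and use the returned tuple to select the CUDA device for the local rank.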