pytorch_pfn_extras.onnx.export_testcase#

pytorch_pfn_extras.onnx.export_testcase(model, args, out_dir, *, output_grad=False, metadata=True, model_overwrite=True, strip_large_tensor_data=False, large_tensor_threshold=100, return_output=False, user_meta=None, export_torch_script=False, export_torch_trace=False, export_chrome_tracing=True, **kwargs)#

Export a model, along with its input/output tensors, in protobuf format.

Parameters:
  • output_grad (bool or Tensor) – If True, this function also outputs the model’s gradients with names ‘gradient_%d.pb’. If a Tensor is given, it is used as the gradient input, and the gradient inputs are output as ‘gradient_input_%d.pb’ along with the gradients.

  • metadata (bool) – If True, output meta information taken from git log.

  • model_overwrite (bool) – If False and model.onnx already exists, only export the input/output data as another test dataset.

  • strip_large_tensor_data (bool) – If True, strip the data of large tensors to reduce the ONNX file size for benchmarking.

  • large_tensor_threshold (int) – If the number of elements in a tensor is larger than this value, the tensor is stripped when strip_large_tensor_data is True.

  • return_output (bool) – If True, return the output values computed by the model.

  • export_torch_script (bool) – If True, output model_script.pt using torch.jit.script.

  • export_torch_trace (bool) – If True, output model_trace.pt using torch.jit.trace.

  • model (Union[Module, ScriptModule]) –

  • args (Any) –

  • out_dir (str) –

  • user_meta (Optional[Mapping[str, Any]]) –

  • export_chrome_tracing (bool) –

  • kwargs (Any) –

Return type:

Any

Warning

This function is not thread safe.

Note

When exporting a model whose forward takes keyword arguments of torch.Tensor type, you can pass them by putting a dict as the last element of args. If a keyword argument has a default value, you must still include it in the dict explicitly. You must also explicitly specify input_names covering both positional and keyword arguments.