pytorch_pfn_extras.engine.create_evaluator

pytorch_pfn_extras.engine.create_evaluator(models, *, progress_bar=False, device='cpu', metrics=None, logic=None, handler_class=None, options=None, runtime_options=None, profile=None, distributed=False)

Creates an evaluator object. The return value of this function is expected to be fed to ppe.engine.create_trainer as an argument. A minimal usage sketch is given at the end of this section.

Parameters:
  • models (Union[Module, Mapping[str, Module]]) – Map of string to torch.nn.Module or an actual Module. In most cases, this argument is the same as the model argument of ppe.engine.create_trainer.

  • progress_bar (bool) – If True, a progress bar is enabled in evaluation.

  • device (str or torch.device) – Device name used for selecting a corresponding runtime class.

  • metrics (list of metrics) – List of metric objects that compute various quantities and update the outputs used for reporting.

  • logic (Optional[Logic]) – A logic object. If None is given, a logic object is instantiated from the default logic class.

  • handler_class (Optional[Type[Handler]]) – A handler class that instantiates a handler object. If None is given, ppe.handler.Handler is used as a default handler class.

  • options (Optional[Dict[str, Any]]) – Options that are set to the handler and logic object. See the documentation of ppe.handler.Handler and ppe.handler.Logic for details.

  • runtime_options (Optional[Mapping[str, Any]]) – Options that are set to the runtime object. See the documentation of ppe.handler.Handler for details.

  • profile (Optional[profile]) – A torch.profiler.profile object used to collect performance metrics.

  • distributed (bool) – Flag to determine whether to create a distributed-enabled evaluator. If set to True, the created evaluator will support distributed execution.

Return type:

Evaluator
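
The following is a minimal usage sketch, not a definitive recipe: the model, optimizer, and data loaders (train_loader, val_loader) are hypothetical placeholders, and only the flow of passing the created evaluator to ppe.engine.create_trainer via its evaluator argument is illustrated.

    import torch
    import pytorch_pfn_extras as ppe

    # Hypothetical model and optimizer, used only for illustration.
    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Create the evaluator; the same model is typically reused here.
    evaluator = ppe.engine.create_evaluator(
        model,
        device='cpu',
        progress_bar=True,
    )

    # Feed the evaluator to create_trainer as its `evaluator` argument.
    trainer = ppe.engine.create_trainer(
        model,
        optimizer,
        max_epochs=10,
        evaluator=evaluator,
        device='cpu',
    )

    # train_loader and val_loader are assumed torch DataLoader objects.
    # trainer.run(train_loader, val_loader)

The evaluator itself is not run directly; the trainer invokes it on the validation loader according to the configured trigger.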