pytorch_pfn_extras.training.extensions.Evaluator#

class pytorch_pfn_extras.training.extensions.Evaluator(iterator, target, eval_hook=None, eval_func=None, **kwargs)#

Bases: Extension

An extension to evaluate models on a validation set.

This extension evaluates the current models with a given evaluation function. It creates a Reporter object to store values observed in the evaluation function on each iteration. The reports for all iterations are aggregated into a DictSummary. The collected mean values are further reported to the reporter object of the manager, where the name of each observation is prefixed by the evaluator name. See Reporter for details on the naming rules of the reports.

Evaluator can be customized in a way similar to StandardUpdater. The main differences are:

  • There are no optimizers in an evaluator. Instead, it holds the modules to evaluate.

  • An evaluation loop function is used instead of an update function.

  • The preparation routine, which is called before each evaluation, can be customized. It can be used, e.g., to initialize the state of stateful recurrent networks.

There are two ways to modify the evaluation behavior besides setting a custom evaluation function. One is by setting a custom evaluation loop via the eval_func argument. The other is by inheriting this class and overriding the evaluate() method. In the latter case, users have to create and handle a reporter object manually. Users also have to copy the iterators before using them, in order to reuse them at the next evaluation. In both cases, the functions are called in testing mode.

This extension is called at the end of each epoch by default.
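As a minimal sketch of typical usage (the names model, optimizer, train_loader, and val_loader are hypothetical, and the 'val/loss' report key is one arbitrary choice):

    import torch.nn.functional as F

    import pytorch_pfn_extras as ppe
    from pytorch_pfn_extras.training import extensions

    def eval_func(x, t):
        # Called once per validation batch; the reported values are
        # aggregated into mean statistics by the Evaluator.
        y = model(x)
        ppe.reporting.report({'val/loss': F.cross_entropy(y, t).item()})

    manager = ppe.training.ExtensionsManager(
        model, optimizer, max_epochs=10,
        iters_per_epoch=len(train_loader),
    )
    # Registered with the default trigger, i.e. run once per epoch.
    manager.extend(extensions.Evaluator(val_loader, model, eval_func=eval_func))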

Parameters:
  • iterator (Union[DataLoader[Any], Dict[str, DataLoader[Any]]]) – Dataset iterator for the validation dataset. It can also be a dictionary of iterators. If this is just an iterator, the iterator is registered by the name 'main'.

  • target (Union[Module, Dict[str, Module]]) – torch.nn.Module object or a dictionary of modules to evaluate. If this is just a single module, it is registered by the name 'main'.

  • eval_func (Optional[Callable[[...], Any]]) – Evaluation function called at each iteration. By default, the target module itself is called.

  • progress_bar – Boolean flag to show a progress bar during evaluation, similar to ProgressBar. (default: False)

  • metrics – List of callables that are called every batch to calculate metrics such as accuracy or ROC AUC. The signature of each callable is def metric_fn(batch, output, last_iteration). (default: [])

  • eval_hook (Optional[Callable[[Evaluator], None]]) – Function to prepare for each evaluation; it is called before the evaluation loop starts.

  • kwargs (Any) –

Warning

The argument progress_bar is experimental. The interface can change in the future.

eval_hook#

Function to prepare for each evaluation process.

eval_func#

Evaluation function called at each iteration.

Parameters:
  • args (Any) –

  • kwargs (Any) –

Return type:

Any

Methods

__init__(iterator, target[, eval_hook, ...])

add_metric(metric_fn)

Adds a custom metric to the evaluator.

eval_func(*args, **kwargs)

evaluate()

Evaluates the model and returns a result dictionary.

finalize(manager)

Finalizes the extension.

get_all_iterators()

Returns a dictionary of all iterators.

get_all_targets()

Returns a dictionary of all target modules.

get_iterator(name)

Returns the iterator of the given name.

get_target(name)

Returns the target module of the given name.

initialize(manager)

Initializes the manager state.

load_state_dict(to_load)

on_error(manager, exc, tb)

Handles the error raised during training before finalization.

state_dict()

Serializes the extension state.

Attributes

default_name

is_async

name

needs_model_state

priority

trigger

__call__(manager=None)#

Executes the evaluator extension.

Unlike usual extensions, this extension can be executed without passing a manager object. This extension reports the performance on the validation dataset using the report() function. Thus, users can use this extension independently of any manager by manually configuring a Reporter object.

Parameters:

manager (ExtensionsManager) – Manager object that invokes this extension.

Returns:

Result dictionary that contains mean statistics of values reported by the evaluation function.

Return type:

dict
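As a sketch of manager-free use, reusing the hypothetical val_loader, model, and eval_func from the earlier sketch:

    import pytorch_pfn_extras as ppe
    from pytorch_pfn_extras.training import extensions

    evaluator = extensions.Evaluator(val_loader, model, eval_func=eval_func)
    reporter = ppe.reporting.Reporter()
    with reporter:
        result = evaluator()  # no manager object is needed
    # result holds the mean statistics, e.g. {'val/loss': ...}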

__init__(iterator, target, eval_hook=None, eval_func=None, **kwargs)#
Parameters:
  • iterator (Union[DataLoader[Any], Dict[str, DataLoader[Any]]]) –

  • target (Union[Module, Dict[str, Module]]) –

  • eval_hook (Optional[Callable[[Evaluator], None]]) –

  • eval_func (Optional[Callable[[...], Any]]) –

  • kwargs (Any) –

Return type:

None

add_metric(metric_fn)#

Adds a custom metric to the evaluator.

The metric is a callable that is executed every batch with the following signature: def metric_fn(batch, output, last_iteration)

Here batch is the input batch passed to the model, output is the result of evaluating batch on the model, and last_iteration is a boolean flag that indicates whether this is the last batch of the evaluation.

Parameters:

metric_fn (Callable[[Any, Any, Any], None]) –

Return type:

None
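A sketch of a custom metric, assuming batches of the form (input, label) and raw logits as the model output; reporting the value through ppe.reporting.report is an assumption about how it reaches the summary:

    import pytorch_pfn_extras as ppe

    def accuracy(batch, output, last_iteration):
        # batch is the (input, label) pair fed to the model;
        # output holds the model's logits for this batch.
        _, label = batch
        acc = (output.argmax(dim=1) == label).float().mean().item()
        ppe.reporting.report({'val/accuracy': acc})

    evaluator.add_metric(accuracy)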

default_name = 'validation'#
eval_func(*args, **kwargs)#
Parameters:
  • args (Any) –

  • kwargs (Any) –

Return type:

Any

evaluate()#

Evaluates the model and returns a result dictionary.

This method runs the evaluation loop over the validation dataset. It accumulates the reported values in a DictSummary and returns a dictionary whose values are means computed by the summary.

Users can override this method to customize the evaluation routine.

Returns:

Result dictionary. This dictionary is further reported via report() without specifying any observer.

Return type:

dict
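A sketch of the subclassing route described above; the (x, t) batch structure and the cross-entropy loss are assumptions about the data and model, and the reporter handling mirrors what the base implementation does internally:

    import torch
    import torch.nn.functional as F

    import pytorch_pfn_extras as ppe
    from pytorch_pfn_extras.training.extensions import Evaluator

    class CustomEvaluator(Evaluator):  # hypothetical subclass
        def evaluate(self):
            iterator = self.get_iterator('main')
            target = self.get_target('main')
            target.eval()
            summary = ppe.reporting.DictSummary()
            for x, t in iterator:
                observation = {}
                with ppe.reporting.report_scope(observation):
                    with torch.no_grad():
                        loss = F.cross_entropy(target(x), t)
                    ppe.reporting.report({'loss': loss.item()})
                summary.add(observation)
            # __call__ further reports this dictionary of mean values.
            return summary.compute_mean()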

get_all_iterators()#

Returns a dictionary of all iterators.

Return type:

Dict[str, DataLoader[Any]]

get_all_targets()#

Returns a dictionary of all target modules.

Return type:

Dict[str, Module]

get_iterator(name)#

Returns the iterator of the given name.

Parameters:

name (str) –

Return type:

DataLoader[Any]

get_target(name)#

Returns the target module of the given name.

Parameters:

name (str) –

Return type:

Module

priority: int = 300#
trigger: TriggerLike = (1, 'epoch')#