Description
PB2 runs smoothly as long as no trial errors out. But certain hyperparameter combinations are often untested beforehand (which is exactly why users turn to a tuning algorithm in the first place), and such combinations can produce a trial that errors out. For PB2/PBT this is especially severe because the results from all trials are used to select and propagate the best configurations at each perturbation interval.
If trials error out, the PB2 algorithm continues with only (num_samples - num_errored_out) trials, which in edge cases can leave a single trial running, at which point PB2 no longer makes sense.
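For concreteness, here is a minimal sketch of the failure mode (the trainable, metric, bounds, and the "bad" lr region are all made up for illustration). With `raise_on_failed_trial=False` the run finishes, but the errored trials are simply gone from the population:

```python
from ray import tune
from ray.tune.schedulers.pb2 import PB2

def train_fn(config):
    # Hypothetical trainable: an untested region of the search space crashes.
    if config["lr"] > 0.5:
        raise RuntimeError("diverged")
    for step in range(100):
        tune.report(mean_accuracy=step * config["lr"])

pb2 = PB2(
    time_attr="training_iteration",
    perturbation_interval=10,
    hyperparam_bounds={"lr": [0.0001, 1.0]},
)

analysis = tune.run(
    train_fn,
    config={"lr": tune.uniform(0.0001, 1.0)},
    metric="mean_accuracy",
    mode="max",
    scheduler=pb2,
    num_samples=4,
    raise_on_failed_trial=False,  # run completes, but errored trials are not replaced
)
# Every trial sampled with lr > 0.5 errors out, so PB2 keeps evolving
# only the surviving (num_samples - num_errored_out) trials.
```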
Use case
In some cases it is not obvious which hyperparameter combinations actually error out, and figuring out which do costs a lot of resources. In such cases users would prefer to run PB2 with many hyperparameter combinations and let the tuning algorithm simply "step over" errored trials by starting a new one with another randomly drawn hyperparameter combination (or, better, one drawn with an appropriate Bayesian approach).
This could help find a good combination of hyperparameters even in difficult settings.
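Until something like this lands, one possible workaround (a rough sketch, not the proposed feature: `safe_trainable`, the sentinel score, and the metric name are all assumptions) is to catch exceptions inside the trainable and keep reporting a very poor but finite score. The trial then stays alive instead of erroring out, and PB2's exploit step can clone a better trial's configuration onto it at the next perturbation interval. Note this only approximates the request: the replacement config is copied from the population rather than freshly drawn at random or via a Bayesian model.

```python
import time
from ray import tune

def safe_trainable(train_fn, bad_score=-1e8):
    """Wrap a trainable so exceptions become sentinel-bad reports (sketch).

    A finite (not -inf) sentinel is used so PB2's underlying model is not
    fed non-numeric values.
    """
    def wrapped(config):
        try:
            train_fn(config)
        except Exception:
            # Keep the trial alive with bad scores so PB2 can exploit it
            # (replace its config with a top trial's) instead of losing it.
            # The run's stop criterion or an exploit ends this loop.
            while True:
                tune.report(mean_accuracy=bad_score)
                time.sleep(1)
    return wrapped

# Usage: tune.run(safe_trainable(train_fn), scheduler=pb2, ...)
```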