`GridSearch` fails when using `grid_search(pipeline_space)` · Issue #187 · automl/neps · GitHub

GridSearch fails when using grid_search(pipeline_space) #187

Open
timurcarstensen opened this issue Feb 3, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@timurcarstensen
Member
@dataclass
class GridSearch:
    """Evaluates a fixed list of configurations in order."""

    pipeline_space: SearchSpace
    """The search space from which the configurations are derived."""

    configs_list: list[dict[str, Any]]
    """The list of configurations to evaluate."""

    def __call__(
        self,
        trials: Mapping[str, Trial],
        budget_info: BudgetInfo | None,
        n: int | None = None,
    ) -> SampledConfig | list[SampledConfig]:
        assert n is None, "TODO"
        _num_previous_configs = len(trials)
        if _num_previous_configs > len(self.configs_list) - 1:
            raise ValueError("Grid search exhausted!")

        rng = random.Random()
        configs = rng.sample(self.configs_list, len(self.configs_list))

        config = configs[_num_previous_configs]
        config_id = str(_num_previous_configs)
        return SampledConfig(config=config, id=config_id, previous_config_id=None)

`budget_info: BudgetInfo | None` should be `budget_info: BudgetInfo | None = None`, right?
@eddiebergman
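
For illustration (my own sketch, not code from the report): `space`, `configs`, and `trials` below are placeholders, but the calls show what the question above is getting at. With the current signature, `budget_info` has to be passed explicitly even when there is no budget:

gs = GridSearch(pipeline_space=space, configs_list=configs)

# Works: budget_info is supplied explicitly, even if it is just None.
sampled = gs(trials, None)

# Fails: without a default, omitting budget_info raises a TypeError.
# sampled = gs(trials)

A `= None` default would make the second call legal; whether that is desirable is what the reply below addresses.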

@timurcarstensen timurcarstensen added the bug Something isn't working label Feb 3, 2025
@eddiebergman
Contributor

Nope, the signature is this:

class AskFunction(Protocol):
    """Interface to implement the ask of optimizer."""

    @abstractmethod
    def __call__(
        self,
        trials: Mapping[str, Trial],
        budget_info: BudgetInfo | None,
        n: int | None = None,
    ) -> SampledConfig | list[SampledConfig]:
        """Sample a new configuration.

        Args:
            trials: All of the trials that are known about.
            budget_info: Information about the budget constraints.
            n: The number of configurations to sample. If you do not support
                sampling multiple configurations at once, you should raise
                a `ValueError`.

        Returns:
            The sampled configuration(s).
        """
        ...
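
In other words, the runtime that calls an `AskFunction` is expected to pass `budget_info` every time, possibly as `None`. A rough caller-side sketch (my illustration, not the actual NePS runtime code):

def ask_next(optimizer, trials, budget_info):
    # budget_info is always forwarded, even when it is None, so the
    # implementation side can keep it as a required positional parameter.
    return optimizer(trials, budget_info, n=None)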

Are you using it in the vanilla way, i.e. `neps.run(..., algorithm="grid_search")`?
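
If it helps, here is a minimal sketch of that vanilla usage. Only `algorithm="grid_search"` is taken from the line above; the other keyword names (`evaluate_pipeline`, `root_directory`, `max_evaluations_total`) and the `neps.Float` parameter definition are assumptions on my part and may differ between NePS versions:

import neps

def evaluate_pipeline(learning_rate: float) -> float:
    # Placeholder objective; pretend this is a validation loss.
    return learning_rate

pipeline_space = {
    # Assumed search-space definition; the exact parameter classes may differ.
    "learning_rate": neps.Float(lower=1e-4, upper=1e-1, log=True),
}

neps.run(
    evaluate_pipeline=evaluate_pipeline,
    pipeline_space=pipeline_space,
    root_directory="results/grid_search_example",
    max_evaluations_total=10,
    algorithm="grid_search",
)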

Ahh, I see I never added this to the set of optimizers that gets tested; I would have assumed it worked, but apparently it wasn't there:

JUST_SKIP = [
    "multifidelity_tpe",
]
OPTIMIZER_FAILS_WITH_FIDELITY = [
    "random_search",
    "bayesian_optimization_cost_aware",
    "bayesian_optimization",
    "bayesian_optimization_prior",
    "pibo",
    "cost_cooling_bayesian_optimization",
    "cost_cooling",
]
# There's no programmatic way to check if a class requires a fidelity.
# See issue #118, #119, #120
OPTIMIZER_REQUIRES_FIDELITY = [
    "successive_halving",
    "successive_halving_prior",
    "asha",
    "asha_prior",
    "hyperband",
    "hyperband_prior",
    "async_hb",
    "async_hb_prior",
    "priorband",
    "priorband_sh",
    "priorband_asha",
    "priorband_async",
    "priorband_bo",
    "bayesian_optimization_cost_aware",
    "mobster",
    "ifbo",
]
REQUIRES_PRIOR = {
    "priorband",
    "priorband_bo",
    "priorband_asha",
    "priorband_asha_hyperband",
}
REQUIRES_COST = ["cost_cooling_bayesian_optimization", "cost_cooling"]
