
IOH-BLADE: Benchmarking LLM-driven Automated Design and Evolution of Iterative Optimization Heuristics


Tip

See also the Documentation.


πŸ”₯ News

  • 2025.03 ✨✨ BLADE v0.0.1 released!

Introduction

BLADE (Benchmark suite for LLM-driven Automated Design and Evolution) provides a standardized benchmark suite for evaluating automatic algorithm design algorithms, particularly those generating metaheuristics by large language models (LLMs). It focuses on continuous black-box optimization and integrates a diverse set of problems and methods, facilitating fair and comprehensive benchmarking.

Features

  • Comprehensive Benchmark Suite: Covers various classes of black-box optimization problems.
  • LLM-Driven Evaluation: Supports algorithm evolution and design using large language models.
  • Built-In Baselines: Includes state-of-the-art metaheuristics for comparison.
  • Automatic Logging & Visualization: Integrated with IOHprofiler for performance tracking.

Included Benchmark Function Sets

BLADE incorporates several benchmark function sets to provide a comprehensive evaluation environment:

| Name | Short Description | Number of Functions | Multiple Instances |
| --- | --- | --- | --- |
| BBOB (Black-Box Optimization Benchmarking) | A suite of 24 noiseless functions designed for benchmarking continuous optimization algorithms. | 24 | Yes |
| SBOX-COST | A set of 24 boundary-constrained functions focusing on strict box-constraint optimization scenarios. | 24 | Yes |
| MA-BBOB (Many-Affine BBOB) | An extension of the BBOB suite, generating functions through affine combinations and shifts. | Generator-based | Yes |
| GECCO MA-BBOB Competition Instances | A collection of 1,000 pre-defined instances from the GECCO MA-BBOB competition, evaluating algorithm performance on diverse affine-combined functions. | 1,000 | Yes |
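The many-affine construction behind MA-BBOB can be illustrated with a small sketch. All names below are illustrative stand-ins, not the real generator API: a new function is built as a weighted combination of base functions evaluated on shifted inputs.

```python
import math

# Illustrative sketch of the core MA-BBOB idea: new test functions are
# affine combinations of existing BBOB-style functions plus an input shift.
# (The real generator also rescales function values; omitted here.)

def sphere(x):
    return sum(xi * xi for xi in x)

def rastrigin_like(x):
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def affine_combination(f1, f2, alpha, shift):
    """Blend two base functions with weight alpha after shifting the input."""
    def combined(x):
        z = [xi - si for xi, si in zip(x, shift)]
        return alpha * f1(z) + (1 - alpha) * f2(z)
    return combined

f_new = affine_combination(sphere, rastrigin_like, alpha=0.7, shift=[1.0, -0.5])
print(f_new([1.0, -0.5]))  # both components are optimal at the shift -> 0.0
```

Because the weights and shifts can be sampled freely, this construction yields an effectively unlimited supply of instances, which is why the table lists MA-BBOB as generator-based.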

In addition, several real-world applications are included, such as photonics problems.

Included Search Methods

The suite contains state-of-the-art LLM-assisted search algorithms:

| Algorithm | Description | Link |
| --- | --- | --- |
| LLaMEA | Large Language Model Evolutionary Algorithm | code, paper |
| EoH | Evolution of Heuristics | code, paper |
| FunSearch | Google's GA-like algorithm | code, paper |
| ReEvo | Large Language Models as Hyper-Heuristics with Reflective Evolution | code, paper |
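The common loop behind these methods can be sketched as a (mu + lambda) evolutionary algorithm in which the mutation operator is an LLM call. The skeleton below is deliberately simplified: `ask_llm` is a stub that perturbs a numeric genome so the example runs without an API key, whereas the real methods prompt an LLM to rewrite algorithm source code.

```python
import random

# Minimal skeleton of an LLM-assisted (mu + lambda) evolutionary loop.
# The (4 + 12) setting mirrors the n_parents=4, n_offspring=12
# configuration used for LLaMEA in the quick start below.

def ask_llm(parent):
    """Stub: a real method would prompt an LLM to rewrite the parent's code."""
    return [g + random.gauss(0, 0.1) for g in parent]

def evaluate(candidate):
    """Stub fitness: distance to the all-zero target (lower is better)."""
    return sum(g * g for g in candidate)

def evolve(n_parents=4, n_offspring=12, generations=20, dim=3):
    population = [[random.uniform(-1, 1) for _ in range(dim)]
                  for _ in range(n_parents)]
    for _ in range(generations):
        offspring = [ask_llm(random.choice(population))
                     for _ in range(n_offspring)]
        # Elitist survivor selection: keep the best of parents + offspring.
        population = sorted(population + offspring, key=evaluate)[:n_parents]
    return population[0]

random.seed(0)
best = evolve()
print(evaluate(best))
```

The methods in the table differ mainly in how candidates are represented (code vs. heuristic descriptions), how prompts encode the population, and whether reflection or island models are used around this core loop.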

Note: some of these algorithms are not yet integrated, but their integration is planned soon.

Supported LLM APIs

BLADE supports integration with various LLM APIs to facilitate automated design of algorithms:

| LLM Provider | Description | Integration Notes |
| --- | --- | --- |
| Gemini | Google's multimodal LLM designed to process text, images, audio, and more. | Accessible via the Gemini API, compatible with OpenAI libraries. |
| OpenAI | Developer of the GPT series of models, including GPT-4, widely used for natural language understanding and generation. | Integration through OpenAI's REST API and client libraries. |
| Ollama | A platform offering access to various LLMs, enabling local and cloud-based model deployment. | Integration details can be found in the official documentation. |
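Each hosted provider expects its API key in an environment variable, while Ollama runs locally without one. A minimal sketch of provider selection (the helper name and fallback logic are illustrative, not part of the iohblade API; `GEMINI_API_KEY` is assumed as the Gemini variable name):

```python
import os

# Illustrative helper: pick an LLM provider from configured API keys,
# falling back to a local Ollama server when no key is set.

def detect_provider():
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    if os.environ.get("GEMINI_API_KEY"):
        return "gemini"
    return "ollama"  # assumes a locally running Ollama server
```

Checking for keys up front and failing fast avoids burning part of an experiment's budget before the first LLM call errors out.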

Evaluating against Human Designed baselines

An important part of BLADE is the final evaluation of generated algorithms against state-of-the-art human-designed algorithms. The iohblade.baselines module implements several well-known SOTA black-box optimizers to compare against, including but not limited to CMA-ES and DE variants.

For the final validation, BLADE uses IOHprofiler, which provides detailed tracking and visualization of performance metrics.
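Comparing a generated algorithm against a human-designed baseline comes down to running both under the same evaluation budget on the same problem instances. The sketch below uses stub optimizers on a toy function, not the actual iohblade baselines, to illustrate the budget-matched protocol:

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget):
    """Baseline stand-in: sample uniformly in [-5, 5]^dim, keep the best."""
    best = float("inf")
    for _ in range(budget):
        x = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
    return best

def local_search(f, dim, budget):
    """'Generated algorithm' stand-in: greedy Gaussian perturbation."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(budget - 1):
        y = [xi + random.gauss(0, 0.5) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

# Both optimizers get exactly the same number of function evaluations.
random.seed(42)
budget = 500
print(random_search(sphere, 5, budget), local_search(sphere, 5, budget))
```

In BLADE itself this protocol is handled by the experiment machinery, with IOHprofiler logging every evaluation so that convergence can be compared across methods afterwards.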

🎁 Installation

The easiest way to use BLADE is via the PyPI package (iohblade):

  pip install iohblade

Important

Python 3.10 or higher is required. You need an OpenAI/Gemini/Ollama API key to use LLM models.

You can also install the package from source using Poetry (1.8.5).

  1. Clone the repository:
    git clone https://github.com/XAI-liacs/BLADE.git
    cd BLADE
  2. Install the required dependencies via Poetry:
    poetry install

πŸ’» Quick Start

  1. Set up an LLM API key:

    • Obtain an API key from OpenAI, Gemini, or another LLM provider.
    • Set the API key in your environment variables:
      export OPENAI_API_KEY='your_api_key_here'
  2. Running an Experiment

    To run a benchmarking experiment using BLADE:

    from iohblade import Experiment
    from iohblade import Ollama_LLM
    from iohblade.methods import LLaMEA, RandomSearch
    from iohblade.problems import BBOB_SBOX
    from iohblade.loggers import ExperimentLogger
    import os
    
    llm = Ollama_LLM("qwen2.5-coder:14b")  # alternatives: deepseek-coder-v2:16b
    budget = 50  # short budget for testing
    
    RS = RandomSearch(llm, budget=budget)  # Random Search baseline
    LLaMEA_method = LLaMEA(llm, budget=budget, name="LLaMEA", n_parents=4, n_offspring=12, elitism=False)  # LLaMEA with a (4,12) strategy
    methods = [RS, LLaMEA_method]
    
    problems = []
    # Include all SBOX_COST functions, with instances 1-5 for training and 6-15 for final validation.
    training_instances = [(f, i) for f in range(1, 25) for i in range(1, 6)]
    test_instances = [(f, i) for f in range(1, 25) for i in range(6, 16)]
    problems.append(BBOB_SBOX(training_instances=training_instances, test_instances=test_instances, dims=[5], budget_factor=2000, name="SBOX_COST"))
    # Set up the experiment object with 5 independent runs per method/problem (here, 1 problem).
    logger = ExperimentLogger("results/SBOX")
    experiment = Experiment(methods=methods, problems=problems, llm=llm, runs=5, show_stdout=True, exp_logger=logger)
    experiment()  # run the experiment; all data is logged in results/SBOX/

🌐 Webapp

After running experiments you can browse them using the built-in Streamlit app:

poetry run streamlit run webapp.py

The app lists available experiments from the results directory, displays their progress, and shows convergence plots.


πŸ’» Examples

See the files in the examples folder for examples of experiments and visualisations.


πŸ€– Contributing

Contributions to BLADE are welcome! Here are a few ways you can help:

  • Report Bugs: Use GitHub Issues to report bugs.
  • Feature Requests: Suggest new features or improvements.
  • Pull Requests: Submit PRs for bug fixes or feature additions.

Please refer to CONTRIBUTING.md for more details on contributing guidelines.

πŸͺͺ License

Distributed under the MIT License. See LICENSE for more information.

✨ Citation

TBA


Happy Benchmarking with IOH-BLADE! πŸš€
