Welcome to the Mercado Libre First Optimization Challenge repository! This challenge is part of the LVII Brazilian Symposium on Operations Research (SBPO 2025). For further details, please read the post on Medium (Portuguese version; Spanish version). In this repository, you will find the base code for the framework, documentation, and other resources related to the challenge.
- 16-04-2025: Sprint phase results updated. See section Challenge Results Explanation for more details.
- 15-04-2025: Sprint phase results are now available.
- 15-04-2025: Dataset B is now available.
- 05-03-2025: Updated the challenge rules to clarify that, because a multithreaded environment is allowed, no seed for random generation will be provided.
- 27-02-2025: Updated the challenge rules to include specific details of the computer environment in which the challenge will be run.
- 21-02-2025: Corrected OR-Tools version to 9.11.
- 17-01-2025: Base framework code, documentation, and dataset A released.
Below is a general explanation of the different scores you may see in the rankings (for a detailed explanation, please refer to the challenge rules):
A positive score indicates a successful submission that produced valid solutions, although some individual instances may still have produced invalid solutions or errors.
Teams with a score of 0 could have encountered one or more of the following issues:
- Compilation success, no valid solutions: your code compiled successfully, but no solutions met the feasibility criteria across any test cases.
- Timeouts: your program compiled successfully but exceeded the time limit (600 seconds) on the test instances.
- Empty output files: your program ran but produced empty output files or failed to generate any output.
- Invalid format: your outputs did not follow the required format and could not be processed by the evaluation system.
Teams with a negative score typically encountered:
- Compilation errors: the submitted code failed to compile using the standard Maven build process. Common causes include:
  - Incompatible Java or library (e.g., CPLEX or OR-Tools) versions
  - Missing files or classes
  - Syntax errors
  - References to libraries that weren't included in the submission
  - Dependency issues
- Runtime errors: the program compiled but encountered errors during execution, such as:
  - Null pointer exceptions
  - Array index out of bounds
  - Class not found exceptions
  - Other runtime exceptions
Spanish and Portuguese versions of the challenge rules and problem description can be found in the `docs` directory:
- Spanish
- Portuguese
- `src/main/java/org/sbpo2025/challenge/`
  - `Challenge.java` ⟶ Main Java class for reading an input, solving the challenge, and writing the output.
  - `ChallengeSolver.java` ⟶ Java class responsible for solving the wave order picking problem. Most of the solving logic should be implemented here.
  - `ChallengeSolution.java` ⟶ Java class representing the solution to the wave order picking problem.
- `datasets/` ⟶ Directory containing input instance files.
- `run_challenge.py` ⟶ Python script to compile code, run benchmarks, and evaluate solutions.
- `checker.py` ⟶ Python script for evaluating the feasibility and objective value of solutions.
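Because timeouts zero a submission's score, the solving logic in `ChallengeSolver.java` should periodically check elapsed time against the 600-second limit. A minimal sketch of that pattern (only the class name and the 600-second limit come from this repository; the fields, method signatures, and 5-second safety margin are assumptions):

```java
import java.util.concurrent.TimeUnit;

// Sketch only: the real ChallengeSolver defines its own constructor and
// solve() signature. The 600 s limit is stated in the challenge rules;
// the safety margin below is an assumption.
class ChallengeSolver {
    static final long TIME_LIMIT_MS = TimeUnit.SECONDS.toMillis(600);
    final long startTime = System.currentTimeMillis();

    // Leave a margin so the program can still write its output file
    // before the external timeout command kills the process.
    boolean timeRemaining() {
        return System.currentTimeMillis() - startTime < TIME_LIMIT_MS - 5_000;
    }

    String solve() {
        StringBuilder best = new StringBuilder();
        while (timeRemaining()) {
            // ... iteratively improve the incumbent solution here ...
            break; // placeholder so this sketch terminates immediately
        }
        return best.toString();
    }
}
```

Checking the clock inside the improvement loop, rather than relying on the harness's `timeout`, lets the solver return its best incumbent instead of producing an empty output file.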
- Java 17
- Maven
- Python 3.8 or higher
- CPLEX 22.11 (optional)
- OR-Tools 9.11 (optional)
- Clone the repository: `git clone https://github.com/mercadolibre/challenge-sbpo-2025`
- Set the paths to the CPLEX and OR-Tools libraries in `run_challenge.py` if needed, e.g., `cplex_path = "$HOME/CPLEX_Studio2211/opl/bin/arm64_osx/"` and `or_tools_path = "$HOME/Documents/or-tools/build/lib/"`
To compile the code and run benchmarks, use the following command:
`python run_challenge.py <source_folder> <input_folder> <output_folder>`

where `<source_folder>` is the path to the Java source code, i.e., the directory containing the `pom.xml` file.
To run this script you will need the `timeout` command (`gtimeout` on macOS). You can install it with `apt-get install coreutils` (or equivalent) on Linux, or `brew install coreutils` on macOS.
To check the feasibility and objective value of a solution, use the following command:
`python checker.py <input_file> <solution_file>`
- Compile and run benchmarks: `python run_challenge.py src/main/java/org/sbpo2025/challenge src/main/resources/instances output`
- Check solution feasibility: `python checker.py src/main/resources/instances/instance_001.txt output/instance_001.txt`