

AuctionNet: A Novel Benchmark for Decision-Making in Large-Scale Games



Decision-making in large-scale games is an essential research area in artificial intelligence (AI) with significant real-world impact. AuctionNet is a benchmark for bid decision-making in large-scale ad auctions derived from a real-world online advertising platform. AuctionNet is composed of three parts:

  • 🌏️ Ad Auction Environment: The environment replicates the integrity and complexity of real-world ad auctions through the interaction of several modules: the ad opportunity generation module, the bidding module, and the auction module.

  • 🔒 Pre-Generated Dataset: We pre-generated a substantial dataset based on the auction environment. The dataset contains trajectories of 48 diverse agents competing with each other, totaling over 500 million records and 80 GB in size.

  • ✴️ Several Baseline Bid Decision-Making Algorithms: We implemented a variety of baseline algorithms, including linear programming, reinforcement learning, and generative models.

We note that AuctionNet is applicable not only to research on bid decision-making algorithms in ad auctions but also to the general area of decision-making in large-scale games. It can also benefit researchers in a broader range of areas such as reinforcement learning, generative models, operational research, and mechanism design.

🔥 News


  • [2024-12-14] 🔥 The AuctionNet-1.0 code has been officially open-sourced. We welcome everyone to star the repository and share valuable feedback.
  • [2024-10-24] 💫 NeurIPS 2024 Competition: Auto-Bidding in Large-Scale Auctions has officially ended. The competition attracted more than 1,500 participating teams. The auction environment, dataset, and baseline algorithms used in the competition are derived from this project.
  • [2024-09-26] 🎁 Our paper AuctionNet has been accepted by the NeurIPS 2024 Datasets and Benchmarks Track!

πŸ₯ Background



Bid decision-making in large-scale ad auctions is a concrete example of decision-making in large-scale games.
Steps I through V illustrate how an auto-bidding agent helps advertisers optimize performance.
Given each advertiser's unique objective (I), the auto-bidding agent makes bid decisions (II) for continuously arriving ad opportunities, and the agents compete against one another in the ad auction (III).
Each agent may then win some impressions (IV), which may be exposed to users and potentially result in conversions. Finally, each agent's performance (V) is reported to its advertiser.
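The loop above (objective, bid, auction, win, report) can be sketched in a few lines of Python. All names here (Agent, second_price_auction, run) are illustrative placeholders, not the AuctionNet API; the sketch assumes a second-price auction and uses each opportunity's value as an expected-conversion proxy.

```python
# Minimal sketch of the auto-bidding loop in steps I-V (hypothetical names,
# not the AuctionNet API).
from dataclasses import dataclass

@dataclass
class Agent:
    budget: float                 # (I) the advertiser's budget constraint
    bid_rate: float               # multiplier turning value into a bid
    wins: int = 0                 # (IV) impressions won
    conversions: float = 0.0      # expected conversions accumulated

    def bid(self, value: float) -> float:   # (II) bid decision
        return self.bid_rate * value if self.budget > 0 else 0.0

def second_price_auction(bids):             # (III) auction among agents
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    price = bids[order[1]] if len(bids) > 1 else 0.0
    return order[0], price                  # winner pays the second price

def run(agents, opportunities):
    for value in opportunities:             # continuously arriving opportunities
        bids = [a.bid(value) for a in agents]
        if max(bids) <= 0:
            continue
        w, price = second_price_auction(bids)
        agents[w].budget -= price
        agents[w].wins += 1
        agents[w].conversions += value      # (IV) win -> possible conversion
    return [(a.wins, round(a.conversions, 3)) for a in agents]  # (V) report
```

In a second-price auction, bidding proportionally to value is a natural baseline; the real environment layers user exposure and conversion randomness on top of this skeleton.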

πŸ›οΈŽ Project Structure


├── config                        # Configuration files for setting up the hyperparameters.
├── main_test.py                  # Main entry point for running evaluations.
├── run                           # Core logic for executing tests.
├── simul_bidding_env             # Ad Auction Environment
│   ├── Controller                # Module controlling the simulation flow and logic.
│   ├── Environment               # The auction module.
│   ├── PvGenerator               # The ad opportunity generation module.
│   ├── Tracker                   # Tracking components for monitoring and analysis.
│   │   ├── BiddingTracker.py     # Tracks the bidding process and generates raw data at ad-opportunity granularity.
│   │   └── PlayerAnalysis.py     # Implements metrics to evaluate the performance of user-defined strategies.
│   └── strategy                  # The bidding module (competitors' strategies).
├── pre_generated_dataset         # Pre-generated dataset.
└── strategy_train_env            # Several baseline bid decision-making algorithms.
    ├── README_strategy_train.md  # Documentation on how to train the bidding strategy.
    ├── bidding_train_env         # Core components for training bidding strategies.
    │   ├── baseline              # Implementations of baseline bid decision-making algorithms.
    │   ├── common                # Common utilities used across modules.
    │   ├── train_data_generator  # Reads raw data and constructs training datasets.
    │   ├── offline_eval          # Components required for offline evaluation.
    │   └── strategy              # Unified bidding strategy interface.
    ├── data                      # Directory for storing training data.
    ├── main                      # Main scripts for executing training processes.
    ├── run                       # Core logic for executing training processes.
    └── saved_model               # Directory for saving trained models.

πŸ§‘β€πŸ’» Quickstart


Create and activate conda environment

$ conda create -n AuctionNet python=3.9.12 pip=23.0.1
$ conda activate AuctionNet

Install requirements

$ pip install -r requirements.txt

Train Strategy & Offline Evaluation

For detailed usage, please refer to strategy_train_env/README_strategy_train.md.

cd strategy_train_env  # Enter the strategy_train_env directory

Data Processing

Run this script to convert the raw data at ad-opportunity granularity into the trajectory data required for model training.

python bidding_train_env/train_data_generator/train_data_generator.py
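Conceptually, the generator aggregates impression-level records into per-step trajectory rows. A minimal sketch is shown below; the field names (agent_id, time_step, bid, cost, conversion) are assumptions for illustration, not the actual schema used by train_data_generator.py.

```python
# Hypothetical sketch: aggregate raw ad-opportunity records into per-step
# trajectory rows (action, reward, cost). Field names are assumed, not the
# repository's actual schema.
from collections import defaultdict

def build_trajectories(raw_records):
    """raw_records: dicts with agent_id, time_step, bid, cost, conversion."""
    steps = defaultdict(lambda: {"cost": 0.0, "reward": 0.0, "bid_sum": 0.0, "n": 0})
    for r in raw_records:
        s = steps[(r["agent_id"], r["time_step"])]
        s["cost"] += r["cost"]            # total spend in this step
        s["reward"] += r["conversion"]    # conversions as the step reward
        s["bid_sum"] += r["bid"]
        s["n"] += 1
    rows = []
    for (agent, t), s in sorted(steps.items()):
        rows.append({
            "agent_id": agent,
            "time_step": t,
            "action": s["bid_sum"] / s["n"],   # mean bid as an action proxy
            "reward": s["reward"],
            "cost": s["cost"],
        })
    return rows
```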

Strategy Training

Load the training data and train a bidding strategy (for example, IQL).

python main/main_iql.py

Use the trained strategy class (for example, IqlBiddingStrategy) as the PlayerBiddingStrategy for evaluation.

bidding_train_env/strategy/__init__.py
from .iql_bidding_strategy import IqlBiddingStrategy as PlayerBiddingStrategy

Offline Evaluation

Load the raw data at ad-opportunity granularity to construct an offline evaluation environment for assessing the bidding strategy offline.

python main/main_test.py
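The idea behind offline evaluation is to replay logged ad opportunities against a candidate strategy. The sketch below assumes each logged record carries a value estimate and the historical winning price; the function name and scoring rule are illustrative, not the code in bidding_train_env/offline_eval.

```python
# Hypothetical sketch of an offline evaluation pass: replay logged
# opportunities (value estimate, historical winning price) against a
# candidate strategy under a budget constraint. Names are assumptions.
def offline_evaluate(strategy, logged, budget):
    spend, conversions = 0.0, 0.0
    for value, market_price in logged:
        bid = strategy(value, budget - spend)
        # The strategy "wins" a replayed opportunity if it would have
        # outbid the logged winning price and can still afford it.
        if bid >= market_price and spend + market_price <= budget:
            spend += market_price       # pay the logged winning price
            conversions += value        # expected-conversion credit
    return {"spend": round(spend, 3), "conversions": round(conversions, 3)}
```

Replay-style evaluation is conservative: it ignores how the candidate's bids would have shifted other agents' behavior, which is why the project also provides online evaluation in the full simulator.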

Online Evaluation

Set up the hyperparameters for the online evaluation process.

config/test.gin

Run online evaluation.

# Return to the root directory
$ python main_test.py

📖 Use Cases

Train your own bidding strategy 'awesome_xx'

Refer to the baseline algorithm implementations and complete the following files.

├── strategy_train_env
│   ├── bidding_train_env
│   │   ├── baseline
│   │   │   └── awesome_xx
│   │   │       └── awesome_xx.py                # Implement model-related components.
│   │   ├── train_data_generator
│   │   │   └── train_data_generator.py         # Custom training data generation pipeline.
│   │   └── strategy
│   │       └── awesome_xx_bidding_strategy.py  # Implement the unified bidding strategy interface.
│   ├── main
│   │   └── main_awesome_xx.py                  # Main script for executing the training process.
│   └── run
│       └── run_awesome_xx.py                   # Core logic for executing the training process.
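To give a feel for what awesome_xx_bidding_strategy.py might contain, here is a minimal sketch. The base-class name and the bidding() signature are assumptions made for illustration; follow the actual interface defined under bidding_train_env/strategy.

```python
# Hypothetical sketch of a custom strategy implementing a unified bidding
# interface. Class names and the bidding() signature are illustrative
# assumptions, not the repository's actual interface.
class BaseBiddingStrategy:
    """Assumed unified interface: constructed with constraints, returns bids."""
    def __init__(self, budget: float, cpa: float):
        self.budget, self.cpa = budget, cpa

    def bidding(self, pv_values):
        raise NotImplementedError

class AwesomeXxBiddingStrategy(BaseBiddingStrategy):
    """Bids a fixed multiple of each opportunity's predicted value."""
    def __init__(self, budget: float, cpa: float, alpha: float = 1.0):
        super().__init__(budget, cpa)
        self.alpha = alpha            # learned or hand-tuned bid multiplier

    def bidding(self, pv_values):
        # Scale each predicted conversion value by the target CPA.
        return [self.alpha * self.cpa * v for v in pv_values]
```

A trained model would typically set alpha (or replace the whole bidding rule) from learned parameters loaded out of saved_model.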

Evaluate your own bidding strategy 'awesome_xx'

Use the awesome_xxBiddingStrategy as the PlayerBiddingStrategy for evaluation.

bidding_train_env/strategy/__init__.py
from .awesome_xx_bidding_strategy import awesome_xxBiddingStrategy as PlayerBiddingStrategy

Run the evaluation process.

# Return to the root directory
$ python main_test.py

Generate new dataset

Set the hyperparameters and run the evaluation process.

config/test.gin
GENERATE_LOG = True

python main_test.py

The newly generated data will be stored in the data folder.

Customize new auction environment

Each module is encapsulated following the principles of high cohesion and low coupling, making it convenient for users to modify individual modules of the auction environment to suit their needs.

├── simul_bidding_env             # Ad Auction Environment
│   ├── Environment               # The auction module.
│   ├── PvGenerator               # The ad opportunity generation module.
│   ├── Tracker
│   │   └── PlayerAnalysis.py     # Implements metrics to evaluate the performance.
│   └── strategy                  # The bidding module (competitors' strategies).

🎑 Implemented Bid Decision-Making Algorithms


| Category | Strategy | Status |
| --- | --- | --- |
| Reinforcement Learning | IQL | ✅ |
| Reinforcement Learning | BC | ✅ |
| Reinforcement Learning | BCQ | ✅ |
| Reinforcement Learning | TD3_BC | ✅ |
| Online Linear Programming | OnlineLp | ✅ |
| Generative Model | Decision-Transformer | ✅ |
| Generative Model | DiffBid | To be implemented |
| Other | Abid (fixed bid rate) | ✅ |
| Other | PID | ✅ |
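The PID baseline in the table adjusts a bid multiplier so that realized spend tracks a budget pacing target. The sketch below shows the standard PID update; the gains and class name are illustrative, not the repository's tuned implementation.

```python
# Minimal PID pacing controller for auto-bidding (illustrative gains, not
# the repository's implementation): nudges a bid-rate multiplier so that
# actual spend follows a target pacing curve.
class PidPacer:
    def __init__(self, kp=0.5, ki=0.1, kd=0.0, init_rate=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.rate = init_rate         # current bid multiplier
        self.integral = 0.0           # accumulated pacing error
        self.prev_error = 0.0

    def update(self, target_spend, actual_spend):
        # Positive error => underspending => raise the bid rate.
        error = target_spend - actual_spend
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.rate = max(0.0, self.rate
                        + self.kp * error
                        + self.ki * self.integral
                        + self.kd * derivative)
        return self.rate
```

Despite its simplicity, PID pacing is a strong baseline because it needs no model of the auction: it only reacts to the gap between target and realized spend at each step.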

✌ Contributing


The field of decision intelligence is a fascinating area, and we welcome like-minded individuals to contribute their wisdom and creativity to optimize this project. If you have great ideas, feel free to fork the repo and create a pull request.

  1. Fork the project.
  2. Create your feature branch (git checkout -b new-branch).
  3. Commit your changes (git commit -m 'Add some feature').
  4. Push to the branch (git push origin new-branch).
  5. Open a pull request.

🏷️ License


Distributed under the Apache License 2.0. See LICENSE.txt for more information.


πŸ§‘β€πŸ€β€πŸ§‘ Contributors


Shuai Dou • Yusen Huo • Zhilin Zhang • Yeshu Li • Zhengye Han • Kefan Su
• Zongqing Lu • Chuan Yu • Jian Xu • Bo Zheng

βœ‰οΈ Contact


For any questions, please feel free to email doushuai.ds@taobao.com.

πŸ“ Citation


If you find our work useful, please consider citing:

@inproceedings{su2024a,
  title={AuctionNet: A Novel Benchmark for Decision-Making in Large-Scale Games},
  author={Kefan Su and Yusen Huo and Zhilin Zhang and Shuai Dou and Chuan Yu and Jian Xu and Zongqing Lu and Bo Zheng},
  booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2024},
  url={https://arxiv.org/abs/2412.10798}
}
