The Boundless Prover Node is a computational proving system that participates in the Boundless decentralized proving market. Provers stake USDC, bid on computational tasks, generate zero-knowledge proofs using GPU acceleration, and earn rewards for successful proof generation.
This guide covers both automated and manual installation methods for Ubuntu 20.04/22.04 systems.
- Boundless Prover Market
- Notes
- Requirements
- Rent GPU
- Automated Setup
- Manual Setup
- Bento (Prover) & Broker Optimizations
- Safe Update or Stop Prover
- Debugging
First, you need to understand how the Boundless Prover market actually works, so you know what you are doing.
- Request Submission: Developers submit computational tasks as "orders" on Boundless, offering ETH/ERC-20 rewards
- Prover Stakes USDC: Provers must deposit `USDC` as stake before bidding on orders
- Bidding Process: Provers detect orders and submit competitive bids (`mcycle_price`)
- Order Locking: Winning provers lock orders using staked USDC, committing to prove within a deadline
- Proof Generation: Provers compute and submit proofs using GPU acceleration
- Rewards/Slashing: Valid proofs earn rewards; invalid/late proofs result in stake slashing
- The prover is in beta phase. Even though I have tried to make this guide complete, you may run into problems while running it, so you can either wait for the official incentivized testnet, with a more stable network and more updates to this guide, or start experimenting now.
- I advise starting on testnet networks, since staked funds can be lost.
- I will update this GitHub guide constantly, so check back here regularly and follow me on X for new updates.
- CPU - 16 threads, reasonable single-core boost performance (>3 GHz)
- Memory - 32 GB
- Disk - 100 GB NVMe/SSD
- GPU
- Minimum: one 8GB vRAM GPU
- Recommended to be competitive: 10x GPUs with min 8GB vRAM
- Recommended GPU models are the 4090, 5090 and L4.
- It's best to test things out with a single GPU first; you can lower your configuration by reading the later sections.
- Supported: Ubuntu 20.04/22.04
- No support: Ubuntu 24.04
- If you are running locally on Windows, install Ubuntu 22 on WSL using this Guide
Recommended GPU Providers
- Vast.ai: SSH-Key needed
For an automated installation and prover management, you can use this script that handles all dependencies, configuration, setup, and prover management automatically.
# Update packages
apt update && apt upgrade -y
# Install wget
apt install wget
# Download the installation script
wget https://raw.githubusercontent.com/0xmoei/boundless/main/install_prover.sh -O install_prover.sh
# Make it executable
chmod +x install_prover.sh
# Run the installer
./install_prover.sh
- Installation may take time since we are installing drivers and building big files, so no worries.
- The script will automatically detect your GPU configuration
- You'll be prompted for:
- Network selection (mainnet/testnet)
- RPC URL: Read Get RPC for more details
- Private key (input is hidden)
- Broker config parameters: Visit Broker Optimization to read parameters details
After installation, to run or configure your prover, navigate to the installation directory and run the management script `prover.sh`:
cd ~/boundless
./prover.sh
The management script provides a menu with:
- Service Management: Start/stop broker, view logs, health checks
- Configuration: Change network, update private key, edit broker config
- Stake Management: Deposit USDC stake, check balance
- Performance Testing: Run benchmarks with order IDs
- Monitoring: Real-time GPU monitoring
The `prover.sh` script manages all broker configurations (e.g. `broker.toml`), but to optimize and allocate more RAM and CPU in your `compose.yml`, you can navigate to the `x-exec-agent-common` & `gpu_prove_agent` sections.
- Re-run your broker after making changes to `compose.yml`.
Even if you set up using the automated script, I recommend reading the Manual Setup and Bento (Prover) & Broker Optimizations sections to learn how to optimize your prover.
Here is the step-by-step guide to install and run your prover smoothly, but please pay attention to these notes:
- Read every single word of this guide if you really want to know what you are doing.
- There is a Bento (Prover) & Broker Optimizations section that you need to read after setting up the prover.
apt update && apt upgrade -y
apt install curl iptables build-essential git wget lz4 jq make gcc nano automake autoconf tmux htop nvme-cli libgbm1 pkg-config libssl-dev tar clang bsdmainutils ncdu unzip libleveldb-dev libclang-dev ninja-build -y
git clone https://github.com/boundless-xyz/boundless
cd boundless
git checkout release-0.10
To run a Boundless prover, you'll need the following dependencies:
- Docker compose
- GPU Drivers
- Docker Nvidia Support
- Rust programming language
- `Just` command runner
- CUDA Toolkit
For a quick set up of Boundless dependencies on Ubuntu 22.04 LTS, you can run:
bash ./scripts/setup.sh
However, we need to install some dependencies manually:
# Execute these command lines one by one
# Install rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
. "$HOME/.cargo/env"
# Update rustup:
rustup update
# Install the Rust Toolchain:
apt update
apt install cargo
# Verify Cargo:
cargo --version
# Install rzup:
curl -L https://risczero.com/install | bash
source ~/.bashrc
# Verify rzup:
rzup --version
# Install RISC Zero Rust Toolchain:
rzup install rust
# Install cargo-risczero:
cargo install cargo-risczero
rzup install cargo-risczero
# Update rustup:
rustup update
# Install Bento-client:
cargo install --git https://github.com/risc0/risc0 bento-client --bin bento_cli
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
# Verify Bento-client:
bento_cli --version
# Install Boundless CLI:
cargo install --locked boundless-cli
export PATH=$PATH:/root/.cargo/bin
source ~/.bashrc
# Verify boundless-cli:
boundless -h
- To begin configuring your prover, you need to know your GPU IDs (if you have multiple GPUs), CPU cores, and RAM.
- The following tools are also the best way to monitor your hardware during proving.
- If your Nvidia driver and CUDA toolkit are installed successfully, run the following command to see your GPUs' status:
nvidia-smi
- You can now monitor the Nvidia driver & CUDA version, GPU utilization & memory usage.
- In the image below, there are four GPUs with IDs 0-3; you'll need these IDs when adding GPUs to your configuration.
- Check your system's GPU IDs (e.g. 0 through X):
nvidia-smi -L
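If you later script your compose configuration, the GPU indices can be extracted from the `nvidia-smi -L` output with standard tools. A minimal sketch; the sample text below stands in for real `nvidia-smi -L` output so you can try it on any machine:

```shell
# Parse GPU indices out of `nvidia-smi -L`-style output.
# On a real machine, pipe `nvidia-smi -L` in instead of the sample.
sample_output='GPU 0: NVIDIA GeForce RTX 4090 (UUID: GPU-aaaa)
GPU 1: NVIDIA GeForce RTX 4090 (UUID: GPU-bbbb)'

echo "$sample_output" | sed -n 's/^GPU \([0-9][0-9]*\):.*/\1/p'
# prints:
# 0
# 1
```

These indices are what goes into `device_ids` in `compose.yml`.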
lscpu
To see the status of your CPU and RAM.
htop
The best tool for real-time monitoring of your GPUs in a separate terminal while your prover is proving:
nvtop
The default `compose.yml` file defines all services within the prover.
- The default `compose.yml` only supports a single GPU and default CPU/RAM utilization.
- Edit `compose.yml` with this command:

nano compose.yml

- The current `compose.yml` is set for `1` GPU by default; you can skip editing it if you only have one GPU.
- With a single GPU, you can instead increase the RAM & CPU of the `x-exec-agent-common` and `gpu_prove_agent0` services in `compose.yml` to maximize the utilization of your system.
- 4 GPUs: To add more GPUs or modify the CPU and RAM assigned to each GPU, replace the current compose file with my custom compose.yml with 4 custom GPUs.
- More/fewer than 4 GPUs: Follow this detailed step-by-step guide to add to or remove from the 4 GPUs in my custom `compose.yml` file.
- The `x-exec-agent-common` service in your `compose.yml` does the preflight processing of orders to estimate whether the prover can bid on them or not.
- More exec agents will be able to preflight and prove more orders concurrently.
- Increasing it from the default value of `2` depends on how many concurrent proofs you want to allow.
- You will see something like the code below as your `x-exec-agent-common` service in your `compose.yml`, where you can increase its memory and CPU cores:
x-exec-agent-common: &exec-agent-common
<<: *agent-common
mem_limit: 4G
cpus: 3
environment:
<<: *base-environment
RISC0_KECCAK_PO2: ${RISC0_KECCAK_PO2:-17}
entrypoint: /app/agent -t exec --segment-po2 ${SEGMENT_SIZE:-21}
- The `gpu_prove_agent` service in your `compose.yml` handles proving the orders after they get locked, by utilizing your GPUs.
- With a single GPU, you can increase performance by increasing the CPU/RAM of the GPU agents.
- The default amount of CPU and RAM is fine, but if you have good system specs, you can increase them for each GPU.
- You will see something like the code below as your `gpu_prove_agentX` service in your `compose.yml`, where you can increase the memory and CPU cores of each GPU agent:

gpu_prove_agent0:
  <<: *agent-common
  runtime: nvidia
  mem_limit: 4G
  cpus: 4
  entrypoint: /app/agent -t prove
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            device_ids: ['0']
            capabilities: [gpu]

- While the default CPU/RAM for each GPU is enough, for single-GPU setups you can increase them for more efficiency, but don't max them out; always keep some CPU/RAM free for other jobs.
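If you add a second GPU, the pattern is to duplicate the agent service with a new name and device ID. A sketch (the values mirror the defaults; `gpu_prove_agent1` and the device ID are the only changes, and you should adjust `mem_limit`/`cpus` to your system):

```yaml
gpu_prove_agent1:
  <<: *agent-common
  runtime: nvidia
  mem_limit: 4G
  cpus: 4
  entrypoint: /app/agent -t prove
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            device_ids: ['1']   # second GPU, as reported by `nvidia-smi -L`
            capabilities: [gpu]
```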
Note: The prover's `SEGMENT_SIZE` is set to `21` by default in the `x-exec-agent-common` service, which applies to all GPUs. A value of `21` is only compatible with GPUs with more than 20 GB of vRAM; if you have less vRAM, you have to modify it. Read the Segment Size section of this guide to modify it.
Boundless is comprised of two major components:
- `Bento` is the local proving infrastructure. Bento takes the locked orders from `Broker`, proves them, and returns the result to `Broker`.
- `Broker` interacts with the Boundless market. `Broker` can submit or request proofs from the market.
To get started with a test proof on a new proving machine, let's run `Bento` to benchmark our GPUs:
just bento
- This will spin up `bento` without the `broker`.
Check the logs :
just bento logs
Run a test proof:
RUST_LOG=info bento_cli -c 32
- If everything works well, you should see something like `Job Done!` in the output:
- To check if all your GPUs are being utilized:
  - Increase `32` to `1024`/`2048`/`4096`
  - Open a new terminal with the `nvtop` command
  - Run the test proof and monitor your GPUs' utilization.
Boundless is available on `Base Mainnet`, `Base Sepolia` and `Ethereum Sepolia`, marking the first time real funds will be used in Boundless.
There are three `.env` files with the official configurations of each network (`.env.base`, `.env.base-sepolia`, `.env.eth-sepolia`).
- Depending on which network you want to run your prover on, you'll need an RPC endpoint that supports the `eth_newBlockFilter` method.
  - You can search for `eth_newBlockFilter` in the documentation of third-party RPC providers to see whether they support it or not.
RPC providers that I know support `eth_newBlockFilter` and that I recommend:
- BlockPi:
  - Free plans support Base Mainnet and Base Sepolia; ETH Sepolia costs $49.
- Alchemy:
  - The team recommends it, but I couldn't pass the Cloudflare puzzle yet. Try it out yourself.
- Chainstack:
  - You have to change the value of `lookback_blocks` from `300` to `0`, because Chainstack's free plan doesn't support `eth_getLogs`, so you won't be able to check the last 300 blocks for open orders at startup (which I believe is not very important).
  - Check the Broker Optimization section to learn how to change the `lookback_blocks` value in `broker.toml`.
- Run your own RPC node:
  - This is actually the best option, but costly: it needs ~550-650 GB of disk.
  - Guide for ETH Sepolia
- Quicknode supports `eth_newBlockFilter` but somehow was NOT compatible with the prover. It blew up my prover.
- In this step I modify `.env.base`; you can replace it with any of the above (Sepolia networks).
- Currently, Base Mainnet has very low order demand; you may want to go for Base Sepolia by modifying `.env.base-sepolia`, or ETH Sepolia by modifying `.env.eth-sepolia`.
- Configure the `.env.base` file:
nano .env.base
Add the following variables to the `.env.base` file:
- `export RPC_URL=""`: Add your RPC endpoint.
  - The RPC URL has to be between `""`
- `export PRIVATE_KEY=`: Add your EVM wallet private key
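For orientation, a filled-in `.env.base` looks like the sketch below. Both values are placeholders I made up (the RPC URL is hypothetical); substitute your own, and never commit or share this file:

```shell
# .env.base -- example only; use your own endpoint and key
export RPC_URL="https://base-mainnet.example.com/v2/YOUR_API_KEY"
export PRIVATE_KEY=0x0000000000000000000000000000000000000000000000000000000000000001
```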
- Inject `.env.base` into the prover:
source .env.base
- After each terminal close, or before any prover startup, you have to run this to inject the network configuration before running `broker` or executing `Deposit` commands (both in the next steps).
`.env.broker` is a custom environment file, the same as the previous `.env` files but with more options to configure. You can also use it, but you have to refer to the Deployments page to replace the contract addresses of each network.
- I recommend skipping it, since you may want to switch between networks sometimes; it's easier to swap among the preserved `.env` files above.
- Create `.env.broker`:
cp .env.broker-template .env.broker
- Configure the `.env.broker` file:
nano .env.broker
Add the following variables to the `.env.broker` file:
- `export RPC_URL=""`: To get the Base network RPC URL, use third parties, e.g. Alchemy, or paid ones.
  - The RPC URL has to be between `""`
- `export PRIVATE_KEY=`: Add your EVM wallet private key
- Find the values of the following variables here:
  - `export BOUNDLESS_MARKET_ADDRESS=`
  - `export SET_VERIFIER_ADDRESS=`
  - `export VERIFIER_ADDRESS=` (add it to the .env manually)
  - `export ORDER_STREAM_URL=`
- Inject `.env.broker` changes into the prover:
source .env.broker
- After each terminal close, you have to run this to inject the network configuration before running `broker` or executing `Deposit` commands (both in the next steps).
Provers need to deposit `USDC` to the Boundless Market contract to use as stake when locking orders.
Note that `USDC` has a different address on each network; refer to the Deployments page for the addresses. USDC can be obtained on testnets from the Circle Faucet. You can also bridge USDC.
Add the `boundless` CLI to your bash session:
source ~/.bashrc
Deposit Stake:
boundless account deposit-stake STAKE_AMOUNT
- Check your deposited stake balance:
boundless account stake-balance
You can now start `broker` (which runs both `bento` + `broker`, i.e. the full proving stack!):
just broker
Check the total proving logs:
just broker logs
Check the `broker` logs, which contain the most important logs of your `order` lockings and fulfillments:
docker compose logs -f broker
# For last 100 logs
docker compose logs -fn 100
- You may get stuck at `Subscribed to offchain Order stream`, but it starts detecting orders soon.
There are many factors that can be optimized to win the prover competition; you can read the official guide for the broker or prover.
Here I simplified everything with detailed steps:
Larger segment sizes yield more proving (bento) performance, but require more GPU VRAM. To pick the right `SEGMENT_SIZE` for your GPU VRAM, see the official performance optimization page.
- `SEGMENT_SIZE` in `compose.yml` under the `x-exec-agent-common` service is `21` by default.
- You can also change the value of `SEGMENT_SIZE` in `.env.broker` before running the prover.
- Note: when you set a number for `SEGMENT_SIZE` in the env or default yml files, it sets that number for every GPU identically.
- You can add the `SEGMENT_SIZE` variable with its value to the preserved network `.env`s like `.env.base-sepolia`, etc., if you are using them.
- If you changed `SEGMENT_SIZE` in `.env.broker`, then head back to the network configuration section to use `.env.broker` as your network configuration.
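The compose file reads the variable with shell default expansion (`${SEGMENT_SIZE:-21}`, as seen in the `x-exec-agent-common` entrypoint), so exporting `SEGMENT_SIZE` before starting the prover overrides the default. A quick demonstration:

```shell
# ${SEGMENT_SIZE:-21} falls back to 21 only when the variable is unset
unset SEGMENT_SIZE
echo "default: ${SEGMENT_SIZE:-21}"   # prints: default: 21

export SEGMENT_SIZE=20                # e.g. for GPUs with less than 20 GB vRAM
echo "override: ${SEGMENT_SIZE:-21}"  # prints: override: 20
```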
Install psql:
apt update
apt install postgresql postgresql-client
psql --version
- Recommended: Benchmark by simulating an order ID (make sure Bento is running):
boundless proving benchmark --request-ids <IDS>
- You can use the order IDs listed here
- You can add multiple IDs, comma-separated.
- It's recommended to pick a few requests of varying sizes and programs, biased towards larger proofs, for a more representative benchmark.
- As in the image above, the prover is estimated to handle ~430,000 cycles per second (~430 kHz).
- Use a somewhat lower value than the benchmarked figure for `peak_prove_khz` in your `broker.toml` (I explain it more in the next step).
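Turning the benchmark into a config value is simple arithmetic. The 70% safety margin below is my own rule of thumb, not an official recommendation:

```shell
# Derive a conservative peak_prove_khz from a measured cycles/sec figure
benchmark_hz=430000          # ~430,000 cycles/sec from the benchmark above
safety_factor=70             # percent of the benchmark to actually advertise

peak_prove_khz=$(( benchmark_hz * safety_factor / 100 / 1000 ))
echo "peak_prove_khz = $peak_prove_khz"   # prints: peak_prove_khz = 301
```

Advertising less than you measured leaves headroom for preflight work and system jitter, so you are less likely to miss deadlines and get slashed.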
You can use the `nvtop` command in a separate terminal to check your GPU utilization.
- Benchmark using Harness Test
- Optionally, you can benchmark GPUs with an ITERATION_COUNT:
RUST_LOG=info bento_cli -c <ITERATION_COUNT>
`<ITERATION_COUNT>` is the number of times the synthetic guest is executed. A value of `4096` is a good starting point; however, on smaller or less performant hosts, you may want to reduce this to `2048` or `1024` while performing some of your experiments. For functional testing, `32` is sufficient.
- Check the `khz` & `cycles` proved in the harness test:
bash scripts/job_status.sh JOB_ID
- Replace `JOB_ID` with the one prompted to you when running a test.
- You now get the `hz`, which has to be divided by 1000 to convert to `khz`, and the `cycles` it proved.
- If you got a `not_found` error, it's because you didn't create `.env.broker`, and the script uses the `SEGMENT_SIZE` value in `.env.broker` to query your segment size; run `cp .env.broker-template .env.broker` to fix it.
- Broker is one of the containers of the prover; it does not do the proving itself. It handles on-chain activities and order interactions, like locking orders, setting stake bid amounts, etc.
`broker.toml` holds the settings that configure how your broker interacts on-chain and competes with other provers.
Copy the template to the main config file:
cp broker-template.toml broker.toml
Edit broker.toml file:
nano broker.toml
- You can see an example of the official `broker.toml` here
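As a sketch, the knobs discussed in this guide sit in `broker.toml` roughly like this. The key names come from this guide; the values are illustrative placeholders, not recommendations, and the exact value types should be checked against your `broker-template.toml`:

```toml
# Illustrative values only -- tune each one per the sections below
mcycle_price = "0.0000005"     # bid price per million cycles
# lockin_priority_gas = 0      # uncomment and set (in Gwei) to outbid on gas
peak_prove_khz = 300           # from your Bento benchmark
max_concurrent_proofs = 2      # default; raise only after testing
min_deadline = 300             # skip orders with fewer seconds left than this
lookback_blocks = 300          # set to 0 on Chainstack's free plan
txn_timeout = 45               # seconds to wait for tx confirmation
```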
Once your broker is running, before the GPU-based prover gets to work, the broker must compete with other provers to lock in the orders. Here is how to optimize the broker to lock in orders faster than other provers:
- Decreasing the `mcycle_price` tunes your broker to `bid` at lower prices for proofs.
- Once an order is detected, the broker runs a preflight execution to estimate how many `cycles` the request needs. As you see in the image, a prover proved orders with millions or thousands of cycles.
- `mcycle_price` is the price a prover charges for proving each 1 million cycles. Final price = `mcycle_price` x megacycles (i.e. `cycles` / 1,000,000).
- The lower you set `mcycle_price`, the higher your chance of outpacing other provers.
- To get an idea of what `mcycle_price` other provers are using, find an order in the explorer on your preferred network, go to the order's details page, and look for `ETH per Megacycle`.
- Increase `lockin_priority_gas` to consume more gas and outrun other bidders. You might first need to remove `#` to uncomment its line, then set the gas value. It is denominated in Gwei.
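To make the bidding math concrete, here is a sketch of the final-price computation with made-up numbers (the broker does this internally; the price unit depends on the payment token):

```shell
# Final price = mcycle_price x megacycles proved.
# An order preflighted at 250 million cycles, priced at
# 0.0000005 (token units) per megacycle:
cycles=250000000
awk -v cycles="$cycles" -v mcycle_price=0.0000005 \
    'BEGIN { printf "final price = %.7f\n", mcycle_price * cycles / 1e6 }'
# prints: final price = 0.0001250
```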
Read more about them in the official docs.
- `peak_prove_khz`: The maximum number of cycles per second (in kHz) at which your proving backend can operate.
  - You can set `peak_prove_khz` by following the previous step (Benchmarking Bento).
- `max_concurrent_proofs`: The maximum number of orders the broker can lock. Increasing it increases the ability to lock more orders, but if your prover cannot prove them within the specified deadlines, your staked assets will get slashed.
  - When the number of running proving jobs reaches that limit, the system pauses and waits for them to finish instead of locking more orders.
  - It's set to `2` by default, and the right value really depends on your GPU and configuration; you have to test it out if you want to increase it.
- `min_deadline`: The minimum number of seconds left before an order's deadline for the broker to consider bidding on the request.
  - Requesters set a deadline for their order; if a prover can't prove it in time, it gets slashed.
  - By setting the min deadline, your prover won't accept requests with a deadline shorter than that.
  - As in the following image of an order in the explorer, the order was fulfilled after the deadline and the prover got slashed for the late delivery.
Ensure, either through the `broker` logs or through your prover's indexer page, that your broker does not have any incomplete locked orders before stopping or updating; otherwise you might get slashed on your staked assets.
- Optionally, to temporarily stop your prover from accepting more order requests, you can set `max_concurrent_proofs` to `0`, wait for `locked` orders to become `fulfilled`, then go through the next step to stop the node.
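That temporary edit can be scripted. A sketch that works on a scratch copy first; it assumes the `max_concurrent_proofs` line is already uncommented, and you would point `FILE` at your real `broker.toml` once you've verified it:

```shell
# Demo on a scratch copy; point FILE at your real broker.toml when ready
FILE=/tmp/broker-demo.toml
printf 'max_concurrent_proofs = 2\n' > "$FILE"   # stand-in for the real file

# Flip the value to 0 so the broker stops locking new orders
sed -i 's/^max_concurrent_proofs.*/max_concurrent_proofs = 0/' "$FILE"
grep '^max_concurrent_proofs' "$FILE"
# prints: max_concurrent_proofs = 0
```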
# Optional, no need if you don't want to upgrade the node's repository
just broker clean
# Or stop the broker without cleaning volumes
just broker down
See releases for latest tag to use.
git checkout <new_version_tag>
# Example: git checkout v0.10.0
just broker
During the build process of `just broker`, you might end up with a `Too many open files (os error 24)` error.
nano /etc/security/limits.conf
- Add:
* soft nofile 65535
* hard nofile 65535
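If you prefer scripting over nano, the same two lines can be appended idempotently. A sketch where `LIMITS_FILE` defaults to a scratch path, so you can test it before pointing it at `/etc/security/limits.conf`:

```shell
# Append the nofile limits only if they are not already present
LIMITS_FILE=${LIMITS_FILE:-/tmp/limits-demo.conf}
touch "$LIMITS_FILE"
if ! grep -q 'soft nofile 65535' "$LIMITS_FILE"; then
    printf '* soft nofile 65535\n* hard nofile 65535\n' >> "$LIMITS_FILE"
fi
grep 'nofile' "$LIMITS_FILE"
# prints:
# * soft nofile 65535
# * hard nofile 65535
```

Because of the `grep` guard, running it twice does not duplicate the lines.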
nano /lib/systemd/system/docker.service
- Add or modify the following under the `[Service]` section:
LimitNOFILE=65535
systemctl daemon-reload
systemctl restart docker
- Now restart your terminal, rerun your network injection command, then run `just broker`.
Getting tens of `Locked` orders on your prover's explorer page
- It's due to RPC issues, check your logs.
- You can increase `txn_timeout = 45` in the `broker.toml` file to allow more seconds for transaction confirmations.