This is the Flower tutorial repository for the PyCon DE & PyData 2025 talk "The Future of AI is Federated". It describes the prerequisites for setting up your tutorial environment and outlines the three parts of the tutorial, plus a bonus Part 4 that walks you through a local deployment with both node authentication and a secure TLS connection:
- Create a Flower App and run it using the Simulation Runtime
- Run a Flower App on a remote SuperLink
- Deploy and run a Flower App using the Deployment Runtime and Docker
- (Bonus) Deploy SuperNodes and a SuperLink with node and TLS authentication
At the end of this README, we've included a `flwr` CLI cheatsheet that summarizes the basic commands used in this tutorial.
Let's get started!
The easiest way to start using this repository is to use GitHub Codespaces. The only requirement is that you need to have an active GitHub account. Click on the badge below to launch your codespace with all of the code contents in this repository.
Additionally, you should have Docker installed on your system.
The two alternatives to using Codespaces are:
- Clone this repository and run the Dev Container from your VS Code.
- Clone this repository and install the latest version of Flower in a new Python environment with `pip install -U "flwr[simulation]"`.
If you choose the manual option to set up your tutorial environment, here are the prerequisites:
- Use macOS or Ubuntu
- Have a Python environment (minimum is Python 3.9, but Python 3.10, 3.11, or 3.12 is recommended)
- Have `flwr` installed: `pip install -U "flwr[simulation]"`
- Have an IDE, e.g. VS Code, and install the VS Code Dev Containers extension.
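To quickly confirm that your interpreter meets the minimum Python version, you can run the following one-liner (a simple sanity check, not part of the official setup):

```shell
# Prints "OK" if the active Python is at least 3.9
python3 -c 'import sys; print("OK" if sys.version_info >= (3, 9) else "Python too old")'
```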
Feature Highlights:
- Create a new Flower app from templates using `flwr new`
- Start a Flower app using `flwr run`
- Understand basic federated learning workflow with Flower
- Customize the hyperparameters of your workflow
Let's begin by creating a Flower app. This can be done easily using `flwr new` and then choosing one of the available templates. Let's use the NumPy template.
flwr new awesomeapp
# Then follow the prompt
The above command creates the following directory structure and content:
awesomeapp
├── README.md
├── awesomeapp
│   ├── __init__.py
│   ├── client_app.py   # Defines your ClientApp
│   ├── server_app.py   # Defines your ServerApp
│   └── task.py         # Defines your model, training, and data loading
└── pyproject.toml      # Project metadata like dependencies and configs
Assuming you have already installed the dependencies for your app, you can run the app by doing:
cd path/to/app_dir # the directory where the pyproject.toml is
flwr run .
Tip
This section uses one of the pre-built templates available in the Flower platform. Learn more about other quickstart tutorials in quickstart documentation.
The run config sets hyperparameters for your app at runtime. These are defined in the `[tool.flwr.app.config]` section of your app's `pyproject.toml`, which you can extend. Let's first add another variable to the run config:
[tool.flwr.app.config]
num-server-rounds = 3
fraction-fit = 0.5 # Add this line
The run config can then be overridden directly from the CLI:
flwr run . --run-config="num-server-rounds=5 fraction-fit=0.333"
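Conceptually, values passed via `--run-config` are merged on top of the defaults from `pyproject.toml`. The following plain-Python sketch (not Flower's actual implementation) illustrates that precedence:

```python
def merge_run_config(defaults, override_str):
    """Merge 'key=value' CLI overrides on top of default run-config values."""
    merged = dict(defaults)
    for pair in override_str.split():
        key, raw = pair.split("=", 1)
        # Interpret numbers where possible; keep strings otherwise
        try:
            value = int(raw)
        except ValueError:
            try:
                value = float(raw)
            except ValueError:
                value = raw
        merged[key] = value
    return merged

defaults = {"num-server-rounds": 3, "fraction-fit": 0.5}
print(merge_run_config(defaults, "num-server-rounds=5 fraction-fit=0.333"))
# → {'num-server-rounds': 5, 'fraction-fit': 0.333}
```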
Tip
This section provides a quick overview of how to modify the default simulation settings. Learn more about Flower's Simulation Runtime in the documentation.
The templates available through `flwr new` create a relatively small simulation with just 10 nodes. This is defined in the `pyproject.toml` and should look as follows:
[tool.flwr.federations.local-simulation]
options.num-supernodes = 10
You can make your simulation larger (as large as you want!) by increasing the number of supernodes. Additionally, you can control how many compute and memory resources these get assigned. Let's do this by defining a new federation that we'll name `simulation-xl` (note that you can choose any other name):
[tool.flwr.federations.simulation-xl]
options.num-supernodes = 200
options.backend.client-resources.num-cpus = 1 # each ClientApp is assumed to use 1 CPU
Then, to run the app on this new federation, execute the following (the second argument to `flwr run` indicates the federation to use):
flwr run . simulation-xl
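If your machine has GPUs, you can also assign each `ClientApp` a GPU share. The fragment below is illustrative only; check the Simulation Runtime documentation for the exact backend options your Flower version supports:

```toml
[tool.flwr.federations.simulation-gpu]
options.num-supernodes = 100
options.backend.client-resources.num-cpus = 2    # each ClientApp may use 2 CPUs
options.backend.client-resources.num-gpus = 0.25 # four ClientApps share one GPU
```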
Feature Highlights:
- Log in to the SuperLink using `flwr login`
- Run a federated learning simulation remotely
In Part 1, we ran a federated learning simulation locally on your system. When experimenting with your federated learning system, it is useful to be able to run the simulations on a remote machine with more resources (such as GPUs and CPUs). To do so without directly connecting to the remote machine, we can spin up a Flower SuperLink on it and then run `flwr run` using the address of the remote machine. This way, you can submit multiple runs to the remote machine and let the SuperLink coordinate the execution of your submitted Flower apps!
Note
This section explains how you can run a Flower app on a remote server as an authenticated user. To access the server and try it out, please register a Flower account: go to flower.ai, click the yellow "Sign Up" button in the top-right corner of the webpage, and complete the sign-up process.
For this tutorial, we've set up a temporary SuperLink at pyconde25.flower.ai which you can connect to. You can also try creating and running other templates from `flwr new`. The supported templates preinstalled on this SuperLink are: PyTorch, TensorFlow, sklearn, JAX, and NumPy. To use the remote SuperLink, add a new federation table called `[tool.flwr.federations.pyconde25]` to your `pyproject.toml`:
[tool.flwr.federations.pyconde25]
address = "pyconde25.flower.ai" # Sets the address of the remote SuperLink
enable-user-auth = true # Enables user authentication
options.num-supernodes = 10
Next, to ensure that you're logged in (so that you can run your Flower app in an authenticated user session), run:
flwr login . pyconde25
Click on the URI and log in with the credentials you provided during the sign-up process. Then, run the app on the remote server:
flwr run . pyconde25 --stream
Note that the `--stream` option streams the logs from the Flower app. You can safely press `CTRL+C` without interrupting the execution, since the run executes remotely on the server. The run statuses can be viewed by running:
flwr ls . pyconde25 # View the statuses of all Flower apps on the SuperLink
flwr ls . pyconde25 <run_id> # View the status of <run_id> on the SuperLink
You can also view the logs of your ongoing/completed run by running:
flwr log <run_id> . pyconde25 --stream
Feature Highlights
- Deploy a SuperNode on your device using Docker and connect it to a remote SuperLink
- Enable a secure TLS connection between SuperNodes and the SuperLink
Note
This section introduces the relevant components for running Flower in deployment mode without node authentication; node authentication will be presented in the next section. Read more about the Flower Architecture in the documentation.
In Part 3, we'll move on from the simulation/research approach and deploy our Flower apps so that federated learning takes place in a cross-device setting.
To deploy your Flower app, we first need to launch the two long-running components: the server, i.e. the SuperLink, and the clients, i.e. the SuperNodes. Both the SuperLink and the SuperNodes can be launched in either `--isolation subprocess` mode (the default) or `--isolation process` mode. The `subprocess` mode runs the `ServerApp` and `ClientApp`s in the same process as the SuperLink and SuperNodes, respectively. This has the benefit of a minimal deployment, since all of the app dependencies can be packaged into the SuperLink and SuperNode images. In `process` mode, the `ServerApp` and `ClientApp` run as separate, externally managed processes. This makes it possible, for example, to run the SuperNode and `ClientApp` in separate Docker containers with different sets of dependencies installed, allowing the SuperLink and SuperNode to run with minimal image requirements.
For the purposes of this tutorial, we have deployed another SuperLink for you at 91.99.49.68. We have also enabled a secure TLS connection using self-signed certificates, which we have already generated for you.
Caution
Using self-signed certificates is for testing purposes only and not recommended for production.
Now, in this interactive part of the tutorial, you can participate in the first PyCon DE 2025 Flower federation by spinning up a SuperNode on your local machine. To do so, from the parent directory of this repo, run:
docker run \
--rm \
--volume "$(pwd)/certificates:/certificates:ro" \
flwr/supernode:1.18.0 \
--superlink="91.99.49.68:9092" \
--root-certificates /certificates/ca.crt
You should be able to see the following:
INFO : Starting Flower SuperNode
INFO : Starting Flower ClientAppIo gRPC server on 0.0.0.0:9094
INFO :
Tip
In this section, we used the official Flower Docker images to deploy the SuperLink and the SuperNodes. Check out Flower's Docker Hub repository to learn about the available base and Python images. Learn more about deploying Flower with Docker in our documentation.
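If you prefer Docker Compose over a raw `docker run`, an equivalent service definition might look like the following. This is a hypothetical sketch: the service name and the assumption that the `certificates` folder sits next to the compose file are ours, not part of the tutorial scripts:

```yaml
services:
  supernode:
    image: flwr/supernode:1.18.0
    command:
      - --superlink=91.99.49.68:9092
      - --root-certificates=/certificates/ca.crt
    volumes:
      - ./certificates:/certificates:ro
```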
Feature Highlights
- Enable node authentication and a secure TLS connection between SuperNodes and the SuperLink
- Start SuperNodes and SuperLink via CLI
Note
Part 4 is the stretch section of the PyCon DE tutorial. Feel free to follow it in your own time, as you will be deploying the SuperLink and SuperNodes on your local machine.
In this section, we'll enable a secure TLS connection and `SuperNode` authentication in deployment mode. The TLS connection will be enabled between the `SuperLink` and the `SuperNode`s, as well as between the Flower CLI and the `SuperLink`. For authenticated `SuperNode`s, the identity of each `SuperNode` is verified when connecting to the `SuperLink`.
Note
For more details, refer to the documentation on enabling TLS connections and authenticating `SuperNode`s.
In this repo, we provide a utility script called `generate.sh` and a configuration file `certificate.conf`. By default, the script generates self-signed certificates for creating a secure TLS connection, as well as three private/public key pairs: one for the server and two for the clients. The script also generates a CSV file that includes each of the generated (client) public keys. The script uses `certificate.conf`, a configuration file typically used by OpenSSL to generate a Certificate Signing Request (CSR) or self-signed certificates.
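To make the script less of a black box, here is a conceptual sketch of the kind of OpenSSL calls it performs for the CA part. This is simplified: the real `generate.sh` uses `certificate.conf` and also issues the server certificate and the client key pairs:

```shell
# Generate a private key for the Certificate Authority (CA)
openssl genrsa -out ca.key 2048

# Create a self-signed CA certificate from that key
openssl req -x509 -new -key ca.key -sha256 -days 365 \
    -subj "/CN=Tutorial Test CA" -out ca.crt
```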
Caution
Using self-signed certificates is for testing purposes only and not recommended for production.
First, copy `generate.sh` and `certificate.conf` to your Flower App. Then, run the script:
cp generate.sh certificate.conf path/to/app_dir
./generate.sh
Note
You can generate more keys by specifying the number of client credentials you wish to generate: `./generate.sh {your_number_of_clients}`
After running the script, the following new folders and files will be generated:
awesomeapp
├── README.md
├── certificate.conf
├── certificates                 # Folder containing certificates for TLS connection
│   ├── ca.crt                   # *Certificate Authority (CA) certificate
│   ├── ca.key                   # Private key for CA
│   ├── ca.srl                   # Serial number file for CA
│   ├── server.csr               # Server certificate signing request
│   ├── server.key               # *Server private key
│   └── server.pem               # *Server certificate
├── generate.sh
├── keys                         # Folder containing keys for authenticating SuperNodes
│   ├── client_credentials_1     # Private key for client 1
│   ├── client_credentials_1.pub # Public key for client 1
│   ├── client_credentials_2     # Private key for client 2
│   ├── client_credentials_2.pub # Public key for client 2
│   ├── client_public_keys.csv   # *Public keys for both clients
│   ├── server_credentials       # *Private server credentials
│   └── server_credentials.pub   # *Public server credentials
├── awesomeapp
│   ├── __init__.py
│   ├── client_app.py
│   ├── server_app.py
│   └── task.py
└── pyproject.toml
The files marked with an asterisk (`*`) will be used in our deployment.
Note
From this point onwards, ensure that your working directory when executing all Flower commands is `/path/to/app_dir`. This is because the paths to the certificates and keys are relative to the execution directory. Optionally, change the paths below to absolute paths.
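Since all of the paths below are relative, a quick sanity check before launching anything can save debugging time. This loop is a small helper we suggest here, not part of the tutorial scripts:

```shell
# Verify that every certificate and key used below is reachable
# from the current working directory
for f in certificates/ca.crt certificates/server.pem certificates/server.key \
         keys/client_public_keys.csv keys/server_credentials keys/server_credentials.pub; do
    test -f "$f" || echo "Missing: $f"
done
```

If the loop prints nothing, you are in the right directory.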
Launch a local instance of your `SuperLink` with the following additional flags:
flower-superlink \
--ssl-ca-certfile certificates/ca.crt \
--ssl-certfile certificates/server.pem \
--ssl-keyfile certificates/server.key \
--auth-list-public-keys keys/client_public_keys.csv \
--auth-superlink-private-key keys/server_credentials \
--auth-superlink-public-key keys/server_credentials.pub
The first three flags define the three certificate paths: the CA certificate (`--ssl-ca-certfile`), the server certificate (`--ssl-certfile`), and the server private key (`--ssl-keyfile`), respectively. The next three flags define the path to a CSV file storing all known node public keys (`--auth-list-public-keys`), and the paths to the server's private (`--auth-superlink-private-key`) and public keys (`--auth-superlink-public-key`).
Next, we restart the `SuperNode`s with a secure TLS connection and authentication. Run the following command to start the first `SuperNode`:
flower-supernode \
--superlink="127.0.0.1:9092" \
--root-certificates certificates/ca.crt \
--auth-supernode-private-key keys/client_credentials_1 \
--auth-supernode-public-key keys/client_credentials_1.pub \
--clientappio-api-address="0.0.0.0:9094" \
--node-config 'num-partitions=10 partition-id=0'
Then, run the next command to start the second `SuperNode`:
flower-supernode \
--superlink="127.0.0.1:9092" \
--root-certificates certificates/ca.crt \
--auth-supernode-private-key keys/client_credentials_2 \
--auth-supernode-public-key keys/client_credentials_2.pub \
--clientappio-api-address="0.0.0.0:9095" \
--node-config 'num-partitions=10 partition-id=1'
Now, we need to modify our `pyproject.toml` so that the Flower CLI connects securely to our `SuperLink`. In the `pyproject.toml`, make the following changes:
[tool.flwr.federations.pyconde]
address = "127.0.0.1:9093" # Point to the local SuperLink address
root-certificates = "certificates/ca.crt" # Points to the path of the CA certificate. Must be relative to `pyproject.toml`.
Finally, we can launch the run in the same way as above, but now with TLS and client authentication:
flwr run . pyconde --stream
In this tutorial, we used several `flwr` CLI commands, including `flwr new`, `flwr run`, `flwr ls`, and `flwr log`. A cheatsheet of the relevant commands is shown below.
Tip
For more details on all Flower CLI commands, please refer to the `flwr` CLI reference documentation.
| Command | Description | Example Usage |
|---|---|---|
| `flwr new` | Create a new Flower App from a template | `flwr new` |
| `flwr run` | Run the Flower App in the CWD (`.`) on the `<federation>` federation | `flwr run . <federation>` |
| | Run the Flower App and stream logs from the `ServerApp` | `flwr run . <federation> --stream` |
| `flwr ls` | List the Run statuses on the `<federation>` federation on the SuperLink (default) | `flwr ls . <federation>` |
| | List the Run status of one `<run-id>` on the SuperLink | `flwr ls . <federation> --run-id <run-id>` |
| `flwr log` | Stream logs from one `<run-id>` (default) | `flwr log <run-id> . <federation>` |
| | Print logs from one `<run-id>` | `flwr log <run-id> . <federation> --show` |
Here are some useful references to expand on the networking and architecture topics covered in this tutorial:
- Flower federated learning architecture (link)
- Flower network communication (link)
- Quickstart Docker guides (link)
- Flower official Docker images (link)
And here are some links to the Flower quickstarts and tutorials.