> [!IMPORTANT]
> This project has been archived because KubeFed, which this project relies on, is no longer under active development. We are currently exploring alternative approaches for applying WAO in multi-cluster environments.
Optimizes workload allocation and load balancing on KubeFed.
- Overview
- Getting Started
- Developing
- License
## Overview

WAOFed optimizes workload allocation and load balancing on KubeFed with the following components:

- **RSPOptimizer**: Optimizes `FederatedDeployment` weights across clusters by generating `ReplicaSchedulingPreference` using the specified method. KubeFed handles the actual scheduling according to `ReplicaSchedulingPreference`.
- **SLPOptimizer**: Optimizes `FederatedService` load balancing weights across clusters by generating `ServiceLoadbalancingPreference` using the specified method. A supported controller is required to handle the actual load balancing according to `ServiceLoadbalancingPreference`.
Supported Kubernetes versions: 1.19 or higher
💡 Mainly tested with 1.25; may work with the same versions that KubeFed supports (but may require some effort).
Supported KubeFed APIs:

- `FederatedDeployment` [types.kubefed.io/v1beta1]
- `ReplicaSchedulingPreference` [scheduling.kubefed.io/v1alpha1]
- `KubeFedCluster` [core.kubefed.io/v1beta1]

New APIs Provided by WAOFed:

- `WAOFedConfig` [waofed.bitmedia.co.jp/v1beta1]
- `ServiceLoadbalancingPreference` [waofed.bitmedia.co.jp/v1beta1]
## Getting Started

Make sure you have cert-manager deployed on the cluster where the KubeFed control plane is deployed, as it is used to generate webhook certificates.

```sh
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.0/cert-manager.yaml
```
⚠️ You may have to wait a moment for cert-manager to become ready.
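One way to block until cert-manager is ready, instead of waiting blindly (the deployment names below are cert-manager's defaults; adjust if you changed them):

```sh
kubectl wait deployment -n cert-manager cert-manager cert-manager-webhook cert-manager-cainjector \
  --for=condition=Available --timeout=120s
```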
Deploy the Operator with the following command. It creates the `waofed-system` namespace and deploys CRDs, controllers, and other resources.

```sh
kubectl apply -f https://github.com/Nedopro2022/waofed/releases/download/v0.4.0/waofed.yaml
```
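As a quick sanity check, you can confirm the controller pods are up before continuing:

```sh
kubectl get pods -n waofed-system
# or block until the controller deployment becomes available
kubectl wait deployment --all -n waofed-system --for=condition=Available --timeout=120s
```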
`WAOFedConfig` is a resource for configuring WAOFed. Deploy it with the name `default` to the cluster where the KubeFed control plane is deployed. `spec.kubefedNamespace` specifies the namespace from which WAOFed gets member clusters.
```yaml
apiVersion: waofed.bitmedia.co.jp/v1beta1
kind: WAOFedConfig
metadata:
  name: default # must be default
spec:
  kubefedNamespace: "kube-federation-system"
  scheduling:
    selector:
      hasAnnotation: waofed.bitmedia.co.jp/scheduling
    optimizer:
      method: "rr"
  loadbalancing:
    selector:
      hasAnnotation: waofed.bitmedia.co.jp/loadbalancing
    optimizer:
      method: "rr"
```
### RSPOptimizer

RSPOptimizer watches the creation of `FederatedDeployment` resources and generates `ReplicaSchedulingPreference` resources with optimized workload allocation determined by the specified method.

Supported methods: `rr` (round-robin, for testing purposes), `wao` (WAO-Estimator is required)
`spec.scheduling.selector` specifies the conditions for the `FederatedDeployment` resources that RSPOptimizer watches.
💡 You can enable RSPOptimizer by default by setting `spec.scheduling.selector.any` to true.

```diff
 scheduling:
   selector:
-    hasAnnotation: waofed.bitmedia.co.jp/scheduling
+    any: true
```
💡 Ensure the namespace is federated by a `FederatedNamespace` resource before deploying `FederatedDeployment` resources.

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: default
  namespace: default
spec:
  placement:
    clusterSelector: {}
```
When a `FederatedDeployment` with the annotation specified in WAOFedConfig is deployed, RSPOptimizer detects the resource and generates a `ReplicaSchedulingPreference`.
```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: fdeploy-sample
  namespace: default
  annotations:
    waofed.bitmedia.co.jp/scheduling: ""
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 9
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx:1.23.2
            name: nginx
  placement:
    clusterSelector: {}
```
💡 You can see the resources with the following commands, and see the details by adding `-oyaml`.

```console
$ kubectl get fdeploy
NAME             AGE
fdeploy-sample   12s
$ kubectl get rsp
NAME             AGE
fdeploy-sample   12s
```
The generated `ReplicaSchedulingPreference` has an owner reference indicating that it is controlled by the `FederatedDeployment`, so it will be deleted by GC when the `FederatedDeployment` is deleted.
`spec.clusters` includes all clusters specified in the `FederatedDeployment` `spec.placement` (RSPOptimizer parses the selector and retrieves the clusters), and `spec.clusters[name].weight` is optimized by the method specified in `WAOFedConfig`. This sample uses `rr`, so all clusters have a weight of 1.
```yaml
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: fdeploy-sample
  namespace: default
  ownerReferences:
  - apiVersion: types.kubefed.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: FederatedDeployment
    name: fdeploy-sample
    ...
spec:
  clusters:
    cluster1:
      weight: 1
    cluster2:
      weight: 1
    cluster3:
      weight: 1
  intersectWithClusterSelector: true
  rebalance: true
  targetKind: FederatedDeployment
  totalReplicas: 9
...
```
💡 Since `spec.intersectWithClusterSelector` is set to `true`, the generated `ReplicaSchedulingPreference` does not overwrite anything in the `FederatedDeployment`, allowing RSPOptimizer to watch the `FederatedDeployment` easily.
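A quick way to see the resulting replica distribution is to query each member cluster directly. A sketch, assuming kubeconfig contexts named `cluster1`..`cluster3` (substitute your own context names):

```sh
# print the fdeploy-sample deployment as propagated to each member cluster
for ctx in cluster1 cluster2 cluster3; do
  echo "== ${ctx} =="
  kubectl --context "${ctx}" -n default get deploy fdeploy-sample
done
```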
⚠️ Edge cases not covered: `placement.clusters` has 0 items

KubeFed ignores `spec.placement.clusterSelector` if `spec.placement.clusters` is provided, so no clusters will be selected in the following case (docs). However, RSPOptimizer currently does not distinguish a nil list (`null`) from a list with 0 items (`[]`), so it regards `spec.placement.clusters` as "not provided" and uses `spec.placement.clusterSelector` for scheduling.

```yaml
spec:
  placement:
    clusters: []
    clusterSelector:
      matchExpressions:
      - { key: mylabel, operator: Exists }
```
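To stay clear of this ambiguity, either omit `placement.clusters` entirely when you want selector-based placement, or list the target clusters explicitly. A sketch using the standard KubeFed placement form:

```yaml
spec:
  placement:
    clusters:       # explicit, non-empty list: clusterSelector is then ignored
    - name: cluster1
    - name: cluster2
```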
### SLPOptimizer

⚠️ NOTE: A supported controller is required to handle the actual load balancing according to `ServiceLoadbalancingPreference`.
SLPOptimizer watches the creation of `FederatedService` resources and generates `ServiceLoadbalancingPreference` resources with optimized load balancing weights determined by the specified method.

Supported methods: `rr` (round-robin, for testing purposes)
`spec.loadbalancing.selector` specifies the conditions for the `FederatedService` resources that SLPOptimizer watches.
💡 You can enable SLPOptimizer by default by setting `spec.loadbalancing.selector.any` to true.

```diff
 loadbalancing:
   selector:
-    hasAnnotation: waofed.bitmedia.co.jp/loadbalancing
+    any: true
```
💡 Ensure the namespace is federated by a `FederatedNamespace` resource before deploying `FederatedService` resources.

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: default
  namespace: default
spec:
  placement:
    clusterSelector: {}
```
When a `FederatedService` with the annotation specified in WAOFedConfig is deployed, SLPOptimizer detects the resource and generates a `ServiceLoadbalancingPreference`.
```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
  name: fsvc-sample
  namespace: default
  annotations:
    waofed.bitmedia.co.jp/loadbalancing: ""
spec:
  template:
    spec:
      selector:
        app: nginx
      ports:
      - name: http
        port: 80
  placement:
    clusterSelector: {}
```
💡 You can see the resources with the following commands, and see the details by adding `-oyaml`.

```console
$ kubectl get fsvc
NAME          AGE
fsvc-sample   12s
$ kubectl get slp
NAME          AGE
fsvc-sample   12s
```
The generated `ServiceLoadbalancingPreference` has an owner reference indicating that it is controlled by the `FederatedService`, so it will be deleted by GC when the `FederatedService` is deleted.
`spec.clusters` includes all clusters specified in the `FederatedService` `spec.placement` (SLPOptimizer parses the selector and retrieves the clusters), and `spec.clusters[name].weight` is optimized by the method specified in `WAOFedConfig`. This sample uses `rr`, so all clusters have a weight of 1.
```yaml
apiVersion: waofed.bitmedia.co.jp/v1beta1
kind: ServiceLoadbalancingPreference
metadata:
  name: fsvc-sample
  namespace: default
  ownerReferences:
  - apiVersion: types.kubefed.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: FederatedService
    name: fsvc-sample
    ...
spec:
  clusters:
    cluster1:
      weight: 1
    cluster2:
      weight: 1
    cluster3:
      weight: 1
...
```
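WAOFed only generates the preference; whatever consumes it has to read the weights itself. For a quick look at the generated weights (using the `slp` short name shown earlier; the field path follows the sample above):

```sh
kubectl get slp fsvc-sample -n default -o jsonpath='{.spec.clusters}'
```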
⚠️ Edge cases not covered: `placement.clusters` has 0 items

Same as RSPOptimizer.
Delete the Operator and resources with the following command.

```sh
kubectl delete -f https://github.com/Nedopro2022/waofed/releases/download/v0.4.0/waofed.yaml
```
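To confirm that everything was removed, you can check that no WAOFed CRDs remain (a simple sanity check; deleting the CRDs also removes any remaining WAOFed custom resources):

```sh
kubectl get crd | grep waofed.bitmedia.co.jp
```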
## Developing

This Operator uses Kubebuilder (v3.8.0), so we basically follow the Kubebuilder way. See the Kubebuilder Documentation for details.
Make sure you have the following tools installed:
- Git
- Make
- Go
- Docker
### Run development clusters with kind

The script creates K8s clusters `kind-waofed-[0123]`, deploys the KubeFed control plane on `kind-waofed-0`, and lets the remaining clusters join as member clusters.
```sh
./hack/dev-kind-reset-clusters.sh
./hack/dev-kind-deploy.sh
```
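To check that the member clusters registered correctly, you can list the `KubeFedCluster` resources on the host cluster (the context name follows kind's `kind-<cluster>` convention; the namespace matches the default `kubefedNamespace` shown earlier):

```sh
kubectl --context kind-waofed-0 -n kube-federation-system get kubefedclusters
```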
⚠️ NOTE: Currently the clusters need to be re-created on every reboot, as the script does not assign static IPs to the Docker containers.
The script creates K8s clusters `kind-waofed-test-[01]`, deploys the KubeFed control plane on `kind-waofed-test-0`, lets all clusters join as member clusters, and runs integration tests.
```sh
# setup test clusters
./test/reset-clusters.sh

# basic tests (tests both scheduling and loadbalancing with "rr" method)
./test/run-basic-tests.sh

# test spec.scheduling.optimizer.method="wao"
./test/rspoptimizer-wao-setup.sh
./test/rspoptimizer-wao-run-tests.sh
```
## License

Copyright 2022 Bitmedia Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.