This is a playbook for getting UDS Core up and running in a local OpenShift cluster. We will deploy slim-dev and add the appropriate RBAC to allow UDS Core to run.
Tests were run on a Mac Pro with an M2 chip, but the steps should work on any system with CRC installed.
docker pull registry
docker run -d -p 5001:5000 --restart always --name registry registry
docker pull ghcr.io/zarf-dev/zarf/agent:v0.52.1
docker tag ghcr.io/zarf-dev/zarf/agent:v0.52.1 localhost:5001/zarf-dev/zarf/agent:v0.52.1
docker push localhost:5001/zarf-dev/zarf/agent:v0.52.1
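Optional sanity check: the local registry exposes the standard registry v2 API, so you can confirm the agent image actually landed before moving on.
curl http://localhost:5001/v2/_catalog
curl http://localhost:5001/v2/zarf-dev/zarf/agent/tags/list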
crc delete -f
crc config set preset microshift
# crc config set disk-size 100
crc setup
crc start
eval $(crc oc-env)
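Optional check that the cluster is up before continuing (assumes your kubeconfig is pointed at the CRC cluster):
crc status
kubectl get nodes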
echo "Apply RBAC after the command"
zarf init --registry-url localhost:5001 --registry-push-username doug --registry-push-password unicorn --log-level debug
uds run slim-dev -l trace
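Optional: keep a pod watch running in a second terminal so you can see when the zarf namespace appears and whether anything is stuck waiting on the RBAC below.
kubectl get pods -n zarf -w
# or watch everything
kubectl get pods -A -w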
Add the required RBAC (this is for the registry only)
Zarf
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: zarf-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: zarf
  namespace: zarf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:anyuid
  namespace: zarf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:anyuid
subjects:
- kind: ServiceAccount
  name: zarf
  namespace: zarf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:privileged
  namespace: zarf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
- kind: ServiceAccount
  name: zarf
  namespace: zarf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:hostnetwork
  namespace: zarf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:hostnetwork
subjects:
- kind: ServiceAccount
  name: zarf
  namespace: zarf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:hostmount-anyuid
  namespace: zarf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:hostmount-anyuid
subjects:
- kind: ServiceAccount
  name: zarf
  namespace: zarf
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:openshift:scc:hostaccess
  namespace: zarf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:hostaccess
subjects:
- kind: ServiceAccount
  name: zarf
  namespace: zarf
EOF
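Optional: confirm the bindings were created.
kubectl get clusterrolebinding zarf-cluster-admin
kubectl get rolebindings -n zarf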
Ignore everything below this line (the registry got me stuck; there are problems with NodePort in CRC)
- The zarf-docker-registry is a NodePort service, and the registry deployment pulls its own image through that NodePort service
- You cannot just port-forward instead, because the registry pod stays pending on its image pull and never comes up
- I was not able to get NodePort working at all; I tried several things without success, for example:
kubectl port-forward svc/zarf-docker-registry 5000 -n zarf
kubectl set image deploy/zarf-docker-registry docker-registry=api.crc.testing:32640/library/registry:3.0.0 -n zarf
Istio
oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system
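If you prefer to keep everything declarative like the Zarf bindings above, the same access can be granted by binding the SCC ClusterRole to the namespace's service account group (the binding name istio-anyuid is arbitrary; this is an alternative to the oc adm command, not what it runs under the hood):
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-anyuid
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:anyuid
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:istio-system
EOF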