Download Red Hat Enterprise Linux 9.2 and install a virtualization tool (KVM, VirtualBox, VMware, or any similar tool) to set up a RHEL virtual machine.
- Enable the MicroShift RPM repositories
sudo subscription-manager repos \
--enable rhocp-4.14-for-rhel-9-$(uname -m)-rpms \
--enable fast-datapath-for-rhel-9-$(uname -m)-rpms
- Install the Red Hat build of MicroShift
sudo dnf install -y microshift
- Install greenboot for the Red Hat build of MicroShift (optional)
sudo dnf install -y microshift-greenboot
- Download the pull secret from the Red Hat Hybrid Cloud Console and copy it to the CRI-O configuration directory
sudo cp $HOME/openshift-pull-secret /etc/crio/openshift-pull-secret
- Make the root user the owner of the /etc/crio/openshift-pull-secret
sudo chown root:root /etc/crio/openshift-pull-secret
- Make the /etc/crio/openshift-pull-secret file readable and writable only by the root user
sudo chmod 600 /etc/crio/openshift-pull-secret
- To access MicroShift locally, copy the kubeconfig file from /var/lib/microshift/resources/kubeadmin/kubeconfig and store it in the $HOME/.kube folder.
mkdir -p ~/.kube/
sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config
chmod go-r ~/.kube/config
Follow the steps below when firewall-cmd (firewalld) is enabled on RHEL 9.2
- 10.42.0.0/16 is the network range for pods running in MicroShift, and 169.254.169.1 is the IP address of the MicroShift OVN network
sudo firewall-cmd --permanent --zone=trusted --add-source={10.42.0.0/16,169.254.169.1} && sudo firewall-cmd --reload
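Optionally, confirm that the sources were added to the trusted zone:
sudo firewall-cmd --zone=trusted --list-sources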
Enable and start the MicroShift service using
sudo systemctl enable --now microshift.service
sudo systemctl start microshift.service
Check the status of the MicroShift service
sudo systemctl status microshift.service
To verify that MicroShift is running, use the following command
oc get all -A
Note: It usually takes about 10 minutes for all the workloads to come up the first time. If any workload does not start, restart the virtual machine.
Optional - Access MicroShift from remote machines
- To access MicroShift remotely, edit the MicroShift config file. It is not configured by default and has to be configured manually.
sudo mv /etc/microshift/config.yaml.default /etc/microshift/config.yaml
vi /etc/microshift/config.yaml
dns:
  baseDomain: microshift.example.com
node:
  ...
...
apiServer:
  subjectAltNames:
    - edge.microshift.example.com
  ...
...
sudo systemctl restart microshift
- View the kubeconfig file generated for the newly added subjectAltNames in the location below
cat /var/lib/microshift/resources/kubeadmin/edge-microshift.example.com/kubeconfig
Use scp to copy this file, or copy its content, and then perform the following steps on the remote machine
mkdir -p ~/.kube/
MICROSHIFT_MACHINE=<name or IP address of Red Hat build of MicroShift machine>
ssh <user>@$MICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/$MICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config
chmod go-r ~/.kube/config
Make sure the oc client is installed on the remote machine and verify access to MicroShift using
oc get all -A
- Apply all the manifests in the edge-kepler directory using the kustomization file
oc apply -k ./edge/edge-kepler
- Verify that the Kepler DaemonSet and pods are running using
oc get all -n kepler
- The configuration file for the OpenTelemetry Collector is added as a ConfigMap resource in the kepler namespace.
Before creating the ConfigMap in MicroShift,
edit the OpenTelemetry ConfigMap file using the command below,
vi edge/edge-otel-collector/1-kepler-microshift-otelconfig.yaml
and update it with the external OpenTelemetry hostname and port by:
A) Replacing the hostname of the MicroShift instance in receivers.
- HOSTNAME = name of the machine where MicroShift is installed.
[user@host]$ hostname # run this command to get the hostname of the machine (do not use localhost)
B) Replacing the following in exporters:
- EXTERNAL_OTEL_PROTOCOL = use either http (insecure) or https (secure)
- EXTERNAL_OTEL_HOSTNAME = hostname of the external OpenTelemetry Collector (do not use localhost)
- EXTERNAL_OTEL_PORT = port on which the OpenTelemetry Collector is running (the default OTLP port is 4317)
The placeholders in the OpenTelemetry ConfigMap file look like this:
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'kepler'
          scrape_interval: 2s
          static_configs:
            - targets: ['localhost:9102']
              labels:
                exported_instance: HOSTNAME
...
exporters:
  ...
  otlp:
    endpoint: EXTERNAL_OTEL_PROTOCOL://EXTERNAL_OTEL_HOSTNAME:EXTERNAL_OTEL_PORT
    tls:
      insecure: true
The Prometheus metric label exported_instance: HOSTNAME in the prometheus receiver is added to identify the MicroShift instance.
This label is added to all Kepler-exported Prometheus metrics and is used as a unique key for the instance filter when collecting data from multiple MicroShift instances and visualizing them in the Grafana dashboard.
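Instead of editing the file by hand, the placeholders can also be replaced with sed; a minimal sketch, assuming an example external collector reachable over http at otel.example.com:4317 (adjust to your environment):
sed -i "s|EXTERNAL_OTEL_PROTOCOL://EXTERNAL_OTEL_HOSTNAME:EXTERNAL_OTEL_PORT|http://otel.example.com:4317|" edge/edge-otel-collector/1-kepler-microshift-otelconfig.yaml  # replace the exporter endpoint placeholder
sed -i "s|exported_instance: HOSTNAME|exported_instance: $(hostname)|" edge/edge-otel-collector/1-kepler-microshift-otelconfig.yaml  # set the instance label to this machine's hostname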
Run the command below to create the OTel collector ConfigMap in the kepler namespace.
oc create -n kepler -f edge/edge-otel-collector/1-kepler-microshift-otelconfig.yaml
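Optionally, verify that the ConfigMap was created:
oc get configmaps -n kepler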
- Add the OpenTelemetry Collector as a sidecar container to the Kepler DaemonSet.
Note: Update the image with any custom OpenTelemetry Collector if necessary, or use the default collector image, by editing the Kepler OpenTelemetry sidecar YAML file with the command below.
The OpenTelemetry Collector image used here is
ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.88.0
This image is also available in the Quay and Docker container registries.
vi edge/edge-otel-collector/2-kepler-patch-sidecar-otel.yaml
Run the command below to patch the kepler-exporter DaemonSet with the OpenTelemetry sidecar container
oc patch daemonset kepler-exporter -n kepler --patch-file edge/edge-otel-collector/2-kepler-patch-sidecar-otel.yaml
The OpenTelemetry sidecar attached to the Kepler DaemonSet in MicroShift is configured to send the data to an external OpenTelemetry Collector running in OpenShift.
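To confirm the sidecar was added, list the container names in the patched DaemonSet (the sidecar container name depends on the patch file):
oc get daemonset kepler-exporter -n kepler -o jsonpath='{.spec.template.spec.containers[*].name}'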
Alternate - Deploy the Kepler exporter DaemonSet from the official source code (if required)
- Create a namespace in MicroShift to deploy the Kepler DaemonSet.
oc create ns kepler
oc label ns kepler security.openshift.io/scc.podSecurityLabelSync=false
oc label ns kepler --overwrite pod-security.kubernetes.io/enforce=privileged
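Optionally, verify the labels on the namespace:
oc get namespace kepler --show-labels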
- Download the Kepler source code from the Kepler Git repo and rename it to edge/edge-kepler.
git clone https://github.com/sustainable-computing-io/kepler.git ./edge/kepler
Kepler itself provides an easy way of deploying the Kepler exporter as a DaemonSet in MicroShift using the kustomization.yaml.
Since MicroShift is a lightweight version of OpenShift, it requires SCC permissions to be added.
Edit the following lines in kustomization.yaml
vi edge-kepler/manifests/config/exporter/kustomization.yaml
1. Uncomment line 3
- openshift_scc.yaml
2. Remove the [] on line 8
patchesStrategicMerge:
3. Uncomment line 18
- ./patch/patch-openshift.yaml
- Deploy Kepler on MicroShift under the kepler namespace.
oc apply --kustomize edge/edge-kepler/manifests/config/base -n kepler
- Create the namespace used for this demo in OpenShift Container Platform
oc new-project kepler-demo
- Before setting up, install the necessary operators in the OpenShift console:
  - Red Hat OpenShift distributed tracing data collection operator (install from Operator Hub)
  - grafana-operator (install from Operator Hub, or run the following command)
    oc apply -f 0-grafana-operator.yaml
  - observability-operator (install from Operator Hub)
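Once installed, the operators' ClusterServiceVersions should appear in the cluster; a quick check (CSV names vary with operator versions):
oc get csv -A | grep -iE 'opentelemetry|grafana|observability'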
- Configure the OpenTelemetry Collector using the Red Hat OpenShift distributed tracing data collection operator (from the OpenShift Operator Hub).
oc apply -f ocp/ocp-otlp_collector/1-kepler-otel_collector.yaml
Note:
The OpenTelemetry Collector can be exposed using either a secure or an insecure method.
Secure route: The ingress configuration in the OpenTelemetryCollector CRD creates a secure route for exposing the OTel collector outside of the cluster (not tested).
ingress:
  route:
    termination: edge
  type: route
Insecure route: To expose the OpenTelemetry Collector outside the OCP cluster (insecure), a MetalLB LoadBalancer is used for this demo, since the OpenShift cluster is running on bare metal. This configuration may vary for OpenShift running in a cloud environment.
Apply the patch command to change the service type to LoadBalancer
oc patch service kepler-otel-service -n kepler-demo --type='json' -p '[{"op": "replace", "path": "/spec/type", "value": "LoadBalancer"}]'
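After the patch, MetalLB should assign an external IP to the service; verify it with:
oc get service kepler-otel-service -n kepler-demo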
- Deploy the monitoring stack to collect Prometheus data from the OpenTelemetry Collector (running in the external OpenShift) using the Observability Operator (from the OpenShift Operator Hub).
oc apply -f ocp/ocp-prometheus/1-kepler-prometheus-monitoringstack.yaml
- Deploy the Grafana dashboard to visualize the data collected in the Prometheus monitoring stack.
A) Apply the grafana CRD
To deploy Grafana with a customized Docker image, apply the Grafana YAML configured with the custom image
(a custom Docker image with the Zaga logo is used in this demo)
oc apply -f ocp/ocp-grafana/1-grafana-kepler.yaml
# or
oc apply -f ocp/ocp-grafana/1-grafana-kepler-zaga-logo.yaml
Note: For 1-grafana-kepler-zaga-logo.yaml
- 1-grafana-kepler-zaga-logo.yaml is configured with the login option enabled. If required, log in with username rhel and password rhel.
config:
  auth:
    disable_signout_menu: 'false' # If signout not needed change to true
  auth.anonymous:
    enabled: 'false' # If auto login not needed change to true
  ...
  security:
    admin_password: rhel
    admin_user: rhel
- Obtain the pull secret from the Quay registry and replace the image pull secret name in 1-grafana-kepler-zaga-logo.yaml, in the line below:
imagePullSecrets:
  - name: quay-secret-credentials
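If the quay-secret-credentials secret does not exist yet in the kepler-demo namespace, it can be created from the Quay credentials; a sketch with placeholder values:
oc create secret docker-registry quay-secret-credentials \
  --docker-server=quay.io \
  --docker-username=<quay-username> \
  --docker-password=<quay-password> \
  -n kepler-demo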
B) Apply the grafanaDatasource CRD
The GrafanaDatasource CRD is configured with the Prometheus service URL (see the deployment steps in the Prometheus stack setup)
spec:
  datasource:
    ...
    name: prometheus
    type: prometheus
    url: 'http://prometheus-kepler-prometheus.kepler-demo.svc.cluster.local:9090'
Run the command to apply grafanaDatasource CRD
oc apply -f ocp/ocp-grafana/2-grafana-datasource-kepler.yaml
C) Apply the grafanaDashboard CRD
The Grafana dashboard model is provided as a JSON file, grafana-dashboard-kepler. (This dashboard JSON is also available in the official Kepler repo.)
The dashboard JSON file is referenced in the GrafanaDashboard YAML CRD as
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana-kepler
  url: https://raw.githubusercontent.com/ZagaUS/kepler-demo/main/ocp/ocp-grafana/4-grafana-dashboard-kepler.json
An instance filter is added in the Grafana dashboard for filtering the exported power metrics from multiple MicroShift instances.
Run the command to apply the GrafanaDashboard CRD
oc apply -f ocp/ocp-grafana/3-grafana-dashboard-kepler.yaml
D) Apply the grafana route
The route for the Grafana CRD needs to be added manually, either through the web console (Administrator -> Networking -> Routes) or using the 5-grafana-dashboard-route YAML file.
Edit the YAML file to use a custom hostname if necessary, or leave the host line commented,
and verify the name of the service
spec:
  # host: >-
  #   grafana-kepler-dashboard-kepler-demo.apps.zagaopenshift.zagaopensource.com
  to:
    kind: Service
    name: grafana-kepler-service
Apply the grafana route yaml
oc apply -f ocp/ocp-grafana/5-grafana-dashboard-route.yaml
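Once the route is created, the Grafana URL can be obtained with:
oc get route -n kepler-demo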
- Added a label exported_instance in the OpenTelemetry Collector configuration on MicroShift to identify the MicroShift instance in Kepler-exported Prometheus metrics.
- Added the instance filter in the Grafana dashboard JSON file to query the Prometheus metrics data based on the exported_instance label.
- Created a custom Docker image to add the Zaga logo.
- Created a new Grafana CRD to use the Zaga logo.
- To view the actual Prometheus metrics, use the kepler-exporter service, which uses the LoadBalancer type to expose the Prometheus metrics at the /metrics endpoint on port 9102 (the default Kepler port).
Verify the external IP using
oc get svc -n kepler
and use the URL below to view the Prometheus metrics data.
http://{EXTERNAL_IP}:9102/metrics
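For a quick command-line check, the metrics endpoint can also be queried directly (assuming curl is available; Kepler metric names start with kepler_):
curl -s http://{EXTERNAL_IP}:9102/metrics | grep kepler_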
- The OpenTelemetry Collector is configured with the debug exporter, so the exported metrics data can be seen in the kepler-exporter pod (otel-collector container) logs.
oc logs -f <name of the kepler exporter pod> -n kepler # In MicroShift
oc logs -f <otel collector pod> -n kepler-demo # In OpenShift
6. References
https://github.com/sustainable-computing-io/kepler
https://github.com/redhat-et/edge-ocp-observability
https://access.redhat.com/documentation/en-us/red_hat_build_of_microshift
https://www.ibm.com/docs/ko/erqa?topic=monitoring-installing-grafana-operator