Keycloak 26.2.0 UI Performance Degradation #39023
Comments
Thanks for reporting this issue, but there is insufficient information or lack of steps to reproduce. Please provide additional details, otherwise this issue will be automatically closed within 14 days.
Can you provide a reproducer? If not, can you provide timings for specific UI screens / actions to highlight the level of degradation?
Could you try to enable tracing as described in https://www.keycloak.org/observability/tracing and provide a trace? If you are using Jaeger, you could either provide a screenshot, or export the trace as JSON. As an alternative, you could provide a thread dump of a Keycloak node under load, though that is usually less helpful.
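(For readers following along: a minimal sketch of what enabling tracing could look like on an operator-managed instance, assuming an OTLP-capable collector is reachable in the cluster. The option names follow the Keycloak tracing documentation linked above; the CR name and the collector service URL are placeholders.)

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc                 # placeholder CR name
spec:
  additionalOptions:
    # Turn on OpenTelemetry tracing
    - name: tracing-enabled
      value: "true"
    # OTLP gRPC endpoint of the collector / Jaeger instance (assumed service name)
    - name: tracing-endpoint
      value: "http://jaeger-collector:4317"
```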
Sorry for the late response. I was able to narrow the "problem" down. The sluggish UI is only noticeable in our Keycloak operator deployments; single-node dev instances work as fast as ever. The infrastructure around the k8s operator deployment did not change from 26.1 to 26.2. Is it possible that session affinity could be a factor? Our ingress controller is Nginx-Ingress with the following annotations.
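(The annotation list from the comment above did not survive in this thread. Purely as an illustration of what cookie-based session affinity on ingress-nginx usually looks like, not the reporter's actual configuration; the annotation keys come from the ingress-nginx documentation, everything else is a placeholder.)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak                     # placeholder
  annotations:
    # Pin each client to a single Keycloak pod via a session cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "KC_ROUTE"   # placeholder cookie name
spec:
  ingressClassName: nginx
  rules:
    - host: keycloak.example.com     # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-kc-service   # assumed name of the operator-created service
                port:
                  number: 8443
```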
Sorry, forgot to answer. Right now, sadly, we have no Jaeger deployed. Our go-live is around 1 month from now, so we have no real load on the systems; they are mostly idle.
@VonNao - As we describe in our docs, Jaeger could be run as a Pod on Kubernetes like any other, similar to how you deploy Keycloak today. While in a production environment you would want all applications to send their traces to Jaeger, it might be enough for a test environment for just Keycloak to send its traces to this test instance of Jaeger.
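(A rough sketch of such a throwaway Jaeger pod for a test environment, assuming the all-in-one image; the tag, names, and port choices are assumptions, and this is not a production setup.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
        - name: jaeger
          image: jaegertracing/all-in-one:1.57   # assumed tag
          env:
            # Make sure the OTLP receiver is on (default in recent versions)
            - name: COLLECTOR_OTLP_ENABLED
              value: "true"
          ports:
            - containerPort: 4317    # OTLP gRPC, where Keycloak sends traces
            - containerPort: 16686   # Jaeger UI
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger-collector             # matches the tracing-endpoint used above
spec:
  selector:
    app: jaeger
  ports:
    - name: otlp-grpc
      port: 4317
    - name: ui
      port: 16686
```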
Closing due to lack of recent interest. We can reopen if needed.
@ssilvert Sorry for the late response. Jaeger is up and running. Following are some screenshots from the traces, tested with normal user actions from the perspective of an administrator. Some events just need a really long time (4s+) to finish. As mentioned in another issue, when scaling down to one instance the performance is like 26.1 and earlier. The last screenshot is from a login test of a user. I also added our external monitoring as a reference: we scaled down to 1 replica at around 10:00am, and after that latency went back to normal. Since there is no problem with one instance, I would guess that it has something to do with the Infinispan cluster? If you need more information, let me know.
Preliminary analysis of how this is caused:
This leads to the following symptoms:
Possible remedies (to be verified):
Will try jdbc-ping in our test cluster.
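(For anyone else hitting this: switching the discovery stack on an operator-managed deployment is a one-option change, roughly as below. The CR name and instance count are placeholders; `jdbc-ping` uses the shared database for node discovery instead of the Kubernetes DNS-based stack.)

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc            # placeholder
spec:
  instances: 3                # placeholder replica count
  additionalOptions:
    # Use JDBC_PING for JGroups discovery instead of the default kubernetes stack
    - name: cache-stack
      value: jdbc-ping
```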
I can confirm the fix (workaround?). Thanks a lot for the pointer! Before setting it, we saw gaps in the request timeline during load testing because requests were just hanging. Now we have a constant request rate generated by our load test and answered in time by Keycloak.
Can also confirm it worked perfectly. Is there any advantage to the kubernetes cache stack vs jdbc-ping?
The As a lot of people are using
Fixes keycloak#39023 Fixes keycloak#39454 Signed-off-by: Pedro Ruivo <pruivo@redhat.com>
Added a follow-up issue for the per-destination bundler for 26.3: #39545
KC 26.2.4 was released today and includes a fix.
The versions affected by this: ISPN 15.0.14.Final and ISPN 15.0.13.Final. Due to backports, 26.0.11 was affected as well.
Before reporting an issue
Area
admin/ui
Describe the bug
We have deployed Keycloak via the Keycloak Operator on our cluster. After updating from Keycloak 26.1.2 to 26.2.0, Keycloak seems to be somewhat slow. UI operations as well as authentication seem to be slower.
For our setup:
We have around 150 LDAP federations against our Active Directory.
We deployed it on k8s via the operator and use CNPG as the Postgres cluster. Metrics show that none of the systems is anywhere near full utilization.
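(For context, a stripped-down sketch of this kind of setup: a Keycloak CR pointing at a CloudNativePG cluster. All names are placeholders; the `-rw` service and `-app` secret follow common CNPG conventions and are assumptions, not the reporter's actual manifests.)

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc                      # placeholder
spec:
  instances: 3                          # placeholder
  db:
    vendor: postgres
    host: example-pg-rw                 # assumed CNPG read-write service
    database: keycloak
    usernameSecret:
      name: example-pg-app              # assumed CNPG-generated credentials secret
      key: username
    passwordSecret:
      name: example-pg-app
      key: password
  hostname:
    hostname: keycloak.example.com      # placeholder
  http:
    tlsSecret: example-kc-tls           # placeholder TLS secret
```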
Version
26.2.0
Regression
Expected behavior
Responsiveness as in 26.1.x
Actual behavior
Degradation in responsiveness of UI operations.
How to Reproduce?
Install Keycloak 26.2.0 and manage groups etc. via the web interface.
Anything else?
No response