Operator fails to create ConfigMap chi-storage-core #1710


Open
tanner-bruce opened this issue May 15, 2025 · 4 comments

@tanner-bruce
Contributor

Same as #1444, but I cannot re-open that issue. Operator version v0.24.5.

W0515 19:32:15.236346       1 cr.go:121] statusUpdateRetry():clickhouse-core/core/13fbe608-fb22-49ec-93b3-d0f3786c09e9:got error, will retry. err: "ConfigMap \"chi-storage-core\" is invalid: []: Too long: must have at most 1048576 bytes

The operator has never successfully created the ConfigMap, so I cannot see what is in it.
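
The rejected object is the operator's internal storage ConfigMap: the log line comes from statusUpdateRetry() in cr.go, which appears to persist the normalized CHI into chi-storage-<name>, and Kubernetes validation caps a ConfigMap at 1 MiB (the 1048576 bytes in the error). Since the ConfigMap never materializes, a rough way to gauge the payload is to measure the CHI object itself (a sketch assuming the CHI is named core in namespace clickhouse-core, per the clickhouse-core/core prefix in the log, and that the chi short name is installed by the operator's CRD):

    # Approximate the size of the CHI the operator must persist; if this is
    # anywhere near 1 MiB, the chi-storage ConfigMap cannot hold it.
    kubectl get chi core -n clickhouse-core -o yaml | wc -c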

@alex-zaitsev
Member

@tanner-bruce, how many nodes do you have?

@tanner-bruce
Contributor · Author
commented May 18, 2025

> @tanner-bruce, how many nodes do you have?

62. The CHI has inline XML configs, including TTLs for the system tables, specified in it.

@alex-zaitsev
Member

Do you have settings defined separately for every shard/replica?

I am asking because we have clusters with 200+ nodes, and those fit into the chi-storage ConfigMap. So there must be something non-standard in your configuration that inflates it so much.
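
For context, where files and settings sit in the CHI determines how often they are repeated in the generated configuration: per-replica entries are emitted once per host, while cluster-level entries appear once per cluster. A minimal sketch of the two placements (field names follow the ClickHouseInstallation schema; cluster and file names are illustrative):

    spec:
      configuration:
        clusters:
          - name: core
            # Cluster-level files: one copy per cluster.
            files:
              shared.xml: |
                <clickhouse>...</clickhouse>
            layout:
              shards:
                - replicas:
                    # Replica-level files: one copy per host, so a 62-node
                    # cluster carries 62 copies in the generated config.
                    - files:
                        per_replica.xml: |
                          <clickhouse>...</clickhouse>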

@mklocke-shopify

> Do you have settings defined separately for every shard/replica?

We do not have settings defined per shard/replica. They are defined per cluster, and we run two clusters in the same CHI.

We do have a lot of inline XML configuration that is duplicated per cluster; for example, the system-table TTLs mentioned above make up a big chunk:

      - files:
          log_ttls.xml: |-
            <clickhouse>
                <query_log replace="1">
                    <database>system</database>
                    <table>query_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 30 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </query_log>

                <text_log replace="1">
                    <database>system</database>
                    <table>text_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 30 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </text_log>

                <metric_log replace="1">
                    <database>system</database>
                    <table>metric_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 30 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </metric_log>

                <query_metric_log replace="1">
                    <database>system</database>
                    <table>query_metric_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 30 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </query_metric_log>

                <processors_profile_log replace="1">
                    <database>system</database>
                    <table>processors_profile_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 30 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </processors_profile_log>

                <opentelemetry_span_log replace="1">
                    <database>system</database>
                    <table>opentelemetry_span_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 1 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </opentelemetry_span_log>

                <error_log replace="1">
                    <database>system</database>
                    <table>error_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 30 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </error_log>

                <crash_log replace="1">
                    <database>system</database>
                    <table>crash_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 30 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </crash_log>
            </clickhouse>

And we have even more XML like this for storage, certificates, and mTLS user configuration.
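
If the duplicated XML really is identical across the two clusters, one possible mitigation (a sketch, not a confirmed fix for the 1 MiB limit) is to hoist the shared files from each cluster entry up to the top-level spec.configuration.files section, which applies to every cluster in the CHI, so each file is carried once instead of once per cluster:

    spec:
      configuration:
        # Shared by all clusters in this CHI: one copy of log_ttls.xml
        # instead of one per cluster entry.
        files:
          log_ttls.xml: |-
            <clickhouse>
                <query_log replace="1">
                    <database>system</database>
                    <table>query_log</table>
                    <engine>ENGINE = MergeTree PARTITION BY (event_date)
                        ORDER BY (event_time)
                        TTL event_date + INTERVAL 30 DAY DELETE
                    </engine>
                    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
                </query_log>
                <!-- ...remaining system log tables as above... -->
            </clickhouse>
        clusters:
          - name: cluster-a   # illustrative names; no per-cluster files block
          - name: cluster-b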
