Conversation
📝 Walkthrough

Adds a new Observability Logs Module for Moesif: a Helm chart, values, templates (Fluent Bit + OpenTelemetry Collector), a module manifest, and a README documenting installation and multi-environment Moesif configuration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant FluentBit as Fluent Bit (daemonset)
    participant OTEL as OpenTelemetry Collector
    participant Moesif as Moesif API
    participant Helm as Helm / values.yaml
    Note over Helm: helm values define moesif.apps per environment
    FluentBit->>OTEL: send logs via OTLP HTTP
    OTEL->>OTEL: routing processor selects pipeline by env label (openchoreo.dev/environment)
    OTEL->>Moesif: forward to otlphttp/<env> exporter (endpoint, headers per app)
    Moesif-->>OTEL: ack / response
    OTEL-->>FluentBit: (optional) response forwarded/logged
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml`:
- Around line 27-33: The ConfigMap currently renders Moesif credentials
(X-Api-Token / X-Moesif-Application-Id) into plaintext; change this to read
those values from a Secret and inject them via environment variables instead:
stop templating (index . "api-key") and (index . "collector-application-id")
into the ConfigMap, create or reference a Kubernetes Secret with keys like
moesif_api_key and moesif_app_id, mount those into the collector
Deployment/StatefulSet as env vars (e.g., MOESIF_API_KEY, MOESIF_APP_ID) using
valueFrom.secretKeyRef, and update the collector configuration code that sets
headers to read from environment variables (or use an envsubst step) so headers
like X-Api-Token and X-Moesif-Application-Id are populated at runtime from
MOESIF_API_KEY / MOESIF_APP_ID rather than baked into the ConfigMap.
- Around line 24-33: Add render-time validation to fail fast when moesif.apps is
empty or contains invalid entries: before iterating .Values.moesif.apps use
required/empty check to error if the list is missing or empty, and inside the
range validate each item by calling required on the "env-name" key and enforce
an exclusive-or on "api-key" vs "collector-application-id" (use a conditional
and fail/required with a clear message if both or neither are set); keep the
existing usage of endpoint, env-name, api-key and collector-application-id but
replace silent omissions with these required/fail checks so the chart render
aborts on bad moesif.apps entries.
In `@observability-logs-moesif/helm/values.yaml`:
- Around line 167-168: The Helm values default exposes the collector externally
by setting service.type to LoadBalancer; change the default to ClusterIP to
avoid publishing OTLP ports outside the cluster. Update the values entry for
service (the service.type setting) to use "ClusterIP" instead of "LoadBalancer"
so the collector remains internal and Fluent Bit can continue to target the
in-cluster opentelemetry-collector service.
- Around line 138-147: Remove the unnecessary cluster-wide RBAC: when the
OpenTelemetry collector is not using a Kubernetes metadata processor, stop
creating the ClusterRole or at least remove the rules granting access to "pods"
and "replicasets" (the clusterRole.create / clusterRole.rules block that lists
apiGroups [""] resources ["pods"] and apiGroups ["apps"] resources
["replicasets"]). If you do intend to enrich logs with Kubernetes metadata
later, add the k8s metadata processor to the collector config
(templates/opentelemetry-collector/configMap.yaml) and then reintroduce the
specific RBAC rules; otherwise set clusterRole.create: false or drop those rules
to avoid granting unused cluster-wide permissions.
- Around line 121-127: The init container is over-privileged: change
securityContext to avoid privileged: true and instead run as UID 0 (keep
runAsUser: 0) and only add the minimal capability if required (e.g., use
capabilities.add: ["CHOWN"] rather than broad privileges or dropping ALL), keep
readOnlyRootFilesystem as appropriate; update the securityContext block
(securityContext, privileged, runAsUser, capabilities) to set privileged: false
and add a single capability only if a runtime error proves it necessary.
In `@observability-logs-moesif/README.md`:
- Around line 14-24: The README currently suggests passing sensitive Moesif
credentials via --set (e.g., moesif.apps[0].collector-application-id or api-key)
which will expose secrets in shell history/process args; update the docs to
remove or deprecate the --set api-key alternative and instead instruct users to
supply the Moesif API key via a values.yaml file or a Kubernetes Secret and
reference the existing Secret-based flow (mention moesif.apps[*].env-name and
collector-application-id only as non-sensitive examples), and add a short note
showing the secure pattern and pointing to the Secret-based example elsewhere in
the repo.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2c947a81-6359-4bb3-bd30-c4894ad3112f
⛔ Files ignored due to path filters (1)
`observability-logs-moesif/helm/Chart.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- observability-logs-moesif/README.md
- observability-logs-moesif/helm/.helmignore
- observability-logs-moesif/helm/Chart.yaml
- observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
- observability-logs-moesif/helm/values.yaml
- observability-logs-moesif/module.yaml
```yaml
{{- range .Values.moesif.apps }}
otlphttp/{{ index . "env-name" }}:
  endpoint: {{ .endpoint | default "https://api.moesif.net" | quote }}
  headers:
    {{- if (index . "api-key") }}
    X-Api-Token: {{ (index . "api-key") | quote }}
    {{- end }}
    {{- if (index . "collector-application-id") }}
    X-Moesif-Application-Id: {{ (index . "collector-application-id") | quote }}
    {{- end }}
```
Fail fast on invalid moesif.apps entries.
Right now the chart still renders when moesif.apps is empty, when an item is missing env-name, or when an item sets both/neither of api-key and collector-application-id. That gives you a release that installs but cannot route or authenticate logs.
Suggested render-time guards:

```diff
+{{- $apps := required "moesif.apps must contain at least one entry" .Values.moesif.apps }}
 exporters:
-  {{- range .Values.moesif.apps }}
-  otlphttp/{{ index . "env-name" }}:
+  {{- range $app := $apps }}
+  {{- $env := required "moesif.apps[].env-name is required" (index $app "env-name") }}
+  {{- $apiKey := index $app "api-key" }}
+  {{- $appID := index $app "collector-application-id" }}
+  {{- if or (and $apiKey $appID) (and (not $apiKey) (not $appID)) }}
+  {{- fail (printf "moesif.apps[%s] must set exactly one of api-key or collector-application-id" $env) }}
+  {{- end }}
+  otlphttp/{{ $env }}:
-    endpoint: {{ .endpoint | default "https://api.moesif.net" | quote }}
+    endpoint: {{ $app.endpoint | default "https://api.moesif.net" | quote }}
     headers:
-      {{- if (index . "api-key") }}
-      X-Api-Token: {{ (index . "api-key") | quote }}
+      {{- if $apiKey }}
+      X-Api-Token: {{ $apiKey | quote }}
       {{- end }}
-      {{- if (index . "collector-application-id") }}
-      X-Moesif-Application-Id: {{ (index . "collector-application-id") | quote }}
+      {{- if $appID }}
+      X-Moesif-Application-Id: {{ $appID | quote }}
       {{- end }}
   {{- end }}
```

Also applies to: 39-63
```yaml
headers:
  {{- if (index . "api-key") }}
  X-Api-Token: {{ (index . "api-key") | quote }}
  {{- end }}
  {{- if (index . "collector-application-id") }}
  X-Moesif-Application-Id: {{ (index . "collector-application-id") | quote }}
  {{- end }}
```
Don't render Moesif credentials into a ConfigMap.
Line 29 writes the API key into plaintext config, which makes it readable to anyone who can view the ConfigMap and also bakes it into Helm release state. Please switch this chart to reference an existing Secret and have the collector read the header value from an environment variable instead.
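One possible shape for this fix, as a sketch: the Secret name `moesif-credentials`, key `moesif_api_key`, and env var `MOESIF_API_KEY` are illustrative, not names taken from this chart. The OpenTelemetry Collector resolves `${env:...}` references in its config at startup, so the rendered ConfigMap never contains the credential:

```yaml
# Hypothetical Secret (name and key are illustrative), created out-of-band, e.g.:
#   kubectl create secret generic moesif-credentials \
#     --from-literal=moesif_api_key=<your-moesif-api-key>
#
# The collector Deployment then injects it as an environment variable:
# env:
#   - name: MOESIF_API_KEY
#     valueFrom:
#       secretKeyRef:
#         name: moesif-credentials
#         key: moesif_api_key

# Exporter config stays credential-free; ${env:MOESIF_API_KEY} is resolved
# by the collector at runtime, not at Helm render time:
exporters:
  otlphttp/default:
    endpoint: https://api.moesif.net
    headers:
      X-Api-Token: ${env:MOESIF_API_KEY}
```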
```yaml
securityContext:
  capabilities:
    drop:
      - ALL
  privileged: true
  readOnlyRootFilesystem: true
  runAsUser: 0
```
Avoid a privileged init container for a simple chown.
This container only fixes ownership on one mounted path, but privileged: true gives it far broader node-level access than that. Start with UID 0 without privilege escalation and only add a narrower capability if you find a runtime that truly requires it.
Suggested hardening:

```diff
 securityContext:
   capabilities:
     drop:
       - ALL
-  privileged: true
   readOnlyRootFilesystem: true
   runAsUser: 0
```
```yaml
clusterRole:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["list", "watch"]
    - apiGroups: ["apps"]
      resources: ["replicasets"]
      verbs: ["list", "watch"]
```
Drop the collector ClusterRole unless you add a Kubernetes metadata processor.
The collector config in observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml only receives OTLP, transforms, routes, and exports. It never queries the Kubernetes API, so these cluster-wide pods/replicasets permissions are unused and widen access for no benefit.
Suggested change:

```diff
 clusterRole:
-  create: true
-  rules:
-    - apiGroups: [""]
-      resources: ["pods"]
-      verbs: ["list", "watch"]
-    - apiGroups: ["apps"]
-      resources: ["replicasets"]
-      verbs: ["list","watch"]
+  create: false
```
```yaml
service:
  type: LoadBalancer
```
Don't publish the collector externally by default.
This chart already points Fluent Bit at the in-cluster opentelemetry-collector service, so LoadBalancer turns the OTLP ingest ports into an unnecessary external endpoint. ClusterIP is the safer default here.
Suggested change:

```diff
 service:
-  type: LoadBalancer
+  type: ClusterIP
```
```bash
helm upgrade --install observability-logs-moesif \
  oci://ghcr.io/openchoreo/charts/observability-logs-moesif \
  --create-namespace \
  --namespace openchoreo-observability-plane \
  --version 0.1.0 \
  --set moesif.apps[0].env-name=default \
  --set moesif.apps[0].collector-application-id=YOUR_MOESIF_APPLICATION_ID
```

> **Note:** Replace `YOUR_MOESIF_APPLICATION_ID` with your actual Moesif Application ID. Alternatively, you can use `api-key` instead of `collector-application-id`.
Don't document api-key usage via --set.
If users follow the alternative called out on Line 24 or Line 81, the key ends up in shell history and process arguments. Please steer api-key users to a values file or, better, an existing Secret-based flow instead.
Also applies to: 30-50, 79-82
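A safer pattern for `api-key` users, as a sketch (the file name `moesif-values.yaml` is illustrative), keeps the credential out of argv and shell history:

```bash
# moesif-values.yaml — keep this file out of version control
# moesif:
#   apps:
#     - env-name: default
#       api-key: "<your-moesif-api-key>"

helm upgrade --install observability-logs-moesif \
  oci://ghcr.io/openchoreo/charts/observability-logs-moesif \
  --create-namespace \
  --namespace openchoreo-observability-plane \
  --version 0.1.0 \
  -f moesif-values.yaml
```

Sourcing the key from a pre-existing Kubernetes Secret would be stronger still, since a values file on disk is only as safe as its file permissions.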
```
# Path              /var/log/containers/*dp-default-default-development-f8e58905*.log
# Exclude_Path      /var/log/containers/*_kube-system_*.log, /var/log/containers/*_observability_*.log, /var/log/containers/*_kube-public_*.log, /var/log/containers/*_kube-node-lease_*.log
```
Shall we remove these commented fields if they are unused?
Actionable comments posted: 2
♻️ Duplicate comments (4)
observability-logs-moesif/helm/values.yaml (3)
121-127: ⚠️ Potential issue | 🟠 Major

The `chown` init container is still over-privileged.

`privileged: true` grants far more node access than a recursive ownership fix needs. Start with `runAsUser: 0` and `privileged: false`, then add only the minimal capability if a specific runtime proves it necessary.
138-147: ⚠️ Potential issue | 🟠 Major

Recheck whether the collector actually needs this ClusterRole.

I don't see anything in the provided module context wiring Kubernetes metadata enrichment into the collector, so listing/watching `pods` and `replicasets` may just widen access. If `observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml` has no `k8sattributes`-style processor, disable this ClusterRole.

Verification: inspect the shipped collector config for a Kubernetes metadata processor. Expected result: if there is no `k8sattributes`/`kubernetesattributes` processor and only this values block references `pods`/`replicasets`, these RBAC rules are probably unused.

```bash
#!/bin/bash
set -euo pipefail
sed -n '1,260p' observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
echo
rg -n -C2 'k8sattributes|kubernetesattributes|pod_association|pods|replicasets' \
  observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml \
  observability-logs-moesif/helm/values.yaml
```
167-168: ⚠️ Potential issue | 🟠 Major

Keep the collector service internal by default.

Fluent Bit already talks to `opentelemetry-collector` over cluster DNS at line 64, so `LoadBalancer` only publishes the OTLP endpoint externally. `ClusterIP` is the safer default here.

observability-logs-moesif/README.md (1)
24-24: ⚠️ Potential issue | 🟠 Major

Don't route `api-key` through `--set` in the docs.

Line 24 and the multi-environment example still normalize a path that leaks the key into shell history and process args. Keep `--set` examples to non-sensitive fields, and point API-key users to `-f moesif-values.yaml` or a Secret-based flow instead.

Also applies to: 42-50
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@observability-logs-moesif/helm/values.yaml`:
- Line 20: Reduce the Fluent Bit tail buffer setting: lower the Mem_Buf_Limit
value (the Mem_Buf_Limit key) to leave headroom for Fluent Bit process, filters
and output buffering (for example drop from 256MB to a safer value like
64–128MB), or alternatively raise the pod memory limit used by this chart so the
buffer isn't near the container ceiling; update the Mem_Buf_Limit entry and/or
the pod memory limit in values.yaml consistently (also adjust the tail buffer
settings referenced elsewhere in the same block) to ensure bursts cannot OOM the
pod.
- Around line 21-26: The current fluent-bit settings use Path
/var/log/containers/*.log with Read_from_Head On which will replay full backlog
on fresh installs; change the default to Read_from_Head Off and
document/parameterize an opt-in toggle (e.g., values.yaml flag like
enableBacklogReplay or readFromHead=true) so operators can explicitly enable
replay; update the block referencing Path /var/log/containers/*.log and keep
multiline.parser docker, cri and Refresh_Interval 5 unchanged while adding a
comment explaining how to opt-in for backlog replay.
---
Duplicate comments:
In `@observability-logs-moesif/helm/values.yaml`:
- Around line 121-127: The chown init container's securityContext is
over-privileged: change privileged: true to privileged: false, keep runAsUser:
0, and remove unnecessary broad privileges; if the runtime fails to perform
recursive chown, add only the minimal capability (e.g., add capabilities: add: -
CHOWN) rather than enabling full privileged mode, and ensure
readOnlyRootFilesystem remains true and existing drop: - ALL is preserved except
for the explicitly added capability.
- Around line 138-147: The ClusterRole block (clusterRole.create,
clusterRole.rules with apiGroups/resources for "pods" and "replicasets") appears
unnecessary unless the collector config includes a Kubernetes metadata processor
(k8sattributes or kubernetesattributes/pod_association). Inspect
observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
for any k8sattributes/kubernetesattributes usage; if none exists, disable or
remove clusterRole.create and/or remove the rules that grant list/watch on
"pods" and "replicasets" to narrow RBAC scope, otherwise keep them only if the
k8s processor is present and required.
- Around line 167-168: The service default exposes the collector externally;
change the Helm values so the collector service under the "service: type" key
uses ClusterIP instead of LoadBalancer to keep the opentelemetry-collector
internal (Fluent Bit already communicates via cluster DNS to
opentelemetry-collector). Update the "service: type" value from LoadBalancer to
ClusterIP and ensure any related comments or docs referencing external exposure
are adjusted accordingly.
In `@observability-logs-moesif/README.md`:
- Line 24: The README currently shows using --set to pass sensitive values like
`api-key`/`collector-application-id` (see the Note and the multi-environment
example), which can leak secrets to shell history/process args; update the Note
to remove or reword any `--set` example for `api-key` and instead instruct users
to supply the key via a values file (`-f moesif-values.yaml`) or a Secret-based
flow, and update the multi-environment example (the block covering lines ~42-50)
to demonstrate using `moesif-values.yaml` or Kubernetes Secrets rather than
`--set`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a92dc61c-d978-4b3e-964d-93981c5de826
⛔ Files ignored due to path filters (1)
`observability-logs-moesif/helm/Chart.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- observability-logs-moesif/README.md
- observability-logs-moesif/helm/.helmignore
- observability-logs-moesif/helm/Chart.yaml
- observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
- observability-logs-moesif/helm/values.yaml
- observability-logs-moesif/module.yaml
✅ Files skipped from review due to trivial changes (2)
- observability-logs-moesif/module.yaml
- observability-logs-moesif/helm/.helmignore
🚧 Files skipped from review as they are similar to previous changes (1)
- observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
```
DB                /var/lib/fluent-bit/db/tail-container-logs.db
Exclude_Path      /var/log/containers/fluent-bit-*.log
Inotify_Watcher   false
Mem_Buf_Limit     256MB
```
Leave buffer headroom below the pod memory limit.
Line 20 sets Mem_Buf_Limit to almost the full container ceiling from Line 93. That leaves very little room for Fluent Bit itself, filters, and output buffering, so bursts can OOM the pod and drop logs. Lower the tail buffer or raise the pod limit.
Also applies to: 91-96
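One illustrative split that restores headroom (the 128MB/512Mi figures are examples, not values taken from this chart):

```yaml
# Fluent Bit tail input: cap the in-memory tail buffer well below the pod limit
#   Mem_Buf_Limit     128MB

# Pod resources in values.yaml — the gap between the tail buffer and the
# container limit absorbs the Fluent Bit process, filters, and output buffering:
resources:
  requests:
    memory: 256Mi
  limits:
    memory: 512Mi   # ~384Mi of headroom above the 128MB tail buffer
```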
```
Path              /var/log/containers/*.log
# Path            /var/log/containers/*dp-default-default-development-f8e58905*.log
# Exclude_Path    /var/log/containers/*_kube-system_*.log, /var/log/containers/*_observability_*.log, /var/log/containers/*_kube-public_*.log, /var/log/containers/*_kube-node-lease_*.log
multiline.parser  docker, cri
Read_from_Head    On
Refresh_Interval  5
```
Defaulting to full backlog replay is risky here.
With Path /var/log/containers/*.log and Read_from_Head On, a fresh install or DB loss will replay historical logs from every node. That can create a large one-time ingest spike and surprise Moesif costs. Prefer Off by default and make backlog replay opt-in.
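One way to make replay opt-in, as a sketch (the `fluentBit.readFromHead` value name and the `ternary` wiring are suggestions, not existing chart values):

```yaml
# values.yaml — backlog replay is opt-in, default off
fluentBit:
  readFromHead: false

# In the Fluent Bit [INPUT] template, derive the setting from the flag;
# sprig's ternary yields "On" only when readFromHead is true:
#   Read_from_Head    {{ ternary "On" "Off" .Values.fluentBit.readFromHead }}
```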
Signed-off-by: ruks <rukshan@wso2.com>