
Adding Moesif logs module #22

Open
ruks wants to merge 1 commit into openchoreo:main from ruks:main

Conversation


@ruks ruks commented Mar 6, 2026

  • Adding Moesif logs module

Summary by CodeRabbit

  • New Features

    • Added an Observability Logs module enabling Helm-based installation for Moesif log collection.
    • Supports multi-environment log routing and ingestion using OpenTelemetry and Fluent Bit components.
  • Documentation

    • Added a comprehensive README with prerequisites, Helm install examples, values file usage, multi-environment configuration guidance, and credential options for setup and deployment.


coderabbitai bot commented Mar 6, 2026

Warning

Rate limit exceeded

@ruks has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 2 minutes and 20 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 77e96e8c-6e9c-46b9-adcc-fe2feb050f46

📥 Commits

Reviewing files that changed from the base of the PR and between b6ab693 and f08cb34.

⛔ Files ignored due to path filters (1)
  • observability-logs-moesif/helm/Chart.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • observability-logs-moesif/README.md
  • observability-logs-moesif/helm/.helmignore
  • observability-logs-moesif/helm/Chart.yaml
  • observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
  • observability-logs-moesif/helm/values.yaml
  • observability-logs-moesif/module.yaml
📝 Walkthrough

Walkthrough

Adds a new Observability Logs Module for Moesif: Helm chart, values, templates (Fluent Bit + OpenTelemetry Collector), module manifest, and README documenting installation and multi-environment Moesif configuration.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation & Module Manifest**: `observability-logs-moesif/README.md`, `observability-logs-moesif/module.yaml` | New README with prerequisites, Helm install examples, and multi-environment `moesif.apps` guidance; `module.yaml` added (the images list is an empty placeholder). |
| **Helm Chart Metadata**: `observability-logs-moesif/helm/Chart.yaml`, `observability-logs-moesif/helm/.helmignore` | New `Chart.yaml` (apiVersion v2, v0.1.0) declaring dependencies on opentelemetry-collector and fluent-bit; `.helmignore` added to exclude VCS/backup/IDE files. |
| **Helm Templates & Values**: `observability-logs-moesif/helm/values.yaml`, `observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml` | Adds Fluent Bit configuration (inputs, filters, outputs, RBAC, init container) and an OpenTelemetry Collector ConfigMap template that generates OTLP exporters and per-environment routing pipelines based on `.Values.moesif.apps`. |

Sequence Diagram(s)

sequenceDiagram
  participant FluentBit as Fluent Bit (daemonset)
  participant OTEL as OpenTelemetry Collector
  participant Moesif as Moesif API
  participant Helm as Helm / values.yaml

  Note over Helm: helm values define `moesif.apps` per environment
  FluentBit->>OTEL: send logs via OTLP HTTP
  OTEL->>OTEL: routing processor selects pipeline by env label (openchoreo.dev/environment)
  OTEL->>Moesif: forward to otlphttp/<env> exporter (endpoint, headers per app)
  Moesif-->>OTEL: ack / response
  OTEL-->>FluentBit: (optional) response forwarded/logged
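The per-environment routing step in the diagram can be sketched roughly as follows. This is an illustration, not code from the PR: the exact keys depend on which routing component and collector version the chart uses, and the environment values `dev`/`prod` are placeholders.

```yaml
# Hedged sketch of per-environment routing (component schema assumed,
# environment values are placeholders)
processors:
  routing:
    attribute_source: resource
    from_attribute: openchoreo.dev/environment  # label named in the walkthrough
    table:
      - value: dev
        exporters: [otlphttp/dev]
      - value: prod
        exporters: [otlphttp/prod]
```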

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • akila-i
  • LakshanSS
  • NomadXD

Poem

🐰 I hopped in with charts and a plan,
Fluent logs flowing from pod to span.
Collector routes, Moesif gleams,
Multiple envs, one observability dream. 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The pull request title 'Adding Moesif logs module' directly and accurately summarizes the main change: a new Moesif logs observability module with supporting documentation and Helm chart configuration. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage; check skipped. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml`:
- Around line 27-33: The ConfigMap currently renders Moesif credentials
(X-Api-Token / X-Moesif-Application-Id) into plaintext; change this to read
those values from a Secret and inject them via environment variables instead:
stop templating (index . "api-key") and (index . "collector-application-id")
into the ConfigMap, create or reference a Kubernetes Secret with keys like
moesif_api_key and moesif_app_id, mount those into the collector
Deployment/StatefulSet as env vars (e.g., MOESIF_API_KEY, MOESIF_APP_ID) using
valueFrom.secretKeyRef, and update the collector configuration code that sets
headers to read from environment variables (or use an envsubst step) so headers
like X-Api-Token and X-Moesif-Application-Id are populated at runtime from
MOESIF_API_KEY / MOESIF_APP_ID rather than baked into the ConfigMap.
- Around line 24-33: Add render-time validation to fail fast when moesif.apps is
empty or contains invalid entries: before iterating .Values.moesif.apps use
required/empty check to error if the list is missing or empty, and inside the
range validate each item by calling required on the "env-name" key and enforce
an exclusive-or on "api-key" vs "collector-application-id" (use a conditional
and fail/required with a clear message if both or neither are set); keep the
existing usage of endpoint, env-name, api-key and collector-application-id but
replace silent omissions with these required/fail checks so the chart render
aborts on bad moesif.apps entries.

In `@observability-logs-moesif/helm/values.yaml`:
- Around line 167-168: The Helm values default exposes the collector externally
by setting service.type to LoadBalancer; change the default to ClusterIP to
avoid publishing OTLP ports outside the cluster. Update the values entry for
service (the service.type setting) to use "ClusterIP" instead of "LoadBalancer"
so the collector remains internal and Fluent Bit can continue to target the
in-cluster opentelemetry-collector service.
- Around line 138-147: Remove the unnecessary cluster-wide RBAC: when the
OpenTelemetry collector is not using a Kubernetes metadata processor, stop
creating the ClusterRole or at least remove the rules granting access to "pods"
and "replicasets" (the clusterRole.create / clusterRole.rules block that lists
apiGroups [""] resources ["pods"] and apiGroups ["apps"] resources
["replicasets"]). If you do intend to enrich logs with Kubernetes metadata
later, add the k8s metadata processor to the collector config
(templates/opentelemetry-collector/configMap.yaml) and then reintroduce the
specific RBAC rules; otherwise set clusterRole.create: false or drop those rules
to avoid granting unused cluster-wide permissions.
- Around line 121-127: The init container is over-privileged: change
securityContext to avoid privileged: true and instead run as UID 0 (keep
runAsUser: 0) and only add the minimal capability if required (e.g., use
capabilities.add: ["CHOWN"] rather than broad privileges or dropping ALL), keep
readOnlyRootFilesystem as appropriate; update the securityContext block
(securityContext, privileged, runAsUser, capabilities) to set privileged: false
and add a single capability only if a runtime error proves it necessary.

In `@observability-logs-moesif/README.md`:
- Around line 14-24: The README currently suggests passing sensitive Moesif
credentials via --set (e.g., moesif.apps[0].collector-application-id or api-key)
which will expose secrets in shell history/process args; update the docs to
remove or deprecate the --set api-key alternative and instead instruct users to
supply the Moesif API key via a values.yaml file or a Kubernetes Secret and
reference the existing Secret-based flow (mention moesif.apps[*].env-name and
collector-application-id only as non-sensitive examples), and add a short note
showing the secure pattern and pointing to the Secret-based example elsewhere in
the repo.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2c947a81-6359-4bb3-bd30-c4894ad3112f

📥 Commits

Reviewing files that changed from the base of the PR and between 7e53ae7 and 86279fc.

⛔ Files ignored due to path filters (1)
  • observability-logs-moesif/helm/Chart.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • observability-logs-moesif/README.md
  • observability-logs-moesif/helm/.helmignore
  • observability-logs-moesif/helm/Chart.yaml
  • observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
  • observability-logs-moesif/helm/values.yaml
  • observability-logs-moesif/module.yaml

Comment on lines +24 to +33
{{- range .Values.moesif.apps }}
otlphttp/{{ index . "env-name" }}:
  endpoint: {{ .endpoint | default "https://api.moesif.net" | quote }}
  headers:
    {{- if (index . "api-key") }}
    X-Api-Token: {{ (index . "api-key") | quote }}
    {{- end }}
    {{- if (index . "collector-application-id") }}
    X-Moesif-Application-Id: {{ (index . "collector-application-id") | quote }}
    {{- end }}

⚠️ Potential issue | 🟠 Major

Fail fast on invalid moesif.apps entries.

Right now the chart still renders when moesif.apps is empty, when an item is missing env-name, or when an item sets both/neither of api-key and collector-application-id. That gives you a release that installs but cannot route or authenticate logs.

Suggested render-time guards
+    {{- $apps := required "moesif.apps must contain at least one entry" .Values.moesif.apps }}
     exporters:
-      {{- range .Values.moesif.apps }}
-      otlphttp/{{ index . "env-name" }}:
+      {{- range $app := $apps }}
+      {{- $env := required "moesif.apps[].env-name is required" (index $app "env-name") }}
+      {{- $apiKey := index $app "api-key" }}
+      {{- $appID := index $app "collector-application-id" }}
+      {{- if or (and $apiKey $appID) (and (not $apiKey) (not $appID)) }}
+      {{- fail (printf "moesif.apps[%s] must set exactly one of api-key or collector-application-id" $env) }}
+      {{- end }}
+      otlphttp/{{ $env }}:
-        endpoint: {{ .endpoint | default "https://api.moesif.net" | quote }}
+        endpoint: {{ $app.endpoint | default "https://api.moesif.net" | quote }}
         headers:
-          {{- if (index . "api-key") }}
-          X-Api-Token: {{ (index . "api-key") | quote }}
+          {{- if $apiKey }}
+          X-Api-Token: {{ $apiKey | quote }}
           {{- end }}
-          {{- if (index . "collector-application-id") }}
-          X-Moesif-Application-Id: {{ (index . "collector-application-id") | quote }}
+          {{- if $appID }}
+          X-Moesif-Application-Id: {{ $appID | quote }}
           {{- end }}
       {{- end }}

Also applies to: 39-63

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml`
around lines 24 - 33, Add render-time validation to fail fast when moesif.apps
is empty or contains invalid entries: before iterating .Values.moesif.apps use
required/empty check to error if the list is missing or empty, and inside the
range validate each item by calling required on the "env-name" key and enforce
an exclusive-or on "api-key" vs "collector-application-id" (use a conditional
and fail/required with a clear message if both or neither are set); keep the
existing usage of endpoint, env-name, api-key and collector-application-id but
replace silent omissions with these required/fail checks so the chart render
aborts on bad moesif.apps entries.

Comment on lines +27 to +33
headers:
  {{- if (index . "api-key") }}
  X-Api-Token: {{ (index . "api-key") | quote }}
  {{- end }}
  {{- if (index . "collector-application-id") }}
  X-Moesif-Application-Id: {{ (index . "collector-application-id") | quote }}
  {{- end }}

⚠️ Potential issue | 🟠 Major

Don't render Moesif credentials into a ConfigMap.

Line 29 writes the API key into plaintext config, which makes it readable to anyone who can view the ConfigMap and also bakes it into Helm release state. Please switch this chart to reference an existing Secret and have the collector read the header value from an environment variable instead.
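A rough sketch of the Secret-based pattern this comment describes; the Secret name, key, and env var below are illustrative, and the `${env:...}` expansion assumes a collector build that supports environment-variable substitution in its config:

```yaml
# Illustrative only: expose the key to the collector pod via a Secret...
extraEnvs:
  - name: MOESIF_API_KEY
    valueFrom:
      secretKeyRef:
        name: moesif-credentials  # hypothetical Secret, created out of band
        key: moesif_api_key

# ...and let the collector expand it at runtime, so the rendered
# ConfigMap never contains the credential itself
exporters:
  otlphttp/default:
    endpoint: https://api.moesif.net
    headers:
      X-Api-Token: ${env:MOESIF_API_KEY}
```

Rotating the key then means updating the Secret and restarting the collector, with no change to Helm release state.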

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml`
around lines 27 - 33, The ConfigMap currently renders Moesif credentials
(X-Api-Token / X-Moesif-Application-Id) into plaintext; change this to read
those values from a Secret and inject them via environment variables instead:
stop templating (index . "api-key") and (index . "collector-application-id")
into the ConfigMap, create or reference a Kubernetes Secret with keys like
moesif_api_key and moesif_app_id, mount those into the collector
Deployment/StatefulSet as env vars (e.g., MOESIF_API_KEY, MOESIF_APP_ID) using
valueFrom.secretKeyRef, and update the collector configuration code that sets
headers to read from environment variables (or use an envsubst step) so headers
like X-Api-Token and X-Moesif-Application-Id are populated at runtime from
MOESIF_API_KEY / MOESIF_APP_ID rather than baked into the ConfigMap.

Comment on lines +121 to +127
securityContext:
  capabilities:
    drop:
      - ALL
  privileged: true
  readOnlyRootFilesystem: true
  runAsUser: 0

⚠️ Potential issue | 🟠 Major

Avoid a privileged init container for a simple chown.

This container only fixes ownership on one mounted path, but privileged: true gives it far broader node-level access than that. Start with UID 0 without privilege escalation and only add a narrower capability if you find a runtime that truly requires it.

Suggested hardening
       securityContext:
         capabilities:
           drop:
             - ALL
-        privileged: true
         readOnlyRootFilesystem: true
         runAsUser: 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/helm/values.yaml` around lines 121 - 127, The init
container is over-privileged: change securityContext to avoid privileged: true
and instead run as UID 0 (keep runAsUser: 0) and only add the minimal capability
if required (e.g., use capabilities.add: ["CHOWN"] rather than broad privileges
or dropping ALL), keep readOnlyRootFilesystem as appropriate; update the
securityContext block (securityContext, privileged, runAsUser, capabilities) to
set privileged: false and add a single capability only if a runtime error proves
it necessary.

Comment on lines +138 to +147
clusterRole:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["list", "watch"]
    - apiGroups: ["apps"]
      resources: ["replicasets"]
      verbs: ["list", "watch"]


⚠️ Potential issue | 🟠 Major

Drop the collector ClusterRole unless you add a Kubernetes metadata processor.

The collector config in observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml only receives OTLP, transforms, routes, and exports. It never queries the Kubernetes API, so these cluster-wide pods/replicasets permissions are unused and widen access for no benefit.

Suggested change
   clusterRole:
-    create: true
-    rules:
-      - apiGroups: [""]
-        resources: ["pods"]
-        verbs: ["list", "watch"]
-      - apiGroups: ["apps"]
-        resources: ["replicasets"]
-        verbs: ["list","watch"]
+    create: false
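Conversely, if Kubernetes metadata enrichment is added later, these permissions would typically back a `k8sattributes`-style processor. A hedged sketch of what that might look like (not part of this chart; field names assume the contrib collector's processor):

```yaml
# Illustrative: only with a processor like this do the pods/replicasets
# RBAC rules become necessary
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.pod.name
        - k8s.deployment.name
```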
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/helm/values.yaml` around lines 138 - 147, Remove
the unnecessary cluster-wide RBAC: when the OpenTelemetry collector is not using
a Kubernetes metadata processor, stop creating the ClusterRole or at least
remove the rules granting access to "pods" and "replicasets" (the
clusterRole.create / clusterRole.rules block that lists apiGroups [""] resources
["pods"] and apiGroups ["apps"] resources ["replicasets"]). If you do intend to
enrich logs with Kubernetes metadata later, add the k8s metadata processor to
the collector config (templates/opentelemetry-collector/configMap.yaml) and then
reintroduce the specific RBAC rules; otherwise set clusterRole.create: false or
drop those rules to avoid granting unused cluster-wide permissions.

Comment on lines +167 to +168
service:
  type: LoadBalancer

⚠️ Potential issue | 🟠 Major

Don't publish the collector externally by default.

This chart already points Fluent Bit at the in-cluster opentelemetry-collector service, so LoadBalancer turns the OTLP ingest ports into an unnecessary external endpoint. ClusterIP is the safer default here.

Suggested change
   service:
-    type: LoadBalancer
+    type: ClusterIP
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/helm/values.yaml` around lines 167 - 168, The Helm
values default exposes the collector externally by setting service.type to
LoadBalancer; change the default to ClusterIP to avoid publishing OTLP ports
outside the cluster. Update the values entry for service (the service.type
setting) to use "ClusterIP" instead of "LoadBalancer" so the collector remains
internal and Fluent Bit can continue to target the in-cluster
opentelemetry-collector service.

Comment on lines +14 to +24
```bash
helm upgrade --install observability-logs-moesif \
oci://ghcr.io/openchoreo/charts/observability-logs-moesif \
--create-namespace \
--namespace openchoreo-observability-plane \
--version 0.1.0 \
--set moesif.apps[0].env-name=default \
--set moesif.apps[0].collector-application-id=YOUR_MOESIF_APPLICATION_ID
```

> **Note:** Replace `YOUR_MOESIF_APPLICATION_ID` with your actual Moesif Application ID. Alternatively, you can use `api-key` instead of `collector-application-id`.

⚠️ Potential issue | 🟠 Major

Don't document api-key usage via --set.

If users follow the alternative called out on Line 24 or Line 81, the key ends up in shell history and process arguments. Please steer api-key users to a values file or, better, an existing Secret-based flow instead.

Also applies to: 30-50, 79-82
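A sketch of the values-file alternative the comment points to; the file name `moesif-values.yaml` is illustrative:

```yaml
# moesif-values.yaml (illustrative; keep this file out of version control)
moesif:
  apps:
    - env-name: default
      api-key: YOUR_MOESIF_API_KEY
```

Installing with `helm upgrade --install ... -f moesif-values.yaml` keeps the key out of shell history and `ps` output; a Secret-based flow avoids plaintext on disk as well.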

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/README.md` around lines 14 - 24, The README
currently suggests passing sensitive Moesif credentials via --set (e.g.,
moesif.apps[0].collector-application-id or api-key) which will expose secrets in
shell history/process args; update the docs to remove or deprecate the --set
api-key alternative and instead instruct users to supply the Moesif API key via
a values.yaml file or a Kubernetes Secret and reference the existing
Secret-based flow (mention moesif.apps[*].env-name and collector-application-id
only as non-sensitive examples), and add a short note showing the secure pattern
and pointing to the Secret-based example elsewhere in the repo.

Comment on lines +22 to +23
# Path /var/log/containers/*dp-default-default-development-f8e58905*.log
# Exclude_Path /var/log/containers/*_kube-system_*.log, /var/log/containers/*_observability_*.log, /var/log/containers/*_kube-public_*.log, /var/log/containers/*_kube-node-lease_*.log
**Contributor:** Shall we remove these commented fields if they are unused?

**Author:** Fixed


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (4)
observability-logs-moesif/helm/values.yaml (3)

121-127: ⚠️ Potential issue | 🟠 Major

The chown init container is still over-privileged.

privileged: true grants far more node access than a recursive ownership fix needs. Start with runAsUser: 0 and privileged: false, then add only the minimal capability if a specific runtime proves it necessary.
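A minimal hardened variant of that block, as a sketch; the commented-out `CHOWN` capability is an assumption to reach for only if a plain UID-0 chown fails on a given runtime:

```yaml
securityContext:
  runAsUser: 0
  privileged: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL
    # add:
    #   - CHOWN  # uncomment only if a runtime error proves it necessary
```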

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/helm/values.yaml` around lines 121 - 127, The chown
init container's securityContext is over-privileged: change privileged: true to
privileged: false, keep runAsUser: 0, and remove unnecessary broad privileges;
if the runtime fails to perform recursive chown, add only the minimal capability
(e.g., add capabilities: add: - CHOWN) rather than enabling full privileged
mode, and ensure readOnlyRootFilesystem remains true and existing drop: - ALL is
preserved except for the explicitly added capability.

138-147: ⚠️ Potential issue | 🟠 Major

Recheck whether the collector actually needs this ClusterRole.

I don't see anything in the provided module context wiring Kubernetes metadata enrichment into the collector, so listing/watching pods and replicasets may just widen access. If observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml has no k8sattributes-style processor, disable this ClusterRole.

Verification: inspect the shipped collector config for a Kubernetes metadata processor. Expected result: if there is no k8sattributes/kubernetesattributes processor and only this values block references pods/replicasets, these RBAC rules are probably unused.

#!/bin/bash
set -euo pipefail

sed -n '1,260p' observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
echo
rg -n -C2 'k8sattributes|kubernetesattributes|pod_association|pods|replicasets' \
  observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml \
  observability-logs-moesif/helm/values.yaml
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/helm/values.yaml` around lines 138 - 147, The
ClusterRole block (clusterRole.create, clusterRole.rules with
apiGroups/resources for "pods" and "replicasets") appears unnecessary unless the
collector config includes a Kubernetes metadata processor (k8sattributes or
kubernetesattributes/pod_association). Inspect
observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
for any k8sattributes/kubernetesattributes usage; if none exists, disable or
remove clusterRole.create and/or remove the rules that grant list/watch on
"pods" and "replicasets" to narrow RBAC scope, otherwise keep them only if the
k8s processor is present and required.

167-168: ⚠️ Potential issue | 🟠 Major

Keep the collector service internal by default.

Fluent Bit already talks to opentelemetry-collector over cluster DNS at Line 64, so LoadBalancer only publishes the OTLP endpoint externally. ClusterIP is the safer default here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/helm/values.yaml` around lines 167 - 168, The
service default exposes the collector externally; change the Helm values so the
collector service under the "service: type" key uses ClusterIP instead of
LoadBalancer to keep the opentelemetry-collector internal (Fluent Bit already
communicates via cluster DNS to opentelemetry-collector). Update the "service:
type" value from LoadBalancer to ClusterIP and ensure any related comments or
docs referencing external exposure are adjusted accordingly.
observability-logs-moesif/README.md (1)

24-24: ⚠️ Potential issue | 🟠 Major

Don't route api-key through --set in the docs.

Line 24 and the multi-environment example still show a pattern that leaks the key into shell history and process args. Keep `--set` examples to non-sensitive fields, and point API-key users to `-f moesif-values.yaml` or a Secret-based flow instead.

Also applies to: 42-50

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/README.md` at line 24, The README currently shows
using --set to pass sensitive values like `api-key`/`collector-application-id`
(see the Note and the multi-environment example), which can leak secrets to
shell history/process args; update the Note to remove or reword any `--set`
example for `api-key` and instead instruct users to supply the key via a values
file (`-f moesif-values.yaml`) or a Secret-based flow, and update the
multi-environment example (the block covering lines ~42-50) to demonstrate using
`moesif-values.yaml` or Kubernetes Secrets rather than `--set`.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@observability-logs-moesif/helm/values.yaml`:
- Line 20: Reduce the Fluent Bit tail buffer setting: lower the Mem_Buf_Limit
value (the Mem_Buf_Limit key) to leave headroom for Fluent Bit process, filters
and output buffering (for example drop from 256MB to a safer value like
64–128MB), or alternatively raise the pod memory limit used by this chart so the
buffer isn't near the container ceiling; update the Mem_Buf_Limit entry and/or
the pod memory limit in values.yaml consistently (also adjust the tail buffer
settings referenced elsewhere in the same block) to ensure bursts cannot OOM the
pod.
- Around line 21-26: The current fluent-bit settings use Path
/var/log/containers/*.log with Read_from_Head On which will replay full backlog
on fresh installs; change the default to Read_from_Head Off and
document/parameterize an opt-in toggle (e.g., values.yaml flag like
enableBacklogReplay or readFromHead=true) so operators can explicitly enable
replay; update the block referencing Path /var/log/containers/*.log and keep
multiline.parser docker, cri and Refresh_Interval 5 unchanged while adding a
comment explaining how to opt-in for backlog replay.

---

Duplicate comments:
In `@observability-logs-moesif/helm/values.yaml`:
- Around line 121-127: The chown init container's securityContext is
over-privileged: change privileged: true to privileged: false, keep runAsUser:
0, and remove unnecessary broad privileges; if the runtime fails to perform
recursive chown, add only the minimal capability (e.g., add capabilities: add: -
CHOWN) rather than enabling full privileged mode, and ensure
readOnlyRootFilesystem remains true and existing drop: - ALL is preserved except
for the explicitly added capability.
- Around line 138-147: The ClusterRole block (clusterRole.create,
clusterRole.rules with apiGroups/resources for "pods" and "replicasets") appears
unnecessary unless the collector config includes a Kubernetes metadata processor
(k8sattributes or kubernetesattributes/pod_association). Inspect
observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
for any k8sattributes/kubernetesattributes usage; if none exists, disable or
remove clusterRole.create and/or remove the rules that grant list/watch on
"pods" and "replicasets" to narrow RBAC scope, otherwise keep them only if the
k8s processor is present and required.
- Around line 167-168: The service default exposes the collector externally;
change the Helm values so the collector service under the "service: type" key
uses ClusterIP instead of LoadBalancer to keep the opentelemetry-collector
internal (Fluent Bit already communicates via cluster DNS to
opentelemetry-collector). Update the "service: type" value from LoadBalancer to
ClusterIP and ensure any related comments or docs referencing external exposure
are adjusted accordingly.

In `@observability-logs-moesif/README.md`:
- Line 24: The README currently shows using --set to pass sensitive values like
`api-key`/`collector-application-id` (see the Note and the multi-environment
example), which can leak secrets to shell history/process args; update the Note
to remove or reword any `--set` example for `api-key` and instead instruct users
to supply the key via a values file (`-f moesif-values.yaml`) or a Secret-based
flow, and update the multi-environment example (the block covering lines ~42-50)
to demonstrate using `moesif-values.yaml` or Kubernetes Secrets rather than
`--set`.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a92dc61c-d978-4b3e-964d-93981c5de826

📥 Commits

Reviewing files that changed from the base of the PR and between 86279fc and b6ab693.

⛔ Files ignored due to path filters (1)
  • observability-logs-moesif/helm/Chart.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • observability-logs-moesif/README.md
  • observability-logs-moesif/helm/.helmignore
  • observability-logs-moesif/helm/Chart.yaml
  • observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml
  • observability-logs-moesif/helm/values.yaml
  • observability-logs-moesif/module.yaml
✅ Files skipped from review due to trivial changes (2)
  • observability-logs-moesif/module.yaml
  • observability-logs-moesif/helm/.helmignore
🚧 Files skipped from review as they are similar to previous changes (1)
  • observability-logs-moesif/helm/templates/opentelemetry-collector/configMap.yaml

DB /var/lib/fluent-bit/db/tail-container-logs.db
Exclude_Path /var/log/containers/fluent-bit-*.log
Inotify_Watcher false
Mem_Buf_Limit 256MB

⚠️ Potential issue | 🟠 Major

Leave buffer headroom below the pod memory limit.

Line 20 sets Mem_Buf_Limit to almost the full container ceiling from Line 93. That leaves very little room for Fluent Bit itself, filters, and output buffering, so bursts can OOM the pod and drop logs. Lower the tail buffer or raise the pod limit.

Also applies to: 91-96
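With illustrative numbers rather than tuned values, the headroom fix could look like the sketch below (structure assumes the fluent-bit chart's `config.inputs` string):

```yaml
# values.yaml sketch: keep the tail buffer a fraction of the container ceiling
config:
  inputs: |
    [INPUT]
        Name            tail
        Mem_Buf_Limit   64MB   # well under the 256Mi container limit below
resources:
  limits:
    memory: 256Mi
```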

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/helm/values.yaml` at line 20, Reduce the Fluent Bit
tail buffer setting: lower the Mem_Buf_Limit value (the Mem_Buf_Limit key) to
leave headroom for Fluent Bit process, filters and output buffering (for example
drop from 256MB to a safer value like 64–128MB), or alternatively raise the pod
memory limit used by this chart so the buffer isn't near the container ceiling;
update the Mem_Buf_Limit entry and/or the pod memory limit in values.yaml
consistently (also adjust the tail buffer settings referenced elsewhere in the
same block) to ensure bursts cannot OOM the pod.

Comment on lines +21 to +26
Path /var/log/containers/*.log
# Path /var/log/containers/*dp-default-default-development-f8e58905*.log
# Exclude_Path /var/log/containers/*_kube-system_*.log, /var/log/containers/*_observability_*.log, /var/log/containers/*_kube-public_*.log, /var/log/containers/*_kube-node-lease_*.log
multiline.parser docker, cri
Read_from_Head On
Refresh_Interval 5

⚠️ Potential issue | 🟠 Major

Defaulting to full backlog replay is risky here.

With Path /var/log/containers/*.log and Read_from_Head On, a fresh install or DB loss will replay historical logs from every node. That can create a large one-time ingest spike and surprise Moesif costs. Prefer Off by default and make backlog replay opt-in.
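An opt-in toggle might look like the sketch below; the `readFromHead` flag name is hypothetical, and the template line relies on the Sprig `ternary` function available in Helm templates:

```yaml
# values.yaml (hypothetical flag; defaults to no backlog replay)
fluentBit:
  readFromHead: false

# In the tail input template, render On/Off from the flag:
#   Read_from_Head {{ ternary "On" "Off" (.Values.fluentBit.readFromHead | default false) }}
```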

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@observability-logs-moesif/helm/values.yaml` around lines 21 - 26, The current
fluent-bit settings use Path /var/log/containers/*.log with Read_from_Head On
which will replay full backlog on fresh installs; change the default to
Read_from_Head Off and document/parameterize an opt-in toggle (e.g., values.yaml
flag like enableBacklogReplay or readFromHead=true) so operators can explicitly
enable replay; update the block referencing Path /var/log/containers/*.log and
keep multiline.parser docker, cri and Refresh_Interval 5 unchanged while adding
a comment explaining how to opt-in for backlog replay.

Signed-off-by: ruks <rukshan@wso2.com>
