
OpAMP data mapping #6330

Draft
juliaElastic wants to merge 37 commits into elastic:main from juliaElastic:opamp-data-mapping

Conversation


@juliaElastic juliaElastic commented Feb 11, 2026

What is the problem this PR solves?

Map data from the OpAMP AgentToServer message to Agent fields in the .fleet-agents index.

Depends on #6270 being merged first; this PR will be reopened after that.

Data mapping changes are in this commit: 359d1af#diff-8eae4f76576728f1316ae744b620c2dac1f0ea3964426990df7329dcb9fff745

How does this PR solve the problem?

Add OpAMP message fields to Agent document:

  • convert capabilities to a string array
  • convert the effective config to a JSON object and sanitize it
  • add health and set last_checkin_status, last_checkin_message, etc.
  • add identifying and non-identifying attributes
  • add the sequence number
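In OpAMP, capabilities arrive as a uint64 bitmask on the AgentToServer message. A minimal sketch of the bitmask-to-string-array conversion described above (the flag names and bit positions are taken from the OpAMP spec, but this is illustrative, not the PR's actual code):

```go
package main

import "fmt"

// Illustrative subset of OpAMP AgentCapabilities bit flags; see the
// OpAMP protobuf spec for the full list.
const (
	capReportsStatus          uint64 = 1 << 0
	capAcceptsRemoteConfig    uint64 = 1 << 1
	capReportsEffectiveConfig uint64 = 1 << 2
	capReportsHealth          uint64 = 1 << 11
)

var capNames = []struct {
	bit  uint64
	name string
}{
	{capReportsStatus, "ReportsStatus"},
	{capAcceptsRemoteConfig, "AcceptsRemoteConfig"},
	{capReportsEffectiveConfig, "ReportsEffectiveConfig"},
	{capReportsHealth, "ReportsHealth"},
}

// capabilityStrings converts the capabilities bitmask from an OpAMP
// AgentToServer message into the human-readable string array shape
// stored on the agent document.
func capabilityStrings(mask uint64) []string {
	var out []string
	for _, c := range capNames {
		if mask&c.bit != 0 {
			out = append(out, c.name)
		}
	}
	return out
}

func main() {
	// ReportsStatus | ReportsEffectiveConfig
	fmt.Println(capabilityStrings(0b101))
}
```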

How to test this PR locally

Follow instructions in https://github.com/ycombinator/fleet-server/blob/d9271fa723bf189f16c086559626aad09315637a/docs/opamp.md

Download and extract the otel collector, e.g.:

curl -L -O https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.144.0/otelcol-contrib_0.144.0_darwin_arm64.tar.gz
mkdir -p otelcol-contrib_0.144.0_darwin_arm64 && tar -xzf otelcol-contrib_0.144.0_darwin_arm64.tar.gz -C otelcol-contrib_0.144.0_darwin_arm64

Create an otel config that includes system fields and internal telemetry, e.g. otel-opamp.yaml:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  resourcedetection:
    detectors: ["system","env"]
    system:
      hostname_sources: ["os"]
      resource_attributes:
        host.name:
          enabled: true
        host.arch:
          enabled: true
        os.description:
          enabled: true
        os.type:
          enabled: true

exporters:
  debug:
    verbosity: detailed
  elasticsearch/otel:
    endpoints: [ "http://localhost:9200" ]
    api_key: ${env:ES_API_KEY} 
    mapping:
      mode: otel
  otlp:
    endpoint: "http://localhost:4317"
    tls:
      insecure: true

extensions:
  opamp:
    server:
      http:
        endpoint: http://localhost:8220/v1/opamp
        tls:
          insecure: true
        headers:
          Authorization: ApiKey ${env:FLEET_ENROLLMENT_TOKEN}
    instance_uid: ${env:INSTANCE_UID}
    capabilities:
      reports_effective_config: true

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [resourcedetection]
      exporters: [elasticsearch/otel]
  extensions: [opamp]

  # publish collector internal telemetry
  telemetry:
    metrics:
      level: detailed
      readers:
        - periodic:
            interval: 30000
            exporter:
              otlp:
                protocol: grpc
                endpoint: http://localhost:4317
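The opamp extension above authenticates by sending an Authorization: ApiKey <token> header on each HTTP request to fleet-server. A minimal sketch of parsing that scheme on the server side (illustrative only; fleet-server's real auth path differs):

```go
package main

import (
	"fmt"
	"strings"
)

// parseAPIKey extracts the key from an "Authorization: ApiKey <key>"
// header value. Illustrative sketch, not fleet-server's actual code.
func parseAPIKey(header string) (string, bool) {
	const scheme = "ApiKey "
	if !strings.HasPrefix(header, scheme) {
		return "", false
	}
	key := strings.TrimSpace(strings.TrimPrefix(header, scheme))
	return key, key != ""
}

func main() {
	fmt.Println(parseAPIKey("ApiKey abc123=="))
}
```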

Create API keys and start the otel collector:

 cd ~/Downloads/otelcol-contrib_0.144.0_darwin_arm64
 
 export INSTANCE_UID=<uuid> # e.g. "519b8d7a-2da8-7657-b52d-492a9de33313"
 export OTEL_RESOURCE_ATTRIBUTES="service.instance.id=$INSTANCE_UID" # to include instance id in internal telemetry data
 export ES_API_KEY=<api_key> # ES API key from observability onboarding UI
 export FLEET_ENROLLMENT_TOKEN=<enrollment_token> 
 ./otelcol-contrib --config ./otel-opamp.yaml
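Note that the effective config the collector reports back over OpAMP contains the elasticsearch exporter's resolved api_key, which is why the PR sanitizes it before storing it on the agent document. A minimal sketch of key-based redaction (the key list and the [REDACTED] placeholder are assumptions, not the PR's exact rules; JSON is used here for brevity):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// sanitize walks a decoded config tree and redacts values whose keys
// look secret-bearing. The key list is illustrative only.
func sanitize(node any) any {
	switch v := node.(type) {
	case map[string]any:
		for k, child := range v {
			lk := strings.ToLower(k)
			if strings.Contains(lk, "api_key") ||
				strings.Contains(lk, "password") ||
				strings.Contains(lk, "token") {
				v[k] = "[REDACTED]"
			} else {
				v[k] = sanitize(child)
			}
		}
		return v
	case []any:
		for i, child := range v {
			v[i] = sanitize(child)
		}
		return v
	default:
		return node
	}
}

func main() {
	raw := []byte(`{"exporters":{"elasticsearch/otel":{"endpoints":["http://localhost:9200"],"api_key":"secret"}}}`)
	var cfg map[string]any
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(sanitize(cfg))
	fmt.Println(string(out))
}
```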

Design Checklist

  • I have ensured my design is stateless and will work when multiple fleet-server instances are behind a load balancer.
  • I have or intend to scale test my changes, ensuring it will work reliably with 100K+ agents connected.
  • I have included fail safe mechanisms to limit the load on fleet-server: rate limiting, circuit breakers, caching, load shedding, etc.

Checklist

  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding change to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in ./changelog/fragments using the changelog tool

Related issues

Relates https://github.com/elastic/ingest-dev/issues/6982

ycombinator and others added 7 commits February 6, 2026 11:06
The test previously referenced ErrOpAMPDisabled and handleOpAMP which
no longer exist. The feature flag check now happens at route registration
time, so test the Enabled() method directly instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Wire up pathToOperation to recognize /v1/opamp and add the opamp case
to the limiter middleware. Also apply the limiter middleware to the
OpAMP route handler in server.go.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

mergify bot commented Feb 11, 2026

This pull request does not have a backport label. Could you fix it @juliaElastic? 🙏
To fix up this pull request, you need to add the backport labels for the needed branches, such as:

  • backport-\d.\d is the label to automatically backport to the \d.\d branch, where \d is a digit.
  • backport-active-all is the label that automatically backports to all active branches.
  • backport-active-8 is the label that automatically backports to all active minor branches for the 8 major.
  • backport-active-9 is the label that automatically backports to all active minor branches for the 9 major.


✅ Vale Linting Results

No issues found on modified lines!


The Vale linter checks documentation changes against the Elastic Docs style guide.

To use Vale locally or report issues, refer to Elastic style guide for Vale.



mergify bot commented Feb 13, 2026

This pull request is now in conflicts. Could you fix it @juliaElastic? 🙏
To fix up this pull request, you can check it out locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/

git fetch upstream
git checkout -b opamp-data-mapping upstream/opamp-data-mapping
git merge upstream/main
git push upstream opamp-data-mapping
