Centralized Profiling with Datadog (No App Code Changes)

Manage Profiling Centrally with Datadog

Turn profiling on when it matters, off when it doesn’t—without touching app code.

Profiling reveals real CPU and memory hotspots in production, validates performance fixes, and helps right-size services. It doesn’t need to run constantly: continuous profiling adds runtime overhead and data cost, and most workloads only benefit during releases, incidents, or focused tuning windows. Managing profiling from a single, central policy (via Fleet Automation or Helm-based SSI) keeps control simple and auditable: toggle it by namespace or label, apply consistent settings across teams, and switch it off when the signal is no longer needed, keeping spend predictable.

In practice: enable profiling for targeted periods (rollouts, investigations), then disable it to minimize overhead and cost.

Below are two supported ways to enable and manage application profiling centrally—without touching application code.


Option A — IaC via Helm values (APM SSI)

Manage centrally via the Datadog Helm chart and Single-Step Instrumentation (SSI).

What you’ll do

  1. In your datadog-values.yaml, set datadog.apm.instrumentation.enabled: true.
  2. (Optional) Add disabledNamespaces to exclude system namespaces.
  3. Define ddTraceConfigs, using the following Helm values as a reference, and apply the changes.
datadog:
  site: "us5.datadoghq.com"
  apiKeyExistingSecret: "datadog-secret"
  apm:
    instrumentation:
      enabled: true
      targets:
        - name: "default-target"
          namespaceSelector:
            matchNames:
              - "login"
          ddTraceVersions:
            java: "1"
          ddTraceConfigs:
            - name: "DD_PROFILING_ENABLED"
              value: "true"
        - name: "login-target"
          podSelector: 
            matchLabels:
              app: "login"
          ddTraceVersions:
            java: "1"
          ddTraceConfigs:
            - name: "DD_PROFILING_ENABLED"
              value: "true"
clusterAgent:
  admissionController:
    enabled: true
    mutateUnlabelled: true
  4. Deploy the change (see the helm upgrade sketch after this list), restart one test workload pod, and verify:
    • internal.apm.datadoghq.com/applied-target shows the target name.
    • The app container has DD_PROFILING_ENABLED=true.
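
To deploy the change in step 4, a minimal sketch looks like the following, assuming the chart is installed from the official Datadog Helm repository under the release name datadog in the datadog namespace (substitute your own release name, namespace, and values file):

# Add the Datadog Helm repository if it is not already configured
helm repo add datadog https://helm.datadoghq.com
helm repo update

# Apply the updated values to the existing release
helm upgrade --install datadog datadog/datadog \
  --namespace datadog \
  -f datadog-values.yaml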

Note that the application must go through a rolling restart for the profiling environment variable to be applied.
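
For example, assuming the application runs as a Deployment named login-app in the login namespace used above (a hypothetical name; substitute your own workload):

# Trigger a rolling restart so new pods receive DD_PROFILING_ENABLED
kubectl -n login rollout restart deployment/login-app

# Wait until the restarted pods are ready before verifying
kubectl -n login rollout status deployment/login-app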



Option B — Fleet Automation (no code deploy)

Use Datadog Fleet Automation / Remote Agent Management to push configuration to Agents/Cluster Agent.

What you’ll do

  1. Create a new Fleet configuration and scope it to your Kubernetes Agents (and Cluster Agent) group(s).
  2. Add environment variables in the configuration:
    • DD_APM_INSTRUMENTATION_ENABLED=true
    • DD_APM_INSTRUMENTATION_TARGETS (JSON string):
    [{\"ddTraceConfigs\":[{\"name\":\"DD_PROFILING_ENABLED\",\"value\":\"true\"}],\"ddTraceVersions\":{\"java\":\"1\"},\"name\":\"default-target\",\"namespaceSelector\":{\"matchNames\":[\"login\"]}}]
    • DD_LANGUAGE_DETECTION_ENABLED=true
  3. Apply the configuration to the targeted Agents/Cluster Agent.
  4. Roll the Cluster Agent (and restart one test workload pod) so the mutation/injection picks up the central policy; see the example commands after this list.
  5. Verify on a fresh pod:
    • Annotation internal.apm.datadoghq.com/applied-target is non-empty.
    • The app container has DD_PROFILING_ENABLED=true.
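
For step 4, a sketch of the restarts, assuming the default Helm-installed deployment name datadog-cluster-agent in the datadog namespace and a test workload Deployment <app> (adjust names to your cluster):

# Restart the Cluster Agent so the injection webhook picks up the Fleet-pushed policy
kubectl -n datadog rollout restart deployment/datadog-cluster-agent
kubectl -n datadog rollout status deployment/datadog-cluster-agent

# Restart one test workload so its pods are re-admitted with the new settings
kubectl -n <ns> rollout restart deployment/<app>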

Placeholders for the exact values you can paste into Fleet are provided in the Code section at the end.


Quick Verification

Replace <pod> and <ns> as needed.

# Which target applied?
kubectl -n <ns> get pod <pod> \
  -o jsonpath='{.metadata.annotations.internal\.apm\.datadoghq\.com/applied-target}{"\n"}'
 
# Is profiling enabled on the APP container (not just init)?
kubectl -n <ns> get pod <pod> \
  -o jsonpath='{range .spec.containers[?(@.name!="datadog-init-apm-inject")].env[*]}{.name}={.value}{"\n"}{end}' \
  | grep DD_PROFILING_ENABLED
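
As an additional sanity check, you can list the pod's init containers; with Single-Step Instrumentation the injected init containers (names vary by tracer language and chart version, for example datadog-lib-java-init) should appear on a freshly restarted pod:

# List init containers injected into the pod
kubectl -n <ns> get pod <pod> \
  -o jsonpath='{range .spec.initContainers[*]}{.name}{"\n"}{end}'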