Deploying and configuring Dapr in your environment

Hosting options, best practices, and other guides for running your application on Dapr

1 - Observability

See and measure the message calls to components and between networked services

The following overview video and demo demonstrate how observability in Dapr works.

1.1 - Tracing

Learn about tracing scenarios and how to use tracing for visibility in your application

1.1.1 - Distributed tracing overview

Overview on using tracing to get visibility into your application

Dapr uses the OpenTelemetry (OTEL) and Zipkin protocols for distributed traces. OTEL is the industry standard and is the recommended trace protocol to use.

Most observability tools support OTEL, including several covered later in this section, such as Jaeger, Azure Application Insights, and Datadog.

The following diagram demonstrates how Dapr (using OTEL and Zipkin protocols) integrates with multiple observability tools.

Distributed tracing with Dapr

Scenarios

Tracing is used with the service invocation and pub/sub APIs. You can flow trace context between services that use these APIs. There are two scenarios for how tracing is used:

  1. Dapr generates the trace context and you propagate the trace context to another service.
  2. You generate the trace context and Dapr propagates the trace context to a service.

Scenario 1: Dapr generates trace context headers

Propagating sequential service calls

Dapr takes care of creating the trace headers. However, when there are more than two services, you’re responsible for propagating the trace headers between them. Let’s go through the scenarios with examples:

Single service invocation call

For example, service A -> service B.

Dapr generates the trace headers in service A, which are then propagated from service A to service B. No further propagation is needed.

Multiple sequential service invocation calls

For example, service A -> service B -> propagate trace headers to -> service C and so on to further Dapr-enabled services.

Dapr generates the trace headers at the beginning of the request in service A, which are then propagated to service B. You are now responsible for taking the headers and propagating them to service C, since this is specific to your application.

In other words, if an app calls Dapr and wants to continue a trace with an existing trace header (span), it must always propagate the trace headers to Dapr (from service B to service C, in this example). Dapr always propagates trace spans to an application.
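
The following is a minimal sketch of this hand-off over HTTP, assuming service invocation through the default Dapr HTTP port 3500; the app ID (serviceC), method name (process), and listening port are placeholders:

package main

import (
    "io"
    "log"
    "net/http"
)

// Service B: handle the call that arrived from service A (via Dapr) and
// forward the W3C trace headers on the outgoing invocation of service C.
func handler(w http.ResponseWriter, r *http.Request) {
    req, err := http.NewRequest(http.MethodPost,
        "http://localhost:3500/v1.0/invoke/serviceC/method/process", r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    // Copy the trace context headers Dapr delivered with the incoming request.
    for _, h := range []string{"traceparent", "tracestate"} {
        if v := r.Header.Get(h); v != "" {
            req.Header.Set(h, v)
        }
    }

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadGateway)
        return
    }
    defer resp.Body.Close()
    io.Copy(w, resp.Body)
}

func main() {
    http.HandleFunc("/process", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}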

Request is from external endpoint

For example, from a gateway service to a Dapr-enabled service A.

An external gateway ingress calls Dapr, which generates the trace headers and calls service A. Service A then calls service B and further Dapr-enabled services.

You must propagate the headers from service A to service B. For example: Ingress -> service A -> propagate trace headers -> service B. This is similar to the multiple sequential service invocation calls case above.

Pub/sub messages

Dapr generates the trace headers in the published message topic. For rawPayload messages, it is possible to specify the traceparent header to propagate the tracing information. These trace headers are propagated to any services listening on that topic.
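
As a rough sketch of passing an existing traceparent on a rawPayload publish (assuming the default Dapr HTTP port 3500, and a hypothetical pubsub component named pubsub with an orders topic):

package main

import (
    "log"
    "net/http"
    "strings"
)

func main() {
    // Publish a raw payload and pass along an existing W3C traceparent header.
    req, err := http.NewRequest(http.MethodPost,
        "http://localhost:3500/v1.0/publish/pubsub/orders?metadata.rawPayload=true",
        strings.NewReader(`{"orderId":"42"}`))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("traceparent", "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("publish status:", resp.Status)
}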

Propagating multiple different service calls

In the following scenarios, Dapr does some of the work for you, with you then creating or propagating trace headers.

Multiple service calls to different services from single service

When you are calling multiple services from a single service, you need to propagate the trace headers. For example:

service A -> service B
[ .. some code logic ..]
service A -> service C
[ .. some code logic ..]
service A -> service D
[ .. some code logic ..]

In this case:

  1. When service A first calls service B, Dapr generates the trace headers in service A.
  2. The trace headers in service A are propagated to service B.
  3. These trace headers are returned to service A in the response headers from service B.
  4. You then need to propagate the returned trace context to the next services, like service C and service D, since Dapr does not know you want to reuse the same header (see the sketch after this list).
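
A minimal sketch of this reuse over HTTP, assuming the default Dapr HTTP port 3500; the app IDs and method names are placeholders:

package main

import (
    "log"
    "net/http"
)

func main() {
    // Service A: first call, where Dapr generates the trace context.
    respB, err := http.DefaultClient.Post(
        "http://localhost:3500/v1.0/invoke/serviceB/method/doWork",
        "application/json", nil)
    if err != nil {
        log.Fatal(err)
    }
    respB.Body.Close()

    // Dapr returns the trace headers on the response; carry them forward.
    traceparent := respB.Header.Get("traceparent")
    tracestate := respB.Header.Get("tracestate")

    for _, target := range []string{"serviceC", "serviceD"} {
        req, err := http.NewRequest(http.MethodPost,
            "http://localhost:3500/v1.0/invoke/"+target+"/method/doWork", nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("traceparent", traceparent)
        if tracestate != "" {
            req.Header.Set("tracestate", tracestate)
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
    }
}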

Scenario 2: You generate your own trace context headers from non-Daprized applications

Generating your own trace context headers is more unusual and typically not required when calling Dapr.

However, there are scenarios where you could specifically choose to add W3C trace headers into a service call. For example, you have an existing application that does not use Dapr. In this case, Dapr still propagates the trace context headers for you.

If you decide to generate trace headers yourself, there are three ways this can be done:

  1. Standard OpenTelemetry SDK

    You can use the industry-standard OpenTelemetry SDKs to generate trace headers and pass these trace headers to a Dapr-enabled service. This is the preferred method (see the sketch after this list).

  2. Vendor SDK

    You can use a vendor SDK that provides a way to generate W3C trace headers and pass them to a Dapr-enabled service.

  3. W3C trace context

    You can handcraft a trace context following W3C trace context specifications and pass them to a Dapr-enabled service.

    Read the trace context overview for more background and examples on W3C trace context and headers.
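
For option 1, a minimal sketch using the OpenTelemetry Go SDK might look like the following; the tracer name, app ID (serviceB), and method (hello) are placeholders:

package main

import (
    "context"
    "log"
    "net/http"
    "strings"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/propagation"
    sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
    // Set up a tracer provider and the W3C trace context propagator.
    tp := sdktrace.NewTracerProvider()
    defer tp.Shutdown(context.Background())
    otel.SetTracerProvider(tp)
    otel.SetTextMapPropagator(propagation.TraceContext{})

    // Start a span; this creates the trace context you will hand to Dapr.
    ctx, span := tp.Tracer("example").Start(context.Background(), "call-serviceB")
    defer span.End()

    req, err := http.NewRequestWithContext(ctx, http.MethodPost,
        "http://localhost:3500/v1.0/invoke/serviceB/method/hello",
        strings.NewReader(`{"message": "Hello service B"}`))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Content-Type", "application/json")

    // Inject the traceparent/tracestate headers generated by the SDK.
    otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
}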

Baggage Support

Dapr supports two distinct mechanisms for propagating W3C Baggage alongside trace context:

  1. Context Baggage (OpenTelemetry)

    • Follows OpenTelemetry conventions with decoded values
    • Used when working with OpenTelemetry context propagation
    • Values are stored and transmitted in their original, unencoded form
    • Recommended for OpenTelemetry integrations and when working with application context
  2. Header/Metadata Baggage

    • You must URL encode special characters (for example, %20 for spaces, %2F for slashes) when setting header/metadata baggage
    • Values remain percent-encoded in transport as required by the W3C Baggage spec
    • Values stay encoded when inspecting raw headers/metadata
    • Only OpenTelemetry APIs will decode the values
    • Example: Use serverNode=DF%2028 (not serverNode=DF 28) when setting header baggage

For security purposes, context baggage and header baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties.

Using Baggage with Dapr

You can propagate baggage using either mechanism, depending on your use case.

  1. In your application code: Set the baggage in the context before making a Dapr API call
  2. When calling Dapr: Pass the context to any Dapr API call
  3. Inside Dapr: The Dapr runtime automatically picks up the baggage
  4. Propagation: Dapr automatically propagates the baggage to downstream services, maintaining the appropriate encoding for each mechanism

Here are examples of both mechanisms:

1. Using Context Baggage (OpenTelemetry)

When using OpenTelemetry SDK:

import (
    "context"

    otelbaggage "go.opentelemetry.io/otel/baggage"
)

// Set baggage in context (values remain unencoded)
bag, err := otelbaggage.Parse("userId=cassie,serverNode=DF%2028")
if err != nil {
    // handle the parse error
}
ctx := otelbaggage.ContextWithBaggage(context.Background(), bag)

// Pass this context to any Dapr API call
client.InvokeMethodWithContent(ctx, "serviceB", ...)

2. Using Header/Metadata Baggage

When using gRPC metadata:

import "google.golang.org/grpc/metadata"

// Set URL-encoded baggage in context
ctx = metadata.AppendToOutgoingContext(ctx,
    "baggage", "userId=cassie,serverNode=DF%2028",
)

// Pass this context to any Dapr API call
client.InvokeMethodWithContent(ctx, "serviceB", ...)

3. Receiving Baggage in Target Service

In your target service, you can access the propagated baggage:

// Using OpenTelemetry (values are automatically decoded)
import "go.opentelemetry.io/otel/baggage"

bag := baggage.FromContext(ctx)
userID := bag.Member("userId").Value()  // "cassie"

// Using raw gRPC metadata (values remain percent-encoded)
import "google.golang.org/grpc/metadata"

md, _ := metadata.FromIncomingContext(ctx)
if values := md.Get("baggage"); len(values) > 0 {
    // values[0] contains the percent-encoded string you set: "userId=cassie,serverNode=DF%2028"
    // Remember: You must URL encode special characters when setting baggage
    
    // To decode the values, use OpenTelemetry APIs:
    bag, err := baggage.Parse(values[0])
    ...
    userID := bag.Member("userId").Value()  // "cassie"
}

HTTP Example (URL-encoded):

curl -X POST http://localhost:3500/v1.0/invoke/serviceB/method/hello \
  -H "Content-Type: application/json" \
  -H "baggage: userID=cassie,serverNode=DF%2028" \
  -d '{"message": "Hello service B"}'

gRPC Example (URL-encoded):

ctx = metadata.AppendToOutgoingContext(ctx,
    "baggage", "userId=cassie,serverNode=DF%2028",
)

Common Use Cases

Baggage is useful for:

  • Propagating user IDs or correlation IDs across services
  • Passing tenant or environment information
  • Maintaining consistent context across service boundaries
  • Debugging and troubleshooting distributed transactions

Best Practices

  1. Choose the Right Mechanism

    • Use Context Baggage when working with OpenTelemetry
    • Use Header Baggage when working directly with HTTP/gRPC
  2. Security Considerations

    • Be mindful that baggage is propagated across service boundaries
    • Don’t include sensitive information in baggage
    • Remember that context and header baggage remain separate

1.1.2 - W3C trace context overview

Background and scenarios for using W3C tracing context and headers with Dapr

Dapr uses the OpenTelemetry protocol, which in turn uses the W3C trace context, for distributed tracing for both service invocation and pub/sub messaging. Dapr generates and propagates the trace context information, which can be sent to observability tools for visualization and querying.

Background

Distributed tracing is a methodology implemented by tracing tools to follow, analyze, and debug a transaction across multiple software components.

Typically, a distributed trace traverses more than one service, which requires it to be uniquely identifiable. Trace context propagation passes along this unique identification.

In the past, trace context propagation was implemented individually by each different tracing vendor. In multi-vendor environments, this causes interoperability problems, such as:

  • Traces collected by different tracing vendors can’t be correlated, as there is no shared unique identifier.
  • Traces crossing boundaries between different tracing vendors can’t be propagated, as there is no forwarded, uniformly agreed set of identification.
  • Vendor-specific metadata might be dropped by intermediaries.
  • Cloud platform vendors, intermediaries, and service providers cannot guarantee to support trace context propagation, as there is no standard to follow.

Previously, most applications were monitored by a single tracing vendor and stayed within the boundaries of a single platform provider, so these problems didn’t have a significant impact.

Today, an increasing number of applications are distributed and leverage multiple middleware services and cloud platforms. This transformation of modern applications requires a distributed tracing context propagation standard.

The W3C trace context specification defines a universally agreed-upon format for the exchange of trace context propagation data (referred to as trace context). Trace context solves the above problems by providing:

  • A unique identifier for individual traces and requests, allowing trace data of multiple providers to be linked together.
  • An agreed-upon mechanism to forward vendor-specific trace data and avoid broken traces when multiple tracing tools participate in a single transaction.
  • An industry standard that intermediaries, platforms, and hardware providers can support.

This unified approach for propagating trace data improves visibility into the behavior of distributed applications, facilitating problem and performance analysis.

W3C trace context and headers format

W3C trace context

Dapr uses the standard W3C trace context headers.

  • For HTTP requests, Dapr uses traceparent header.
  • For gRPC requests, Dapr uses grpc-trace-bin header.

When a request arrives without a trace ID, Dapr creates a new one. Otherwise, it passes the trace ID along the call chain.

W3C trace headers

These are the specific trace context headers that are generated and propagated by Dapr for HTTP and gRPC.

Copy these headers when propagating a trace context header from an HTTP response to an HTTP request:

Traceparent header

The traceparent header represents the incoming request in a tracing system in a common format, understood by all vendors:

traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01

Learn more about the traceparent fields details.
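
As a quick illustration of those fields, the following sketch splits the example header above into its four parts (version, trace-id, parent-id, and trace-flags):

package main

import (
    "fmt"
    "strings"
)

func main() {
    traceparent := "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"

    // traceparent = version "-" trace-id "-" parent-id "-" trace-flags
    parts := strings.SplitN(traceparent, "-", 4)
    fmt.Println("version:    ", parts[0]) // "00"
    fmt.Println("trace-id:   ", parts[1]) // 32 hex characters, shared by the whole trace
    fmt.Println("parent-id:  ", parts[2]) // 16 hex characters, the calling span
    fmt.Println("trace-flags:", parts[3]) // "01" means the span was sampled
}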

Tracestate header

The tracestate header includes the parent in a potentially vendor-specific format:

tracestate: congo=t61rcWkgMzE

Learn more about the tracestate fields details.

Baggage Support

Dapr supports W3C Baggage for propagating key-value pairs alongside trace context through two distinct mechanisms:

  1. Context Baggage (OpenTelemetry)

    • Follows OpenTelemetry conventions with decoded values
    • Used when propagating baggage through application context
    • Values are stored in their original, unencoded form
    • Example of how it would be printed with OpenTelemetry APIs:
      baggage: userId=cassie,serverNode=DF 28,isVIP=true
      
  2. HTTP Header Baggage

    • You must URL encode special characters (for example, %20 for spaces, %2F for slashes) when setting header baggage
    • Values remain percent-encoded in HTTP headers as required by the W3C Baggage spec
    • Values stay encoded when inspecting raw headers in Dapr
    • Only OpenTelemetry APIs like otelbaggage.Parse() will decode the values
    • Example (note the URL-encoded space %20):
      curl -X POST http://localhost:3500/v1.0/invoke/serviceB/method/hello \
        -H "Content-Type: application/json" \
        -H "baggage: userId=cassie,serverNode=DF%2028,isVIP=true" \
        -d '{"message": "Hello service B"}'
      

For security purposes, context baggage and header baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties in each domain.

Multiple baggage headers are supported and will be combined according to the W3C specification. Dapr automatically propagates baggage across service calls while maintaining the appropriate encoding for each domain.

In the gRPC API calls, trace context is passed through grpc-trace-bin header.

Baggage Support

Dapr supports W3C Baggage for propagating key-value pairs alongside trace context through two distinct mechanisms:

  1. Context Baggage (OpenTelemetry)

    • Follows OpenTelemetry conventions with decoded values
    • Used when propagating baggage through gRPC context
    • Values are stored in their original, unencoded form
    • Example of how it would be printed with OpenTelemetry APIs:
      baggage: userId=cassie,serverNode=DF 28,isVIP=true
      
  2. gRPC Metadata Baggage

    • You must URL encode special characters (for example, %20 for spaces, %2F for slashes) when setting metadata baggage
    • Values remain percent-encoded in gRPC metadata
    • Example (note the URL-encoded space %20):
      baggage: userId=cassie,serverNode=DF%2028,isVIP=true
      

For security purposes, context baggage and metadata baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties in each domain.

Multiple baggage metadata entries are supported and will be combined according to the W3C specification. Dapr automatically propagates baggage across service calls while maintaining the appropriate encoding for each domain.

1.1.3 - Configure Dapr to send distributed tracing data

Set up Dapr to send distributed tracing data

Configuration

The tracing section under the Configuration spec contains the following properties:

spec:
  tracing:
    samplingRate: "1"
    otel: 
      endpointAddress: "myendpoint.cluster.local:4317"
    zipkin:
      endpointAddress: "https://..."
    

The following table lists the properties for tracing:

Property                Type    Description
samplingRate            string  Set the sampling rate for tracing to be enabled or disabled.
stdout                  bool    When true, write more verbose information to the traces.
otel.endpointAddress    string  Set the OpenTelemetry (OTEL) target hostname and optionally port. If this is used, you do not need to specify the 'zipkin' section.
otel.isSecure           bool    Whether the connection to the endpoint address is encrypted.
otel.protocol           string  Set to http or grpc protocol.
zipkin.endpointAddress  string  Set the Zipkin server URL. If this is used, you do not need to specify the 'otel' section.

To enable tracing, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sampling rate to 1 (every span is sampled), and sends traces using the OTLP protocol to the OTEL server at localhost:4317.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"
    otel:
      endpointAddress: "localhost:4317"
      isSecure: false
      protocol: grpc 

Sampling rate

Dapr uses probabilistic sampling. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The default sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).

Changing samplingRate to 0 disables tracing altogether.

Environment variables

The OpenTelemetry (otel) endpoint can also be configured via environment variables. The presence of the OTEL_EXPORTER_OTLP_ENDPOINT environment variable turns on tracing for the sidecar.

Environment Variable           Description
OTEL_EXPORTER_OTLP_ENDPOINT    Sets the OpenTelemetry (OTEL) server hostname and optionally port; turns on tracing.
OTEL_EXPORTER_OTLP_INSECURE    Sets the connection to the endpoint as unencrypted (true/false).
OTEL_EXPORTER_OTLP_PROTOCOL    Transport protocol (grpc, http/protobuf, http/json).

Next steps

Learn how to set up tracing with one of the following tools:

1.1.4 - Open Telemetry Collector

How to set up your observability tools to receive application traces

1.1.4.1 - Using OpenTelemetry Collector to collect traces

How to use Dapr to push trace events through the OpenTelemetry Collector.

Dapr directly writes traces using the OpenTelemetry protocol (OTLP), which is the recommended method. For observability tools that support OTLP directly, it is recommended to use the OpenTelemetry Collector, as it allows your application to quickly offload data and includes features such as retries, batching, and encryption. For more information, read the OpenTelemetry Collector documentation.

Dapr can also write traces using the Zipkin protocol. Prior to supporting the OTLP protocol, the Zipkin protocol was used with the OpenTelemetry Collector to send traces to observability tools such as AWS X-Ray, Google Cloud Operations Suite, and Azure Monitor. Both protocol approaches are valid, however the OpenTelemetry protocol is the recommended choice.

Using the OpenTelemetry Collector to integrate with many backends

Prerequisites

Set up OTEL Collector to push to your trace backend

  1. Check out the open-telemetry-collector-generic.yaml.

  2. Replace the <your-exporter-here> section with the correct settings for your trace exporter.

  3. Apply the configuration with:

    kubectl apply -f open-telemetry-collector-generic.yaml
    

Set up Dapr to send traces to OTEL Collector

Set up a Dapr configuration file to turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.

  1. Use this collector-config.yaml file to create your own configuration.

  2. Apply the configuration with:

    kubectl apply -f collector-config.yaml
    

Deploy your app with tracing

Apply the appconfig configuration by adding a dapr.io/config annotation to the container that you want to participate in the distributed tracing, as shown in the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "MyApp"
        dapr.io/app-port: "8080"
        dapr.io/config: "appconfig"

You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.

That’s it! There’s no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.

View traces

Deploy and run some applications. Wait for the trace to propagate to your tracing backend and view them there.

1.1.4.2 - Using OpenTelemetry Collector to collect traces to send to App Insights

How to push trace events to Azure Application Insights, using the OpenTelemetry Collector.

Dapr integrates with OpenTelemetry (OTEL) Collector using the OpenTelemetry protocol (OTLP). This guide walks through an example using Dapr to push traces to Azure Application Insights, using the OpenTelemetry Collector.

Prerequisites

Set up OTEL Collector to push to your App Insights instance

To push traces to your Application Insights instance, install the OpenTelemetry Collector on your Kubernetes cluster.

  1. Download and inspect the open-telemetry-collector-appinsights.yaml file.

  2. Replace the <CONNECTION_STRING> placeholder with your App Insights connection string.

  3. Deploy the OpenTelemetry Collector into the same namespace where your Dapr-enabled applications are running:

    kubectl apply -f open-telemetry-collector-appinsights.yaml
    

Set up Dapr to send traces to the OpenTelemetry Collector

Create a Dapr configuration file to enable tracing and send traces to the OpenTelemetry Collector via OTLP.

  1. Download and inspect the collector-config-otel.yaml. Update the namespace and otel.endpointAddress values to align with the namespace where your Dapr-enabled applications and OpenTelemetry Collector are deployed.

  2. Apply the configuration with:

    kubectl apply -f collector-config-otel.yaml
    

Deploy your app with tracing

Apply the tracing configuration by adding a dapr.io/config annotation to the Dapr applications that you want to include in distributed tracing, as shown in the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "MyApp"
        dapr.io/app-port: "8080"
        dapr.io/config: "tracing"

You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.

That’s it! There’s no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.

View traces

Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your App Insights resource. You can also use the Application Map to examine the topology of your services, as shown below:

Application map

1.1.4.3 - Using OpenTelemetry Collector to collect traces to send to Jaeger

How to push trace events to Jaeger distributed tracing platform, using the OpenTelemetry Collector.

While Dapr supports writing traces using OpenTelemetry (OTLP) and Zipkin protocols, Zipkin support for Jaeger has been deprecated in favor of OTLP. Although Jaeger supports OTLP directly, the recommended approach for production is to use the OpenTelemetry Collector to collect traces from Dapr and send them to Jaeger, allowing your application to quickly offload data and take advantage of features like retries, batching, and encryption. For more information, read the Open Telemetry Collector documentation.

Configure Jaeger in self-hosted mode

Local setup

The simplest way to start Jaeger is to run the pre-built, all-in-one Jaeger image published to DockerHub and expose the OTLP port:

docker run -d --name jaeger \
  -p 4317:4317  \
  -p 16686:16686 \
  jaegertracing/all-in-one:1.49

Next, create the following config.yaml file locally:

Note: Because you are using the Open Telemetry protocol to talk to Jaeger, you need to fill out the otel section of the tracing configuration and set the endpointAddress to the address of the Jaeger container.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    stdout: true
    otel:
      endpointAddress: "localhost:4317"
      isSecure: false
      protocol: grpc 

To launch the application referring to the new YAML configuration file, use the --config option. For example:

dapr run --app-id myapp --app-port 3000 node app.js --config config.yaml

View traces

To view traces in your browser, go to http://localhost:16686 to see the Jaeger UI.

Configure Jaeger on Kubernetes with the OpenTelemetry Collector

The following steps show you how to configure Dapr to send distributed tracing data to the OpenTelemetry Collector which, in turn, sends the traces to Jaeger.

Prerequisites

Set up OpenTelemetry Collector to push to Jaeger

To push traces to your Jaeger instance, install the OpenTelemetry Collector on your Kubernetes cluster.

  1. Download and inspect the open-telemetry-collector-jaeger.yaml file.

  2. In the data section of the otel-collector-conf ConfigMap, update the otlp/jaeger.endpoint value to reflect the endpoint of your Jaeger collector Kubernetes service object.

  3. Deploy the OpenTelemetry Collector into the same namespace where your Dapr-enabled applications are running:

    kubectl apply -f open-telemetry-collector-jaeger.yaml
    

Set up Dapr to send traces to the OpenTelemetry Collector

Create a Dapr configuration file to enable tracing and export the sidecar traces to the OpenTelemetry Collector.

  1. Use the collector-config-otel.yaml file to create your own Dapr configuration.

  2. Update the namespace and otel.endpointAddress values to align with the namespace where your Dapr-enabled applications and OpenTelemetry Collector are deployed.

  3. Apply the configuration with:

    kubectl apply -f collector-config-otel.yaml
    

Deploy your app with tracing enabled

Apply the tracing Dapr configuration by adding a dapr.io/config annotation to the application deployment that you want to enable distributed tracing for, as shown in the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "MyApp"
        dapr.io/app-port: "8080"
        dapr.io/config: "tracing"

You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.

That’s it! There’s no need to include the OpenTelemetry SDK or instrument your application code. Dapr automatically handles the distributed tracing for you.

View traces

To view Dapr sidecar traces, port-forward the Jaeger Service and open the UI:

kubectl port-forward svc/jaeger-query 16686 -n observability

In your browser, go to http://localhost:16686 and you will see the Jaeger UI.

jaeger

References

1.1.5 - How-To: Set-up New Relic for distributed tracing

Set-up New Relic for distributed tracing

Prerequisites

  • Perpetually free New Relic account, 100 GB/month of free data ingest, 1 free full access user, unlimited free basic users

Configure Dapr tracing

Dapr natively captures metrics and traces that can be sent directly to New Relic. The easiest way to export these is by configuring Dapr to send the traces to New Relic’s Trace API using the Zipkin trace format.

In order for the integration to send data to New Relic Telemetry Data Platform, you need a New Relic Insights Insert API key.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "https://trace-api.newrelic.com/trace/v1?Api-Key=<NR-INSIGHTS-INSERT-API-KEY>&Data-Format=zipkin&Data-Format-Version=2"

Viewing Traces

New Relic Distributed Tracing overview New Relic Kubernetes Cluster Explorer App

New Relic Distributed Tracing details New Relic Kubernetes Cluster Explorer App

(optional) New Relic Instrumentation

In order for the integrations to send data to New Relic Telemetry Data Platform, you either need a New Relic license key or New Relic Insights Insert API key.

OpenTelemetry instrumentation

Leverage the different language-specific OpenTelemetry implementations, for example the New Relic Telemetry SDK and OpenTelemetry support for .NET. In this case, use the OpenTelemetry Trace Exporter. See example here.

New Relic Language agent

Similarly to the OpenTelemetry instrumentation, you can also leverage a New Relic language agent. As an example, the New Relic agent instrumentation for .NET Core is part of the Dockerfile. See example here.

(optional) Enable New Relic Kubernetes integration

In case Dapr and your applications run in the context of a Kubernetes environment, you can enable additional metrics and logs.

The easiest way to install the New Relic Kubernetes integration is to use the automated installer to generate a manifest. It bundles not just the integration DaemonSets, but also other New Relic Kubernetes configurations, like Kubernetes events, Prometheus OpenMetrics, and New Relic log monitoring.

New Relic Kubernetes Cluster Explorer

The New Relic Kubernetes Cluster Explorer provides a unique visualization of the entire data and deployments of the data collected by the Kubernetes integration.

It is a good starting point to observe all your data and dig deeper into any performance issues or incidents happening inside of the application or microservices.

New Relic Kubernetes Cluster Explorer App

Automated correlation is part of the visualization capabilities of New Relic.

Pod-level details

New Relic K8s Pod Level Details

Logs in Context

New Relic K8s Logs In Context

New Relic Dashboards

Kubernetes Overview

New Relic Dashboard Kubernetes Overview

Dapr System Services

New Relic Dashboard Dapr System Services

Dapr Metrics

New Relic Dashboard Dapr Metrics 1

New Relic Grafana integration

New Relic teamed up with Grafana Labs so you can use the Telemetry Data Platform as a data source for Prometheus metrics and see them in your existing dashboards, seamlessly tapping into the reliability, scale, and security provided by New Relic.

Grafana dashboard templates to monitor Dapr system services and sidecars can easily be used without any changes. New Relic provides a native endpoint for Prometheus metrics into Grafana. A datasource can easily be set-up:

New Relic Grafana Data Source

And the exact same dashboard templates from Dapr can be imported to visualize Dapr system services and sidecars.

New Relic Grafana Dashboard

New Relic Alerts

All the data that is collected from Dapr, Kubernetes, or any services that run on top of them can be used to set up alerts and notifications to the preferred channel of your choice. See Alerts and Applied Intelligence.

1.1.6 - How-To: Set up Zipkin for distributed tracing

Set up Zipkin for distributed tracing

Configure self hosted mode

For self hosted mode, on running dapr init:

  1. The following YAML file is created by default in $HOME/.dapr/config.yaml (on Linux/Mac) or %USERPROFILE%\.dapr\config.yaml (on Windows), and it is referenced by default on dapr run calls unless otherwise overridden:

    config.yaml

    apiVersion: dapr.io/v1alpha1
    kind: Configuration
    metadata:
      name: daprConfig
      namespace: default
    spec:
      tracing:
        samplingRate: "1"
        zipkin:
          endpointAddress: "http://localhost:9411/api/v2/spans"

  2. The openzipkin/zipkin Docker container is launched on running dapr init, or it can be launched with the following command:

    docker run -d -p 9411:9411 openzipkin/zipkin

  3. Applications launched with dapr run reference the config file in $HOME/.dapr/config.yaml or %USERPROFILE%\.dapr\config.yaml by default. This can be overridden with the Dapr CLI using the --config param:

    dapr run --app-id mynode --app-port 3000 node app.js

Viewing Traces

To view traces, in your browser go to http://localhost:9411 and you will see the Zipkin UI.

Configure Kubernetes

The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, and how to view the traces.

Setup

First, deploy Zipkin:

kubectl create deployment zipkin --image openzipkin/zipkin

Create a Kubernetes service for the Zipkin pod:

kubectl expose deployment zipkin --type ClusterIP --port 9411

Next, create the following YAML file locally:

  • tracing.yaml configuration
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"

Now, deploy the Dapr configuration file:

kubectl apply -f tracing.yaml

In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template:

annotations:
  dapr.io/config: "tracing"

That’s it! Your sidecar is now configured to send traces to Zipkin.

Viewing Tracing Data

To view traces, connect to the Zipkin service and open the UI:

kubectl port-forward svc/zipkin 9411:9411

In your browser, go to http://localhost:9411 and you will see the Zipkin UI.

zipkin

References

1.1.7 - How-To: Set up Datadog for distributed tracing

Set up Datadog for distributed tracing

Dapr captures metrics and traces that can be sent directly to Datadog through the OpenTelemetry Collector Datadog exporter.

Configure Dapr tracing with the OpenTelemetry Collector and Datadog

Using the OpenTelemetry Collector Datadog exporter, you can configure Dapr to create traces for each application in your Kubernetes cluster and collect them in Datadog.

Before you begin, set up the OpenTelemetry Collector.

  1. Add your Datadog API key to the ./deploy/opentelemetry-collector-generic-datadog.yaml file in the datadog exporter configuration section:

    data:
      otel-collector-config:
        ...
        exporters:
          ...
          datadog:
            api:
              key: <YOUR_API_KEY>
    
  2. Apply the opentelemetry-collector configuration by running the following command.

    kubectl apply -f ./deploy/opentelemetry-collector-generic-datadog.yaml
    
  3. Set up a Dapr configuration file that will turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.

    kubectl apply -f ./deploy/collector-config.yaml
    
  4. Apply the appconfig configuration by adding a dapr.io/config annotation to the container that you want to participate in the distributed tracing.

    annotations:
       dapr.io/config: "appconfig"
    
  5. Create and configure the application. Once running, telemetry data is sent to Datadog and visible in Datadog APM.

Datadog APM showing telemetry data.

1.2 - Metrics

How to view Dapr metrics

1.2.1 - Configure metrics

Enable or disable Dapr metrics

By default, each Dapr system process emits Go runtime/process metrics and has its own Dapr metrics.

Prometheus endpoint

The Dapr sidecar exposes a Prometheus-compatible metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving.
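
As a quick check, the following sketch fetches the sidecar's metrics endpoint and prints the first few exposed lines, assuming the default metrics port 9090:

package main

import (
    "bufio"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Fetch the sidecar's Prometheus-compatible metrics endpoint.
    resp, err := http.Get("http://localhost:9090/metrics")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Print the first few metric lines in Prometheus exposition format.
    scanner := bufio.NewScanner(resp.Body)
    for i := 0; i < 10 && scanner.Scan(); i++ {
        fmt.Println(scanner.Text())
    }
}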

Configuring metrics using the CLI

The metrics application endpoint is enabled by default. You can disable it by passing the command line argument --enable-metrics=false.

The default metrics port is 9090. You can override this by passing the command line argument --metrics-port to daprd.

Configuring metrics in Kubernetes

You can also enable/disable the metrics for a specific application by setting the dapr.io/enable-metrics: "false" annotation on your application deployment. With the metrics exporter disabled, daprd does not open the metrics listening port.

The following Kubernetes deployment example shows how metrics are explicitly enabled with the port specified as “9090”.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
  labels:
    app: node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
        dapr.io/enable-metrics: "true"
        dapr.io/metrics-port: "9090"
    spec:
      containers:
      - name: node
        image: dapriosamples/hello-k8s-node:latest
        ports:
        - containerPort: 3000
        imagePullPolicy: Always

Configuring metrics using application configuration

You can also enable metrics via application configuration. To disable the metrics collection in the Dapr sidecars by default, set spec.metrics.enabled to false.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  metrics:
    enabled: false

Configuring metrics for error codes

You can enable additional metrics for Dapr API error codes by setting spec.metrics.recordErrorCodes to true. Dapr APIs which communicate back to their caller may return standardized error codes. A new metric called error_code_total is recorded, which allows monitoring of error codes triggered by application, code, and category. See the errorcodes package for specific codes and categories.

Example configuration:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  metrics:
    enabled: true
    recordErrorCodes: true

Example metric:

{
  "app_id": "publisher-app",
  "category": "state",
  "dapr_io_enabled": "true",
  "error_code": "ERR_STATE_STORE_NOT_CONFIGURED",
  "instance": "10.244.1.64:9090",
  "job": "kubernetes-service-endpoints",
  "namespace": "my-app",
  "node": "my-node",
  "service": "publisher-app-dapr"
}

Optimizing HTTP metrics reporting with path matching

When invoking Dapr using HTTP, metrics are created for each requested method by default. This can result in a high number of metrics, known as high cardinality, which can impact memory usage and CPU.

Path matching allows you to manage and control the cardinality of HTTP metrics in Dapr. This is an aggregation of metrics, so rather than having a metric for each event, you can reduce the number of metrics events and report an overall number. Learn more about how to set the cardinality in configuration.

This configuration is opt-in and is enabled via the Dapr configuration spec.metrics.http.pathMatching. When defined, it enables path matching, which standardizes the specified paths in the reported metrics. This reduces the number of unique metrics paths, making metrics more manageable and reducing resource consumption in a controlled way.

When spec.metrics.http.pathMatching is combined with the increasedCardinality flag set to false, non-matched paths are transformed into a catch-all bucket to control and limit cardinality, preventing unbounded path growth. Conversely, when increasedCardinality is true (the default), non-matched paths are passed through as they normally would be, allowing for potentially higher cardinality but preserving the original path data.

Examples of Path Matching in HTTP Metrics

The following examples demonstrate how to use path matching in Dapr for managing HTTP metrics. In each example, the metrics are collected from 5 HTTP requests to the /orders endpoint with different order IDs. By adjusting cardinality and utilizing path matching, you can fine-tune metric granularity to balance detail and resource efficiency.

These examples illustrate the cardinality of the metrics, highlighting that high cardinality configurations result in many entries, which correspond to higher memory usage for handling metrics. For simplicity, the following example focuses on a single metric: dapr_http_server_request_count.

Low cardinality with path matching (Recommendation)

Configuration:

http:
  increasedCardinality: false
  pathMatching:
    - /orders/{orderID}

Metrics generated:

# matched paths
dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/{orderID}",status="200"} 5
# unmatched paths
dapr_http_server_request_count{app_id="order-service",method="GET",path="",status="200"} 1

With low cardinality and path matching configured, you get the best of both worlds by grouping the metrics for the important endpoints without compromising the cardinality. This approach helps avoid high memory usage and potential security issues.

Low cardinality without path matching

Configuration:

http:
  increasedCardinality: false

Metrics generated:

dapr_http_server_request_count{app_id="order-service",method="GET", path="",status="200"} 5

In low cardinality mode, the path, which is the main source of unbounded cardinality, is dropped. This results in metrics that primarily indicate the number of requests made to the service for a given HTTP method, but without any information about the paths invoked.

High cardinality with path matching

Configuration:

http:
  increasedCardinality: true
  pathMatching:
    - /orders/{orderID}

Metrics generated:

dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/{orderID}",status="200"} 5

This example results from the same HTTP requests as the example above, but with path matching configured for the path /orders/{orderID}. By using path matching, you achieve reduced cardinality by grouping the metrics based on the matched path.

High Cardinality without path matching

Configuration:

http:
  increasedCardinality: true

Metrics generated:

dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/1",status="200"} 1
dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/2",status="200"} 1
dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/3",status="200"} 1
dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/4",status="200"} 1
dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/5",status="200"} 1

For each request, a new metric is created with the request path. This process continues for every request made to a new order ID, resulting in unbounded cardinality since the IDs are ever-growing.

HTTP metrics exclude verbs

The excludeVerbs option allows you to exclude specific HTTP verbs from being reported in the metrics. This can be useful in high-performance applications where memory savings are critical.

Examples of excluding HTTP verbs in metrics

The following examples demonstrate how to exclude HTTP verbs in Dapr for managing HTTP metrics.

Default - Include HTTP verbs

Configuration:

http:
  excludeVerbs: false

Metrics generated:

dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders",status="200"} 1
dapr_http_server_request_count{app_id="order-service",method="POST",path="/orders",status="200"} 1

In this example, the HTTP method is included in the metrics, resulting in a separate metric for each request to the /orders endpoint.

Exclude HTTP verbs

Configuration:

http:
  excludeVerbs: true

Metrics generated:

dapr_http_server_request_count{app_id="order-service",method="",path="/orders",status="200"} 2

In this example, the HTTP method is excluded from the metrics, resulting in a single metric for all requests to the /orders endpoint.

Configuring custom latency histogram buckets

Dapr uses cumulative histogram metrics to group latency values into buckets, where each bucket contains:

  • A count of the number of requests with that latency
  • All the requests with lower latency

Using the default latency bucket configurations

By default, Dapr groups request latency metrics into the following buckets:

1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000

Grouping latency values in a cumulative fashion allows buckets to be used or dropped as needed for increased or decreased granularity of data. For example, if a request takes 3ms, it’s counted in the 3ms bucket, the 4ms bucket, the 5ms bucket, and so on. Similarly, if a request takes 10ms, it’s counted in the 10ms bucket, the 13ms bucket, the 16ms bucket, and so on. After these two requests have completed, the 3ms bucket has a count of 1 and the 10ms bucket has a count of 2, since both the 3ms and 10ms requests are included here.

This shows up as follows:

Bucket (ms):  1  2  3  4  5  6  8  10  13  16  20  25  30  40  50  65  80  100  130  160  …  100000
Count:        0  0  1  1  1  1  1  2   2   2   2   2   2   2   2   2   2   2    2    2   …  2

The default number of buckets works well for most use cases, but can be adjusted as needed. Each request creates 34 different metrics, and this value can grow considerably across a large number of applications. More accurate latency percentiles can be achieved by increasing the number of buckets. However, a higher number of buckets increases the amount of memory used to store the metrics, potentially negatively impacting your monitoring system.

It is recommended to keep the number of latency buckets set to the default value, unless you are seeing unwanted memory pressure in your monitoring system. Configuring the number of buckets allows you to choose applications where:

  • You want to see more detail with a higher number of buckets
  • Broader values are sufficient by reducing the buckets

Take note of the default latency values your applications are producing before configuring the number of buckets.

Customizing latency buckets to your scenario

Tailor the latency buckets to your needs, by modifying the spec.metrics.latencyDistributionBuckets field in the Dapr configuration spec for your application(s).

For example, if you aren’t interested in extremely low latency values (1-10ms), you can group them in a single 10ms bucket. Similarly, you can group the high values in a single bucket (1000-5000ms), while keeping more detail in the middle range of values that you are most interested in.

The following Configuration spec example replaces the default 34 buckets with 11 buckets, giving a higher level of granularity in the middle range of values:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: custom-metrics
spec:
    metrics:
        enabled: true
        latencyDistributionBuckets: [10, 25, 40, 50, 70, 100, 150, 200, 500, 1000, 5000]

Transform metrics with regular expressions

You can set regular expressions for every metric exposed by the Dapr sidecar to “transform” their values. See a list of all Dapr metrics.

The name of the rule must match the name of the metric that is transformed. The following example shows how to apply a regular expression for the label method in the metric dapr_runtime_service_invocation_req_sent_total:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  metrics:
    enabled: true
    http:
      increasedCardinality: true
    rules:
      - name: dapr_runtime_service_invocation_req_sent_total
        labels:
        - name: method
          regex:
            "orders/": "orders/.+"

When this configuration is applied, a recorded metric with the method label of orders/a746dhsk293972nz is replaced with orders/.

Using regular expressions to reduce metrics cardinality is considered legacy. We encourage all users to set spec.metrics.http.increasedCardinality to false instead, which is simpler to configure and offers better performance.

References

1.2.2 - How-To: Observe metrics with Prometheus

Use Prometheus to collect time-series data relating to the execution of the Dapr runtime itself

Setup Prometheus Locally

To run Prometheus on your local machine, you can either install and run it as a process or run it as a Docker container.

Install

To install Prometheus, follow the steps outlined here for your OS.

Configure

Now that you’ve installed Prometheus, you need to create a configuration.

Below is an example Prometheus configuration. Save it to a file, for example /tmp/prometheus.yml or C:\Temp\prometheus.yml:

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's the Dapr sidecar's metrics endpoint.
scrape_configs:
  - job_name: 'dapr'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090'] # Replace with Dapr metrics port if not default

Run as Process

Run Prometheus with your configuration to start it collecting metrics from the specified targets.

./prometheus --config.file=/tmp/prometheus.yml --web.listen-address=:8080

We change the port so it doesn’t conflict with Dapr’s own metrics endpoint.

If you are not currently running a Dapr application, the target will show as offline. In order to start collecting metrics you must start Dapr with the metrics port matching the one provided as the target in the configuration.

Once Prometheus is running, you’ll be able to visit its dashboard by visiting http://localhost:8080.

Run as Container

To run Prometheus as a Docker container on your local machine, first ensure you have Docker installed and running.

Then you can run Prometheus as a Docker container using:

docker run \
    --net=host \
    -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus --config.file=/etc/prometheus/prometheus.yml --web.listen-address=:8080

--net=host ensures that the Prometheus instance will be able to connect to any Dapr instances running on the host machine. If you plan to run your Dapr apps in containers as well, you’ll need to run them on a shared Docker network and update the configuration with the correct target address.

Once Prometheus is running, you’ll be able to visit its dashboard by visiting http://localhost:8080.

Setup Prometheus on Kubernetes

Prerequisites

Install Prometheus

  1. First, create a namespace that can be used to deploy the Grafana and Prometheus monitoring tools:

    kubectl create namespace dapr-monitoring

  2. Install Prometheus:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring

If you are a Minikube user or want to disable persistent volumes for development purposes, you can disable them by using the following command:

helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring \
  --set alertmanager.persistence.enabled=false \
  --set pushgateway.persistentVolume.enabled=false \
  --set server.persistentVolume.enabled=false

For automatic discovery of Dapr targets (Service Discovery), use:

  helm install dapr-prom prometheus-community/prometheus -f values.yaml -n dapr-monitoring --create-namespace

values.yaml File

alertmanager:
  persistence:
    enabled: false
pushgateway:
  persistentVolume:
    enabled: false
server:
  persistentVolume:
    enabled: false

# Adds additional scrape configurations to prometheus.yml
# Uses service discovery to find Dapr and Dapr sidecar targets
extraScrapeConfigs: |-
  - job_name: dapr-sidecars
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - action: keep
        regex: "true"
        source_labels:
          - __meta_kubernetes_pod_annotation_dapr_io_enabled
      - action: keep
        regex: "true"
        source_labels:
          - __meta_kubernetes_pod_annotation_dapr_io_enable_metrics
      - action: replace
        replacement: ${1}
        source_labels:
          - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        replacement: ${1}
        source_labels:
          - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        regex: (.*);daprd
        replacement: ${1}-dapr
        source_labels:
          - __meta_kubernetes_pod_annotation_dapr_io_app_id
          - __meta_kubernetes_pod_container_name
        target_label: service
      - action: replace
        replacement: ${1}:9090
        source_labels:
          - __meta_kubernetes_pod_ip
        target_label: __address__

  - job_name: dapr
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - action: keep
        regex: dapr
        source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_name
      - action: keep
        regex: dapr
        source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_part_of
      - action: replace
        replacement: ${1}
        source_labels:
          - __meta_kubernetes_pod_label_app
        target_label: app
      - action: replace
        replacement: ${1}
        source_labels:
          - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        replacement: ${1}
        source_labels:
          - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        replacement: ${1}:9090
        source_labels:
          - __meta_kubernetes_pod_ip
        target_label: __address__
  3. Validation

    Ensure Prometheus is running in your cluster:

    kubectl get pods -n dapr-monitoring

Expected output:

NAME                                                READY   STATUS    RESTARTS   AGE
dapr-prom-kube-state-metrics-9849d6cc6-t94p8        1/1     Running   0          4m58s
dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8   2/2     Running   0          4m58s
dapr-prom-prometheus-node-exporter-5jh8p            1/1     Running   0          4m58s
dapr-prom-prometheus-node-exporter-88gbg            1/1     Running   0          4m58s
dapr-prom-prometheus-node-exporter-bjp9f            1/1     Running   0          4m58s
dapr-prom-prometheus-pushgateway-688665d597-h4xx2   1/1     Running   0          4m58s
dapr-prom-prometheus-server-694fd8d7c-q5d59         2/2     Running   0          4m58s

Access the Prometheus Dashboard

To view the Prometheus dashboard and check service discovery:

kubectl port-forward svc/dapr-prom-prometheus-server 9090:80 -n dapr-monitoring

Open a browser and visit http://localhost:9090. Navigate to Status > Service Discovery to verify that the Dapr targets are discovered correctly.

Prometheus Web UI

You can see the job_name and its discovered targets.

Prometheus Service Discovery

Example

References

1.2.3 - How-To: Observe metrics with Grafana

How to view Dapr metrics in a Grafana dashboard.

Available dashboards

The grafana-system-services-dashboard.json template shows Dapr system component status, dapr-operator, dapr-sidecar-injector, dapr-sentry, and dapr-placement:

Screenshot of the system service dashboard

The grafana-sidecar-dashboard.json template shows Dapr sidecar status, including sidecar health/resources, throughput/latency of HTTP and gRPC, Actor, mTLS, etc.:

Screenshot of the sidecar dashboard

The grafana-actor-dashboard.json template shows Dapr sidecar status, actor invocation throughput/latency, timer/reminder triggers, and turn-based concurrency:

Screenshot of the actor dashboard

Pre-requisites

Setup on Kubernetes

Install Grafana

  1. Add the Grafana Helm repo:

    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    
  2. Install the chart:

    helm install grafana grafana/grafana -n dapr-monitoring
    
  3. Retrieve the admin password for Grafana login:

    kubectl get secret --namespace dapr-monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    

    You will get a password similar to cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1%. Remove the % character from the password to get cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1 as the admin password.

  4. Validate that Grafana is running in your cluster:

    kubectl get pods -n dapr-monitoring
    
    NAME                                                READY   STATUS       RESTARTS   AGE
    dapr-prom-kube-state-metrics-9849d6cc6-t94p8        1/1     Running      0          4m58s
    dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8   2/2     Running      0          4m58s
    dapr-prom-prometheus-node-exporter-5jh8p            1/1     Running      0          4m58s
    dapr-prom-prometheus-node-exporter-88gbg            1/1     Running      0          4m58s
    dapr-prom-prometheus-node-exporter-bjp9f            1/1     Running      0          4m58s
    dapr-prom-prometheus-pushgateway-688665d597-h4xx2   1/1     Running      0          4m58s
    dapr-prom-prometheus-server-694fd8d7c-q5d59         2/2     Running      0          4m58s
    grafana-c49889cff-x56vj                             1/1     Running      0          5m10s
    

Configure Prometheus as data source

First you need to connect Prometheus as a data source to Grafana.

  1. Port-forward to svc/grafana:

    kubectl port-forward svc/grafana 8080:80 -n dapr-monitoring
    
    Forwarding from 127.0.0.1:8080 -> 3000
    Forwarding from [::1]:8080 -> 3000
    Handling connection for 8080
    Handling connection for 8080
    
  2. Open a browser to http://localhost:8080

  3. Log in to Grafana

    • Username = admin
    • Password = Password from above
  4. Select Configuration and Data Sources

    Screenshot of the Grafana add Data Source menu
  5. Add Prometheus as a data source.

    Screenshot of the Prometheus add Data Source
  6. Get your Prometheus HTTP URL

    The Prometheus HTTP URL follows the format http://<prometheus service endpoint>.<namespace>

    Start by getting the Prometheus server endpoint by running the following command:

    kubectl get svc -n dapr-monitoring
    
    NAME                                 TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)             AGE
    dapr-prom-kube-state-metrics         ClusterIP   10.0.174.177      <none>        8080/TCP            7d9h
    dapr-prom-prometheus-alertmanager    ClusterIP   10.0.255.199      <none>        80/TCP              7d9h
    dapr-prom-prometheus-node-exporter   ClusterIP   None              <none>        9100/TCP            7d9h
    dapr-prom-prometheus-pushgateway     ClusterIP   10.0.190.59       <none>        9091/TCP            7d9h
    dapr-prom-prometheus-server          ClusterIP   10.0.172.191      <none>        80/TCP              7d9h
    elasticsearch-master                 ClusterIP   10.0.36.146       <none>        9200/TCP,9300/TCP   7d10h
    elasticsearch-master-headless        ClusterIP   None              <none>        9200/TCP,9300/TCP   7d10h
    grafana                              ClusterIP   10.0.15.229       <none>        80/TCP              5d5h
    kibana-kibana                        ClusterIP   10.0.188.224      <none>        5601/TCP            7d10h
    

    In this guide the server name is dapr-prom-prometheus-server and the namespace is dapr-monitoring, so the HTTP URL will be http://dapr-prom-prometheus-server.dapr-monitoring.

  7. Fill in the following settings:

    • Name: Dapr
    • HTTP URL: http://dapr-prom-prometheus-server.dapr-monitoring
    • Default: On
    • Skip TLS Verify: On
      • Necessary in order to save and test the configuration
    Screenshot of the Prometheus Data Source configuration
  8. Click Save & Test button to verify that the connection succeeded.
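If you prefer scripting the setup over using the UI, the same data source can be created through Grafana's HTTP API. The following is a sketch only; it assumes the port-forward from step 1 is still active and uses the admin password retrieved earlier:

# Create the Prometheus data source via the Grafana API (replace <admin-password>)
curl -s -u admin:<admin-password> \
  -H "Content-Type: application/json" \
  -X POST http://localhost:8080/api/datasources \
  -d '{"name":"Dapr","type":"prometheus","url":"http://dapr-prom-prometheus-server.dapr-monitoring","access":"proxy","isDefault":true}'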

Import dashboards in Grafana

  1. In the upper left corner of the Grafana home screen, click the “+” option, then “Import”.

    You can now import Grafana dashboard templates from release assets for your Dapr version (see the example download command after these steps):

    Screenshot of the Grafana dashboard upload option
  2. Find the dashboard that you imported and enjoy

    Screenshot of Dapr service dashboard
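As referenced above, the dashboard templates are published as release assets for each Dapr version. The following is a sketch for downloading one of them with curl, assuming the assets are attached to the dapr/dapr GitHub release and replacing the version with your own:

# Download a dashboard template from the Dapr release assets (version is a placeholder)
DAPR_VERSION=1.15.0
curl -L -o grafana-sidecar-dashboard.json \
  "https://github.com/dapr/dapr/releases/download/v${DAPR_VERSION}/grafana-sidecar-dashboard.json"

You can then upload the downloaded JSON file through the Import dialog shown above.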

References

Example

1.2.4 - How-To: Set-up New Relic to collect and analyze metrics

Set-up New Relic for Dapr metrics

Prerequisites

  • Perpetually free New Relic account, 100 GB/month of free data ingest, 1 free full access user, unlimited free basic users

Background

New Relic offers a Prometheus OpenMetrics Integration.

This document explains how to install it in your cluster using a Helm chart (recommended).

Installation

  1. Install Helm following the official instructions.

  2. Add the New Relic official Helm chart repository following these instructions

  3. Run the following command to install the New Relic Prometheus OpenMetrics integration via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your New Relic license key:

    helm install nri-prometheus newrelic/nri-prometheus --set licenseKey=YOUR_LICENSE_KEY
    

View Metrics

Dapr Metrics

Dashboard

1.2.5 - How-To: Set up Azure Monitor to search logs and collect metrics

Enable Dapr metrics and logs with Azure Monitor for Azure Kubernetes Service (AKS)

Prerequisites

Enable Prometheus metric scrape using config map

  1. Make sure that Azure Monitor Agents (AMA) are running.

    $ kubectl get pods -n kube-system
    NAME                                                  READY   STATUS    RESTARTS   AGE
    ...
    ama-logs-48kpv                                        2/2     Running   0          2d13h
    ama-logs-mx24c                                        2/2     Running   0          2d13h
    ama-logs-rs-f9bbb9898-vbt6k                           1/1     Running   0          30h
    ama-logs-sm2mz                                        2/2     Running   0          2d13h
    ama-logs-z7p4c                                        2/2     Running   0          2d13h
    ...
    
  2. Apply config map to enable Prometheus metrics endpoint scrape.

You can use azm-config-map.yaml to enable Prometheus metrics endpoint scrape.

If you installed Dapr to a different namespace, you need to change the monitor_kubernetes_pods_namespaces array values. For example:

...
  prometheus-data-collection-settings: |-
    [prometheus_data_collection_settings.cluster]
        interval = "1m"
        monitor_kubernetes_pods = true
        monitor_kubernetes_pods_namespaces = ["dapr-system", "default"]
    [prometheus_data_collection_settings.node]
        interval = "1m"
...

Apply config map:

kubectl apply -f ./azm-config-map.yaml

Install Dapr with JSON formatted logs

  1. Install Dapr with JSON-formatted logs enabled.

    helm install dapr dapr/dapr --namespace dapr-system --set global.logAsJson=true
    
  2. Enable JSON formatted log in Dapr sidecar and add Prometheus annotations.

Note: The Azure Monitor Agents (AMA) only sends the metrics if the Prometheus annotations are set.

Add dapr.io/log-as-json: "true" annotation to your deployment yaml.

Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pythonapp
  namespace: default
  labels:
    app: python
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python
  template:
    metadata:
      labels:
        app: python
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "pythonapp"
        dapr.io/log-as-json: "true"
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/"

...
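After applying the deployment, you can quickly confirm that the annotations were picked up on the running pod. This is a quick check, assuming the example deployment above in the default namespace:

kubectl get pods -l app=python -n default \
  -o jsonpath='{.items[0].metadata.annotations}'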

Search metrics and logs with Azure Monitor

  1. Go to Azure Monitor in the Azure portal.

  2. Search Dapr Logs.

Here is an example query to parse JSON-formatted logs and query logs from Dapr system processes.

ContainerLog
| extend parsed=parse_json(LogEntry)
| project Time=todatetime(parsed['time']), app_id=parsed['app_id'], scope=parsed['scope'],level=parsed['level'], msg=parsed['msg'], type=parsed['type'], ver=parsed['ver'], instance=parsed['instance']
| where level != ""
| sort by Time
  3. Search Metrics.

This query retrieves the process_resident_memory_bytes Prometheus metric for Dapr system processes and renders a time chart.

InsightsMetrics
| where Namespace == "prometheus" and Name == "process_resident_memory_bytes"
| extend tags=parse_json(Tags)
| project TimeGenerated, Name, Val, app=tostring(tags['app'])
| summarize memInBytes=percentile(Val, 99) by bin(TimeGenerated, 1m), app
| where app startswith "dapr-"
| render timechart

References

1.3 - Logging

How to set up logging for the Dapr sidecar and your application

1.3.1 - Logs

Understand Dapr logging

Dapr produces structured logs to stdout, either in plain text or JSON format. By default, all Dapr processes (the runtime, or sidecar, and all control plane services) write logs to the console (stdout) in plain text. To enable JSON-formatted logging, add the --log-as-json command flag when running Dapr processes.

Log schema

Dapr produces logs based on the following schema:

Field    | Description                       | Example
time     | ISO8601 Timestamp                 | 2011-10-05T14:48:00.000Z
level    | Log Level (info/warn/debug/error) | info
type     | Log Type                          | log
msg      | Log Message                       | hello dapr!
scope    | Logging Scope                     | dapr.runtime
instance | Container Name                    | dapr-pod-xxxxx
app_id   | Dapr App ID                       | dapr-app
ver      | Dapr Runtime Version              | 1.9.0

API logging may add other structured fields, as described in the documentation for API logging.

Plain text and JSON formatted logs

  • Plain-text log examples
time="2022-11-01T17:08:48.303776-07:00" level=info msg="starting Dapr Runtime -- version 1.9.0 -- commit v1.9.0-g5dfcf2e" instance=dapr-pod-xxxx scope=dapr.runtime type=log ver=1.9.0
time="2022-11-01T17:08:48.303913-07:00" level=info msg="log level set to: info" instance=dapr-pod-xxxx scope=dapr.runtime type=log ver=1.9.0
  • JSON-formatted log examples
{"instance":"dapr-pod-xxxx","level":"info","msg":"starting Dapr Runtime -- version 1.9.0 -- commit v1.9.0-g5dfcf2e","scope":"dapr.runtime","time":"2022-11-01T17:09:45.788005Z","type":"log","ver":"1.9.0"}
{"instance":"dapr-pod-xxxx","level":"info","msg":"log level set to: info","scope":"dapr.runtime","time":"2022-11-01T17:09:45.788075Z","type":"log","ver":"1.9.0"}

Log formats

Dapr supports printing either plain-text, the default, or JSON-formatted logs.

To use JSON-formatted logs, you need to add additional configuration options when you install Dapr and when you deploy your apps. The recommendation is to use JSON-formatted logs because most log collectors and search engines can parse JSON more easily with built-in parsers.

Enabling JSON logging with the Dapr CLI

When using the Dapr CLI to run an application, pass the --log-as-json option to enable JSON-formatted logs, for example:

dapr run \
  --app-id orderprocessing \
  --resources-path ./components/ \
  --log-as-json \
    -- python3 OrderProcessingService.py

Enabling JSON logging in Kubernetes

The following steps describe how to configure JSON-formatted logs for Kubernetes.

Dapr control plane

All services in the Dapr control plane (such as operator, sentry, etc) support a --log-as-json option to enable JSON-formatted logging.

If you’re deploying Dapr to Kubernetes using a Helm chart, you can enable JSON-formatted logs for Dapr system services by passing the --set global.logAsJson=true option; for example:

helm upgrade --install dapr \
  dapr/dapr \
  --namespace dapr-system \
  --set global.logAsJson=true

Enable JSON-formatted log for Dapr sidecars

You can enable JSON-formatted logs in Dapr sidecars by adding the dapr.io/log-as-json: "true" annotation to the deployment, for example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pythonapp
  labels:
    app: python
spec:
  selector:
    matchLabels:
      app: python
  template:
    metadata:
      labels:
        app: python
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "pythonapp"
        # This enables JSON-formatted logging
        dapr.io/log-as-json: "true"
...

API Logging

API logging enables you to see the API calls your application makes to the Dapr sidecar, to debug issues or monitor the behavior of your application. You can combine Dapr API logging with Dapr log events.

See configure and view Dapr Logs and configure and view Dapr API Logs for more information.

Log collectors

If you run Dapr in a Kubernetes cluster, Fluentd is a popular container log collector. You can use Fluentd with a JSON parser plugin to parse Dapr JSON-formatted logs. This how-to shows how to configure Fluentd in your cluster.

If you are using Azure Kubernetes Service, you can use the built-in agent to collect logs with Azure Monitor without needing to install Fluentd.

Search engines

If you use Fluentd, we recommend using Elastic Search and Kibana. This how-to shows how to set up Elastic Search and Kibana in your Kubernetes cluster.

If you are using the Azure Kubernetes Service, you can use Azure Monitor for containers without installing any additional monitoring tools. Also read How to enable Azure Monitor for containers

References

1.3.2 - How-To: Set up Fluentd, Elastic search and Kibana in Kubernetes

How to install Fluentd, Elastic Search, and Kibana to search logs in Kubernetes

Prerequisites

Install Elastic search and Kibana

  1. Create a Kubernetes namespace for monitoring tools

    kubectl create namespace dapr-monitoring
    
  2. Add the helm repo for Elastic Search

    helm repo add elastic https://helm.elastic.co
    helm repo update
    
  3. Install Elastic Search using Helm

    By default, the chart creates 3 replicas which must be on different nodes. If your cluster has fewer than 3 nodes, specify a smaller number of replicas. For example, this sets the number of replicas to 1:

    helm install elasticsearch elastic/elasticsearch --version 7.17.3 -n dapr-monitoring --set replicas=1
    

    Otherwise:

    helm install elasticsearch elastic/elasticsearch --version 7.17.3 -n dapr-monitoring
    

    If you are using minikube or simply want to disable persistent volumes for development purposes, you can do so by using the following command:

    helm install elasticsearch elastic/elasticsearch --version 7.17.3 -n dapr-monitoring --set persistence.enabled=false,replicas=1
    
  4. Install Kibana

    helm install kibana elastic/kibana --version 7.17.3 -n dapr-monitoring
    
  5. Ensure that Elastic Search and Kibana are running in your Kubernetes cluster

    $ kubectl get pods -n dapr-monitoring
    NAME                            READY   STATUS    RESTARTS   AGE
    elasticsearch-master-0          1/1     Running   0          6m58s
    kibana-kibana-95bc54b89-zqdrk   1/1     Running   0          4m21s
    

Install Fluentd

  1. Install config map and Fluentd as a daemonset

    Download these config files:

    Note: If you already have Fluentd running in your cluster, please enable the nested json parser so that it can parse JSON-formatted logs from Dapr.

    Apply the configurations to your cluster:

    kubectl apply -f ./fluentd-config-map.yaml
    kubectl apply -f ./fluentd-dapr-with-rbac.yaml
    
  2. Ensure that Fluentd is running as a daemonset. The number of FluentD instances should be the same as the number of cluster nodes. In the example below, there is only one node in the cluster:

    $ kubectl get pods -n kube-system -w
    NAME                          READY   STATUS    RESTARTS   AGE
    coredns-6955765f44-cxjxk      1/1     Running   0          4m41s
    coredns-6955765f44-jlskv      1/1     Running   0          4m41s
    etcd-m01                      1/1     Running   0          4m48s
    fluentd-sdrld                 1/1     Running   0          14s
    

Install Dapr with JSON formatted logs

  1. Install Dapr with JSON-formatted logs enabled

    helm repo add dapr https://dapr.github.io/helm-charts/
    helm repo update
    helm install dapr dapr/dapr --namespace dapr-system --set global.logAsJson=true
    
  2. Enable JSON formatted log in Dapr sidecar

    Add the dapr.io/log-as-json: "true" annotation to your deployment yaml. For example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pythonapp
      namespace: default
      labels:
        app: python
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: python
      template:
        metadata:
          labels:
            app: python
          annotations:
            dapr.io/enabled: "true"
            dapr.io/app-id: "pythonapp"
            dapr.io/log-as-json: "true"
    ...
    

Search logs

Note: Elastic Search takes some time to index the logs that Fluentd sends.

  1. Port-forward from localhost to svc/kibana-kibana

    $ kubectl port-forward svc/kibana-kibana 5601 -n dapr-monitoring
    Forwarding from 127.0.0.1:5601 -> 5601
    Forwarding from [::1]:5601 -> 5601
    Handling connection for 5601
    Handling connection for 5601
    
  2. Browse to http://localhost:5601

  3. Expand the drop-down menu and click Management → Stack Management

    Stack Management item under Kibana Management menu options

  4. On the Stack Management page, select Data → Index Management and wait until dapr-* is indexed.

    Index Management view on Kibana Stack Management page

  5. Once dapr-* is indexed, click on Kibana → Index Patterns and then the Create index pattern button.

    Kibana create index pattern button

  6. Define a new index pattern by typing dapr* into the Index Pattern name field, then click the Next step button to continue.

    Kibana define an index pattern page

  7. Configure the primary time field to use with the new index pattern by selecting the @timestamp option from the Time field drop-down. Click the Create index pattern button to complete creation of the index pattern.

    Kibana configure settings page for creating an index pattern

  8. The newly created index pattern should be shown. Confirm that the fields of interest such as scope, type, app_id, level, etc. are being indexed by using the search box in the Fields tab.

    Note: If you cannot find the indexed field, please wait. The time it takes to search across all indexed fields depends on the volume of data and the size of the resources that Elastic Search is running on.

    View of created Kibana index pattern

  9. To explore the indexed data, expand the drop-down menu and click Analytics → Discover.

    Discover item under Kibana Analytics menu options

  10. In the search box, type in a query string such as scope:* and click the Refresh button to view the results.

    Note: This can take a long time. The time it takes to return all results depends on the volume of data and the size of the resources that Elastic Search is running on.

    Using the search box in the Kibana Analytics Discover page

References

1.3.3 - How-To: Set-up New Relic for Dapr logging

Set-up New Relic for Dapr logging

Prerequisites

  • Perpetually free New Relic account, 100 GB/month of free data ingest, 1 free full access user, unlimited free basic users

Background

New Relic offers a Fluent Bit output plugin to easily forward your logs to New Relic Logs. This plugin is also provided in a standalone Docker image that can be installed in a Kubernetes cluster in the form of a DaemonSet, which we refer as the Kubernetes plugin.

This document explains how to install it in your cluster, either using a Helm chart (recommended), or manually by applying Kubernetes manifests.

Installation

  1. Install Helm following the official instructions.

  2. Add the New Relic official Helm chart repository following these instructions

  3. Run the following command to install the New Relic Logging Kubernetes plugin via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your New Relic license key:

  • Helm 3

    helm install newrelic-logging newrelic/newrelic-logging --set licenseKey=YOUR_LICENSE_KEY
    
  • Helm 2

    helm install newrelic/newrelic-logging --name newrelic-logging --set licenseKey=YOUR_LICENSE_KEY
    

For EU users, add --set endpoint=https://log-api.eu.newrelic.com/log/v1 to any of the helm install commands above.

By default, tailing is set to /var/log/containers/*.log. To change this setting, provide your preferred path by adding --set fluentBit.path=DESIRED_PATH to any of the helm install commands above.
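For example, a single Helm 3 command that combines the EU endpoint and an explicit tail path could look like the following sketch (the path shown is simply the default value, for illustration):

helm install newrelic-logging newrelic/newrelic-logging \
  --set licenseKey=YOUR_LICENSE_KEY \
  --set endpoint=https://log-api.eu.newrelic.com/log/v1 \
  --set "fluentBit.path=/var/log/containers/*.log"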

Install the Kubernetes manifest

  1. Download the following 3 manifest files into your current working directory:

    curl https://raw.githubusercontent.com/newrelic/helm-charts/master/charts/newrelic-logging/k8s/fluent-conf.yml > fluent-conf.yml
    curl https://raw.githubusercontent.com/newrelic/helm-charts/master/charts/newrelic-logging/k8s/new-relic-fluent-plugin.yml > new-relic-fluent-plugin.yml
    curl https://raw.githubusercontent.com/newrelic/helm-charts/master/charts/newrelic-logging/k8s/rbac.yml > rbac.yml
    
  2. In the downloaded new-relic-fluent-plugin.yml file, replace the placeholder value LICENSE_KEY with your New Relic license key.

    For EU users, replace the ENDPOINT environment variable to https://log-api.eu.newrelic.com/log/v1.

  3. Once the License key has been added, run the following command in your terminal or command-line interface:

    kubectl apply -f .
    
  4. [OPTIONAL] You can configure how the plugin parses the data by editing the parsers.conf section in the fluent-conf.yml file. For more information, see Fluent Bit’s documentation on Parsers configuration.

    By default, tailing is set to /var/log/containers/*.log. To change this setting, replace the default path with your preferred path in the new-relic-fluent-plugin.yml file.

View Logs

Dapr Annotations

Search

2 - Hosting options for Dapr

How to deploy Dapr into your environment.

2.1 - Run Dapr in self-hosted mode

How to get Dapr up and running in your local environment

2.1.1 - Overview of Dapr in self-hosted mode

Overview of how to get Dapr running on a Windows/Linux/MacOS machine

Overview

Dapr can be configured to run in self-hosted mode on your local developer machine or on production VMs. Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.

Initialization

Dapr can be initialized with Docker (default) or in slim-init mode. It can also be initialized and run in offline or airgap environments.

The default Docker setup provides out of the box functionality with the following containers and configuration:

  • A Redis container configured to serve as the default component for both state management and publish/subscribe.
  • A Zipkin container for diagnostics and tracing.
  • A default Dapr configuration and components installed in $HOME/.dapr/ (Mac/Linux) or %USERPROFILE%\.dapr\ (Windows).

The dapr-placement service is responsible for managing the actor distribution scheme and key range settings. This service is not launched as a container and is only required if you are using Dapr actors. For more information on the actor Placement service read actor overview.

Diagram of Dapr in self-hosted Docker mode

Launching applications with Dapr

You can use the dapr run CLI command to launch a Dapr sidecar process along with your application. Additional arguments and flags can be found here.

Name resolution

Dapr uses a name resolution component for service discovery within the service invocation building block. By default Dapr uses mDNS when in self-hosted mode.

If you are running Dapr on virtual machines or where mDNS is not available, then you can use the HashiCorp Consul component for name resolution.
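For example, switching to Consul is done through the Dapr Configuration resource rather than a component file. The following is a minimal sketch of a self-hosted config.yaml; the file path, the selfRegister option, and the app command are assumptions to adapt to your environment:

# Write a configuration that switches name resolution to Consul
cat > ./config.yaml <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "consul"
    configuration:
      selfRegister: true
EOF

# Point the sidecar at the configuration when launching your app
dapr run --app-id myapp --app-port 3000 --config ./config.yaml -- python3 app.py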

2.1.2 - How-To: Run Dapr in self-hosted mode with Docker

How to deploy and run Dapr in self-hosted mode using Docker

This article provides guidance on running Dapr with Docker on a Windows/Linux/macOS machine or VM.

Prerequisites

Initialize Dapr environment

To initialize the Dapr control plane containers and create a default configuration file, run:

dapr init

Run both app and sidecar as a process

The dapr run CLI command can be used to launch a Dapr sidecar along with your application:

dapr run --app-id myapp --app-port 5000 -- dotnet run

This command will launch both the daprd sidecar binary and run dotnet run, launching your application.

Run app as a process and sidecar as a Docker container

Alternately, if you are running Dapr in a Docker container and your app as a process on the host machine, then you need to configure Docker to use the host network so that Dapr and the app can share a localhost network interface.

If you are running your Docker daemon on a Linux host, you can run the following to launch Dapr:

docker run --net="host" --mount type=bind,source="$(pwd)"/components,target=/components daprio/daprd:edge ./daprd -app-id <my-app-id> -app-port <my-app-port>

Then you can run your app on the host and they should connect over the localhost network interface.
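As a quick sanity check, you can call the sidecar's health endpoint from the host. This assumes the default HTTP port 3500; you can set it explicitly by adding -dapr-http-port 3500 to the daprd command above:

curl -i http://localhost:3500/v1.0/healthz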

Run both app and Dapr in a single Docker container

For development purposes ONLY

It is not recommended to run both the Dapr runtime and an application inside the same container. However, it is possible to do so for local development scenarios.

In order to do this, you’ll need to write a Dockerfile that installs the Dapr runtime, Dapr CLI and your app code. You can then invoke both the Dapr runtime and your app code using the Dapr CLI.

Below is an example of a Dockerfile which achieves this:

FROM python:3.7.1
# Install dapr CLI
RUN wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash

# Install daprd
ARG DAPR_BUILD_DIR
COPY $DAPR_BUILD_DIR /opt/dapr
ENV PATH="/opt/dapr/:${PATH}"
RUN dapr init --slim

# Install your app
WORKDIR /app
COPY python .
RUN pip install requests
ENTRYPOINT ["dapr"]
CMD ["run", "--app-id", "pythonapp", "--app-port", "3000", "--", "python3", "app.py"]

Remember that if Dapr needs to communicate with other components, such as Redis, these also need to be made accessible to it.

Run on a Docker network

If you have multiple instances of Dapr running in Docker containers and want them to be able to communicate with each other (for example, for service invocation), then you’ll need to create a shared Docker network and make sure those Dapr containers are attached to it.

You can create a simple Docker network using:

docker network create my-dapr-network

When running your Docker containers, you can attach them to the network using:

docker run --net=my-dapr-network ...

Each container will receive a unique IP on that network and be able to communicate with other containers on that network.
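For example, a Redis container attached to the same network can then be referenced from your Dapr component definitions by its container name (the names below are illustrative):

# Run Redis on the shared network; Dapr containers can reach it as redis:6379
docker run -d --name redis --net=my-dapr-network redis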

Run using Docker-Compose

Docker Compose can be used to define multi-container application configurations. If you wish to run multiple apps with Dapr sidecars locally without Kubernetes then it is recommended to use a Docker Compose definition (docker-compose.yml).

The syntax and tooling of Docker Compose is outside the scope of this article, however, it is recommended you refer to the official Docker documentation for further details.

In order to run your applications using Dapr and Docker Compose you’ll need to define the sidecar pattern in your docker-compose.yml. For example:

version: '3'
services:
  nodeapp:
    build: ./node
    ports:
      - "50001:50001" # Dapr instances communicate over gRPC so we need to expose the gRPC port
    depends_on:
      - redis
      - placement
    networks:
      - hello-dapr
  nodeapp-dapr:
    image: "daprio/daprd:edge"
    command: [
      "./daprd",
     "--app-id", "nodeapp",
     "--app-port", "3000",
     "--placement-host-address", "placement:50006", # Dapr's placement service can be reached via the docker DNS entry
     "--scheduler-host-address", "scheduler:50007", # Dapr's scheduler service can be reached via the docker DNS entry
     "--resources-path", "./components"
     ]
    volumes:
        - "./components/:/components" # Mount our components folder for the runtime to use. The mounted location must match the --resources-path argument.
    depends_on:
      - nodeapp
    network_mode: "service:nodeapp" # Attach the nodeapp-dapr service to the nodeapp network namespace

  ... # Deploy other daprized services and components (i.e. Redis)

  placement:
    image: "daprio/placement"
    command: ["./placement", "--port", "50006"]
    ports:
      - "50006:50006"

  scheduler:
    image: "daprio/scheduler"
    command: ["./scheduler", "--port", "50007", "--etcd-data-dir", "/data"]
    ports:
      - "50007:50007"
    user: root
    volumes:
    - "./dapr-etcd-data/:/data"
  
networks:
  hello-dapr: null

For those running the Docker daemon on a Linux host, you can also use network_mode: host to leverage host networking if needed.

To further learn how to run Dapr with Docker Compose, see the Docker-Compose Sample.

The above example also includes a scheduler definition that uses a non-persistent data store for testing and development purposes.

Run on Kubernetes

If your deployment target is Kubernetes please use Dapr’s first-class integration. Refer to the Dapr on Kubernetes docs.

Name resolution

Dapr by default uses mDNS as the name resolution component in self-hosted mode for service invocation. If you are running Dapr on virtual machines or where mDNS is not available, then you can use the HashiCorp Consul component for name resolution.

Docker images

Dapr provides a number of prebuilt Docker images for different components. You should select the relevant image for your desired binary, architecture, and tag/version.

Images

There are published Docker images for each of the Dapr components available on Docker Hub.

Tags

Linux/amd64

  • latest: The latest release version, ONLY use for development purposes.
  • edge: The latest edge build (master).
  • major.minor.patch: A release version.
  • major.minor.patch-rc.iteration: A release candidate.

Linux/arm/v7

  • latest-arm: The latest release version for ARM, ONLY use for development purposes.
  • edge-arm: The latest edge build for ARM (master).
  • major.minor.patch-arm: A release version for ARM.
  • major.minor.patch-rc.iteration-arm: A release candidate for ARM.

2.1.3 - How-To: Run Dapr in self-hosted mode with Podman

How to deploy and run Dapr in self-hosted mode using Podman

This article provides guidance on running Dapr with Podman on a Windows/Linux/macOS machine or VM.

Prerequisites

Initialize Dapr environment

To initialize the Dapr control plane containers and create a default configuration file, run:

dapr init --container-runtime podman

Run both app and sidecar as a process

The dapr run CLI command can be used to launch a Dapr sidecar along with your application:

dapr run --app-id myapp --app-port 5000 -- dotnet run

This command launches both the daprd sidecar and your application.

Run app as a process and sidecar as a Podman container

Alternately, if you are running Dapr in a Podman container and your app as a process on the host machine, then you need to configure Podman to use the host network so that Dapr and the app can share a localhost network interface.

If you are running Podman on a Linux host, you can run the following to launch Dapr:

podman run --network="host" --mount type=bind,source="$(pwd)"/components,target=/components daprio/daprd:edge ./daprd -app-id <my-app-id> -app-port <my-app-port>

Then you can run your app on the host and they should connect over the localhost network interface.

Uninstall Dapr environment

To uninstall Dapr completely, run:

dapr uninstall --container-runtime podman --all

2.1.4 - How-To: Run Dapr in an offline or airgap environment

How to deploy and run Dapr in self-hosted mode in an airgap environment

Overview

By default, Dapr initialization downloads binaries and pulls images from the network to set up the development environment. However, Dapr also supports offline or airgap installation using pre-downloaded artifacts, either with a Docker or slim environment. The artifacts for each Dapr release are built into a Dapr Installer Bundle which can be downloaded. By using this installer bundle with the Dapr CLI init command, you can install Dapr into environments that do not have any network access.

Setup

Before airgap initialization, it is required to download a Dapr Installer Bundle beforehand, containing the CLI, runtime and dashboard packaged together. This eliminates the need to download binaries as well as Docker images when initializing Dapr locally.

  1. Download the Dapr Installer Bundle for the specific release version. For example, daprbundle_linux_amd64.tar.gz, daprbundle_windows_amd64.zip.

  2. Unpack it.

  3. To install the Dapr CLI, copy the daprbundle/dapr (dapr.exe for Windows) binary to the desired location:

    • For Linux/MacOS - /usr/local/bin
    • For Windows, create a directory and add this to your System PATH. For example create a directory called c:\dapr and add this directory to your path, by editing your system environment variable.

    Note: If the Dapr CLI is not moved to the desired location, you can use the local dapr CLI binary from the bundle. The steps above move it to the usual location and add it to your path.

Initialize Dapr environment

Dapr can be initialized in an airgap environment with or without Docker containers.

Initialize Dapr with Docker

(Prerequisite: Docker is available in the environment)

Move to the bundle directory and run the following command:

dapr init --from-dir .

For Linux users, if you run your Docker commands with sudo, you need to use sudo dapr init.

If you are not running the above command from the bundle directory, provide the full path to the bundle directory as input. For example, assuming the bundle directory path is $HOME/daprbundle, run dapr init --from-dir $HOME/daprbundle to have the same behavior.

The output should look similar to the following:

⌛  Making the jump to hyperspace...
â„šī¸  Installing runtime version latest
↘  Extracting binaries and setting up components... Loaded image: daprio/dapr:$version
✅  Extracting binaries and setting up components...
✅  Extracted binaries and completed components set up.
â„šī¸  daprd binary has been installed to $HOME/.dapr/bin.
â„šī¸  dapr_placement container is running.
â„šī¸  Use `docker ps` to check running containers.
✅  Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started

Note: To emulate online Dapr initialization using dapr init, you can also run Redis and Zipkin containers as follows:

1. docker run --name "dapr_zipkin" --restart always -d -p 9411:9411 openzipkin/zipkin
2. docker run --name "dapr_redis" --restart always -d -p 6379:6379 redislabs/rejson

Initialize Dapr without Docker

Alternatively, to have the CLI not install any default configuration files or run any Docker containers, use the --slim flag with the init command. Only the Dapr binaries will be installed.

dapr init --slim --from-dir .

The output should look similar to the following:

⌛  Making the jump to hyperspace...
â„šī¸  Installing runtime version latest
↙  Extracting binaries and setting up components... 
✅  Extracting binaries and setting up components...
✅  Extracted binaries and completed components set up.
â„šī¸  daprd binary has been installed to $HOME.dapr/bin.
â„šī¸  placement binary has been installed to $HOME/.dapr/bin.
✅  Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started

2.1.5 - How-To: Run Dapr in self-hosted mode without Docker

How to deploy and run Dapr in self-hosted mode without Docker installed on the local machine

Prerequisites

Initialize Dapr without containers

The Dapr CLI provides an option to initialize Dapr using slim init, without the default creation of a development environment with a dependency on Docker. To initialize Dapr with slim init, after installing the Dapr CLI, use the following command:

dapr init --slim

Two different binaries are installed:

  • daprd
  • placement

The placement binary is needed to enable actors in a Dapr self-hosted installation.

In slim init mode, no default components (such as Redis) are installed for state management or pub/sub. This means that, aside from service invocation, no other building block functionality is available “out-of-the-box” on install. Instead, you can set up your own environment and custom components.

Actor-based service invocation is possible if a state store is configured, as explained in the following sections.

Perform service invocation

See the Hello Dapr slim sample for an example on how to perform service invocation in slim init mode.

Enable state management or pub/sub

See documentation around configuring Redis in self-hosted mode without Docker to enable a local state store or pub/sub broker for messaging.

Enable actors

To enable actor placement:

By default, the placement binary is installed in:

  • For Linux/MacOS: $HOME/.dapr/bin
  • For Windows: %USERPROFILE%\.dapr\bin

Run the placement binary:

$ $HOME/.dapr/bin/placement

INFO[0000] starting Dapr Placement Service -- version 1.0.0-rc.1 -- commit 13ae49d  instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0000] log level set to: info                        instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0000] metrics server started on :9090/              instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.metrics type=log ver=1.0.0-rc.1
INFO[0000] Raft server is starting on 127.0.0.1:8201...  instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement.raft type=log ver=1.0.0-rc.1
INFO[0000] placement service started on port 50005       instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0000] Healthz server is listening on :8080          instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0001] cluster leadership acquired                   instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0001] leader is established.                        instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1

When running standalone placement on Windows, specify port 6050:

%USERPROFILE%/.dapr/bin/placement.exe -port 6050

time="2022-10-17T14:56:55.4055836-05:00" level=info msg="starting Dapr Placement Service -- version 1.9.0 -- commit fdce5f1f1b76012291c888113169aee845f25ef8" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:55.4066226-05:00" level=info msg="log level set to: info" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:55.4067306-05:00" level=info msg="metrics server started on :9090/" instance=LAPTOP-OMK50S19 scope=dapr.metrics type=log ver=1.9.0
time="2022-10-17T14:56:55.4077529-05:00" level=info msg="Raft server is starting on 127.0.0.1:8201..." instance=LAPTOP-OMK50S19 scope=dapr.placement.raft type=log ver=1.9.0
time="2022-10-17T14:56:55.4077529-05:00" level=info msg="placement service started on port 6050" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:55.4082772-05:00" level=info msg="Healthz server is listening on :8080" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:56.8232286-05:00" level=info msg="cluster leadership acquired" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:56.8232286-05:00" level=info msg="leader is established." instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0

Now, to run an application with actors enabled, you can follow the samples created for the Dapr SDKs:

Update the state store configuration files to match the Redis host and password of your setup.

Enable it as an actor state store by making the metadata section similar to the sample Java Redis component definition:

  - name: actorStateStore
    value: "true"
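Putting this together, a minimal sketch of a Redis state store component that also acts as the actor state store could look like the following; the host, password, and file location are assumptions for a local setup:

cat > ./components/statestore.yaml <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore
    value: "true"
EOF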

Clean up

When finished, remove the binaries by following Uninstall Dapr in a self-hosted environment.

Next steps

2.1.6 - How-to: Persist Scheduler Jobs

Configure Scheduler to persist its database to make it resilient to restarts

The Scheduler service is responsible for writing jobs to its embedded database and scheduling them for execution. By default, the Scheduler service database writes this data to the local volume dapr_scheduler, meaning that this data is persisted across restarts.

The host file location for this local volume is typically located at either /var/lib/docker/volumes/dapr_scheduler/_data or ~/.local/share/containers/storage/volumes/dapr_scheduler/_data, depending on your container runtime. Note that if you are using Docker Desktop, this volume is located in the Docker Desktop VM’s filesystem, which can be accessed using:

docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh

The Scheduler persistent volume can be modified with a custom volume that is pre-existing, or is created by Dapr.

dapr init --scheduler-volume my-scheduler-volume
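For example, with Docker as the container runtime, you can pre-create the volume and inspect it before pointing dapr init at it (the volume name is illustrative):

docker volume create my-scheduler-volume
docker volume inspect my-scheduler-volume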

2.1.7 - Steps to upgrade Dapr in a self-hosted environment

Follow these steps to upgrade Dapr in self-hosted mode and ensure a smooth upgrade
  1. Uninstall the current Dapr deployment:

    dapr uninstall --all
    
  2. Download and install the latest CLI by visiting this guide.

  3. Initialize the Dapr runtime:

    dapr init
    
  4. Ensure you are using the latest version of Dapr (v1.15.5) with:

    $ dapr --version
    
    CLI version: 1.15
    Runtime version: 1.15
    

2.1.8 - Uninstall Dapr in a self-hosted environment

Steps to remove Dapr from your local machine

The following CLI command removes the Dapr sidecar binaries and the placement container:

dapr uninstall

The above command will not remove the Redis or Zipkin containers that were installed during dapr init by default, just in case you were using them for other purposes. To remove Redis, Zipkin, Actor Placement container, as well as the default Dapr directory located at $HOME/.dapr or %USERPROFILE%\.dapr\, run:

dapr uninstall --all

2.2 - Deploy and run Dapr in Kubernetes mode

How to get Dapr up and running on your Kubernetes cluster

2.2.1 - Overview of Dapr on Kubernetes

Overview of how to get Dapr running on your Kubernetes cluster

Dapr can be configured to run on any supported versions of Kubernetes. To achieve this, Dapr begins by deploying the following Kubernetes services, which provide first-class integration to make running applications with Dapr easy.

Kubernetes services   | Description
dapr-operator         | Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.)
dapr-sidecar-injector | Injects Dapr into annotated deployment pods and adds the environment variables DAPR_HTTP_PORT and DAPR_GRPC_PORT to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values.
dapr-placement        | Used for actors only. Creates mapping tables that map actor instances to pods
dapr-sentry           | Manages mTLS between services and acts as a certificate authority. For more information read the security overview
dapr-scheduler        | Provides distributed job scheduling capabilities used by the Jobs API, Workflow API, and Actor Reminders

Supported versions

Dapr support for Kubernetes is aligned with Kubernetes Version Skew Policy.

Deploying Dapr to a Kubernetes cluster

Read Deploy Dapr on a Kubernetes cluster to learn how to deploy Dapr to your Kubernetes cluster.

Adding Dapr to a Kubernetes deployment

Deploying and running a Dapr-enabled application into your Kubernetes cluster is as simple as adding a few annotations to the pods schema. In the following example, your Kubernetes pod is annotated to:

  • Give your service an id and port known to Dapr
  • Turn on tracing through configuration
  • Launch the Dapr sidecar container
  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nodeapp"
    dapr.io/app-port: "3000"
    dapr.io/config: "tracing"

For more information, check Dapr annotations.

Pulling container images from private registries

Dapr works seamlessly with any user application container image, regardless of its origin. Simply initialize Dapr and add the Dapr annotations to your Kubernetes definition to add the Dapr sidecar.

The Dapr control plane and sidecar images come from the daprio Docker Hub container registry, which is a public registry.

For information about:

Tutorials

Work through the Hello Kubernetes tutorial to learn more about getting started with Dapr on your Kubernetes cluster.

2.2.2 - Kubernetes cluster setup

How to create a Kubernetes cluster

2.2.2.1 - Set up a Minikube cluster

How to setup a Minikube cluster

Prerequisites

Start the Minikube cluster

  1. If applicable for your project, set the default VM driver.

    minikube config set vm-driver [driver_name]
    
  2. Start the cluster. If necessary, specify version 1.13.x or newer of Kubernetes with --kubernetes-version

    minikube start --cpus=4 --memory=4096
    
  3. Enable the Minikube dashboard and ingress add-ons.

    # Enable dashboard
    minikube addons enable dashboard
    
    # Enable ingress
    minikube addons enable ingress
    

Install Helm v3 (optional)

If you are using Helm, install the Helm v3 client.

Troubleshooting

The external IP address of the load balancer is not shown by kubectl get svc.

In Minikube, EXTERNAL-IP in kubectl get svc shows <pending> state for your service. In this case, you can run minikube service [service_name] to open your service without an external IP address.

$ kubectl get svc
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
...
calculator-front-end        LoadBalancer   10.103.98.37     <pending>     80:30534/TCP       25h
calculator-front-end-dapr   ClusterIP      10.107.128.226   <none>        80/TCP,50001/TCP   25h
...

$ minikube service calculator-front-end
|-----------|----------------------|-------------|---------------------------|
| NAMESPACE |         NAME         | TARGET PORT |            URL            |
|-----------|----------------------|-------------|---------------------------|
| default   | calculator-front-end |             | http://192.168.64.7:30534 |
|-----------|----------------------|-------------|---------------------------|
🎉  Opening kubernetes service  default/calculator-front-end in default browser...

2.2.2.2 - Set up a KiND cluster

How to set up a KiND cluster

Prerequisites

Install and configure KiND

Refer to the KiND documentation to install.

If you are using Docker Desktop, verify that you have the recommended settings.

Configure and create the KiND cluster

  1. Create a file named kind-cluster-config.yaml, and paste the following:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
      kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
      extraPortMappings:
      - containerPort: 80
        hostPort: 8081
        protocol: TCP
      - containerPort: 443
        hostPort: 8443
        protocol: TCP
    - role: worker
    - role: worker
    

    This cluster configuration:

    • Requests KiND to spin up a Kubernetes cluster comprised of a control plane and two worker nodes.
    • Allows for future setup of ingresses.
    • Exposes container ports to the host machine.
  2. Run the kind create cluster command, providing the cluster configuration file:

    kind create cluster --config kind-cluster-config.yaml
    

    Expected output

    Creating cluster "kind" ...
     ✓ Ensuring node image (kindest/node:v1.21.1) đŸ–ŧ
     ✓ Preparing nodes đŸ“Ļ đŸ“Ļ đŸ“Ļ
     ✓ Writing configuration 📜
     ✓ Starting control-plane đŸ•šī¸
     ✓ Installing CNI 🔌
     ✓ Installing StorageClass 💾
     ✓ Joining worker nodes 🚜
    Set kubectl context to "kind-kind"
    You can now use your cluster with:
    
    kubectl cluster-info --context kind-kind
    
    Thanks for using kind! 😊
    

Initialize and run Dapr

  1. Initialize Dapr in Kubernetes.

    dapr init --kubernetes
    

    Once Dapr finishes initializing, you can use its core components on the cluster.

  2. Verify the status of the Dapr components:

    dapr status -k
    

    Expected output

      NAME                   NAMESPACE    HEALTHY  STATUS   REPLICAS  VERSION  AGE  CREATED
      dapr-sentry            dapr-system  True     Running  1         1.5.1    53s  2021-12-10 09:27.17
      dapr-operator          dapr-system  True     Running  1         1.5.1    53s  2021-12-10 09:27.17
      dapr-sidecar-injector  dapr-system  True     Running  1         1.5.1    53s  2021-12-10 09:27.17
      dapr-dashboard         dapr-system  True     Running  1         0.9.0    53s  2021-12-10 09:27.17
      dapr-placement-server  dapr-system  True     Running  1         1.5.1    52s  2021-12-10 09:27.18
    
  3. Forward a port to Dapr dashboard:

    dapr dashboard -k -p 9999
    
  4. Navigate to http://localhost:9999 to validate a successful setup.

Install metrics-server on the Kind Kubernetes Cluster

  1. Get metrics-server manifests

    wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    
  2. Add insecure TLS parameter to the components.yaml file

    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls   # <==== Add this
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
    
  3. Apply modified manifest

    kubectl apply -f components.yaml
    

2.2.2.3 - Set up an Azure Kubernetes Service (AKS) cluster

Learn how to set up an Azure Kubernetes Cluster

This guide walks you through installing an Azure Kubernetes Service (AKS) cluster. If you need more information, refer to Quickstart: Deploy an AKS cluster using the Azure CLI

Prerequisites

Deploy an AKS cluster

  1. In the terminal, log into Azure.

    az login
    
  2. Set your default subscription:

    az account set -s [your_subscription_id]
    
  3. Create a resource group.

    az group create --name [your_resource_group] --location [region]
    
  4. Create an AKS cluster. To use a specific version of Kubernetes, use --kubernetes-version (1.13.x or newer version required).

    az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --location [region] --node-count 2 --enable-app-routing --generate-ssh-keys
    
  5. Get the access credentials for the AKS cluster.

    az aks get-credentials -n [your_aks_cluster_name] -g [your_resource_group]
    

AKS Edge Essentials

To create a single-machine K8s/K3s Linux-only cluster using Azure Kubernetes Service (AKS) Edge Essentials, you can follow the quickstart guide available at AKS Edge Essentials quickstart guide.

2.2.2.4 - Set up a Google Kubernetes Engine (GKE) cluster

Set up a Google Kubernetes Engine cluster

Prerequisites

Create a new cluster

Create a GKE cluster by running the following:

$ gcloud services enable container.googleapis.com && \
  gcloud container clusters create $CLUSTER_NAME \
  --zone $ZONE \
  --project $PROJECT_ID

For more options:

Sidecar injection for private GKE clusters

Sidecar injection for private clusters requires extra steps.

In private GKE clusters, an automatically created firewall rule for master access doesn’t open port 4000, which Dapr needs for sidecar injection.

Review the relevant firewall rule:

$ gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"

Replace the existing rule and allow Kubernetes master access to port 4000:

$ gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:4000

Retrieve your credentials for kubectl

Run the following command to retrieve your credentials:

$ gcloud container clusters get-credentials $CLUSTER_NAME \
    --zone $ZONE \
    --project $PROJECT_ID

Install Helm v3 (optional)

If you are using Helm, install the Helm v3 client.

Troubleshooting

Kubernetes dashboard permissions

Let’s say you receive an error message similar to the following:

configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list configmaps in the namespace "default"

Execute this command:

kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

2.2.2.5 - Set up an Elastic Kubernetes Service (EKS) cluster

Learn how to set up an EKS Cluster

This guide walks you through installing an Elastic Kubernetes Service (EKS) cluster. If you need more information, refer to Create an Amazon EKS cluster

Prerequisites

Deploy an EKS cluster

  1. In the terminal, log into AWS.

    aws configure
    
  2. Create a new file called cluster-config.yaml and add the content below to it, replacing [your_cluster_name], [your_cluster_region], and [your_k8s_version] with the appropriate values:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    
    metadata:
      name: [your_cluster_name]
      region: [your_cluster_region]
      version: [your_k8s_version]
      tags:
        karpenter.sh/discovery: [your_cluster_name]
    
    iam:
      withOIDC: true
    
    managedNodeGroups:
      - name: mng-od-4vcpu-8gb
        desiredCapacity: 2
        minSize: 1
        maxSize: 5
        instanceType: c5.xlarge
        privateNetworking: true
    
    addons:
      - name: vpc-cni 
        attachPolicyARNs:
          - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
      - name: coredns
        version: latest 
      - name: kube-proxy
        version: latest
      - name: aws-ebs-csi-driver
        wellKnownPolicies: 
          ebsCSIController: true
    
  3. Create the cluster by running the following command:

    eksctl create cluster -f cluster-config.yaml
    
  4. Verify the kubectl context:

    kubectl config current-context
    

Add Dapr requirements for sidecar access and default storage class:

  1. Update the security group rule to allow the EKS cluster to communicate with the Dapr Sidecar by creating an inbound rule for port 4000.

    aws ec2 authorize-security-group-ingress --region [your_aws_region] \
    --group-id [your_security_group] \
    --protocol tcp \
    --port 4000 \
    --source-group [your_security_group]
    
  2. Add a default storage class if you don’t have one:

kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Install Dapr

Install Dapr on your cluster by running:

dapr init -k

You should see the following response:

⌛  Making the jump to hyperspace...
â„šī¸  Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced

â„šī¸  Container images will be pulled from Docker Hub
✅  Deploying the Dapr control plane with latest version to your cluster...
✅  Deploying the Dapr dashboard with latest version to your cluster...
✅  Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://docs.dapr.io/getting-started

Troubleshooting

Access permissions

If you face any access permission issues, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile. More information here:

aws eks --region [your_aws_region] update-kubeconfig --name [your_eks_cluster_name] --profile [your_profile_name]

2.2.3 - Deploy Dapr on a Kubernetes cluster

Follow these steps to deploy Dapr on Kubernetes.

When setting up Dapr on Kubernetes, you can use either the Dapr CLI or Helm.

Install with Dapr CLI

You can install Dapr on a Kubernetes cluster using the Dapr CLI.

Prerequisites

Installation options

You can install Dapr from an official Helm chart or a private chart, using a custom namespace, etc.

Install Dapr from an official Dapr Helm chart

The -k flag initializes Dapr on the Kubernetes cluster in your current context.

  1. Verify the correct “target” cluster is set by checking kubectl context (kubectl config get-contexts).

    • You can set a different context using kubectl config use-context <CONTEXT>.
  2. Initialize Dapr on your cluster with the following command:

    dapr init -k
    

    Expected output

    ⌛  Making the jump to hyperspace...
    
    ✅  Deploying the Dapr control plane to your cluster...
    ✅  Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
    
  3. Run the dashboard:

    dapr dashboard -k
    

    If you installed Dapr in a non-default namespace, run:

    dapr dashboard -k -n <your-namespace>
    

Install Dapr from the official Dapr Helm chart (with development flag)

Adding the --dev flag initializes Dapr on the Kubernetes cluster on your current context, with the addition of Redis and Zipkin deployments.

The steps are similar to installing from the Dapr Helm chart, except for appending the --dev flag to the init command:

dapr init -k --dev

Expected output:

⌛  Making the jump to hyperspace...
â„šī¸  Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced

â„šī¸  Container images will be pulled from Docker Hub
✅  Deploying the Dapr control plane with latest version to your cluster...
✅  Deploying the Dapr dashboard with latest version to your cluster...
✅  Deploying the Dapr Redis with latest version to your cluster...
✅  Deploying the Dapr Zipkin with latest version to your cluster...
ℹ️  Applying "statestore" component to Kubernetes "default" namespace.
ℹ️  Applying "pubsub" component to Kubernetes "default" namespace.
ℹ️  Applying "appconfig" zipkin configuration to Kubernetes "default" namespace.
✅  Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://aka.ms/dapr-getting-started

After a short period of time (or using the --wait flag and specifying an amount of time to wait), you can check that the Redis and Zipkin components have been deployed to the cluster.

kubectl get pods --namespace default

Expected output:

NAME                              READY   STATUS    RESTARTS   AGE
dapr-dev-zipkin-bfb4b45bb-sttz7   1/1     Running   0          159m
dapr-dev-redis-master-0           1/1     Running   0          159m
dapr-dev-redis-replicas-0         1/1     Running   0          159m
dapr-dev-redis-replicas-1         1/1     Running   0          159m
dapr-dev-redis-replicas-2         1/1     Running   0          158m 

Install Dapr from a private Dapr Helm chart

Installing Dapr from a private Helm chart can be helpful when you:

  • Need more granular control of the Dapr Helm chart
  • Have a custom Dapr deployment
  • Pull Helm charts from trusted registries that are managed and maintained by your organization

Set the following parameters to allow dapr init -k to install Dapr images from the configured Helm repository.

export DAPR_HELM_REPO_URL="https://helm.custom-domain.com/dapr/dapr"
export DAPR_HELM_REPO_USERNAME="username_xxx"
export DAPR_HELM_REPO_PASSWORD="passwd_xxx"
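With these variables exported, the init command itself is unchanged; a minimal sketch, assuming the private repository above hosts the standard Dapr chart:

dapr init -k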

Install in high availability mode

You can run Dapr with three replicas of each control plane pod in the dapr-system namespace for production scenarios.

dapr init -k --enable-ha=true

Install in custom namespace

The default namespace when initializing Dapr is dapr-system. You can override this with the -n flag.

dapr init -k -n mynamespace

Disable mTLS

Dapr is initialized by default with mTLS. You can disable it with:

dapr init -k --enable-mtls=false

Wait for the installation to complete

You can wait for the installation to complete its deployment with the --wait flag. The default timeout is 300s (5 min), but can be customized with the --timeout flag.

dapr init -k --wait --timeout 600

Uninstall Dapr on Kubernetes with CLI

Run the following command on your local machine to uninstall Dapr on your cluster:

dapr uninstall -k

Install with Helm

You can install Dapr on Kubernetes using a Helm v3 chart.

❗Important: The latest Dapr Helm chart no longer supports Helm v2. Migrate from Helm v2 to Helm v3.

Prerequisites

Add and install Dapr Helm chart

  1. Add the Helm repo and update:

    # Add the official Dapr Helm chart.
    helm repo add dapr https://dapr.github.io/helm-charts/
    # Or also add a private Dapr Helm chart.
    helm repo add dapr http://helm.custom-domain.com/dapr/dapr/ \
       --username=xxx --password=xxx
    helm repo update
    # See which chart versions are available
    helm search repo dapr --devel --versions
    
  2. Install the Dapr chart on your cluster in the dapr-system namespace.

    helm upgrade --install dapr dapr/dapr \
    --version=1.15 \
    --namespace dapr-system \
    --create-namespace \
    --wait
    

    To install in high availability mode:

    helm upgrade --install dapr dapr/dapr \
    --version=1.15 \
    --namespace dapr-system \
    --create-namespace \
    --set global.ha.enabled=true \
    --wait
    

    To install in high availability mode and scale select services independently of global:

        helm upgrade --install dapr dapr/dapr \
     --version=1.15 \
     --namespace dapr-system \
     --create-namespace \
     --set global.ha.enabled=false \
     --set dapr_scheduler.ha=true \
     --set dapr_placement.ha=true \
     --wait
    

See Guidelines for production ready deployments on Kubernetes for more information on installing and upgrading Dapr using Helm.

(optional) Install the Dapr dashboard as part of the control plane

If you want to install the Dapr dashboard, use this Helm chart with the additional settings of your choice:

helm install dapr-dashboard dapr/dapr-dashboard --namespace dapr-system

For example:

helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
kubectl create namespace dapr-system
# Install the Dapr dashboard
helm install dapr-dashboard dapr/dapr-dashboard --namespace dapr-system

Verify installation

Once the installation is complete, verify that the dapr-operator, dapr-placement, dapr-sidecar-injector, and dapr-sentry pods are running in the dapr-system namespace:

kubectl get pods --namespace dapr-system
NAME                                     READY     STATUS    RESTARTS   AGE
dapr-dashboard-7bd6cbf5bf-xglsr          1/1       Running   0          40s
dapr-operator-7bd6cbf5bf-xglsr           1/1       Running   0          40s
dapr-placement-7f8f76778f-6vhl2          1/1       Running   0          40s
dapr-sidecar-injector-8555576b6f-29cqm   1/1       Running   0          40s
dapr-sentry-9435776c7f-8f7yd             1/1       Running   0          40s

Uninstall Dapr on Kubernetes

helm uninstall dapr --namespace dapr-system

More information

Use Mariner-based images

The default container images pulled on Kubernetes are based on distroless.

Alternatively, you can use Dapr container images based on Mariner 2 (minimal distroless). Mariner, officially known as CBL-Mariner, is a free and open-source Linux distribution and container base image maintained by Microsoft. For some Dapr users, leveraging container images based on Mariner can help you meet compliance requirements.

To use Mariner-based images for Dapr, you need to add -mariner to your Docker tags. For example, while ghcr.io/dapr/dapr:latest is the Docker image based on distroless, ghcr.io/dapr/dapr:latest-mariner is based on Mariner. Tags pinned to a specific version are also available, such as 1.15-mariner.

In the Dapr CLI, you can switch to using Mariner-based images with the --image-variant flag.

dapr init -k --image-variant mariner

With Kubernetes and Helm, you can use Mariner-based images by setting the global.tag option and adding -mariner. For example:

helm upgrade --install dapr dapr/dapr \
  --version=1.15 \
  --namespace dapr-system \
  --create-namespace \
  --set global.tag=1.15.5-mariner \
  --wait

2.2.4 - Upgrade Dapr on a Kubernetes cluster

Follow these steps to upgrade Dapr on Kubernetes and ensure a smooth upgrade.

You can upgrade the Dapr control plane on a Kubernetes cluster using either the Dapr CLI or Helm.

Upgrade using the Dapr CLI

You can upgrade Dapr using the Dapr CLI.

Prerequisites

Upgrade existing cluster to 1.15.5

dapr upgrade -k --runtime-version=1.15.5

You can provide all the available Helm chart configurations using the Dapr CLI.

Troubleshoot upgrading via the CLI

There is a known issue when running upgrades on clusters that may have previously had a version prior to 1.0.0-rc.2 installed.

While this issue is uncommon, a few upgrade path edge cases may leave an incompatible CustomResourceDefinition installed on your cluster. If this is your scenario, you may see an error message like the following:

❌  Failed to upgrade Dapr: Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
The CustomResourceDefinition "configurations.dapr.io" is invalid: spec.preserveUnknownFields: Invalid value: true: must be false in order to use defaults in the schema

Solution

  1. Run the following command to upgrade the CustomResourceDefinition to a compatible version:

    kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/release-1.15/charts/dapr/crds/configuration.yaml
    
  2. Proceed with the dapr upgrade --runtime-version 1.15.5 -k command.

Upgrade using Helm

You can upgrade Dapr using a Helm v3 chart.

❗Important: The latest Dapr Helm chart no longer supports Helm v2. Migrate from Helm v2 to Helm v3.

Prerequisites

Upgrade existing cluster to 1.15.5

From version 1.0.0 onwards, existing certificate values are automatically reused when upgrading Dapr using Helm.

Note: Helm does not handle upgrading resources, so you need to perform that manually. Resources are backward-compatible and should only be installed forward.

  1. Upgrade Dapr to version 1.15.5:

    kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v1.15.5/charts/dapr/crds/components.yaml
    kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v1.15.5/charts/dapr/crds/configuration.yaml
    kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v1.15.5/charts/dapr/crds/subscription.yaml
    kubectl apply -f https://raw.githubusercontent.com/dapr/dapr/v1.15.5/charts/dapr/crds/resiliency.yaml
    kubectl apply -f https://raw.githubusercontent.com/dapr/dapr/v1.15.5/charts/dapr/crds/httpendpoints.yaml
    
    helm repo update
    
    helm upgrade dapr dapr/dapr --version 1.15.5 --namespace dapr-system --wait
    

    If you’re using a values file, remember to add the --values option when running the upgrade command.

  2. Ensure all pods are running:

    kubectl get pods -n dapr-system -w
    
    NAME                                     READY   STATUS    RESTARTS   AGE
    dapr-dashboard-69f5c5c867-mqhg4          1/1     Running   0          42s
    dapr-operator-5cdd6b7f9c-9sl7g           1/1     Running   0          41s
    dapr-placement-server-0                  1/1     Running   0          41s
    dapr-sentry-84565c747b-7bh8h             1/1     Running   0          35s
    dapr-sidecar-injector-68f868668f-6xnbt   1/1     Running   0          41s
    
  3. Restart your application deployments to update the Dapr runtime:

    kubectl rollout restart deploy/<DEPLOYMENT-NAME>
    

Upgrade existing Dapr deployment to enable high availability mode

Enable high availability mode in an existing Dapr deployment with a few additional steps.

2.2.5 - Production guidelines on Kubernetes

Best practices for deploying Dapr to a Kubernetes cluster in a production-ready configuration

Cluster and capacity requirements

Dapr support for Kubernetes is aligned with Kubernetes Version Skew Policy.

Use the following resource settings as a starting point. Requirements vary depending on cluster size, number of pods, and other factors. Perform individual testing to find the right values for your environment. In production, it’s recommended to not add memory limits to the Dapr control plane components to avoid OOMKilled pod statuses.

Deployment       | CPU                      | Memory
Operator         | Limit: 1, Request: 100m  | Request: 100Mi
Sidecar Injector | Limit: 1, Request: 100m  | Request: 30Mi
Sentry           | Limit: 1, Request: 100m  | Request: 30Mi
Placement        | Limit: 1, Request: 250m  | Request: 75Mi

Helm

When installing Dapr using Helm, no default limit/request values are set. Each component has a resources option (for example, dapr_dashboard.resources), which you can use to tune the Dapr control plane to fit your environment.

The Helm chart readme has detailed information and examples.

For local/dev installations, you might want to skip configuring the resources options.
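As a sketch only, a values file that applies the starting-point requests from the table above while leaving memory limits unset, as recommended. The component keys shown (dapr_operator, dapr_sidecar_injector) follow the same pattern as the dapr_dashboard.resources option mentioned above and should be verified against the Helm chart readme:

# values.yml (sketch): CPU limits/requests and memory requests only,
# with memory limits intentionally omitted for the control plane.
dapr_operator:
  resources:
    limits:
      cpu: 1
    requests:
      cpu: 100m
      memory: 100Mi
dapr_sidecar_injector:
  resources:
    limits:
      cpu: 1
    requests:
      cpu: 100m
      memory: 30Mi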

Optional components

The following Dapr control plane deployments are optional:

  • Placement: For using Dapr Actors
  • Sentry: For mTLS for service-to-service invocation
  • Dashboard: For an operational view of the cluster

Sidecar resource settings

Set the resource assignments for the Dapr sidecar using the supported annotations. The specific annotations related to resource constraints are:

  • dapr.io/sidecar-cpu-limit
  • dapr.io/sidecar-memory-limit
  • dapr.io/sidecar-cpu-request
  • dapr.io/sidecar-memory-request

If not set, the Dapr sidecar runs without resource settings, which may lead to issues. For a production-ready setup, it’s strongly recommended to configure these settings.

Example settings for the Dapr sidecar in a production-ready setup:

CPU                        | Memory
Limit: 300m, Request: 100m | Limit: 1000Mi, Request: 250Mi

The CPU and memory limits above account for Dapr supporting a high number of I/O bound operations. Use a monitoring tool to get a baseline for the sidecar (and app) containers and tune these settings based on those baselines.
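For example, the values in the table above can be applied using the annotations listed earlier; a sketch (the app-id is illustrative):

  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nodeapp"
    # Sidecar resource settings matching the production-ready example above
    dapr.io/sidecar-cpu-limit: "300m"
    dapr.io/sidecar-cpu-request: "100m"
    dapr.io/sidecar-memory-limit: "1000Mi"
    dapr.io/sidecar-memory-request: "250Mi"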

For more details on configuring resources in Kubernetes, see the following Kubernetes guides:

Setting soft memory limits on Dapr sidecar

Set soft memory limits on the Dapr sidecar when you’ve set up memory limits. With soft memory limits, the sidecar garbage collector frees up memory once it exceeds the limit, instead of waiting until the heap has grown to double its size since the last collection. Waiting for the heap to double is the default behavior of Go’s garbage collector and can lead to OOM Kill events.

For example, for an app with app-id nodeapp with memory limit set to 1000Mi, you can use the following in your pod annotations:

  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nodeapp"
    # our daprd memory settings
    dapr.io/sidecar-memory-limit: "1000Mi"   # your memory limit
    dapr.io/env: "GOMEMLIMIT=900MiB"         # 90% of your memory limit. Also notice the suffix "MiB" instead of "Mi"

In this example, the soft limit has been set to 90% of the memory limit, leaving 5-10% of headroom for other services, as recommended.

The GOMEMLIMIT environment variable allows certain suffixes for the memory size: B, KiB, MiB, GiB, and TiB.

High availability mode

When deploying Dapr in a production-ready configuration, it’s best to deploy with a high availability (HA) configuration of the control plane. This creates three replicas of each control plane pod in the dapr-system namespace, allowing the Dapr control plane to retain three running instances and survive individual node failures and other outages.

For a new Dapr deployment, HA mode can be set with both:

For an existing Dapr deployment, you can enable HA mode in a few extra steps.

Individual service HA Helm configuration

You can configure HA mode via Helm across all services by setting the global.ha.enabled flag to true. When --set global.ha.enabled=true is used, it applies to every control plane service and cannot be overridden per service, so it is not possible to run the placement or scheduler service as a single instance while the rest of the control plane runs in HA mode.

Note: HA for scheduler and placement services is not the default setting.

To scale scheduler and placement to three instances independently of the global.ha.enabled flag, set global.ha.enabled to false and dapr_scheduler.ha and dapr_placement.ha to true. For example:

helm upgrade --install dapr dapr/dapr \
 --version=1.15 \
 --namespace dapr-system \
 --create-namespace \
 --set global.ha.enabled=false \
 --set dapr_scheduler.ha=true \
 --set dapr_placement.ha=true \
 --wait

Setting cluster critical priority class name for control plane services

In some scenarios, nodes may have memory and/or CPU pressure and the Dapr control plane pods might get selected for eviction. To prevent this, you can set a critical priority class name for the Dapr control plane pods. This ensures that the Dapr control plane pods are not evicted unless all other pods with lower priority are evicted.

It’s particularly important to protect the Dapr control plane components from eviction, especially the Scheduler service. When Schedulers are rescheduled or restarted, it can be highly disruptive to inflight jobs, potentially causing them to fire duplicate times. To prevent such disruptions, you should ensure the Dapr control plane components have a higher priority class than your application workloads.

Learn more about Protecting Mission-Critical Pods.

There are two built-in critical priority classes in Kubernetes:

  • system-cluster-critical
  • system-node-critical (highest priority)

It’s recommended to set the priorityClassName to system-cluster-critical for the Dapr control plane pods. If you have your own custom priority classes for your applications, ensure they have a lower priority value than the one assigned to the Dapr control plane to maintain system stability and prevent disruption of core Dapr services.

For a new Dapr control plane deployment, the system-cluster-critical priority class mode can be set via the helm value global.priorityClassName.

This priority class can be set with both the Dapr CLI and Helm charts, using the helm --set global.priorityClassName=system-cluster-critical argument.
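For example, a Helm command sketch that mirrors the install commands shown earlier, adding the priority class setting:

helm upgrade --install dapr dapr/dapr \
  --version=1.15 \
  --namespace dapr-system \
  --create-namespace \
  --set global.priorityClassName=system-cluster-critical \
  --wait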

Dapr version < 1.14

For versions of Dapr below v1.14, it’s recommended that you add a ResourceQuota to the Dapr control plane namespace. This prevents problems associated with scheduling pods where the cluster may be configured with limitations on which pods can be assigned high priority classes. For v1.14 onwards the Helm chart adds this automatically.

If you have Dapr installed in namespace dapr-system, you can create a ResourceQuota with the following content:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dapr-system-critical-quota
  namespace: dapr-system
spec:
  scopeSelector:
    matchExpressions:
      - operator : In
        scopeName: PriorityClass
        values: [system-cluster-critical]

Deploy Dapr with Helm

Visit the full guide on deploying Dapr with Helm.

Parameters file

It’s recommended to create a values file, instead of specifying parameters on the command line. Check the values file into source control so that you can track its changes.

See a full list of available parameters and settings.

The following command runs three replicas of each control plane service in the dapr-system namespace.

# Add/update an official Dapr Helm repo.
helm repo add dapr https://dapr.github.io/helm-charts/
# or add/update a private Dapr Helm repo.
helm repo add dapr http://helm.custom-domain.com/dapr/dapr/ \
   --username=xxx --password=xxx
helm repo update

# See which chart versions are available
helm search repo dapr --devel --versions

# create a values file to store variables
touch values.yml
cat << EOF >> values.yml
global:
  ha:
    enabled: true
EOF

# run install/upgrade
helm install dapr dapr/dapr \
  --version=<Dapr chart version> \
  --namespace dapr-system \
  --create-namespace \
  --values values.yml \
  --wait

# verify the installation
kubectl get pods --namespace dapr-system

The Dapr Helm chart automatically deploys with affinity for nodes with the label kubernetes.io/os=linux. You can deploy the Dapr control plane to Windows nodes. For more information, see Deploying to a Hybrid Linux/Windows K8s Cluster.

Upgrade Dapr with Helm

Dapr supports zero-downtime upgrades in the following steps.

Upgrading the CLI is optional, but recommended.

  1. Download the latest version of the CLI.
  2. Verify the Dapr CLI is in your path.

Upgrade the control plane

Upgrade Dapr on a Kubernetes cluster.

Update the data plane (sidecars)

Update pods that are running Dapr to pick up the new version of the Dapr runtime.

  1. Issue a rollout restart command for any deployment that has the dapr.io/enabled annotation:

    kubectl rollout restart deploy/<Application deployment name>
    
  2. View a list of all your Dapr enabled deployments via either:

    • The Dapr Dashboard

    • Running the following command using the Dapr CLI:

      dapr list -k
      
      APP ID     APP PORT  AGE  CREATED
      nodeapp    3000      16h  2020-07-29 17:16.22
      

Enable high availability in an existing Dapr deployment

Enabling HA mode for an existing Dapr deployment requires two steps:

  1. Delete the existing placement stateful set.

    kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
    

    You delete the placement stateful set because, in HA mode, the placement service adds Raft for leader election. However, Kubernetes only allows a limited set of fields in stateful sets to be patched, which causes the upgrade of the placement service to fail.

    Deletion of the existing placement stateful set is safe. The agents reconnect and re-register with the newly created placement service, which persists its table in Raft.

  2. Issue the upgrade command.

    helm upgrade dapr ./charts/dapr -n dapr-system --set global.ha.enabled=true
    

Recommended security configuration

When properly configured, Dapr ensures secure communication and can make your application more secure with a number of built-in features.

Verify your production-ready deployment includes the following settings:

  1. Mutual Authentication (mTLS) is enabled. Dapr has mTLS on by default. Learn more about how to bring your own certificates.

  2. App to Dapr API authentication is enabled. This is the communication between your application and the Dapr sidecar. To secure the Dapr API from unauthorized application access, enable Dapr’s token-based authentication.

  3. Dapr to App API authentication is enabled. This is the communication between Dapr and your application. Let Dapr know that it is communicating with an authorized application using token authentication.

  4. Component secret data is configured in a secret store and not hard-coded in the component YAML file. Learn how to use secrets with Dapr components.

  5. The Dapr control plane is installed on a dedicated namespace, such as dapr-system.

  6. Dapr supports and is enabled to scope components for certain applications. This is not a required practice. Learn more about component scopes.

Recommended Placement service configuration

The Placement service is a component in Dapr, responsible for disseminating information about actor addresses to all Dapr sidecars via a placement table (more information on this can be found here).

When running in production, it’s recommended to configure the Placement service with the following values (a Helm command sketch follows the list):

  1. High availability. Ensure the Placement service is highly available (three replicas) and can survive individual node failures. Helm chart value: dapr_placement.ha=true
  2. In-memory logs. Use in-memory Raft log store for faster writes. The tradeoff is more placement table disseminations (and thus, network traffic) in an eventual Placement service pod failure. Helm chart value: dapr_placement.cluster.forceInMemoryLog=true
  3. No metadata endpoint. Disable the unauthenticated /placement/state endpoint which exposes placement table information for the Placement service. Helm chart value: dapr_placement.metadataEnabled=false
  4. Timeouts. Control the sensitivity of network connectivity between the Placement service and the sidecars using the timeout values below. Default values are set, but you can adjust these based on your network conditions.
    1. dapr_placement.keepAliveTime sets the interval at which the Placement service sends keep alive pings to Dapr sidecars on the gRPC stream to check if the connection is still alive. Lower values will lead to shorter actor rebalancing time in case of pod loss/restart, but higher network traffic during normal operation. Accepts values between 1s and 10s. Default is 2s.
    2. dapr_placement.keepAliveTimeout sets the timeout period for Dapr sidecars to respond to the Placement service’s keep alive pings before the Placement service closes the connection. Lower values will lead to shorter actor rebalancing time in case of pod loss/restart, but higher network traffic during normal operation. Accepts values between 1s and 10s. Default is 3s.
    3. dapr_placement.disseminateTimeout sets the timeout period for dissemination to be delayed after actor membership change (usually related to pod restarts) to avoid excessive dissemination during multiple pod restarts. Higher values will reduce the frequency of dissemination, but delay the table dissemination. Accepts values between 1s and 3s. Default is 2s.
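As a sketch, the recommendations above expressed as Helm --set arguments; the timeout values shown are the documented defaults and only need to be set if you want to change them:

helm upgrade --install dapr dapr/dapr \
  --version=1.15 \
  --namespace dapr-system \
  --create-namespace \
  --set dapr_placement.ha=true \
  --set dapr_placement.cluster.forceInMemoryLog=true \
  --set dapr_placement.metadataEnabled=false \
  --set dapr_placement.keepAliveTime=2s \
  --set dapr_placement.keepAliveTimeout=3s \
  --set dapr_placement.disseminateTimeout=2s \
  --wait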

Service account tokens

By default, Kubernetes mounts a volume containing a Service Account token in each container. Applications can use this token, whose permissions vary depending on the configuration of the cluster and namespace, among other things, to perform API calls against the Kubernetes control plane.

When creating a new Pod (or a Deployment, StatefulSet, Job, etc), you can disable auto-mounting the Service Account token by setting automountServiceAccountToken: false in your pod’s spec.

It’s recommended that you consider deploying your apps with automountServiceAccountToken: false to improve the security posture of your pods, unless your apps depend on having a Service Account token. For example, you may need a Service Account token if:

Thus, Dapr does not set automountServiceAccountToken: false automatically for you. However, in all situations where the Service Account is not required by your solution, it’s recommended that you set this option in the pod’s spec.
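A minimal sketch of disabling the token mount in a Dapr-enabled Deployment (the names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  ...
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
    spec:
      # Disable auto-mounting of the Service Account token when the app does not need it
      automountServiceAccountToken: false
      containers:
      - name: myapp
        image: myrepository/myapp:your-tag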

Tracing and metrics configuration

Tracing and metrics are enabled in Dapr by default. It’s recommended that you set up distributed tracing and metrics for your applications and the Dapr control plane in production.

If you already have your own observability setup, you can disable tracing and metrics for Dapr.

Tracing

Configure a tracing backend for Dapr.

Metrics

For metrics, Dapr exposes a Prometheus endpoint listening on port 9090, which can be scraped by Prometheus.

Set up Prometheus, Grafana, and other monitoring tools with Dapr.

Injector watchdog

The Dapr Operator service includes an injector watchdog, which can be used to detect and remediate situations where your application’s pods may be deployed without the Dapr sidecar (the daprd container). For example, it can assist with recovering the applications after a total cluster failure.

The injector watchdog is disabled by default when running Dapr in Kubernetes mode. However, you should consider enabling it with the appropriate values for your specific situation.

Refer to the Dapr operator service documentation for more details on the injector watchdog and how to enable it.

Configure seccompProfile for sidecar containers

By default, the Dapr sidecar injector injects a sidecar without any seccompProfile. However, for the Dapr sidecar container to run successfully in a namespace with the Restricted profile, the sidecar container needs securityContext.seccompProfile.Type to not be nil.

Refer to the Arguments and Annotations overview to set the appropriate seccompProfile on the sidecar container.
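As a sketch, assuming the sidecar seccomp profile annotation described in the Arguments and Annotations overview (dapr.io/sidecar-seccomp-profile-type), a pod running in a namespace enforcing the Restricted profile could set:

  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "myapp"
    # Assumed annotation name; verify against the Arguments and Annotations overview
    dapr.io/sidecar-seccomp-profile-type: "RuntimeDefault"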

Best Practices

Watch this video for a deep dive into the best practices for running Dapr in production with Kubernetes.

2.2.6 - Deploy Dapr per-node or per-cluster with Dapr Shared

Learn more about using Dapr Shared as an alternative deployment to sidecars

Dapr automatically injects a sidecar to enable the Dapr APIs for your applications for the best availability and reliability.

Dapr Shared enables two alternative deployment strategies to create Dapr applications, using a Kubernetes DaemonSet for a per-node deployment or a Deployment for a per-cluster deployment.

  • DaemonSet: When running Dapr Shared as a Kubernetes DaemonSet resource, the daprd container runs on each Kubernetes node in the cluster. This can reduce network hops between the applications and Dapr.
  • Deployment: When running Dapr Shared as a Kubernetes Deployment, the Kubernetes scheduler decides on which single node in the cluster the daprd container instance runs.

Why Dapr Shared?

By default, when Dapr is installed into a Kubernetes cluster, the Dapr control plane injects Dapr as a sidecar to applications annotated with Dapr annotations (dapr.io/enabled: "true"). Sidecars offer many advantages, including improved resiliency, since there is an instance per application and all communication between the application and the sidecar happens without involving the network.

While sidecars are Dapr’s default deployment, some use cases require other approaches. Let’s say you want to decouple the lifecycle of your workloads from the Dapr APIs. A typical example of this is functions, or function-as-a-service runtimes, which might automatically downscale your idle workloads to free up resources. For such cases, keeping the Dapr APIs and all the Dapr async functionalities (such as subscriptions) separate might be required.

Dapr Shared was created for these scenarios, extending the Dapr sidecar model with two new deployment approaches: DaemonSet (per-node) and Deployment (per-cluster).

DaemonSet (Per-node)

With Kubernetes DaemonSet, you can define applications that need to be deployed once per node in the cluster. This enables applications that are running on the same node to communicate with local Dapr APIs, no matter where the Kubernetes Scheduler schedules your workload.

Deployment (Per-cluster)

Kubernetes Deployments are installed once per cluster. Based on available resources, the Kubernetes Scheduler decides on which node the workload is scheduled. For Dapr Shared, this means that your workload and the Dapr instance might be located on separate nodes, which can introduce considerable network latency with the trade-off of reduced resource usage.

Getting Started with Dapr Shared

If you want to get started with Dapr Shared, you can create a new Dapr Shared instance by installing the official Helm Chart:

helm install my-shared-instance oci://registry-1.docker.io/daprio/dapr-shared-chart --set shared.appId=<DAPR_APP_ID> --set shared.remoteURL=<REMOTE_URL> --set shared.remotePort=<REMOTE_PORT> --set shared.strategy=deployment

Your Dapr-enabled applications can now make use of the Dapr Shared instance by pointing the Dapr SDKs to or sending requests to the my-shared-instance-dapr Kubernetes service exposed by the Dapr Shared instance.

The my-shared-instance above is the Helm Chart release name.

If you are using the Dapr SDKs, you can set the following environment variables for your application to connect to the Dapr Shared instance (in this case, running on the default namespace):

        env:
        - name: DAPR_HTTP_ENDPOINT
          value: http://my-shared-instance-dapr.default.svc.cluster.local:3500
        - name: DAPR_GRPC_ENDPOINT
          value: http://my-shared-instance-dapr.default.svc.cluster.local:50001 

If you are not using the SDKs, you can send HTTP or gRPC requests to those endpoints.
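For example, a health check against the shared instance’s Dapr HTTP endpoint, using the standard Dapr /v1.0/healthz API and the service name from the Helm release above:

curl -i http://my-shared-instance-dapr.default.svc.cluster.local:3500/v1.0/healthz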

Next steps

2.2.7 - How-to: Persist Scheduler Jobs

Configure Scheduler to persist its database to make it resilient to restarts

The Scheduler service is responsible for writing jobs to its embedded Etcd database and scheduling them for execution. By default, the Scheduler service database writes data to a Persistent Volume Claim of size 1Gi, using the cluster’s default storage class. This means that there is no additional parameter required to run the Scheduler service reliably on most Kubernetes deployments, although additional configuration is needed if a default StorageClass is not available or when running a production environment.

Production Setup

ETCD Storage Disk Size

The default storage size for the Scheduler is 1Gi. This size is likely not sufficient for most production deployments. When the storage size is exceeded, the Scheduler will log an error similar to the following:

error running scheduler: etcdserver: mvcc: database space exceeded

Knowing the safe upper bound for your storage size is not an exact science, and relies heavily on the number, persistence, and data payload size of your application jobs. The Job API and Actor Reminders map one-to-one to the usage of your applications. Workflows create a large number of jobs as Actor Reminders; however, these jobs are short-lived, matching the lifecycle of each workflow execution. The data payload of jobs created by Workflows is typically empty or small.

The Scheduler uses Etcd as its storage backend database. By design, Etcd persists historical transactions and data in the form of Write-Ahead Logs (WAL) and snapshots. This means the actual disk usage of the Scheduler will be higher than the current observable database state, often by a number of multiples.

Setting the Storage Size on Installation

If you need to increase the storage size of an existing Scheduler, see the Increase existing Scheduler Storage Size section below. To increase the storage size (in this example, 16Gi) for a fresh Dapr installation, you can use one of the following commands (Dapr CLI or Helm):

dapr init -k --set dapr_scheduler.cluster.storageSize=16Gi --set dapr_scheduler.etcdSpaceQuota=16Gi
helm upgrade --install dapr dapr/dapr \
--version=1.15 \
--namespace dapr-system \
--create-namespace \
--set dapr_scheduler.cluster.storageSize=16Gi \
--set dapr_scheduler.etcdSpaceQuota=16Gi \
--wait

Increase existing Scheduler Storage Size

By default, each Scheduler replica creates a Persistent Volume and Persistent Volume Claim of size 1Gi against the default standard storage class. These will look similar to the following, where in this example we are running the Scheduler in HA mode.

NAMESPACE     NAME                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
dapr-system   dapr-scheduler-data-dir-dapr-scheduler-server-0   Bound    pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5   1Gi        RWO            standard       <unset>                 3m25s
dapr-system   dapr-scheduler-data-dir-dapr-scheduler-server-1   Bound    pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa   1Gi        RWO            standard       <unset>                 3m25s
dapr-system   dapr-scheduler-data-dir-dapr-scheduler-server-2   Bound    pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f   1Gi        RWO            standard       <unset>                 3m25s
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                         STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5   1Gi        RWO            Delete           Bound    dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-0   standard       <unset>                          4m24s
pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f   1Gi        RWO            Delete           Bound    dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-2   standard       <unset>                          4m24s
pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa   1Gi        RWO            Delete           Bound    dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-1   standard       <unset>                          4m24s

To expand the storage size of the Scheduler, follow these steps:

  1. Ensure that the storage class supports volume expansion, and that the allowVolumeExpansion field is set to true if it is not already.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: my.driver
allowVolumeExpansion: true
...

  2. Delete the Scheduler StatefulSet whilst preserving the Bound Persistent Volume Claims.

kubectl delete sts -n dapr-system dapr-scheduler-server --cascade=orphan

  3. Increase the size of the Persistent Volume Claims to the desired size by editing the spec.resources.requests.storage field. In this case, we are assuming that the Scheduler is running in HA mode with 3 replicas.

kubectl edit pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2

  4. Recreate the Scheduler StatefulSet by installing Dapr with the desired storage size (see the command sketch below).
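For the final step, a command sketch that reuses the storage size flags shown earlier; adjust 16Gi to match the size you set on the PVCs:

helm upgrade --install dapr dapr/dapr \
  --version=1.15 \
  --namespace dapr-system \
  --set dapr_scheduler.cluster.storageSize=16Gi \
  --set dapr_scheduler.etcdSpaceQuota=16Gi \
  --wait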

Storage Class

In case your Kubernetes deployment does not have a default storage class or you are configuring a production cluster, defining a storage class is required.

A persistent volume is backed by a real disk that is provided by the hosted cloud provider or Kubernetes infrastructure platform. Disk size is determined by how many jobs are expected to be persisted at once; however, 64Gi should be more than sufficient for most production scenarios. Some Kubernetes providers recommend using a CSI driver to provision the underlying disks. Below is a list of useful links to the relevant documentation for creating a persistent disk for the major cloud providers:

Once the storage class is available, you can install Dapr using one of the following commands (Dapr CLI or Helm), with the Scheduler configured to use the storage class (replace my-storage-class with the name of the storage class):

dapr init -k --set dapr_scheduler.cluster.storageClassName=my-storage-class
helm upgrade --install dapr dapr/dapr \
--version=1.15 \
--namespace dapr-system \
--create-namespace \
--set dapr_scheduler.cluster.storageClassName=my-storage-class \
--wait

Ephemeral Storage

When running in non-HA mode, the Scheduler can optionally be made to use ephemeral storage, which is in-memory storage that is not resilient to restarts. For example, all job data is lost after a Scheduler restart. This is useful in non-production deployments or for testing where storage is not available or required.

dapr init -k --set dapr_scheduler.cluster.inMemoryStorage=true
helm upgrade --install dapr dapr/dapr \
--version=1.15 \
--namespace dapr-system \
--create-namespace \
--set dapr_scheduler.cluster.inMemoryStorage=true \
--wait

2.2.8 - Deploy to hybrid Linux/Windows Kubernetes clusters

How to run Dapr apps on Kubernetes clusters with Windows nodes

Dapr supports running your microservices on Kubernetes clusters on:

  • Windows
  • Linux
  • A combination of both

This is especially helpful during a piecemeal migration of a legacy application into a Dapr Kubernetes cluster.

Kubernetes uses a concept called node affinity to denote whether you want your application to be launched on a Linux node or a Windows node. When deploying to a cluster which has both Windows and Linux nodes, you must provide affinity rules for your applications, otherwise the Kubernetes scheduler might launch your application on the wrong type of node.

Prerequisites

Before you begin, set up a Kubernetes cluster with Windows nodes. Many Kubernetes providers support the automatic provisioning of Windows enabled Kubernetes clusters.

  1. Follow your preferred provider’s instructions for setting up a cluster with Windows enabled.

  2. Once you have set up the cluster, verify that both Windows and Linux nodes are available.

    kubectl get nodes -o wide
    
    NAME                                STATUS   ROLES   AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
    aks-nodepool1-11819434-vmss000000   Ready    agent   6d      v1.17.9   10.240.0.4     <none>        Ubuntu 16.04.6 LTS               4.15.0-1092-azure   docker://3.0.10+azure
    aks-nodepool1-11819434-vmss000001   Ready    agent   6d      v1.17.9   10.240.0.35    <none>        Ubuntu 16.04.6 LTS               4.15.0-1092-azure   docker://3.0.10+azure
    aks-nodepool1-11819434-vmss000002   Ready    agent   5d10h   v1.17.9   10.240.0.129   <none>        Ubuntu 16.04.6 LTS               4.15.0-1092-azure   docker://3.0.10+azure
    akswin000000                        Ready    agent   6d      v1.17.9   10.240.0.66    <none>        Windows Server 2019 Datacenter   10.0.17763.1339     docker://19.3.5
    akswin000001                        Ready    agent   6d      v1.17.9   10.240.0.97    <none>        Windows Server 2019 Datacenter   10.0.17763.1339     docker://19.3.5
    

Install the Dapr control plane

If you are installing using the Dapr CLI or via a Helm chart, simply follow the normal deployment procedures: Installing Dapr on a Kubernetes cluster

Affinity will be automatically set for kubernetes.io/os=linux. This will be sufficient for most users, as Kubernetes requires at least one Linux node pool.

Install Dapr applications

Windows applications

  1. Follow the Microsoft documentation to create a Docker Windows container with your application installed.

  2. Once you’ve created a Docker container with your application, create a deployment YAML file with the node affinity set to kubernetes.io/os: windows. In the example deploy_windows.yaml deployment file below:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: yourwinapp
      labels:
        app: applabel
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: applabel
      template:
        metadata:
          labels:
            app: applabel
          annotations:
            dapr.io/enabled: "true"
            dapr.io/app-id: "addapp"
            dapr.io/app-port: "6000"
            dapr.io/config: "appconfig"
        spec:
          containers:
          - name: add
            image: yourrepository/your-windows-dapr-container:your-tag
            ports:
            - containerPort: 6000
            imagePullPolicy: Always
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                    - key: kubernetes.io/os
                      operator: In
                      values:
                      - windows
    
  3. Deploy the YAML file to your Kubernetes cluster.

    kubectl apply -f deploy_windows.yaml
    

Linux applications

If you already have a Dapr application that runs on Linux, you still need to add affinity rules.

  1. Create a deployment YAML file with the node affinity set to kubernetes.io/os: linux. In the example deploy_linux.yaml deployment file below:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: yourlinuxapp
      labels:
        app: yourlabel
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: yourlabel
      template:
        metadata:
          labels:
            app: yourlabel
          annotations:
            dapr.io/enabled: "true"
            dapr.io/app-id: "addapp"
            dapr.io/app-port: "6000"
            dapr.io/config: "appconfig"
        spec:
          containers:
          - name: add
            image: yourrepository/your-application:your-tag
            ports:
            - containerPort: 6000
            imagePullPolicy: Always
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                    - key: kubernetes.io/os
                      operator: In
                      values:
                      - linux
    
  2. Deploy the YAML to your Kubernetes cluster.

    kubectl apply -f deploy_linux.yaml
    

That’s it!

Clean up

To remove the deployments from this guide, run the following commands:

kubectl delete -f deploy_linux.yaml
kubectl delete -f deploy_windows.yaml
helm uninstall dapr

2.2.9 - Running Dapr with a Kubernetes Job

Use Dapr API in a Kubernetes Job context

The Dapr sidecar is designed to be a long running process. In the context of a Kubernetes Job this behavior can block your job completion.

To address this issue, the Dapr sidecar has an endpoint to Shutdown the sidecar.

When running a basic Kubernetes Job, you need to call the /shutdown endpoint for the sidecar to gracefully stop and the job to be considered Completed.

When a job is finished without calling Shutdown, your job is in a NotReady state with only the daprd container running endlessly.

Stopping the Dapr sidecar causes its readiness and liveness probes to fail in your container.

To prevent Kubernetes from trying to restart your job, set your job’s restartPolicy to Never.

Be sure to use the POST HTTP verb when calling the shutdown HTTP API. For example:

apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-shutdown
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "with-shutdown"
    spec:
      containers:
      - name: job
        image: alpine
        command: ["/bin/sh", "-c", "apk --no-cache add curl && sleep 20 && curl -X POST localhost:3500/v1.0/shutdown"]
      restartPolicy: Never

You can also call the Shutdown from any of the Dapr SDKs. For example, for the Go SDK:

package main

import (
	"log"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
  client, err := dapr.NewClient()
  if err != nil {
    log.Panic(err)
  }
  defer client.Close()
  defer client.Shutdown()
  // Job
}

2.2.10 - How-to: Mount Pod volumes to the Dapr sidecar

Configure the Dapr sidecar to mount Pod Volumes

The Dapr sidecar can be configured to mount any Kubernetes Volume attached to the application Pod. These Volumes can be accessed by the daprd (sidecar) container in read-only or read-write modes. If a Volume is configured to be mounted but it does not exist in the Pod, Dapr logs a warning and ignores it.

For more information on different types of Volumes, check the Kubernetes documentation.

Configuration

You can set the following annotations in your deployment YAML:

Annotation               | Description
dapr.io/volume-mounts    | For read-only volume mounts
dapr.io/volume-mounts-rw | For read-write volume mounts

These annotations are comma separated pairs of volume-name:path/in/container. Verify the corresponding Volumes exist in the Pod spec.

Within the official container images, Dapr runs as a process with user ID (UID) 65532. Make sure that folders and files inside the mounted Volume are writable or readable by user 65532 as appropriate.

Although you can mount a Volume in any folder within the Dapr sidecar container, prevent conflicts and ensure smooth operations going forward by placing all mountpoints within one of the following locations, or in a subfolder within them:

Location | Description
/mnt     | Recommended for Volumes containing persistent data that the Dapr sidecar process can read and/or write.
/tmp     | Recommended for Volumes containing temporary data, such as scratch disks.

Examples

Basic deployment resource example

In the example Deployment resource below:

  • my-volume1 is available inside the sidecar container at /mnt/sample1 in read-only mode
  • my-volume2 is available inside the sidecar container at /mnt/sample2 in read-only mode
  • my-volume3 is available inside the sidecar container at /tmp/sample3 in read-write mode
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "8000"
        dapr.io/volume-mounts: "my-volume1:/mnt/sample1,my-volume2:/mnt/sample2"
        dapr.io/volume-mounts-rw: "my-volume3:/tmp/sample3"
    spec:
      volumes:
        - name: my-volume1
          hostPath:
            path: /sample
        - name: my-volume2
          persistentVolumeClaim:
            claimName: pv-sample
        - name: my-volume3
          emptyDir: {}
...

Custom secrets storage using local file secret store

Since any type of Kubernetes Volume can be attached to the sidecar, you can use the local file secret store to read secrets from a variety of places. For example, if you have a Network File Share (NFS) server running at 10.201.202.203, with secrets stored at /secrets/stage/secrets.json, you can use that as a secrets storage.

  1. Configure the application pod to mount the NFS and attach it to the Dapr sidecar.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    ...
    spec:
      ...
      template:
        ...
          annotations:
            dapr.io/enabled: "true"
            dapr.io/app-id: "myapp"
            dapr.io/app-port: "8000"
            dapr.io/volume-mounts: "nfs-secrets-vol:/mnt/secrets"
        spec:
          volumes:
            - name: nfs-secrets-vol
              nfs:
                server: 10.201.202.203
                path: /secrets/stage
    ...
    
  2. Point the local file secret store component to the attached file.

    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: local-secret-store
    spec:
      type: secretstores.local.file
      version: v1
      metadata:
      - name: secretsFile
        value: /mnt/secrets/secrets.json
    
  3. Use the secrets.

    GET http://localhost:<daprPort>/v1.0/secrets/local-secret-store/my-secret
    

Dapr Kubernetes pod annotations spec

2.3 - Run Dapr in a serverless offering

Learn how to run your Dapr applications on a serverless cloud offering

If you’d like to run your Dapr applications without managing any of the underlying infrastructure such as VMs or Kubernetes, you can choose a serverless cloud offering. These platforms integrate with Dapr to make it easy to deploy and manage your applications.

Offerings

2.3.1 - Azure Container Apps

Learn how to run your Dapr applications on the Azure Container Apps serverless platform

Azure Container Apps is a serverless application hosting service where users do not see or manage any underlying VMs, orchestrators, or other cloud infrastructure. Azure Container Apps enables you to run your application code packaged in multiple containers and is unopinionated about the runtimes or programming models used.

Dapr is built into Container Apps, enabling you to use the Dapr API building blocks without any manual deployment of the Dapr runtime. You simply deploy your services with their Dapr components.

Learn more

Tutorial

Visit the Azure docs to try out a microservices tutorial, where you’ll deploy two Dapr-enabled applications to Azure Container Apps.

Diagram of a Container Apps environment with two Dapr services Try out Dapr on Container Apps

3 - Manage Dapr configuration

How to set your Dapr configuration and manage your deployment

3.1 - Dapr configuration

Overview of Dapr configuration

Dapr configurations are settings and policies that enable you to change both the behavior of individual Dapr applications and the global behavior of the Dapr control plane system services.

For more information, read the configuration concept.

Application configuration

Set up application configuration

You can set up application configuration either in self-hosted or Kubernetes mode.

In self-hosted mode, the Dapr configuration is a configuration file - for example, config.yaml. By default, the Dapr sidecar looks in the default Dapr folder for the runtime configuration:

  • Linux/macOS: $HOME/.dapr/config.yaml
  • Windows: %USERPROFILE%\.dapr\config.yaml

An application can also apply a specific configuration by passing the --config flag with the file path to the dapr run CLI command.

In Kubernetes mode, the Dapr configuration is a Configuration resource that is applied to the cluster. For example:

kubectl apply -f myappconfig.yaml

You can use the Dapr CLI to list the Configuration resources for applications.

dapr configurations -k

A Dapr sidecar can apply a specific configuration by using a dapr.io/config annotation. For example:

  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nodeapp"
    dapr.io/app-port: "3000"
    dapr.io/config: "myappconfig"

Note: See all Kubernetes annotations available to configure the Dapr sidecar on activation by the sidecar injector system service.

Application configuration settings

The following menu includes all of the configuration settings you can set on the sidecar.

Tracing

Tracing configuration turns on tracing for an application.

The tracing section under the Configuration spec contains the following properties:

tracing:
  samplingRate: "1"
  otel: 
    endpointAddress: "otelcollector.observability.svc.cluster.local:4317"
  zipkin:
    endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"

The following table lists the properties for tracing:

Property               | Type   | Description
samplingRate           | string | Set the sampling rate for tracing to be enabled or disabled.
stdout                 | bool   | When true, writes more verbose information to the traces.
otel.endpointAddress   | string | Set the Open Telemetry (OTEL) server address to send traces to. This may or may not require the https:// or http:// prefix, depending on your OTEL provider.
otel.isSecure          | bool   | Is the connection to the endpoint address encrypted.
otel.protocol          | string | Set to http or grpc protocol.
zipkin.endpointAddress | string | Set the Zipkin server address to send traces to. This should include the protocol (http:// or https://) on the endpoint.
samplingRate

samplingRate is used to enable or disable tracing. The valid range of samplingRate is between 0 and 1 inclusive. The sampling rate determines whether a trace span is sampled based on this value.

samplingRate : "1" samples all traces. By default, the sampling rate is 0.0001, or 1 in 10,000 traces.

To disable the sampling rate, set samplingRate : "0" in the configuration.

otel

The OpenTelemetry (otel) endpoint can also be configured via an environment variable. The presence of the OTEL_EXPORTER_OTLP_ENDPOINT environment variable turns on tracing for the sidecar.

Environment Variable        | Description
OTEL_EXPORTER_OTLP_ENDPOINT | Sets the Open Telemetry (OTEL) server address and turns on tracing
OTEL_EXPORTER_OTLP_INSECURE | Sets the connection to the endpoint as unencrypted (true/false)
OTEL_EXPORTER_OTLP_PROTOCOL | Transport protocol (grpc, http/protobuf, http/json)
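For example, in self-hosted mode you might export these variables before starting the sidecar with dapr run; a sketch (the collector address matches the example sidecar configuration later on this page, and the app command is a placeholder):

export OTEL_EXPORTER_OTLP_ENDPOINT="localhost:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_EXPORTER_OTLP_INSECURE="true"
dapr run --app-id myapp -- <your app command>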

See Observability distributed tracing for more information.

Metrics

The metrics section under the Configuration spec can be used to enable or disable metrics for an application.

The metrics section contains the following properties:

metrics:
  enabled: true
  rules: []
  latencyDistributionBuckets: []
  http:
    increasedCardinality: true
    pathMatching:
      - /items
      - /orders/{orderID}
      - /orders/{orderID}/items/{itemID}
      - /payments/{paymentID}
      - /payments/{paymentID}/status
      - /payments/{paymentID}/refund
      - /payments/{paymentID}/details
    excludeVerbs: false
  recordErrorCodes: true

In the example above, the path filter /orders/{orderID}/items/{itemID} returns a single metric count matching all orderIDs and all itemIDs, rather than multiple metrics for each itemID. For more information, see HTTP metrics path matching.

The above example also enables recording error code metrics, which is disabled by default.

The following table lists the properties for metrics:

Property                   | Type    | Description
enabled                    | boolean | When set to true (the default), enables metrics collection and the metrics endpoint.
rules                      | array   | Named rule to filter metrics. Each rule contains a set of labels to filter on and a regex expression to apply to the metrics path.
latencyDistributionBuckets | array   | Array of latency distribution buckets in milliseconds for latency metrics histograms.
http.increasedCardinality  | boolean | When set to true (the default), each request path in the Dapr HTTP server creates a new “bucket” of metrics. This can cause issues, including excessive memory consumption, when there are many different requested endpoints (such as when interacting with RESTful APIs). To mitigate high memory usage and egress costs associated with high-cardinality HTTP metrics, set this property to false.
http.pathMatching          | array   | Array of paths for path matching, allowing users to define matching paths to manage cardinality.
http.excludeVerbs          | boolean | When set to true (the default is false), the Dapr HTTP server ignores each request’s HTTP verb when building the method metric label.

To further help manage cardinality, path matching allows you to match specified paths according to defined patterns, reducing the number of unique metrics paths and thus controlling metric cardinality. This feature is particularly useful for applications with dynamic URLs, ensuring that metrics remain meaningful and manageable without excessive memory consumption.

Using rules, you can set regular expressions for every metric exposed by the Dapr sidecar. For example:

metrics:
  enabled: true
  rules:
    - name: dapr_runtime_service_invocation_req_sent_total
      labels:
      - name: method
        regex:
          "orders/": "orders/.+"

See metrics documentation for more information.

Logging

The logging section under the Configuration spec is used to configure how logging works in the Dapr Runtime.

The logging section contains the following properties:

logging:
  apiLogging:
    enabled: false
    obfuscateURLs: false
    omitHealthChecks: false

The following table lists the properties for logging:

Property                    | Type    | Description
apiLogging.enabled          | boolean | The default value for the --enable-api-logging flag for daprd (and the corresponding dapr.io/enable-api-logging annotation): the value set in the Configuration spec is used as the default unless a true or false value is passed to each Dapr runtime. Default: false.
apiLogging.obfuscateURLs    | boolean | When enabled, obfuscates the values of URLs in HTTP API logs (if enabled), logging the abstract route name rather than the full path being invoked, which could contain Personal Identifiable Information (PII). Default: false.
apiLogging.omitHealthChecks | boolean | If true, calls to health check endpoints (e.g. /v1.0/healthz) are not logged when API logging is enabled. This is useful if those calls are adding a lot of noise in your logs. Default: false.

See logging documentation for more information.

Middleware

Middleware configuration sets named HTTP pipeline middleware handlers. The httpPipeline and the appHttpPipeline section under the Configuration spec contain the following properties:

httpPipeline: # for incoming http calls
  handlers:
    - name: oauth2
      type: middleware.http.oauth2
    - name: uppercase
      type: middleware.http.uppercase
appHttpPipeline: # for outgoing http calls
  handlers:
    - name: oauth2
      type: middleware.http.oauth2
    - name: uppercase
      type: middleware.http.uppercase

The following table lists the properties for HTTP handlers:

Property | Type | Description
--- | --- | ---
name | string | Name of the middleware component
type | string | Type of middleware component

See Middleware pipelines for more information.

Name resolution component

You can set name resolution components to use within the configuration file. For example, to set the spec.nameResolution.component property to "sqlite", pass configuration options in the spec.nameResolution.configuration dictionary as shown below.

This is a basic example of a configuration resource:

apiVersion: dapr.io/v1alpha1
kind: Configuration 
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "sqlite"
    version: "v1"
    configuration:
      connectionString: "/home/user/.dapr/nr.db"

For more information, see:

Scope secret store access

See the Scoping secrets guide for information and examples on how to scope secrets to an application.

Access Control allow lists for building block APIs

See the guide for selectively enabling Dapr APIs on the Dapr sidecar for information and examples on how to set access control allow lists (ACLs) on the building block APIs lists.

Access Control allow lists for service invocation API

See the Allow lists for service invocation guide for information and examples on how to set allow lists with ACLs which use the service invocation API.

Disallow usage of certain component types

Using the components.deny property in the Configuration spec you can specify a denylist of component types that cannot be initialized.

For example, the configuration below disallows the initialization of components of type bindings.smtp and secretstores.local.file:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
spec: 
  components:
    deny:
      - bindings.smtp
      - secretstores.local.file

Optionally, you can specify a version to disallow by adding it at the end of the component name. For example, state.in-memory/v1 disables initializing components of type state.in-memory and version v1, but does not disable a (hypothetical) v2 version of the component.
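For instance, a deny entry pinned to a version could be expressed like this (a sketch based on the example above):

spec:
  components:
    deny:
      - state.in-memory/v1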

Turning on preview features

See the preview features guide for information and examples on how to opt-in to preview features for a release.

Enabling preview features unlocks new capabilities to try during dev/test, since these features still need more time before becoming generally available (GA) in the runtime.

Example sidecar configuration

The following YAML shows an example configuration file that can be applied to an application's Dapr sidecar.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    stdout: true
    otel:
      endpointAddress: "localhost:4317"
      isSecure: false
      protocol: "grpc"
  httpPipeline:
    handlers:
      - name: oauth2
        type: middleware.http.oauth2
  secrets:
    scopes:
      - storeName: localstore
        defaultAccess: allow
        deniedSecrets: ["redis-password"]
  components:
    deny:
      - bindings.smtp
      - secretstores.local.file
  accessControl:
    defaultAction: deny
    trustDomain: "public"
    policies:
      - appId: app1
        defaultAction: deny
        trustDomain: 'public'
        namespace: "default"
        operations:
          - name: /op1
            httpVerb: ['POST', 'GET']
            action: deny
          - name: /op2/*
            httpVerb: ["*"]
            action: allow

Control plane configuration

A single configuration file called daprsystem is installed with the Dapr control plane system services that applies global settings.

This is only set up when Dapr is deployed to Kubernetes.

Control plane configuration settings

A Dapr control plane configuration contains the following sections:

  • mtls for mTLS (Mutual TLS)

mTLS (Mutual TLS)

The mtls section contains properties for mTLS.

Property | Type | Description
--- | --- | ---
enabled | bool | If true, enables mTLS for communication between services and apps in the cluster.
allowedClockSkew | string | Allowed tolerance when checking the expiration of TLS certificates, to allow for clock skew. Follows the format used by Go's time.ParseDuration. Default is 15m (15 minutes).
workloadCertTTL | string | How long a TLS certificate issued by Dapr is valid for. Follows the format used by Go's time.ParseDuration. Default is 24h (24 hours).
sentryAddress | string | Hostname port address for connecting to the Sentry server.
controlPlaneTrustDomain | string | Trust domain for the control plane. This is used to verify connection to control plane services.
tokenValidators | array | Additional Sentry token validators to use for authenticating certificate requests.

See the mTLS how-to and security concepts for more information.

Example control plane configuration

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true
    allowedClockSkew: 15m
    workloadCertTTL: 24h

Next steps

Learn about concurrency and rate limits

3.2 - How-To: Control concurrency and rate limit applications

Learn how to control how many requests and events can invoke your application simultaneously

Typically, in distributed computing, you may only want to allow for a given number of requests to execute concurrently. Using Dapr’s app-max-concurrency, you can control how many requests and events can invoke your application simultaneously.

By default, app-max-concurrency is set to -1, meaning no concurrency limit is enforced.

Different approaches

While this guide focuses on app-max-concurrency, you can also limit request rate per second using the middleware.http.ratelimit middleware. However, it’s important to understand the difference between the two approaches:

  • middleware.http.ratelimit: Time bound and limits the number of requests per second
  • app-max-concurrency: Specifies the max number of concurrent requests (and events) at any point in time.

See Rate limit middleware for more information about that approach.
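For reference, a rate limit middleware is declared as a component like any other; the sketch below assumes the middleware.http.ratelimit metadata field maxRequestsPerSecond:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ratelimit
spec:
  type: middleware.http.ratelimit
  version: v1
  metadata:
  - name: maxRequestsPerSecond
    value: 10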

Demo

Watch this video on how to control concurrency and rate limiting.

Configure app-max-concurrency

Without using Dapr, you would need to create some sort of a semaphore in the application and take care of acquiring and releasing it.

Using Dapr, you don’t need to make any code changes to your application.

Select how you’d like to configure app-max-concurrency.

To set concurrency limits with the Dapr CLI for running on your local dev machine, add the app-max-concurrency flag:

dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py

The above example effectively turns your app into a sequential processing service.

To configure concurrency limits in Kubernetes, add the following annotation to your pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodesubscriber
  namespace: default
  labels:
    app: nodesubscriber
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodesubscriber
  template:
    metadata:
      labels:
        app: nodesubscriber
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodesubscriber"
        dapr.io/app-port: "3000"
        dapr.io/app-max-concurrency: "1"
#...

Limitations

Controlling concurrency on external requests

Rate limiting is guaranteed for every event coming from Dapr, including pub/sub events, direct invocation from other services, bindings events, etc. However, Dapr can’t enforce the concurrency policy on requests that are coming to your app externally.

Arguments and annotations

Next steps

Limit secret store access

3.3 - How-To: Limit the secrets that can be read from secret stores

Define secret scopes by augmenting the existing configuration resource with restrictive permissions.

In addition to scoping which applications can access a given component, you can also scope a named secret store component to one or more secrets for an application. By defining allowedSecrets and/or deniedSecrets lists, you restrict applications to access only specific secrets.

For more information about configuring a Configuration resource:

Configure secrets access

The secrets section under the Configuration spec contains the following properties:

secrets:
  scopes:
    - storeName: kubernetes
      defaultAccess: allow
      allowedSecrets: ["redis-password"]
    - storeName: localstore
      defaultAccess: allow
      deniedSecrets: ["redis-password"]

The following table lists the properties for secret scopes:

Property | Type | Description
--- | --- | ---
storeName | string | Name of the secret store component. storeName must be unique within the list
defaultAccess | string | Access modifier. Accepted values "allow" (default) or "deny"
allowedSecrets | list | List of secret keys that can be accessed
deniedSecrets | list | List of secret keys that cannot be accessed

When an allowedSecrets list is present with at least one element, only those secrets defined in the list can be accessed by the application.

Permission priority

The allowedSecrets and deniedSecrets list values take priority over the defaultAccess. See how this works in the following example scenarios:

Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
--- | --- | --- | --- | ---
1 - Only default access | deny/allow | empty | empty | deny/allow
2 - Default deny with allowed list | deny | ["s1"] | empty | only "s1" can be accessed
3 - Default allow with denied list | allow | empty | ["s1"] | only "s1" cannot be accessed
4 - Default allow with allowed list | allow | ["s1"] | empty | only "s1" can be accessed
5 - Default deny with denied list | deny | empty | ["s1"] | deny
6 - Default deny/allow with both lists | deny/allow | ["s1"] | ["s2"] | only "s1" can be accessed

Examples

Scenario 1: Deny access to all secrets for a secret store

In a Kubernetes cluster, the native Kubernetes secret store is added to your Dapr application by default. In some scenarios, it may be necessary to deny access to Dapr secrets for a given application. To add this configuration:

  1. Define the following appconfig.yaml.

    apiVersion: dapr.io/v1alpha1
    kind: Configuration
    metadata:
      name: appconfig
    spec:
      secrets:
        scopes:
          - storeName: kubernetes
            defaultAccess: deny
    
  2. Apply it to the Kubernetes cluster using the following command:

    kubectl apply -f appconfig.yaml
    

For applications that need to be denied access to the Kubernetes secret store, follow the Kubernetes instructions, adding the following annotation to the application pod:


dapr.io/config: appconfig

With this defined, the application no longer has access to Kubernetes secret store.

Scenario 2: Allow access to only certain secrets in a secret store


To allow a Dapr application to have access to only certain secrets, define the following config.yaml:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
      - storeName: vault
        defaultAccess: deny
        allowedSecrets: ["secret1", "secret2"]

This example defines configuration for the secret store named vault. The default access to the secret store is deny. Meanwhile, some secrets are accessible by the application based on the allowedSecrets list. Follow the Sidecar configuration instructions to apply configuration to the sidecar.

Scenario 3: Deny access to certain sensitive secrets in a secret store

Define the following config.yaml:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
      - storeName: vault
        defaultAccess: allow # this is the default value, line can be omitted
        deniedSecrets: ["secret1", "secret2"]

This configuration explicitly denies access to secret1 and secret2 from the secret store named vault, while allowing access to all other secrets. Follow the Sidecar configuration instructions to apply configuration to the sidecar.

Next steps

Service invocation access control


3.4 - How-To: Apply access control list configuration for service invocation

Restrict what operations calling applications can perform

Using access control, you can configure policies that restrict which operations calling applications can perform, via service invocation, on the called application. You can define an access control policy specification in the Configuration schema to limit access:

  • To a called application from specific operations, and
  • To HTTP verbs from the calling applications.

An access control policy is specified in Configuration and applied to the Dapr sidecar for the called application. Access to the called app is based on the matched policy action.

You can provide a default global action for all calling applications. If no access control policy is specified, the default behavior is to allow all calling applications to access the called app.

See examples of access policies.

Terminology

trustDomain

A “trust domain” is a logical group that manages trust relationships. Every application is assigned a trust domain, which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value “public” is used. This trust domain is used to generate the identity of the application in the TLS cert.

App Identity

Dapr requests the sentry service to generate a SPIFFE ID for all applications. This ID is attached in the TLS cert.

The SPIFFE ID is of the format: spiffe://<trustdomain>/ns/<namespace>/<appid>.

For matching policies, the trust domain, namespace, and app ID values of the calling app are extracted from the SPIFFE ID in the TLS cert of the calling app. These values are matched against the trust domain, namespace, and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched.

Configuration properties

The following tables list the different properties for access control, policies, and operations:

Access Control

Property | Type | Description
--- | --- | ---
defaultAction | string | Global default action when no other policy is matched
trustDomain | string | Trust domain assigned to the application. Default is "public".
policies | string | Policies to determine what operations the calling app can do on the called app

Policies

Property | Type | Description
--- | --- | ---
app | string | AppId of the calling app to allow/deny service invocation from
namespace | string | Namespace value that needs to be matched with the namespace of the calling app
trustDomain | string | Trust domain that needs to be matched with the trust domain of the calling app. Default is "public"
defaultAction | string | App-level default action in case the app is found but no specific operation is matched
operations | string | Operations that are allowed from the calling app

Operations

Property | Type | Description
--- | --- | ---
name | string | Path name of the operations allowed on the called app. Wildcard "*" can be used in a path to match. Wildcard "**" can be used to match under multiple paths.
httpVerb | list | List of specific HTTP verbs that can be used by the calling app. Wildcard "*" can be used to match any HTTP verb. Unused for gRPC invocation.
action | string | Access modifier. Accepted values "allow" (default) or "deny"

Policy rules

  1. If no access policy is specified, the default behavior is to allow all apps to access all methods on the called app.
  2. If no global default action is specified and no app-specific policies are defined, the empty access policy is treated as if no access policy was specified. The default behavior is to allow all apps to access all methods on the called app.
  3. If no global default action is specified but some app-specific policies have been defined, the more secure option is taken: the global default action is assumed to be deny access to all methods on the called app.
  4. If an access policy is defined and the incoming app credentials cannot be verified, the global default action takes effect.
  5. If either the trust domain or namespace of the incoming app does not match the values specified in the app policy, the app policy is ignored and the global default action takes effect.

Policy priority

The action corresponding to the most specific policy matched takes effect as ordered below:

  1. Specific HTTP verbs in the case of HTTP or the operation level action in the case of GRPC.
  2. The default action at the app level
  3. The default action at the global level

Example scenarios

Below are some example scenarios for using access control list for service invocation. See configuration guidance to understand the available configuration settings for an application sidecar.

Scenario 1:

Deny access to all apps except where trustDomain = public, namespace = default, appId = app1

With this configuration, all calling methods with appId = app1 are allowed. All other invocation requests from other applications are denied.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: deny
    trustDomain: "public"
    policies:
    - appId: app1
      defaultAction: allow
      trustDomain: 'public'
      namespace: "default"

Scenario 2:

Deny access to all apps except trustDomain = public, namespace = default, appId = app1, operation = op1

With this configuration, only the method op1 from appId = app1 is allowed. All other method requests from all other apps, including other methods on app1, are denied.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: deny
    trustDomain: "public"
    policies:
    - appId: app1
      defaultAction: deny
      trustDomain: 'public'
      namespace: "default"
      operations:
      - name: /op1
        httpVerb: ['*']
        action: allow

Scenario 3:

Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched

With this configuration, only the scenarios below are allowed access. All other method requests from all other apps, including other methods on app1 or app2, are denied.

  • trustDomain = public, namespace = default, appID = app1, operation = op1, httpVerb = POST/PUT
  • trustDomain = "myDomain", namespace = "ns1", appID = app2, operation = op2 and application protocol is GRPC

Only the httpVerb POST/PUT on method op1 from appId = app1 is allowed. All other method requests from all other apps, including other methods on app1, are denied.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: deny
    trustDomain: "public"
    policies:
    - appId: app1
      defaultAction: deny
      trustDomain: 'public'
      namespace: "default"
      operations:
      - name: /op1
        httpVerb: ['POST', 'PUT']
        action: allow
    - appId: app2
      defaultAction: deny
      trustDomain: 'myDomain'
      namespace: "ns1"
      operations:
      - name: /op2
        action: allow

Scenario 4:

Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/*, all httpVerb

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: allow
    trustDomain: "public"
    policies:
    - appId: app1
      defaultAction: allow
      trustDomain: 'public'
      namespace: "default"
      operations:
      - name: /op1/*
        httpVerb: ['*']
        action: deny

Scenario 5:

Allow access to all methods for trustDomain = public, namespace = ns1, appId = app1 and deny access to all methods for trustDomain = public, namespace = ns2, appId = app1

This scenario shows how applications with the same app ID but belonging to different namespaces can be specified.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: allow
    trustDomain: "public"
    policies:
    - appId: app1
      defaultAction: allow
      trustDomain: 'public'
      namespace: "ns1"
    - appId: app1
      defaultAction: deny
      trustDomain: 'public'
      namespace: "ns2"

Scenario 6:

Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/**/a, all httpVerb

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: allow
    trustDomain: "public"
    policies:
    - appId: app1
      defaultAction: allow
      trustDomain: 'public'
      namespace: "default"
      operations:
      - name: /op1/**/a
        httpVerb: ['*']
        action: deny

“hello world” examples

In these examples, you learn how to apply access control to the hello world tutorials.

Access control lists rely on the Dapr Sentry service to generate the TLS certificates with a SPIFFE ID for authentication. This means the Sentry service either has to be running locally or deployed to your hosting environment, such as a Kubernetes cluster.

The nodeappconfig example below shows how to deny access to the neworder method from the pythonapp, where the Python app is in the myDomain trust domain and default namespace. The Node.js app is in the public trust domain.

nodeappconfig.yaml

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: nodeappconfig
spec:
  tracing:
    samplingRate: "1"
  accessControl:
    defaultAction: allow
    trustDomain: "public"
    policies:
    - appId: pythonapp
      defaultAction: allow
      trustDomain: 'myDomain'
      namespace: "default"
      operations:
      - name: /neworder
        httpVerb: ['POST']
        action: deny

pythonappconfig.yaml

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pythonappconfig
spec:
  tracing:
    samplingRate: "1"
  accessControl:
    defaultAction: allow
    trustDomain: "myDomain"

Self-hosted mode

When walking through this tutorial, you:

  • Run the Sentry service locally with mTLS enabled
  • Set up necessary environment variables to access certificates
  • Launch both the Node app and Python app each referencing the Sentry service to apply the ACLs

Prerequisites

Run the Node.js app

  1. In a command prompt, set these environment variables:

      ```bash
      export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
      export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
      export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
      export NAMESPACE=default
      ```
    
      ```powershell
      $env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
      $env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
      $env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
      $env:NAMESPACE="default"
      ```
    
  2. Run daprd to launch a Dapr sidecar for the Node.js app with mTLS enabled, referencing the local Sentry service:

    daprd --app-id nodeapp --dapr-grpc-port 50002 --dapr-http-port 3501 --log-level debug --app-port 3000 --enable-mtls --sentry-address localhost:50001 --config nodeappconfig.yaml
    
  3. Run the Node.js app in a separate command prompt:

    node app.js
    

Run the Python app

  1. In another command prompt, set these environment variables:

    ```bash
    export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
    export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
    export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
    export NAMESPACE=default
    ```

    ```powershell
    $env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
    $env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
    $env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
    $env:NAMESPACE="default"
    ```
    
  2. Run daprd to launch a Dapr sidecar for the Python app with mTLS enabled, referencing the local Sentry service:

    daprd --app-id pythonapp   --dapr-grpc-port 50003 --metrics-port 9092 --log-level debug --enable-mtls --sentry-address localhost:50001 --config pythonappconfig.yaml
    
  3. Run the Python app in a separate command prompt:

    python app.py
    

You should see the calls to the Node.js app fail in the Python app command prompt, due to the deny operation action in the nodeappconfig file. Change this action to allow and re-run the apps to see this call succeed.

Kubernetes mode

Prerequisites

Configure the Node.js and Python apps

You can create and apply the above nodeappconfig.yaml and pythonappconfig.yaml configuration files, as described in the configuration.

For example, the Kubernetes Deployment below is how the Python app is deployed to Kubernetes in the default namespace with this pythonappconfig configuration file.

Do the same for the Node.js deployment and look at the logs for the Python app to see the calls fail due to the deny operation action set in the nodeappconfig file.

Change this action to allow and re-deploy the apps to see this call succeed.

Deployment YAML example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pythonapp
  namespace: default
  labels:
    app: python
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python
  template:
    metadata:
      labels:
        app: python
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "pythonapp"
        dapr.io/config: "pythonappconfig"
    spec:
      containers:
      - name: python
        image: dapriosamples/hello-k8s-python:edge

Demo

Watch this video on how to apply access control list for service invocation.

Next steps

Dapr APIs allow list

3.5 - How-To: Selectively enable Dapr APIs on the Dapr sidecar

Choose which Dapr sidecar APIs are available to the app

In scenarios such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it’s recommended to only enable the Dapr sidecar APIs being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application.

Dapr allows you to control which APIs are accessible to the application by setting an API allowlist or denylist using a Dapr Configuration.

Default behavior

If no API allowlist or denylist is specified, the default behavior is to allow access to all Dapr APIs.

  • If you’ve only defined a denylist, all Dapr APIs are allowed except those defined in the denylist
  • If you’ve only defined an allowlist, only the Dapr APIs listed in the allowlist are allowed
  • If you’ve defined both an allowlist and a denylist, the denylist overrides the allowlist for APIs that are defined in both.
  • If neither is defined, all APIs are allowed.
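
As an illustration of the override rule above, in the following sketch the state HTTP API appears in both lists, so the denylist wins and only the publish API remains available (a hypothetical combination, not from the official examples):

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
spec:
  api:
    allowed:
      - name: state
        version: v1.0
        protocol: http
      - name: publish
        version: v1.0
        protocol: http
    denied:
      - name: state
        version: v1.0
        protocol: http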

For example, the following configuration enables all APIs for both HTTP and gRPC:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"

Using an allowlist

Enabling specific HTTP APIs

The following example enables the state v1.0 HTTP API and blocks all other HTTP APIs:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  api:
    allowed:
      - name: state
        version: v1.0
        protocol: http

Enabling specific gRPC APIs

The following example enables the state v1 gRPC API and blocks all other gRPC APIs:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  api:
    allowed:
      - name: state
        version: v1
        protocol: grpc

Using a denylist

Disabling specific HTTP APIs

The following example disables the state v1.0 HTTP API, allowing all other HTTP APIs:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  api:
    denied:
      - name: state
        version: v1.0
        protocol: http

Disabling specific gRPC APIs

The following example disables the state v1 gRPC API, allowing all other gRPC APIs:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  api:
    denied:
      - name: state
        version: v1
        protocol: grpc

List of Dapr APIs

The name field takes the name of the Dapr API you would like to enable.

See this list of values corresponding to the different Dapr APIs:

API group | HTTP API | gRPC API
--- | --- | ---
Service Invocation | invoke (v1.0) | invoke (v1)
State | state (v1.0 and v1.0-alpha1) | state (v1 and v1alpha1)
Pub/Sub | publish (v1.0 and v1.0-alpha1) | publish (v1 and v1alpha1)
Output Bindings | bindings (v1.0) | bindings (v1)
Subscribe | n/a | subscribe (v1alpha1)
Secrets | secrets (v1.0) | secrets (v1)
Actors | actors (v1.0) | actors (v1)
Metadata | metadata (v1.0) | metadata (v1)
Configuration | configuration (v1.0 and v1.0-alpha1) | configuration (v1 and v1alpha1)
Distributed Lock | lock (v1.0-alpha1), unlock (v1.0-alpha1) | lock (v1alpha1), unlock (v1alpha1)
Cryptography | crypto (v1.0-alpha1) | crypto (v1alpha1)
Workflow | workflows (v1.0) | workflows (v1)
Conversation | conversation (v1.0-alpha1) | conversation (v1alpha1)
Health | healthz (v1.0) | n/a
Shutdown | shutdown (v1.0) | shutdown (v1)

Next steps

Configure Dapr to use gRPC

3.6 - How-To: Configure Dapr to use gRPC

Configure Dapr to use gRPC for low-latency, high performance scenarios

Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients. You can see the full list of auto-generated clients (Dapr SDKs).

The Dapr runtime implements a proto service that apps can communicate with via gRPC.

Not only can you call Dapr via gRPC; Dapr can also communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implement the Dapr appcallback service.

Configuring Dapr to communicate with an app via gRPC

When running in self hosted mode, use the --app-protocol flag to tell Dapr to use gRPC to talk to the app:

dapr run --app-protocol grpc --app-port 5005 node app.js

This tells Dapr to communicate with your app via gRPC over port 5005.

On Kubernetes, set the following annotations in your deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-protocol: "grpc"
        dapr.io/app-port: "5005"
#...

Next steps

Handle large HTTP header sizes

3.7 - How-To: Handle large HTTP header size

Configure a larger HTTP read buffer size

Dapr has a default limit of 4KB for the HTTP header read buffer size. If you’re sending HTTP headers larger than the default 4KB, you may encounter a Too big request header service invocation error.

You can increase the HTTP header size by using:

  • The dapr.io/http-read-buffer-size annotation, or
  • The --dapr-http-read-buffer-size flag when using the CLI.

When running in self-hosted mode, use the --dapr-http-read-buffer-size flag to configure Dapr to use non-default http header size:

dapr run --dapr-http-read-buffer-size 16 node app.js

This tells Dapr to set maximum read buffer size to 16 KB.

On Kubernetes, set the following annotations in your deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "8000"
        dapr.io/http-read-buffer-size: "16"
#...

Dapr Kubernetes pod annotations spec

Next steps

Handle large HTTP body requests

3.8 - How-To: Handle large http body requests

Configure http requests that are bigger than 4 MB

By default, Dapr has a limit for the request body size, set to 4MB. You can change this by defining:

  • The dapr.io/http-max-request-size annotation, or
  • The --dapr-http-max-request-size flag.

When running in self-hosted mode, use the --dapr-http-max-request-size flag to configure Dapr to use non-default request body size:

dapr run --dapr-http-max-request-size 16 node app.js

This tells Dapr to set maximum request body size to 16 MB.

On Kubernetes, set the following annotations in your deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "8000"
        dapr.io/http-max-request-size: "16"
#...

Dapr Kubernetes pod annotations spec

Next steps

Install sidecar certificates

3.9 - How-To: Install certificates in the Dapr sidecar

Configure the Dapr sidecar container to trust certificates

The Dapr sidecar can be configured to trust certificates for communicating with external services. This is useful in scenarios where a self-signed certificate needs to be trusted, such as:

  • Using an HTTP binding
  • Configuring an outbound proxy for the sidecar

Both certificate authority (CA) certificates and leaf certificates are supported.

You can make the following configurations when the sidecar is running as a container.

  1. Configure certificates to be available to the sidecar container using volume mounts.
  2. Point the environment variable SSL_CERT_DIR in the sidecar container to the directory containing the certificates.

Note: For Windows containers, make sure the container is running with administrator privileges so it can install the certificates.

The following example uses Docker Compose to install certificates (present locally in the ./certificates directory) in the sidecar container:

version: '3'
services:
  dapr-sidecar:
    image: "daprio/daprd:edge" # dapr version must be at least v1.8
    command: [
      "./daprd",
      "-app-id", "myapp",
      "-app-port", "3000",
    ]
    volumes:
        - "./components/:/components"
        - "./certificates:/certificates" # (STEP 1) Mount the certificates folder to the sidecar container
    environment:
      - "SSL_CERT_DIR=/certificates" # (STEP 2) Set the environment variable to the path of the certificates folder
    # Uncomment the line below for Windows containers
    # user: ContainerAdministrator

Note: When the sidecar is not running inside a container, certificates must be directly installed on the host operating system.

On Kubernetes:

  1. Configure certificates to be available to the sidecar container using a volume mount.
  2. Point the environment variable SSL_CERT_DIR in the sidecar container to the directory containing the certificates.

The following example YAML shows a deployment that:

  • Attaches a pod volume to the sidecar
  • Sets SSL_CERT_DIR to install the certificates
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "8000"
        dapr.io/volume-mounts: "certificates-vol:/tmp/certificates" # (STEP 1) Mount the certificates folder to the sidecar container
        dapr.io/env: "SSL_CERT_DIR=/tmp/certificates" # (STEP 2) Set the environment variable to the path of the certificates folder
    spec:
      volumes:
        - name: certificates-vol
          hostPath:
            path: /certificates
#...

Note: When using Windows containers, the sidecar container is started with admin privileges, which is required to install the certificates. This does not apply to Linux containers.

After following these steps, all the certificates in the directory pointed by SSL_CERT_DIR are installed.

Demo

Watch the demo on installing SSL certificates and securely using the HTTP binding in community call 64:

Next steps

Enable preview features

3.10 - How-To: Enable preview features

How to specify and enable preview features

Preview features in Dapr are considered experimental when they are first released. These preview features require you to explicitly opt-in to use them. You specify this opt-in in Dapr’s Configuration file.

Preview features are enabled on a per application basis by setting configuration when running an application instance.

Configuration properties

The features section under the Configuration spec contains the following properties:

Property | Type | Description
--- | --- | ---
name | string | The name of the preview feature that is enabled/disabled
enabled | bool | Boolean specifying if the feature is enabled or disabled

Enabling a preview feature

Preview features are specified in the configuration. Here is an example of a full configuration that contains multiple features:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
  features:
    - name: Feature1
      enabled: true
    - name: Feature2
      enabled: true

To enable preview features when running Dapr locally, either update the default configuration or specify a separate config file using dapr run.

The default Dapr config is created when you run dapr init, and is located at:

  • Windows: %USERPROFILE%\.dapr\config.yaml
  • Linux/macOS: ~/.dapr/config.yaml

Alternatively, you can update preview features on all apps run locally by specifying the --config flag in dapr run and pointing to a separate Dapr config file:

dapr run --app-id myApp --config ./previewConfig.yaml ./app

In Kubernetes mode, the configuration must be provided via a configuration component. Using the same configuration as above, apply it via kubectl:

kubectl apply -f previewConfig.yaml

This configuration component can then be referenced in any application by modifying the application’s configuration to reference that specific configuration component via the dapr.io/config element. For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
  labels:
    app: node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
        dapr.io/config: "featureconfig"
    spec:
      containers:
      - name: node
        image: dapriosamples/hello-k8s-node:latest
        ports:
        - containerPort: 3000
        imagePullPolicy: Always

Next steps

Configuration schema

3.11 - How-To: Configure Environment Variables from Secrets for Dapr sidecar

Inject Environment Variables from Kubernetes Secrets into Dapr sidecar

In special cases, the Dapr sidecar needs an environment variable injected into it. This may be required by a component, a third-party library, or a module that uses environment variables to configure the component or customize its behavior. This can be useful for both production and non-production environments.

Overview

In Dapr 1.15, the new dapr.io/env-from-secret annotation was introduced, similar to dapr.io/env. With this annotation, you can inject an environment variable into the Dapr sidecar, with a value from a secret.

Annotation format

The values of this annotation are formatted like so:

  • Single key secret: <ENV_VAR_NAME>=<SECRET_NAME>
  • Multi key/value secret: <ENV_VAR_NAME>=<SECRET_NAME>:<SECRET_KEY>

<ENV_VAR_NAME> is required to follow the C_IDENTIFIER format, captured by the [A-Za-z_][A-Za-z0-9_]* regex:

  • Must start with a letter or underscore
  • The rest of the identifier contains letters, digits, or underscores

The name field is required due to the restriction of the secretKeyRef, so both name and key must be set. Learn more from the “env.valueFrom.secretKeyRef.name” section in this Kubernetes documentation. In this case, Dapr sets both to the same value.

Configuring single key secret environment variable

In the following example, the dapr.io/env-from-secret annotation is added to the Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
        dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret"
    spec:
      containers:
      - name: node
        image: dapriosamples/hello-k8s-node:latest
        ports:
        - containerPort: 3000
        imagePullPolicy: Always

The dapr.io/env-from-secret annotation with a value of "AUTH_TOKEN=auth-headers-secret" is injected as:

env:
- name: AUTH_TOKEN
  valueFrom:
    secretKeyRef:
      name: auth-headers-secret
      key: auth-headers-secret

This requires the secret to have both name and key fields with the same value, “auth-headers-secret”.

Example secret

Note: The following example is for demo purposes only. It’s not recommended to store secrets in plain text.

apiVersion: v1
kind: Secret
metadata:
  name: auth-headers-secret
type: Opaque
stringData:
  auth-headers-secret: "AUTH=mykey"

Configuring multi-key secret environment variable

In the following example, the dapr.io/env-from-secret annotation is added to the Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
        dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret:auth-header-value"
    spec:
      containers:
      - name: node
        image: dapriosamples/hello-k8s-node:latest
        ports:
        - containerPort: 3000
        imagePullPolicy: Always

The dapr.io/env-from-secret annotation with a value of "AUTH_TOKEN=auth-headers-secret:auth-header-value" is injected as:

env:
- name: AUTH_TOKEN
  valueFrom:
    secretKeyRef:
      name: auth-headers-secret
      key: auth-header-value

Example secret

Note: The following example is for demo purposes only. It’s not recommended to store secrets in plain text.

apiVersion: v1
kind: Secret
metadata:
  name: auth-headers-secret
type: Opaque
stringData:
  auth-header-value: "AUTH=mykey"

4 - Managing components in Dapr

How to manage your Dapr components in your application

4.1 - Certification lifecycle

The component certification lifecycle from submission to production ready

Overview

Dapr uses a modular design where functionality is delivered as a component. Each component has an interface definition. All of the components are interchangeable, so that in ideal scenarios, you can swap out one component with the same interface for another. Each component used in production maintains a certain set of technical requirements to ensure functional compatibility and robustness.

In general a component needs to be:

  • Compliant with the defined Dapr interfaces
  • Functionally correct and robust
  • Well documented and maintained

To make sure a component conforms to the standards set by Dapr, there are a set of tests run against a component in a Dapr maintainers managed environment. Once the tests pass consistently, the maturity level can be determined for a component.

Certification levels

The levels are as follows:

Alpha

  • The component implements the required interface and works as described in the specification
  • The component has documentation
  • The component might be buggy or might expose bugs on integration
  • The component may not pass all conformance tests
  • The component may not have conformance tests
  • Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases

All components start at the Alpha stage.

Beta

  • The component must pass all the component conformance tests defined to satisfy the component specification
  • The component conformance tests have been run in a Dapr maintainers managed environment
  • The component contains a record of the conformance test result reviewed and approved by Dapr maintainers with specific components-contrib version
  • Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases

Stable

  • The component must have component certification tests validating functionality and resiliency
  • The component is maintained by Dapr maintainers and supported by the community
  • The component is well documented and tested
  • The component has been available as Alpha or Beta for at least 1 minor version release of Dapr runtime prior
  • A maintainer will address component security, core functionality and test issues according to the Dapr support policy and issue a patch release that includes the patched stable component

Previous Generally Available (GA) components

Any component that was previously certified as GA is allowed into Stable even if the new requirements are not met.

Conformance tests

Each component in the components-contrib repository needs to adhere to a set of interfaces defined by Dapr. Conformance tests are tests that are run on these component definitions with their associated backing services such that the component is tested to be conformant with the Dapr interface specifications and behavior.

The conformance tests are defined for the following building blocks:

  • State store
  • Secret store
  • Bindings
  • Pub/Sub

To understand more about them see the readme here.

Test requirements

  • The tests should validate the functional behavior and robustness of component based on the component specification
  • All the details needed to reproduce the tests are added as part of the component conformance test documentation

Certification tests

Each stable component in the components-contrib repository must have a certification test plan and automated certification tests validating all features supported by the component via Dapr.

Test plan for stable components should include the following scenarios:

  • Client reconnection: in case the client library cannot connect to the service for a moment, Dapr sidecar should not require a restart once the service is back online.
  • Authentication options: validate the component can authenticate with all the supported options.
  • Validate resource provisioning: validate if the component automatically provisions resources on initialization, if applicable.
  • All scenarios relevant to the corresponding building block and component.

The test plan must be approved by a Dapr maintainer and be published in a README.md file along with the component code.

Test requirements

  • The tests should validate the functional behavior and robustness of the component based on the component specification, reflecting the scenarios from the test plan
  • The tests must run successfully as part of the continuous integration of the components-contrib repository

Component certification process

In order for a component to be certified, tests are run in an environment maintained by the Dapr project.

New component certification: Alpha->Beta

For a new component requiring a certification change from Alpha to Beta, a request for component certification follows these steps:

  • Requestor creates an issue in the components-contrib repository for certification of the component with the current and the new certification levels
  • Requestor submits a PR to integrate the component with the defined conformance test suite, if not already included
    • The user details the environment setup in the issue created, so a Dapr maintainer can set up the service in a managed environment
    • After the environment setup is complete, Dapr maintainers review the PR and, if approved, merge that PR
  • Requestor submits a PR in the docs repository, updating the component’s certification level

New component certification: Beta->Stable

For a new component requiring a certification change from Beta to Stable, a request for component certification follows these steps:

  • Requestor creates an issue in the components-contrib repository for certification of the component with the current and the new certification levels
  • Requestor submits a PR for the test plan as a README.md file in the component’s source code directory
    • The requestor details the test environment requirements in the created PR, including any manual steps or credentials needed
    • A Dapr maintainer reviews the test plan, provides feedback or approves it, and eventually merges the PR
  • Requestor submits a PR for the automated certification tests, including scripts to provision resources when applicable
  • After the test environment setup is completed and credentials provisioned, Dapr maintainers review the PR and, if approved, merges the PR
  • Requestor submits a PR in the docs repository, updating the component’s certification level

4.2 - Updating components

Updating deployed components used by applications

When making an update to an existing deployed component used by an application, Dapr does not update the component automatically unless the HotReload feature gate is enabled. The Dapr sidecar needs to be restarted in order to pick up the latest version of the component. How this is done depends on the hosting environment.

Kubernetes

When running in Kubernetes, the process of updating a component involves two steps:

  1. Apply the new component YAML to the desired namespace
  2. Unless the HotReload feature gate is enabled, perform a rollout restart operation on your deployments to pick up the latest component

Self Hosted

Unless the HotReload feature gate is enabled, the process of updating a component involves a single step of stopping and restarting the daprd process to pick up the latest component.

Hot Reloading (Preview Feature)

This feature is currently in preview. Hot reloading is enabled via the HotReload feature gate.

Dapr can be made to “hot reload” components whereby component updates are picked up automatically without the need to restart the Dapr sidecar process or Kubernetes pod. This means creating, updating, or deleting a component manifest will be reflected in the Dapr sidecar during runtime.
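
For example, building on the preview features configuration shown earlier, the gate can be enabled with a sketch like this:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  features:
    - name: HotReload
      enabled: true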

All components are supported for hot reloading except for the following types. Any create, update, or deletion of these component types is ignored by the sidecar with a restart required to pick up changes.

Further reading

4.3 - How-To: Scope components to one or more applications

Limit component access to particular Dapr instances

Dapr components are namespaced (separate from the Kubernetes namespace concept), meaning a Dapr runtime instance can only access components that have been deployed to the same namespace.

When Dapr runs, it matches its own configured namespace with the namespace of the components that it loads, and initializes only the ones matching its namespace. All other components in a different namespace are not loaded.

Namespaces

Namespaces can be used to limit component access to particular Dapr instances.

In self hosted mode, a developer can specify the namespace to a Dapr instance by setting the NAMESPACE environment variable. If the NAMESPACE environment variable is set, Dapr does not load any component that does not specify the same namespace in its metadata.

For example, given this component in the production namespace:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis-master:6379

To tell Dapr which namespace it is deployed to, set the environment variable:

MacOS/Linux:

export NAMESPACE=production
# run Dapr as usual

Windows:

setx NAMESPACE "production"
# run Dapr as usual

Let’s consider the following component in Kubernetes:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis-master:6379

In this example, the Redis component is only accessible to Dapr instances running inside the production namespace.

Application access to components with scopes

Developers and operators might want to limit access to one database from a certain application, or a specific set of applications. To achieve this, Dapr allows you to specify scopes on the component YAML. Application scopes added to a component limit the use of the component to only the applications with the specified IDs.

The following example shows how to give two Dapr-enabled apps, with the app IDs of app1 and app2, access to the Redis component named statestore, which itself is in the production namespace:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis-master:6379
scopes:
- app1
- app2

Community call demo

Using namespaces with service invocation

Read Service invocation across namespaces for more information on using namespaces when calling between services.

Using namespaces with pub/sub

Read Configure Pub/Sub components with multiple namespaces for more information on using namespaces with pub/sub.

4.4 - How-To: Reference secrets in components

How to securely reference secrets from a component definition

Overview

Components can reference secrets in the spec.metadata section of the component definition.

In order to reference a secret, you need to set the auth.secretStore field to specify the name of the secret store that holds the secrets.

When running in Kubernetes, if the auth.secretStore is empty, the Kubernetes secret store is assumed.

Supported secret stores

Go to this link to see all the secret stores supported by Dapr, along with information on how to configure and use them.

Referencing secrets

While you have the option to use plain text secrets (like MyPassword), as shown in the yaml below for the value of redisPassword, this is not recommended for production:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: MyPassword

Instead create the secret in your secret store and reference it in the component definition. There are two cases for this shown below – the “Secret contains an embedded key” and the “Secret is a string”.

The “Secret contains an embedded key” case applies when there is a key embedded within the secret, i.e. the secret is not an entire connection string. This is shown in the following component definition yaml.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    secretKeyRef:
      name: redis-secret
      key:  redis-password
auth:
  secretStore: <SECRET_STORE_NAME>

SECRET_STORE_NAME is the name of the configured secret store component. When running in Kubernetes and using a Kubernetes secret store, the field auth.secretStore defaults to kubernetes and can be left empty.

The above component definition tells Dapr to extract a secret named redis-secret from the defined secretStore and assign the value associated with the redis-password key embedded in the secret to the redisPassword field in the component. One use of this case is when your code is constructing a connection string, for example putting together a URL, a secret, plus other information as necessary, into a string.
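For example, as a minimal sketch (assuming a sidecar listening on the default HTTP port 3500 and a configured secret store component named mysecretstore, both hypothetical for this illustration), your code could retrieve the embedded key through the Dapr secrets API and assemble the connection string itself:

curl http://localhost:3500/v1.0/secrets/mysecretstore/redis-secret
# Example response, containing the embedded key and its value:
# {"redis-password":"<your redis passphrase>"}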

On the other hand, the “Secret is a string” case below applies when there is NOT a key embedded in the secret; rather, the secret is just a string. In this case, both the secret name and the secret key in the secretKeyRef section are identical. This applies when the secret itself is an entire connection string with no embedded key whose value needs to be extracted. Typically a connection string consists of connection information and some sort of secret to allow connection, plus perhaps other information, and does not require a separate “secret”. This case is shown in the below component definition yaml.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: servicec-inputq-azkvsecret-asbqueue
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: connectionString
    secretKeyRef:
      name: asbNsConnString
      key: asbNsConnString
  - name: queueName
    value: servicec-inputq
auth:
  secretStore: <SECRET_STORE_NAME>

The above “Secret is a string” case yaml tells Dapr to extract a connection string named asbNsConnString from the defined secretStore and assign the value to the connectionString field in the component. Because the “secret” from the secretStore is a plain string with no embedded key, the secret name and secret key must be identical.

Example

Referencing a Kubernetes secret

The following example shows you how to create a Kubernetes secret to hold the connection string for an Event Hubs binding.

  1. First, create the Kubernetes secret:

    kubectl create secret generic eventhubs-secret --from-literal=connectionString=*********
    
  2. Next, reference the secret in your binding:

    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: eventhubs
    spec:
      type: bindings.azure.eventhubs
      version: v1
      metadata:
      - name: connectionString
        secretKeyRef:
          name: eventhubs-secret
          key: connectionString
    
  3. Finally, apply the component to the Kubernetes cluster:

    kubectl apply -f ./eventhubs.yaml
    

Scoping access to secrets

Dapr can restrict access to secrets in a secret store using its configuration. Read How To: Use secret scoping and How-To: Limit the secrets that can be read from secret stores for more information. This is the recommended way to limit access to secrets using Dapr.

Kubernetes permissions

Default namespace

When running in Kubernetes, during installation Dapr defines a default Role and RoleBinding for secrets access from the Kubernetes secret store in the default namespace. For Dapr-enabled apps that fetch secrets from the default namespace, a secret can be defined and referenced in components as shown in the example above.

Non-default namespaces

If your Dapr enabled apps are using components that fetch secrets from non-default namespaces, apply the following resources to that namespace:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: <NAMESPACE>
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-secret-reader
  namespace: <NAMESPACE>
subjects:
- kind: ServiceAccount
  name: default
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io

These resources grant Dapr permissions to get secrets from the Kubernetes secret store for the namespace defined in the Role and RoleBinding.
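For example, assuming the manifest above is saved to a file named secret-reader.yaml (a hypothetical filename), you can apply it with:

kubectl apply -f secret-reader.yaml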

4.5 - State stores components

Guidance on setting up different state stores for Dapr state management

Dapr integrates with existing databases to provide apps with state management capabilities for CRUD operations, transactions and more. It also supports the configuration of multiple, named, state store components per application.

State stores are extensible and can be found in the components-contrib repo.

A state store in Dapr is described using a Component file:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.<DATABASE>
  version: v1
  metadata:
  - name: <KEY>
    value: <VALUE>
  - name: <KEY>
    value: <VALUE>
...

The type of database is determined by the type field, and things like connection strings and other metadata are put in the .metadata section. Even though metadata values can contain secrets in plain text, it is recommended you use a secret store.

Visit this guide on how to configure a state store component.

Supported state stores

Visit this reference to see all of the supported state stores in Dapr.

4.6 - Pub/Sub brokers

Guidance on setting up different message brokers for Dapr Pub/Sub

Dapr integrates with pub/sub message buses to provide applications with the ability to create event-driven, loosely coupled architectures where producers send events to consumers via topics.

Dapr supports the configuration of multiple, named, pub/sub components per application. Each pub/sub component has a name and this name is used when publishing a message topic. Read the API reference for details on how to publish and subscribe to topics.

Pub/sub components are extensible. A list of supported pub/sub components is here and the implementations can be found in the components-contrib repo.

Component files

A pub/sub is described using a Component file:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.<NAME>
  version: v1
  metadata:
  - name: <KEY>
    value: <VALUE>
  - name: <KEY>
    value: <VALUE>
...

The type of pub/sub is determined by the type field, and properties such as connection strings and other metadata are put in the .metadata section. Even though metadata values can contain secrets in plain text, it is recommended you use a secret store using a secretKeyRef.

While all pub/sub components support consumerID metadata, the runtime creates a consumer ID if you do not supply one. All component metadata field values can carry templated metadata values, which are resolved on Dapr sidecar startup. For example, you can choose to use {namespace} as the consumerGroup to enable using the same appId in different namespaces using the same topics as described in this article.
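For instance, the following is a minimal sketch of a pub/sub component that sets the consumer ID to the sidecar’s namespace; the broker type and any broker-specific metadata are placeholders:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.<NAME>
  version: v1
  metadata:
  - name: consumerID
    value: "{namespace}"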

Visit this guide for instructions on configuring and using pub/sub components.

4.6.1 - HowTo: Configure Pub/Sub components with multiple namespaces

Use Dapr Pub/Sub with multiple namespaces

In some scenarios, applications can be spread across namespaces and share a queue or topic via PubSub. In this case, the PubSub component must be provisioned in each namespace.

This example uses the PubSub sample. The Redis installation and the subscribers are in namespace-a while the publisher UI is in namespace-b. This solution will also work if Redis is installed on another namespace or if you use a managed cloud service like Azure ServiceBus, AWS SNS/SQS or GCP PubSub.

This is a diagram of the example using namespaces.



The table below shows which resources are deployed to which namespaces:

Resource                  namespace-a   namespace-b
Redis master              ✅            ❌
Redis replicas            ✅            ❌
Dapr’s PubSub component   ✅            ✅
Node subscriber           ✅            ❌
Python subscriber         ✅            ❌
React UI publisher        ❌            ✅

Pre-requisites

Setup namespace-a

Create namespace and switch kubectl to use it.

kubectl create namespace namespace-a
kubectl config set-context --current --namespace=namespace-a

Install Redis (master and slave) on namespace-a, following these instructions.

Now, configure deploy/redis.yaml, paying attention to the hostname containing namespace-a.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: "redisHost"
    value: "redis-master.namespace-a.svc:6379"
  - name: "redisPassword"
    value: "YOUR_PASSWORD"

Deploy resources to namespace-a:

kubectl apply -f deploy/redis.yaml
kubectl apply -f deploy/node-subscriber.yaml
kubectl apply -f deploy/python-subscriber.yaml

Setup namespace-b

Create namespace and switch kubectl to use it.

kubectl create namespace namespace-b
kubectl config set-context --current --namespace=namespace-b

Deploy resources to namespace-b, including the Redis component:

kubectl apply -f deploy/redis.yaml
kubectl apply -f deploy/react-form.yaml

Now, find the IP address for react-form, open it in your browser, and publish messages to each topic (A, B and C).

kubectl get service -A

Confirm subscribers received the messages.

Switch back to namespace-a:

kubectl config set-context --current --namespace=namespace-a

Find the POD names:

kubectl get pod # Copy POD names and use in the next commands.

Display logs:

kubectl logs node-subscriber-XYZ node-subscriber
kubectl logs python-subscriber-XYZ python-subscriber

The messages published on the browser should show in the corresponding subscriber’s logs. The Node.js subscriber receives messages of type “A” and “B”, while the Python subscriber receives messages of type “A” and “C”.

Clean up

kubectl delete -f deploy/redis.yaml  --namespace namespace-a
kubectl delete -f deploy/node-subscriber.yaml  --namespace namespace-a
kubectl delete -f deploy/python-subscriber.yaml  --namespace namespace-a
kubectl delete -f deploy/react-form.yaml  --namespace namespace-b
kubectl delete -f deploy/redis.yaml  --namespace namespace-b
kubectl config set-context --current --namespace=default
kubectl delete namespace namespace-a
kubectl delete namespace namespace-b

4.7 - Secret store components

Guidance on setting up different secret store components

Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. Each secret store component has a name and this name is used when accessing a secret.

As with other building block components, secret store components are extensible and can be found in the components-contrib repo.

A secret store in Dapr is described using a Component file with the following fields:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: secretstore
spec:
  type: secretstores.<NAME>
  version: v1
  metadata:
  - name: <KEY>
    value: <VALUE>
  - name: <KEY>
    value: <VALUE>
...

The type of secret store is determined by the type field, and things like connection strings and other metadata are put in the .metadata section.

Different supported secret stores will have different specific fields that would need to be configured. For example, when configuring a secret store which uses AWS Secrets Manager the file would look like this:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: awssecretmanager
spec:
  type: secretstores.aws.secretmanager
  version: v1
  metadata:
  - name: region
    value: "[aws_region]"
  - name: accessKey
    value: "[aws_access_key]"
  - name: secretKey
    value: "[aws_secret_key]"
  - name: sessionToken
    value: "[aws_session_token]"

Apply the configuration

Once you have created the component’s YAML file, follow these instructions to apply it based on your hosting environment:

To run locally, create a components dir containing the YAML file and provide the path to the dapr run command with the flag --resources-path.
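For example, here is a sketch assuming an app ID of myapp and a Python app started with python3 app.py:

dapr run --app-id myapp --resources-path ./components -- python3 app.py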

To deploy in Kubernetes, assuming your component file is named secret-store.yaml, run:

kubectl apply -f secret-store.yaml

Supported secret stores

Visit the secret stores reference for a full list of supported secret stores.

4.8 - Bindings components

Guidance on setting up Dapr bindings components

Dapr integrates with external resources to allow apps to both be triggered by external events and interact with the resources. Each binding component has a name and this name is used when interacting with the resource.

As with other building block components, binding components are extensible and can be found in the components-contrib repo.

A binding in Dapr is described using a Component file with the following fields:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.<NAME>
  version: v1
  metadata:
  - name: <KEY>
    value: <VALUE>
  - name: <KEY>
    value: <VALUE>
...

The type of binding is determined by the type field, and things like connection strings and other metadata are put in the .metadata section.

Different supported bindings will have different specific fields that would need to be configured. For example, when configuring a binding for Azure Blob Storage, the file would look like this:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: storageAccount
    value: myStorageAccountName
  - name: storageAccessKey
    value: ***********
  - name: container
    value: container1
  - name: decodeBase64
    value: <bool>
  - name: getBlobRetryCount
    value: <integer>

Apply the configuration

Once you have created the component’s YAML file, follow these instructions to apply it based on your hosting environment:

To run locally, create a components dir containing the YAML file and provide the path to the dapr run command with the flag --resources-path.

To deploy in Kubernetes, assuming your component file is named mybinding.yaml, run:

kubectl apply -f mybinding.yaml

Supported bindings

Visit the bindings reference for a full list of supported resources.

4.9 - How-To: Register a pluggable component

Learn how to register a pluggable component

Component registration process

Pluggable, gRPC-based components are typically run as containers or processes that need to communicate with the Dapr runtime via Unix Domain Sockets (or UDS for short). They are automatically discovered and registered in the runtime with the following steps:

  1. The component listens on a Unix Domain Socket placed on the shared volume.
  2. The Dapr runtime lists all Unix Domain Socket files in the shared volume.
  3. The Dapr runtime connects with each socket and uses gRPC reflection to discover all proto services from a given building block API that the component implements.

A single component can implement multiple component interfaces at once.

While Dapr’s built-in components come included with the runtime, pluggable components require a few setup steps before they can be used with Dapr.

  1. Pluggable components need to be started and ready to take requests before Dapr itself is started.
  2. The Unix Domain Socket file used for pluggable component communication needs to be made accessible to both Dapr and the pluggable component.

In standalone mode, pluggable components run as processes or containers. On Kubernetes, pluggable components run as containers and are automatically injected to the application’s pod by Dapr’s sidecar injector, allowing customization via the standard Kubernetes Container spec.

This also changes the approach to share Unix Domain Socket files between Dapr and pluggable components.

Select your environment to begin making your component discoverable.

Run the component

Both your component and the Unix Socket must be running before Dapr starts.

By default, Dapr sidecar looks for components as Unix Domain Socket files in /tmp/dapr-components-sockets.

Filenames in this folder are significant for component registration. They are formed from the component’s name plus a file extension of your choice, most commonly .sock. For example, the filename my-component.sock is a valid Unix Domain Socket file name for a component named my-component.

Since you are running Dapr on the same host as the component, verify that this folder and the files within it are accessible and writable by both your component and Dapr. If you are using Dapr’s sidecar injector capability, this volume is created and mounted automatically.

Component discovery and multiplexing

A pluggable component accessible through a Unix Domain Socket (UDS) can host multiple distinct component APIs. During the components’ initial discovery process, Dapr uses reflection to enumerate all the component APIs behind a UDS. The my-component pluggable component in the example above can contain both state store (state) and a pub/sub (pubsub) component APIs.

Typically, a pluggable component implements a single component API for packaging and deployment. However, at the expense of increasing its dependencies and broadening its security attack surface, a pluggable component can have multiple component APIs implemented. This could be done to ease the deployment and monitoring burden. Best practice for isolation, fault tolerance, and security is a single component API implementation for each pluggable component.

Define the component

Define your component using a component spec. Your component’s spec.type value is made by concatenating the following 2 parts with a .:

  1. The component’s API (state, pubsub, bindings etc)
  2. The component’s name, which is derived from the Unix Domain Socket filename, without the file extension.

You will need to define one component spec for each API exposed by your pluggable component’s Unix Domain Socket. The Unix Domain Socket my-component.sock from the previous example exposes a pluggable component named my-component with both a state and a pubsub API. Two components specs, each in their own YAML file, placed in the resources-path, will be required: one for state.my-component and another for pubsub.my-component.

For instance, the component spec for state.my-component could be:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-production-state-store
spec:
  type: state.my-component
  version: v1
  metadata:

In the sample above, notice the following:

  • The contents of the field spec.type is state.my-component, referring to a state store being exposed as a pluggable component named my-component.
  • The field metadata.name, which is the name of the state store being defined here, is not related to the pluggable component name.

Save this file as component.yaml in Dapr’s component configuration folder. Just like the contents of metadata.name field, the filename for this YAML file has no impact and does not depend on the pluggable component name.
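A second spec in its own YAML file would then register the pub/sub API exposed by the same socket. As a sketch (the metadata.name value is arbitrary):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-production-pubsub
spec:
  type: pubsub.my-component
  version: v1
  metadata: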

Run Dapr

Initialize Dapr, and make sure that your component file is placed in the right folder.

That’s it! Now you’re able to call the state store APIs via Dapr API. See it in action by running the following. Replace $PORT with the Dapr HTTP port:

curl -X POST -H "Content-Type: application/json" -d '[{ "key": "name", "value": "Bruce Wayne", "metadata": {}}]' http://localhost:$PORT/v1.0/state/prod-mystore

Retrieve the value, replacing $PORT with the Dapr HTTP port:

curl http://localhost:$PORT/v1.0/state/prod-mystore/name

Build and publish a container for your pluggable component

Make sure your component is running as a container, published first and accessible to your Kubernetes cluster.

Deploy Dapr on a Kubernetes cluster

Follow the steps provided in the Deploy Dapr on a Kubernetes cluster docs.

Add the pluggable component container in your deployments

Pluggable components are deployed as containers in the same pod as your application.

Since pluggable components are backed by Unix Domain Sockets, make the socket created by your pluggable component accessible by Dapr runtime. Configure the deployment spec to:

  1. Mount volumes
  2. Hint to Dapr the mounted Unix socket volume location
  3. Attach volume to your pluggable component container

In the following example, your configured pluggable component is deployed as a container within the same pod as your application container.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        # Recommended to automatically inject pluggable components.
        dapr.io/inject-pluggable-components: "true" 
        dapr.io/app-id: "my-app"
        dapr.io/enabled: "true"
    spec:
      containers:
      # Your application's container spec, as usual.
        - name: app
          image: YOUR_APP_IMAGE:YOUR_APP_IMAGE_VERSION

Setting the dapr.io/inject-pluggable-components annotation to “true” is recommended; it indicates to Dapr’s sidecar injector that this application’s pod will have additional containers for pluggable components.

Alternatively, you can skip Dapr’s sidecar injection capability and manually add the pluggable component’s container and annotate your pod, telling Dapr which containers within that pod are pluggable components, like in the example below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        dapr.io/pluggable-components: "component" ## the name of the pluggable component container separated by `,`, e.g "componentA,componentB".
        dapr.io/app-id: "my-app"
        dapr.io/enabled: "true"
    spec:
      containers:
      ### --------------------- YOUR APPLICATION CONTAINER GOES HERE -----------
        - name: app
          image: YOUR_APP_IMAGE:YOUR_APP_IMAGE_VERSION
      ### --------------------- YOUR PLUGGABLE COMPONENT CONTAINER GOES HERE -----------
        - name: component
          image: YOUR_IMAGE_GOES_HERE:YOUR_IMAGE_VERSION

Before applying the deployment, let’s add one more configuration: the component spec.

Define a component

Pluggable components are defined using a component spec. The component type is derived from the socket name (without the file extension). In the following example YAML, replace:

  • your_socket_goes_here with your component socket name (no extension)
  • your_component_type with your component type

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: prod-mystore
  # When running on Kubernetes and automatic container injection, add annotation below:
  annotations:
    dapr.io/component-container: >
      {
        "name": "my-component",
        "image": "<registry>/<image_name>:<image_tag>"
      }
spec:
  type: your_component_type.your_socket_goes_here
  version: v1
  metadata:
scopes:
  - backend

The dapr.io/component-container annotation is mandatory on Kubernetes when you want Dapr’s sidecar injector to handle the container and volume injection for the pluggable component. At minimum, you’ll need the name and image attributes for Dapr’s sidecar injector to successfully add the container to the application’s pod. The volume for the Unix Domain Socket is automatically created and mounted by Dapr’s sidecar injector.

Scope your component to make sure that only the target application can connect with the pluggable component, since it will only be running in its deployment. Otherwise the runtime fails when initializing the component.

That’s it! Apply the created manifests to your Kubernetes cluster, and call the state store APIs via Dapr API.

Use Kubernetes pod forwarder to access the daprd runtime.
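For example, a minimal sketch assuming your application pod is named my-app-pod (hypothetical) and the sidecar exposes the default Dapr HTTP port 3500:

kubectl port-forward pod/my-app-pod 3500:3500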

See it in action by running the following. Replace $PORT with the Dapr HTTP port:

curl -X POST -H "Content-Type: application/json" -d '[{ "key": "name", "value": "Bruce Wayne", "metadata": {}}]' http://localhost:$PORT/v1.0/state/prod-mystore

Retrieve the value, replacing $PORT with the Dapr HTTP port:

curl http://localhost:$PORT/v1.0/state/prod-mystore/name

Next Steps

Get started with developing .NET pluggable component using this sample code

4.10 - Configure middleware components

Customize processing pipelines by adding middleware components

Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. There are two places that you can use a middleware pipeline:

  1. Building block APIs - HTTP middleware components are executed when invoking any Dapr HTTP APIs.
  2. Service-to-Service invocation - HTTP middleware components are applied to service-to-service invocation calls.

Configure API middleware pipelines

When launched, a Dapr sidecar constructs a middleware processing pipeline for incoming HTTP calls. By default, the pipeline consists of the tracing and CORS middlewares. Additional middlewares, configured by a Dapr Configuration, can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, secrets, configuration, distributed lock, etc.

A request goes through all the defined middleware components before it’s routed to user code, and then goes through the defined middleware, in reverse order, before it’s returned to the client, as shown in the following diagram.

Diagram showing the flow of a request and a response through the middlewares, as described in the paragraph above

HTTP middleware components are executed when invoking Dapr HTTP APIs using the httpPipeline configuration.

The following configuration example defines a custom pipeline that uses an OAuth 2.0 middleware and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
      - name: oauth2
        type: middleware.http.oauth2
      - name: uppercase
        type: middleware.http.uppercase

As with other components, middleware components can be found in the supported Middleware reference and in the dapr/components-contrib repo.

See all middleware components

Configure app middleware pipelines

You can also use any middleware component when making service-to-service invocation calls. For example, to add token validation in a zero-trust environment, to transform a request for a specific app endpoint, or to apply OAuth policies.

Service-to-service invocation middleware components apply to all outgoing calls from a Dapr sidecar to the receiving application (service), as shown in the diagram below.

Diagram showing the flow of a service invocation request. Requests from the callee Dapr sidecar to the callee application go through the app middleware pipeline as described in the paragraph above.

Any middleware component that can be used as HTTP middleware can also be applied to service-to-service invocation calls as a middleware component using the appHttpPipeline configuration. The example below adds the uppercase middleware component for all outgoing calls from the Dapr sidecar (target of service invocation) to the application that this configuration is applied to.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  appHttpPipeline:
    handlers:
      - name: uppercase
        type: middleware.http.uppercase

5 - Securing Dapr deployments

Best practices and instructions on how to secure your Dapr applications

5.1 - Setup & configure mTLS certificates

Encrypt communication between applications using self-signed or user supplied x.509 certificates

Dapr supports in-transit encryption of communication between Dapr instances using the Dapr control plane’s Sentry service, which acts as a central Certificate Authority (CA).

Dapr allows operators and developers to bring in their own certificates, or instead let Dapr automatically create and persist self-signed root and issuer certificates.

For detailed information on mTLS, read the security concepts section.

If custom certificates have not been provided, Dapr automatically creates and persists self-signed certs valid for one year. In Kubernetes, the certs are persisted to a secret that resides in the namespace of the Dapr system pods, accessible only to them. In self-hosted mode, the certs are persisted to disk.

Control plane Sentry service configuration

The mTLS settings reside in a Dapr control plane configuration file. For example, when you deploy the Dapr control plane to Kubernetes, this configuration file is automatically created and you can then edit it. The following file shows the available settings for mTLS in a configuration resource, deployed in the daprsystem namespace:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true
    workloadCertTTL: "24h"
    allowedClockSkew: "15m"

The file above shows the default daprsystem configuration settings. The examples below show you how to change and apply this configuration to the control plane Sentry service in either Kubernetes or self-hosted mode.

Kubernetes

Setting up mTLS with the configuration resource

In Kubernetes, Dapr creates a default control plane configuration resource with mTLS enabled. The Sentry service, the certificate authority system pod, is installed both with Helm and with the Dapr CLI using dapr init --kubernetes.

You can view the control plane configuration resource with the following command:

kubectl get configurations/daprsystem --namespace <DAPR_NAMESPACE> -o yaml

To make changes to the control plane configuration resource, run the following command to edit it:

kubectl edit configurations/daprsystem --namespace <DAPR_NAMESPACE>

Once the changes are saved, perform a rolling update to the control plane:

kubectl rollout restart deploy/dapr-sentry -n <DAPR_NAMESPACE>
kubectl rollout restart deploy/dapr-operator -n <DAPR_NAMESPACE>
kubectl rollout restart statefulsets/dapr-placement-server -n <DAPR_NAMESPACE>

Note: the control plane Sidecar Injector service does not need to be redeployed

Disabling mTLS with Helm

The control plane will continue to use mTLS

kubectl create ns dapr-system

helm install \
  --set global.mtls.enabled=false \
  --namespace dapr-system \
  dapr \
  dapr/dapr

Disabling mTLS with the CLI

The control plane will continue to use mTLS

dapr init --kubernetes --enable-mtls=false

Viewing logs

In order to view the Sentry service logs, run the following command:

kubectl logs --selector=app=dapr-sentry --namespace <DAPR_NAMESPACE>

Bringing your own certificates

Using Helm, you can provide the PEM encoded root cert, issuer cert and private key that will be populated into the Kubernetes secret used by the Sentry service.

Note: This example uses the OpenSSL command line tool, which is a widely distributed package, easily installed on Linux via the package manager. On Windows, OpenSSL can be installed using Chocolatey. On MacOS, it can be installed using brew: brew install openssl

Create config files for generating the certificates. This is necessary for generating v3 certificates with the SAN (Subject Alt Name) extension fields. First, save the following to a file named root.conf:

[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = VA
L = Daprville
O = dapr.io/sentry
OU = dapr.io/sentry
CN = cluster.local
[v3_req]
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = cluster.local

Repeat this for issuer.conf, paste the same contents into the file, but add pathlen:0 to the end of the basicConstraints line, as shown below:

basicConstraints = critical, CA:true, pathlen:0

Run the following to generate the root cert and key:

# skip the following line to reuse an existing root key, required for rotating expiring certificates
openssl ecparam -genkey -name prime256v1 | openssl ec -out root.key
openssl req -new -nodes -sha256 -key root.key -out root.csr -config root.conf -extensions v3_req
openssl x509 -req -sha256 -days 365 -in root.csr -signkey root.key -outform PEM -out root.pem -extfile root.conf -extensions v3_req

Next run the following to generate the issuer cert and key:

# skip the following line to reuse an existing issuer key, required for rotating expiring certificates
openssl ecparam -genkey -name prime256v1 | openssl ec -out issuer.key
openssl req -new -sha256 -key issuer.key -out issuer.csr -config issuer.conf -extensions v3_req
openssl x509 -req -in issuer.csr -CA root.pem -CAkey root.key -CAcreateserial -outform PEM -out issuer.pem -days 365 -sha256 -extfile issuer.conf -extensions v3_req

Install Helm and pass the root cert, issuer cert and issuer key to Sentry via configuration:

kubectl create ns dapr-system

helm install \
  --set-file dapr_sentry.tls.issuer.certPEM=issuer.pem \
  --set-file dapr_sentry.tls.issuer.keyPEM=issuer.key \
  --set-file dapr_sentry.tls.root.certPEM=root.pem \
  --namespace dapr-system \
  dapr \
  dapr/dapr

The CLI commands below can be used to renew root and issuer certificates in your Kubernetes cluster.

Generate brand new certificates

  1. The command below generates brand new root and issuer certificates, signed by a newly generated private root key.

Note: The Dapr Sentry service, followed by the rest of the control plane services, must be restarted for them to be able to read the new certificates. This can be done by supplying the --restart flag to the command.

dapr mtls renew-certificate -k --valid-until <days> --restart

  2. The command below generates brand new root and issuer certificates, signed by a provided private root key.

Note: If your existing deployed certificates are signed by this same private root key, the Dapr Sentry service can then read these new certificates without restarting.

dapr mtls renew-certificate -k --private-key <private_key_file_path> --valid-until <days>

Renew certificates by using provided custom certificates

To update the provided certificates in the Kubernetes cluster, the CLI command below can be used.

Note: It does not support the --valid-until flag to specify validity for the new certificates.

dapr mtls renew-certificate -k --ca-root-certificate <ca.crt> --issuer-private-key <issuer.key> --issuer-public-certificate <issuer.crt> --restart

After renewing the certificates, you must also restart your Dapr-enabled pods so they pick up the new certificates. The recommended way to do this is to perform a rollout restart of your deployment:

kubectl rollout restart deploy/myapp

Updating root or issuer certs using Kubectl

If the Root or Issuer certs are about to expire, you can update them and restart the required system services.

Dapr-generated self-signed certificates

  1. Clear the existing Dapr Trust Bundle secret by saving the following YAML to a file (e.g. clear-trust-bundle.yaml) and applying this secret.

apiVersion: v1
kind: Secret
metadata:
  name: dapr-trust-bundle
  labels:
    app: dapr-sentry
data:

kubectl apply -f clear-trust-bundle.yaml -n <DAPR_NAMESPACE>

  2. Restart the Dapr Sentry service. This will generate a new certificate bundle and update the dapr-trust-bundle Kubernetes secret.

kubectl rollout restart -n <DAPR_NAMESPACE> deployment/dapr-sentry

  3. Once the Sentry service has been restarted, restart the rest of the Dapr control plane to pick up the new Dapr Trust Bundle.

kubectl rollout restart deploy/dapr-operator -n <DAPR_NAMESPACE>
kubectl rollout restart statefulsets/dapr-placement-server -n <DAPR_NAMESPACE>
kubectl rollout restart deploy/dapr-sidecar-injector -n <DAPR_NAMESPACE>
kubectl rollout restart deploy/dapr-scheduler-server -n <DAPR_NAMESPACE>

  4. Restart your Dapr applications to pick up the latest trust bundle.

kubectl rollout restart deployment/mydaprservice1 deployment/myotherdaprservice2

Custom certificates (bring your own)

First, issue new certificates using the step above in Bringing your own certificates.

Now that you have the new certificates, use Helm to upgrade the certificates:

helm upgrade \
  --set-file dapr_sentry.tls.issuer.certPEM=issuer.pem \
  --set-file dapr_sentry.tls.issuer.keyPEM=issuer.key \
  --set-file dapr_sentry.tls.root.certPEM=root.pem \
  --namespace dapr-system \
  dapr \
  dapr/dapr

Alternatively, you can update the Kubernetes secret that holds them:

kubectl edit secret dapr-trust-bundle -n <DAPR_NAMESPACE>

Replace the ca.crt, issuer.crt and issuer.key keys in the Kubernetes secret with their corresponding values from the new certificates. Note: The values must be base64 encoded

If you signed the new cert root with the same private key the Dapr Sentry service will pick up the new certificates automatically. You can restart your application deployments using kubectl rollout restart with zero downtime. It is not necessary to restart all deployments at once, as long as deployments are restarted before original certificate expiration.

If you signed the new cert root with a different private key, you must restart the Dapr Sentry service, followed by the remainder of the Dapr control plane service.

kubectl rollout restart deploy/dapr-sentry -n <DAPR_NAMESPACE>

Once Sentry has been completely restarted run:

kubectl rollout restart deploy/dapr-operator -n <DAPR_NAMESPACE>
kubectl rollout restart statefulsets/dapr-placement-server -n <DAPR_NAMESPACE>

Next, you must restart all Dapr-enabled pods. The recommended way to do this is to perform a rollout restart of your deployment:

kubectl rollout restart deploy/myapp

You will experience potential downtime due to mismatching certificates until all deployments have successfully been restarted (and hence loaded the new Dapr certificates).

Kubernetes video demo

Watch this video to show how to update mTLS certificates on Kubernetes

Set up monitoring for Dapr control plane mTLS certificate expiration

Beginning 30 days prior to mTLS root certificate expiration the Dapr sentry service will emit hourly warning level logs indicating that the root certificate is about to expire.

As an operational best practice for running Dapr in production we recommend configuring monitoring for these particular sentry service logs so that you are aware of the upcoming certificate expiration.

"Dapr root certificate expiration warning: certificate expires in 2 days and 15 hours"

Once the certificate has expired you will see the following message:

"Dapr root certificate expiration warning: certificate has expired."

In Kubernetes you can view the sentry service logs like so:

kubectl logs deployment/dapr-sentry -n dapr-system

The log output will appear like the following:

{"instance":"dapr-sentry-68cbf79bb9-gdqdv","level":"warning","msg":"Dapr root certificate expiration warning: certificate expires in 2 days and 15 hours","scope":"dapr.sentry","time":"2022-04-01T23:43:35.931825236Z","type":"log","ver":"1.6.0"}

As an additional tool to alert you to the upcoming certificate expiration, beginning with release 1.7.0 the CLI prints the certificate expiration status whenever you interact with a Kubernetes-based deployment.

Example:

dapr status -k

  NAME                   NAMESPACE    HEALTHY  STATUS   REPLICAS  VERSION   AGE  CREATED
  dapr-operator          dapr-system  True     Running  1         1.15.1    4m   2025-02-19 17:36.26
  dapr-placement-server  dapr-system  True     Running  1         1.15.1    4m   2025-02-19 17:36.27
  dapr-dashboard         dapr-system  True     Running  1         0.15.0    4m   2025-02-19 17:36.27
  dapr-sentry            dapr-system  True     Running  1         1.15.1    4m   2025-02-19 17:36.26
  dapr-scheduler-server  dapr-system  True     Running  3         1.15.1    4m   2025-02-19 17:36.27
  dapr-sidecar-injector  dapr-system  True     Running  1         1.15.1    4m   2025-02-19 17:36.26
⚠  Dapr root certificate of your Kubernetes cluster expires in 2 days. Expiry date: Mon, 04 Apr 2025 15:01:03 UTC.
 Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.

Self hosted

Running the control plane Sentry service

In order to run the Sentry service, you can either build from source, or download a release binary from here.

When building from source, please refer to this guide on how to build Dapr.

Then create a directory in which the Sentry service will create the self-signed root certs:

mkdir -p $HOME/.dapr/certs

Run the Sentry service locally with the following command:

./sentry --issuer-credentials $HOME/.dapr/certs --trust-domain cluster.local

If successful, the Sentry service runs and creates the root certs in the given directory. This command uses default configuration values as no custom config file was given. See below on how to start the Sentry service with a custom configuration.

Setting up mTLS with the configuration resource

Dapr instance configuration

When running Dapr in self-hosted mode, mTLS is disabled by default. You can enable it by creating the following configuration file:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true

In addition to the Dapr configuration, you also need to provide the TLS certificates to each Dapr sidecar instance. You can do so by setting the following environment variables before running the Dapr instance:

MacOS/Linux:

export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
export NAMESPACE=default

Windows:

$env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
$env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
$env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
$env:NAMESPACE="default"

If using the Dapr CLI, point Dapr to the config file above to run the Dapr instance with mTLS enabled:

dapr run --app-id myapp --config ./config.yaml node myapp.js

If using daprd directly, use the following flags to enable mTLS:

daprd --app-id myapp --enable-mtls --sentry-address localhost:50001 --config=./config.yaml

Sentry service configuration

Here’s an example of a configuration for Sentry that changes the workload cert TTL to 25 seconds:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true
    workloadCertTTL: "25s"

In order to start Sentry service with a custom config, use the following flag:

./sentry --issuer-credentials $HOME/.dapr/certs --trust-domain cluster.local --config=./config.yaml

Bringing your own certificates

In order to provide your own credentials, create ECDSA PEM encoded root and issuer certificates and place them on the file system. Tell the Sentry service where to load the certificates from using the --issuer-credentials flag.

The following example creates root and issuer certs and loads them with the Sentry service.

Note: This example uses the step tool to create the certificates. You can install the step tool from here. Windows binaries are available here.

Create the root certificate:

step certificate create cluster.local ca.crt ca.key --profile root-ca --no-password --insecure

Create the issuer certificate:

step certificate create cluster.local issuer.crt issuer.key --ca ca.crt --ca-key ca.key --profile intermediate-ca --not-after 8760h --no-password --insecure

This creates the root and issuer certs and keys. Place ca.crt, issuer.crt and issuer.key in a desired path ($HOME/.dapr/certs in the example below), and launch Sentry:

./sentry --issuer-credentials $HOME/.dapr/certs --trust-domain cluster.local

Updating root or issuer certificates

If the Root or Issuer certs are about to expire, you can update them and restart the required system services.

To have Dapr generate new certificates, delete the existing certificates at $HOME/.dapr/certs and restart the sentry service to generate new certificates.

./sentry --issuer-credentials $HOME/.dapr/certs --trust-domain cluster.local --config=./config.yaml

To replace with your own certificates, first generate new certificates using the step above in Bringing your own certificates.

Copy ca.crt, issuer.crt and issuer.key to the filesystem path of every configured system service, and restart the process or container. By default, system services will look for the credentials in /var/run/dapr/credentials. The examples above use $HOME/.dapr/certs as a custom location.

Note: If you signed the cert root with a different private key, restart the Dapr instances.

Community call video on certificate rotation

Watch this video on how to perform certificate rotation if your certificates are expiring.

Sentry Token Validators

Tokens are often used for authentication and authorization purposes. Token validators are components responsible for verifying the validity and authenticity of these tokens. For example in Kubernetes environments, a common approach to token validation is through the Kubernetes bound service account mechanism. This validator checks bound service account tokens against Kubernetes to ensure their legitimacy.

Sentry service can be configured to:

  • Enable extra token validators beyond the Kubernetes bound Service Account validator
  • Replace the insecure validator enabled by default in self hosted mode

Sentry token validators are used either for joining extra non-Kubernetes clients to a Dapr cluster running in Kubernetes mode, or for replacing the insecure “allow all” validator in self-hosted mode to enable proper identity validation. It is not expected that you will need to configure a token validator unless you are using an exotic deployment scenario.

The only token validator currently supported is the jwks validator.

JWKS

The jwks validator enables the Sentry service to validate JWT tokens using a JWKS endpoint. The contents of the token must contain the sub claim, which matches the SPIFFE identity of the Dapr client in the Dapr format spiffe://<trust-domain>/ns/<namespace>/<app-id>. The audience of the token must be the SPIFFE ID of the Sentry identity, for example spiffe://cluster.local/ns/dapr-system/dapr-sentry. Other basic JWT rules regarding signature, expiry, etc. apply.
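For illustration, the decoded claims of such a token could look like the following sketch, assuming an app with ID myapp in the default namespace of a cluster with the cluster.local trust domain (the expiry value is arbitrary):

{
  "sub": "spiffe://cluster.local/ns/default/myapp",
  "aud": "spiffe://cluster.local/ns/dapr-system/dapr-sentry",
  "exp": 1735689600
}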

The jwks validator can accept either a remote source to fetch the public key list or a static array for public keys.

The configuration below enables the jwks token validator with a remote source. This remote source uses HTTPS so the caCertificate field contains the root of trust for the remote source.

kind: Configuration
apiVersion: dapr.io/v1alpha1
metadata:
  name: sentryconfig
spec:
  mtls:
    enabled: true
    tokenValidators:
      - name: jwks
        options:
          minRefreshInterval: 2m
          requestTimeout: 1m
          source: "https://localhost:1234/"
          caCertificate: "<optional ca certificate bundle string>"

The configuration below enables the jwks token validator with a static array of public keys.

kind: Configuration
apiVersion: dapr.io/v1alpha1
metadata:
  name: sentryconfig
spec:
  mtls:
    enabled: true
    tokenValidators:
      - name: jwks
        options:
          minRefreshInterval: 2m
          requestTimeout: 1m
          source: |
            {"keys":[ "12345.." ]}

5.2 - Configure endpoint authorization with OAuth

Enable OAuth authorization on application endpoints for your web APIs

Dapr OAuth 2.0 middleware allows you to enable OAuth authorization on Dapr endpoints for your web APIs using the Authorization Code Grant flow. You can also inject authorization tokens into your endpoint APIs, which can be used for authorization towards external APIs called by your APIs, using the Client Credentials Grant flow. When the middleware is enabled, any method invocation through Dapr needs to be authorized before being passed to the user code.

The main difference between the two flows is that the Authorization Code Grant flow needs user interaction and authorizes a user, while the Client Credentials Grant flow doesn’t need user interaction and authorizes a service/application.

Register your application with an authorization server

Different authorization servers provide different application registration experiences. Here are some samples:

To configure the Dapr OAuth middleware, you’ll need to collect the following information:

  • Client ID (see here)
  • Client secret (see here)
  • Scopes (see here)
  • Authorization URL
  • Token URL

Authorization/Token URLs of some of the popular authorization servers:

Server               Authorization URL                                              Token URL
Microsoft Entra ID   https://login.microsoftonline.com/{tenant}/oauth2/authorize    https://login.microsoftonline.com/{tenant}/oauth2/token
GitHub               https://github.com/login/oauth/authorize                       https://github.com/login/oauth/access_token
Google               https://accounts.google.com/o/oauth2/v2/auth                   https://accounts.google.com/o/oauth2/token https://www.googleapis.com/oauth2/v4/token
Twitter              https://api.twitter.com/oauth/authorize                        https://api.twitter.com/oauth2/token

Define the middleware component definition

Define an Authorization Code Grant component

An OAuth middleware (Authorization Code) is defined by a component:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2
  namespace: default
spec:
  type: middleware.http.oauth2
  version: v1
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "<comma-separated scope names>"
  - name: authURL
    value: "<authorization URL>"
  - name: tokenURL
    value: "<token exchange URL>"
  - name: redirectURL
    value: "<redirect URL>"
  - name: authHeaderName
    value: "<header name under which the secret token is saved>"
    # forceHTTPS:
    # This key is used to set HTTPS schema on redirect to your API method
    # after Dapr successfully received Access Token from Identity Provider.
    # By default, Dapr will use HTTP on this redirect.
  - name: forceHTTPS
    value: "<set to true if you invoke an API method through Dapr from https origin>"

Define a custom pipeline for an Authorization Code Grant

To use the OAuth middleware (Authorization Code), you should create a custom pipeline using Dapr configuration, as shown in the following sample:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2

Define a Client Credentials Grant component

An OAuth (Client Credentials) middleware is defined by a component:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myComponent
spec:
  type: middleware.http.oauth2clientcredentials
  version: v1
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "<comma-separated scope names>"
  - name: tokenURL
    value: "<token issuing URL>"
  - name: headerName
    value: "<header name under which the secret token is saved>"
  - name: endpointParamsQuery
    value: "<list of additional key=value settings separated by ampersands or semicolons forwarded to the token issuing service>"
    # authStyle:
    # "0" means to auto-detect which authentication
    # style the provider wants by trying both ways and caching
    # the successful way for the future.

    # "1" sends the "client_id" and "client_secret"
    # in the POST body as application/x-www-form-urlencoded parameters.

    # "2" sends the client_id and client_password
    # using HTTP Basic Authorization. This is an optional style
    # described in the OAuth2 RFC 6749 section 2.3.1.
  - name: authStyle
    value: "<see comment>"

Define a custom pipeline for a Client Credentials Grant

To use the OAuth middleware (Client Credentials), you should create a custom pipeline using Dapr configuration, as shown in the following sample:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
    - name: myComponent
      type: middleware.http.oauth2clientcredentials

Apply the configuration

To apply the above configuration (regardless of grant type) to your Dapr sidecar, add a dapr.io/config annotation to your pod spec:

apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        ...
        dapr.io/config: "pipeline"
...

Accessing the access token

Authorization Code Grant

Once everything is in place, whenever a client tries to invoke an API method through Dapr sidecar (such as calling the v1.0/invoke/ endpoint), it will be redirected to the authorization’s consent page if an access token is not found. Otherwise, the access token is written to the authHeaderName header and made available to the app code.

Client Credentials Grant

Once everything is in place, whenever a client tries to invoke an API method through Dapr sidecar (such as calling the v1.0/invoke/ endpoint), it will retrieve a new access token if an existing valid one is not found. The access token is written to the headerName header and made available to the app code. In that way the app can forward the token in the authorization header in calls towards the external API requesting that token.
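As a minimal sketch of the application side, assuming the middleware’s headerName is set to X-Dapr-OAuth-Token and a hypothetical external API URL (neither is defined by the components above), the app can read the injected header and forward it as a bearer token:

# Minimal sketch: forward the access token injected by the Dapr OAuth middleware
# to an external API. The header name and external URL below are assumptions.
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    # Dapr writes the access token into the header configured via "headerName"
    token = request.headers.get("X-Dapr-OAuth-Token", "")
    resp = requests.post(
        "https://external.example.com/api/orders",  # hypothetical external API
        headers={"Authorization": f"Bearer {token}"},
        json=request.get_json(silent=True) or {},
    )
    return resp.text, resp.status_code

if __name__ == "__main__":
    app.run(port=3000)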

5.3 - Enable API token authentication in Dapr

Require every incoming API request for Dapr to include an authentication token before allowing that request to pass through

By default, Dapr relies on the network boundary to limit access to its public API. If you plan on exposing the Dapr API outside of that boundary, or if your deployment demands an additional level of security, consider enabling token authentication for Dapr APIs. This causes Dapr to require every incoming gRPC and HTTP request for its APIs to include an authentication token before allowing that request to pass through.

Create a token

Dapr uses shared tokens for API authentication. You are free to define the API token to use.

Although Dapr does not impose any format for the shared token, a good idea is to generate a random byte sequence and encode it to Base64. For example, this command generates a random 16-byte key and encodes it as Base64:

openssl rand 16 | base64

Configure API token authentication in Dapr

The token authentication configuration is slightly different for either Kubernetes or self-hosted Dapr deployments:

Self-hosted

In the self-hosted scenario, Dapr looks for the presence of the DAPR_API_TOKEN environment variable. If that environment variable is set when the daprd process launches, Dapr enforces authentication on its public APIs:

export DAPR_API_TOKEN=<token>

To rotate the configured token, update the DAPR_API_TOKEN environment variable to the new value and restart the daprd process.

Kubernetes

In a Kubernetes deployment, Dapr leverages Kubernetes secrets store to hold the shared token. To configure Dapr APIs authentication, start by creating a new secret:

kubectl create secret generic dapr-api-token --from-literal=token=<token>

Note: the above secret needs to be created in each namespace in which you want to enable Dapr token authentication.

To indicate to Dapr to use that secret to secure its public APIs, add an annotation to your Deployment template spec:

annotations:
  dapr.io/enabled: "true"
  dapr.io/api-token-secret: "dapr-api-token" # name of the Kubernetes secret

When deployed, the Dapr sidecar injector automatically creates a secret reference and injects the actual value into the DAPR_API_TOKEN environment variable.

Rotate a token

Self-hosted

To rotate the configured token in self-hosted, update the DAPR_API_TOKEN environment variable to the new value and restart the daprd process.

Kubernetes

To rotate the configured token in Kubernetes, update the previously-created secret with the new token in each namespace. You can do that using the kubectl patch command, but a simpler way to update these in each namespace is by using a manifest:

apiVersion: v1
kind: Secret
metadata:
  name: dapr-api-token
type: Opaque
data:
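  # The token value must be Base64-encoded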
  token: <your-new-token>

And then apply it to each namespace:

kubectl apply --file token-secret.yaml --namespace <namespace-name>

To tell Dapr to start using the new token, trigger a rolling upgrade to each one of your deployments:

kubectl rollout restart deployment/<deployment-name> --namespace <namespace-name>

Assuming your service is configured with more than one replica, the key rotation process does not result in any downtime.

Adding API token to client API invocations

Once token authentication is configured in Dapr, all clients invoking Dapr API need to append the dapr-api-token token to every request.

Note: The Dapr SDKs read the DAPR_API_TOKEN environment variable and set it for you by default.

HTTP

In case of HTTP, Dapr requires the API token in the dapr-api-token header. For example:

GET http://<daprAddress>/v1.0/metadata
dapr-api-token: <token>

Using curl, you can pass the header using the --header (or -H) option. For example:

curl http://localhost:3500/v1.0/metadata \
  --header "dapr-api-token: my-token"

gRPC

When using gRPC protocol, Dapr will inspect the incoming calls for the API token on the gRPC metadata:

dapr-api-token[0].

Accessing the token from the app

Kubernetes

In Kubernetes, it’s recommended to mount the secret to your pod as an environment variable, as shown in the example below, where a Kubernetes secret with the name dapr-api-token is used to hold the token.

containers:
  - name: mycontainer
    image: myregistry/myapp
    envFrom:
    - secretRef:
        name: dapr-api-token

Self-hosted

In self-hosted mode, you can set the token as an environment variable for your app:

export DAPR_API_TOKEN=<my-dapr-token>

5.4 - Authenticate requests from Dapr using token authentication

Require every incoming API request from Dapr to include an authentication token

For some building blocks such as pub/sub, service invocation and input bindings, Dapr communicates with an app over HTTP or gRPC. To enable the application to authenticate requests that are arriving from the Dapr sidecar, you can configure Dapr to send an API token as a header (in HTTP requests) or metadata (in gRPC requests).

Create a token

Dapr uses shared tokens for API authentication. You are free to define the API token to use.

Although Dapr does not impose any format for the shared token, a good idea is to generate a random byte sequence and encode it to Base64. For example, this command generates 16 random bytes and encodes them as Base64:

openssl rand 16 | base64

Configure app API token authentication in Dapr

The token authentication configuration is slightly different between Kubernetes and self-hosted Dapr deployments:

Self-hosted

In the self-hosted scenario, Dapr looks for the presence of the APP_API_TOKEN environment variable. If that environment variable is set when the daprd process launches, Dapr includes the token when calling an app:

export APP_API_TOKEN=<token>

To rotate the configured token, update the APP_API_TOKEN environment variable to the new value and restart the daprd process.

Kubernetes

In a Kubernetes deployment, Dapr leverages the Kubernetes secrets store to hold the shared token. To start, create a new secret:

kubectl create secret generic app-api-token --from-literal=token=<token>

Note that the above secret needs to be created in each namespace in which you want to enable app token authentication.

To indicate to Dapr to use the token in the secret when sending requests to the app, add an annotation to your Deployment template spec:

annotations:
  dapr.io/enabled: "true"
  dapr.io/app-token-secret: "app-api-token" # name of the Kubernetes secret

When deployed, the Dapr Sidecar Injector automatically creates a secret reference and injects the actual value into APP_API_TOKEN environment variable.

Rotate a token

Self-hosted

To rotate the configured token in self-hosted, update the APP_API_TOKEN environment variable to the new value and restart the daprd process.

Kubernetes

To rotate the configured token in Kubernetes, update the previously-created secret with the new token in each namespace. You can do that with the kubectl patch command, but a simpler way to update it in each namespace is by applying a manifest:

apiVersion: v1
kind: Secret
metadata:
  name: app-api-token
type: Opaque
data:
  token: <your-new-token>

And then apply it to each namespace:

kubectl apply --file token-secret.yaml --namespace <namespace-name>

To tell Dapr to start using the new token, trigger a rolling upgrade to each one of your deployments:

kubectl rollout restart deployment/<deployment-name> --namespace <namespace-name>

Assuming your service is configured with more than one replica, the key rotation process does not result in any downtime.

Authenticating requests from Dapr

Once app token authentication is configured using the environment variable or the Kubernetes secret app-api-token, the Dapr sidecar always includes the HTTP header/gRPC metadata dapr-api-token: <token> in its calls to the app. On the app side, verify that the incoming dapr-api-token value matches the app-api-token value you set, in order to authenticate requests coming from Dapr.

HTTP

In your code, look for the HTTP header dapr-api-token in incoming requests:

dapr-api-token: <token>
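
For example, here is a minimal sketch, not taken from the Dapr docs, of a Go HTTP middleware that performs this check; the listen port and the /orders route are placeholder assumptions:

package main

import (
	"crypto/subtle"
	"log"
	"net/http"
	"os"
)

// requireDaprToken rejects requests whose dapr-api-token header does not match
// the token Dapr injects via the APP_API_TOKEN environment variable.
func requireDaprToken(next http.Handler) http.Handler {
	expected := os.Getenv("APP_API_TOKEN")
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := r.Header.Get("dapr-api-token")
		// If no token is configured, fail closed rather than accepting unauthenticated calls.
		if expected == "" || subtle.ConstantTimeCompare([]byte(got), []byte(expected)) != 1 {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// Placeholder route that Dapr would call for service invocation or pub/sub delivery.
	mux.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":7001", requireDaprToken(mux)))
}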

gRPC

When using gRPC protocol, inspect the incoming calls for the API token on the gRPC metadata:

dapr-api-token[0].
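
As a hedged sketch (the listen port is a placeholder and the registration of your Dapr AppCallback service implementation is omitted), a Go gRPC unary server interceptor could perform the same check on incoming metadata:

package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// daprTokenInterceptor rejects unary calls whose dapr-api-token metadata entry
// does not match the token Dapr injects via the APP_API_TOKEN environment variable.
func daprTokenInterceptor() grpc.UnaryServerInterceptor {
	expected := os.Getenv("APP_API_TOKEN")
	return func(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
		md, ok := metadata.FromIncomingContext(ctx)
		if !ok || expected == "" {
			return nil, status.Error(codes.Unauthenticated, "missing dapr-api-token")
		}
		if tokens := md.Get("dapr-api-token"); len(tokens) == 0 || tokens[0] != expected {
			return nil, status.Error(codes.Unauthenticated, "invalid dapr-api-token")
		}
		return handler(ctx, req)
	}
}

func main() {
	lis, err := net.Listen("tcp", ":50051") // placeholder app port
	if err != nil {
		log.Fatal(err)
	}
	// Register your Dapr AppCallback service implementation on this server before serving.
	srv := grpc.NewServer(grpc.UnaryInterceptor(daprTokenInterceptor()))
	log.Fatal(srv.Serve(lis))
}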

Accessing the token from the app

Kubernetes

In Kubernetes, it’s recommended to mount the secret to your pod as an environment variable. Assuming we created a secret with the name app-api-token to hold the token:

containers:
  - name: mycontainer
    image: myregistry/myapp
    envFrom:
    - secretRef:
        name: app-api-token

Self-hosted

In self-hosted mode, you can set the token as an environment variable for your app:

export APP_API_TOKEN=<my-app-token>

6 - Error recovery using resiliency policies

How to configure and customize Dapr error retries, timeouts and circuit breakers

6.1 - Overview

Configure Dapr retries, timeouts, and circuit breakers

Dapr provides the capability for defining and applying fault tolerance resiliency policies via a resiliency spec. Resiliency specs are saved in the same location as components specs and are applied when the Dapr sidecar starts. The sidecar determines how to apply resiliency policies to your Dapr API calls.

  • In self-hosted mode: The resiliency spec must be named resiliency.yaml.
  • In Kubernetes: Dapr finds the named resiliency specs used by your application.

Policies

You can configure Dapr resiliency policies with the following parts:

  • Metadata defining where the policy applies (like namespace and scope)
  • Policies specifying the resiliency name and behaviors, like timeouts, retries, and circuit breakers
  • Targets determining which interactions these policies act on, including apps, components, and actors

Once defined, you can apply this configuration to your local Dapr components directory, or to your Kubernetes cluster using:

kubectl apply -f <resiliency-spec-name>.yaml

Additionally, you can scope resiliency policies to specific apps.

See known limitations.

Resiliency policy structure

Below is the general structure of a resiliency policy:

apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  # optionally scope the policy to specific apps
spec:
  policies:
    timeouts:
      # timeout policy definitions

    retries:
      # retry policy definitions

    circuitBreakers:
      # circuit breaker policy definitions

  targets:
    apps:
      # apps and their applied policies here

    actors:
      # actor types and their applied policies here

    components:
      # components and their applied policies here

Complete example policy

apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
# similar to subscription and configuration specs, scopes lists the Dapr App IDs that this
# resiliency spec can be used by.
scopes:
  - app1
  - app2
spec:
  # policies is where timeouts, retries and circuit breaker policies are defined. 
  # each is given a name so they can be referred to from the targets section in the resiliency spec.
  policies:
    # timeouts are simple named durations.
    timeouts:
      general: 5s
      important: 60s
      largeResponse: 10s

    # retries are named templates for retry configurations and are instantiated for life of the operation.
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 10

      retryForever:
        policy: exponential
        maxInterval: 15s
        maxRetries: -1 # retry indefinitely

      important:
        policy: constant
        duration: 5s
        maxRetries: 30

      someOperation:
        policy: exponential
        maxInterval: 15s

      largeResponse:
        policy: constant
        duration: 5s
        maxRetries: 3

    # circuit breakers are automatically instantiated per component and app instance.
    # circuit breakers maintain counters that live as long as the Dapr sidecar is running. They are not persisted.
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 30s 
        trip: consecutiveFailures >= 5

      pubsubCB:
        maxRequests: 1
        interval: 8s
        timeout: 45s
        trip: consecutiveFailures > 8

  # targets are what named policies are applied to. Dapr supports 3 target types - apps, components and actors
  targets:
    apps:
      appB:
        timeout: general
        retry: important
        # circuit breakers for services are scoped app instance.
        # when a breaker is tripped, that route is removed from load balancing for the configured `timeout` duration.
        circuitBreaker: simpleCB

    actors:
      myActorType: # custom Actor Type Name
        timeout: general
        retry: important
        # circuit breakers for actors are scoped by type, id, or both.
        # when a breaker is tripped, that type or id is removed from the placement table for the configured `timeout` duration.
        circuitBreaker: simpleCB
        circuitBreakerScope: both
        circuitBreakerCacheSize: 5000

    components:
      # for state stores, policies apply to saving and retrieving state.
      statestore1: # any component name -- happens to be a state store here
        outbound:
          timeout: general
          retry: retryForever
          # circuit breakers for components are scoped per component configuration/instance. For example myRediscomponent.
          # when this breaker is tripped, all interaction to that component is prevented for the configured `timeout` duration.
          circuitBreaker: simpleCB

      pubsub1: # any component name -- happens to be a pubsub broker here
        outbound:
          retry: pubsubRetry
          circuitBreaker: pubsubCB

      pubsub2: # any component name -- happens to be another pubsub broker here
        outbound:
          retry: pubsubRetry
          circuitBreaker: pubsubCB
        inbound: # inbound only applies to delivery from sidecar to app
          timeout: general
          retry: important
          circuitBreaker: pubsubCB

Limitations

  • Service invocation via gRPC: Currently, resiliency policies are not supported for service invocation via gRPC.

Demos

Watch this video for how to use resiliency:

Learn more about how to write resilient microservices with Dapr.

Next steps

Learn more about resiliency policies and targets:

Try out one of the Resiliency quickstarts:

6.2 - Resiliency policies

Configure resiliency policies for timeouts, retries, and circuit breakers

Define timeouts, retries, and circuit breaker policies under policies. Each policy is given a name so you can refer to them from the targets section in the resiliency spec.

6.2.1 - Timeout resiliency policies

Configure resiliency policies for timeouts

Network calls can fail for many reasons, causing your application to wait indefinitely for responses. By setting a timeout duration, you can cut off those unresponsive services, freeing up resources to handle new requests.

Timeouts are optional policies that can be used to early-terminate long-running operations. Set a realistic timeout duration that reflects actual response times in production. If you’ve exceeded a timeout duration:

  • The operation in progress is terminated (if possible).
  • An error is returned.

Timeout policy format

spec:
  policies:
    # Timeouts are simple named durations.
    timeouts:
      timeoutName: timeout1
      general: 5s
      important: 60s
      largeResponse: 10s

Spec metadata

| Field | Details | Example |
| --- | --- | --- |
| timeoutName | Name of the timeout policy | timeout1 |
| general | Time duration for timeouts marked as “general”. Uses Go’s time.ParseDuration format. No set maximum value. | 15s, 2m, 1h30m |
| important | Time duration for timeouts marked as “important”. Uses Go’s time.ParseDuration format. No set maximum value. | 15s, 2m, 1h30m |
| largeResponse | Time duration for timeouts awaiting a large response. Uses Go’s time.ParseDuration format. No set maximum value. | 15s, 2m, 1h30m |

If you don’t specify a timeout value, the policy does not enforce a timeout and defaults to whatever you have set up in the request client.
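
For reference, here is a minimal sketch of how a named timeout is attached to a target elsewhere in the same resiliency spec (appB is a hypothetical app-id):

spec:
  policies:
    timeouts:
      general: 5s
  targets:
    apps:
      appB: # hypothetical app-id of the target service
        timeout: general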

Next steps

Try out one of the Resiliency quickstarts:

6.2.2 - Retry and back-off resiliency policies

Configure resiliency policies for retries and back-offs

6.2.2.1 - Retry resiliency policies

Configure resiliency policies for retries

Requests can fail due to transient errors, like encountering network congestion, reroutes to overloaded instances, and more. Sometimes, requests can fail due to other resiliency policies set in place, like triggering a defined timeout or circuit breaker policy.

In these cases, configuring retries can either:

  • Send the same request to a different instance, or
  • Retry sending the request after the condition has cleared.

Retries and timeouts work together, with timeouts ensuring your system fails fast when needed, and retries recovering from temporary glitches.

Dapr provides default resiliency policies, which you can overwrite with user-defined retry policies.

Retry policy format

Example 1

spec:
  policies:
    # Retries are named templates for retry configurations and are instantiated for life of the operation.
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 10

      retryForever:
        policy: exponential
        maxInterval: 15s
        maxRetries: -1 # Retry indefinitely

Example 2

spec:
  policies:
    retries:
      retry5xxOnly:
        policy: constant
        duration: 5s
        maxRetries: 3
        matching:
          httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried. 
          gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.

Spec metadata

The following retry options are configurable:

| Retry option | Description |
| --- | --- |
| policy | Determines the back-off and retry interval strategy. Valid values are constant and exponential. Defaults to constant. |
| duration | Determines the time interval between retries. Only applies to the constant policy. Valid values are of the form 200ms, 15s, 2m, etc. Defaults to 5s. |
| maxInterval | Determines the maximum interval between retries to which the exponential back-off policy can grow. Additional retries always occur after a duration of maxInterval. Defaults to 60s. Valid values are of the form 5s, 1m, 1m30s, etc. |
| maxRetries | The maximum number of retries to attempt. -1 denotes an unlimited number of retries, while 0 means the request will not be retried (essentially behaving as if the retry policy were not set). Defaults to -1. |
| matching.httpStatusCodes | Optional: a comma-separated string of HTTP status codes or code ranges to retry. Status codes not listed are not retried. Valid values: 100-599, Reference. Format: <code> or range <start>-<end>. Example: “429,501-503”. Default: empty string "" or field is not set. Retries on all HTTP errors. |
| matching.gRPCStatusCodes | Optional: a comma-separated string of gRPC status codes or code ranges to retry. Status codes not listed are not retried. Valid values: 0-16, Reference. Format: <code> or range <start>-<end>. Example: “4,8,14”. Default: empty string "" or field is not set. Retries on all gRPC errors. |

Exponential back-off policy

The exponential back-off window uses the following formula:

BackOffDuration = PreviousBackOffDuration * (Random value from 0.5 to 1.5) * 1.5
if BackOffDuration > maxInterval {
  BackOffDuration = maxInterval
}

Retry status codes

When applications span multiple services, especially on dynamic environments like Kubernetes, services can disappear for all kinds of reasons and network calls can start hanging. Status codes provide a glimpse into our operations and where they may have failed in production.

HTTP

The following table includes some examples of HTTP status codes you may receive and whether you should or should not retry certain operations.

| HTTP Status Code | Retry Recommended? | Description |
| --- | --- | --- |
| 404 Not Found | ❌ No | The resource doesn’t exist. |
| 400 Bad Request | ❌ No | Your request is invalid. |
| 401 Unauthorized | ❌ No | Try getting new credentials. |
| 408 Request Timeout | ✅ Yes | The server timed out waiting for the request. |
| 429 Too Many Requests | ✅ Yes | Respect the Retry-After header, if present. |
| 500 Internal Server Error | ✅ Yes | The server encountered an unexpected condition. |
| 502 Bad Gateway | ✅ Yes | A gateway or proxy received an invalid response. |
| 503 Service Unavailable | ✅ Yes | Service might recover. |
| 504 Gateway Timeout | ✅ Yes | Temporary network issue. |

gRPC

The following table includes some examples of gRPC status codes you may receive and whether you should or should not retry certain operations.

| gRPC Status Code | Retry Recommended? | Description |
| --- | --- | --- |
| Code 1 CANCELLED | ❌ No | N/A |
| Code 3 INVALID_ARGUMENT | ❌ No | N/A |
| Code 4 DEADLINE_EXCEEDED | ✅ Yes | Retry with backoff |
| Code 5 NOT_FOUND | ❌ No | N/A |
| Code 8 RESOURCE_EXHAUSTED | ✅ Yes | Retry with backoff |
| Code 14 UNAVAILABLE | ✅ Yes | Retry with backoff |

Retry filter based on status codes

The retry filter enables granular control over retry policies by allowing users to specify HTTP and gRPC status codes or ranges for which retries should apply.

spec:
  policies:
    retries:
      retry5xxOnly:
        # ...
        matching:
          httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried. 
          gRPCStatusCodes: "4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.

Demo

Watch a demo presented during Diagrid’s Dapr v1.15 celebration to see how to set retry status code filters using Diagrid Conductor

Next steps

Try out one of the Resiliency quickstarts:

6.2.2.2 - Override default retry resiliency policies

Learn how to override the default retry resiliency policies for specific APIs

Dapr provides default retries for any unsuccessful request, such as failures and transient errors. Within a resiliency spec, you have the option to override Dapr’s default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name DaprBuiltInServiceRetries overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.

Note: Although you can override default values with more robust retries, you cannot override with lesser values than the provided default value, or completely remove default retries. This prevents unexpected downtime.

Below is a table that describes Dapr’s default retries and the policy keywords to override them:

| Capability | Override Keyword | Default Retry Behavior | Description |
| --- | --- | --- | --- |
| Service Invocation | DaprBuiltInServiceRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (a service invocation method call) that fail and result in a gRPC code Unavailable or Unauthenticated |
| Actors | DaprBuiltInActorRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (an actor method call) that fail and result in a gRPC code Unavailable or Unauthenticated |
| Actor Reminders | DaprBuiltInActorReminderRetries | Per call retries are performed with an exponential backoff with an initial interval of 500ms, up to a maximum of 60s for a duration of 15mins | Requests that fail to persist an actor reminder to a state store |
| Initialization Retries | DaprBuiltInInitializationRetries | Per call retries are performed 3 times with an exponential backoff, an initial interval of 500ms and for a duration of 10s | Failures when making a request to an application to retrieve a given spec. For example, failure to retrieve a subscription, component or resiliency specification |

The resiliency spec example below shows overriding the default retries for all service invocation requests by using the reserved, named keyword ‘DaprBuiltInServiceRetries’.

Also defined is a retry policy called ‘retryForever’ that is only applied to the appB target. appB uses the ‘retryForever’ retry policy, while all other application service invocation retry failures use the overridden ‘DaprBuiltInServiceRetries’ default policy.

spec:
  policies:
    retries:
      DaprBuiltInServiceRetries: # Overrides default retry behavior for service-to-service calls
        policy: constant
        duration: 5s
        maxRetries: 10

      retryForever: # A user defined retry policy replaces default retries. Targets rely solely on the applied policy. 
        policy: exponential
        maxInterval: 15s
        maxRetries: -1 # Retry indefinitely

  targets:
    apps:
      appB: # app-id of the target service
        retry: retryForever

Try out one of the Resiliency quickstarts:

6.2.3 - Circuit breaker resiliency policies

Configure resiliency policies for circuit breakers

Circuit breaker policies are used when other applications/services/components are experiencing elevated failure rates. Circuit breakers reduce load by monitoring the requests and shutting off all traffic to the impacted service when certain criteria are met.

After a certain number of requests fail, circuit breakers “trip” or open to prevent cascading failures. By doing this, circuit breakers give the service time to recover from its outage instead of flooding it with events.

The circuit breaker can also enter a “half-open” state, allowing partial traffic through to see if the system has healed.

Once requests resume being successful, the circuit breaker returns to the “closed” state and allows traffic to completely resume.

Circuit breaker policy format

spec:
  policies:
    circuitBreakers:
      pubsubCB:
        maxRequests: 1
        interval: 8s
        timeout: 45s
        trip: consecutiveFailures > 8

Spec metadata

| Circuit breaker option | Description |
| --- | --- |
| maxRequests | The maximum number of requests allowed to pass through when the circuit breaker is half-open (recovering from failure). Defaults to 1. |
| interval | The cyclical period of time used by the circuit breaker to clear its internal counts. If set to 0 seconds, this never clears. Defaults to 0s. |
| timeout | The period of the open state (directly after failure) until the circuit breaker switches to half-open. Defaults to 60s. |
| trip | A Common Expression Language (CEL) statement that is evaluated by the circuit breaker. When the statement evaluates to true, the circuit breaker trips and becomes open. Defaults to consecutiveFailures > 5. Other possible values are requests and totalFailures, where requests represents the number of either successful or failed calls before the circuit opens and totalFailures represents the total (not necessarily consecutive) number of failed attempts before the circuit opens. Example: requests > 5 and totalFailures > 3. |
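
For example, a hedged sketch (the policy name is illustrative) of a breaker that trips on total, rather than consecutive, failures:

spec:
  policies:
    circuitBreakers:
      lossyCB: # illustrative name
        maxRequests: 1
        interval: 30s
        timeout: 60s
        trip: totalFailures > 10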

Next steps

Try out one of the Resiliency quickstarts:

6.2.4 - Default resiliency policies

Learn more about the default resiliency policies for timeouts, retries, and circuit breakers

In resiliency, you can set default policies, which have a broad scope. This is done through reserved keywords that let Dapr know when to apply the policy. There are 3 default policy types:

  • DefaultRetryPolicy
  • DefaultTimeoutPolicy
  • DefaultCircuitBreakerPolicy

If these policies are defined, they are used for every operation to a service, application, or component. They can also be modified to be more specific through the appending of additional keywords. The specific policies follow the pattern Default%sRetryPolicy, Default%sTimeoutPolicy, and Default%sCircuitBreakerPolicy, where %s is replaced by the target of the policy.

Below is a table of all possible default policy keywords and how they translate into a policy name.

| Keyword | Target Operation | Example Policy Name |
| --- | --- | --- |
| App | Service invocation. | DefaultAppRetryPolicy |
| Actor | Actor invocation. | DefaultActorTimeoutPolicy |
| Component | All component operations. | DefaultComponentCircuitBreakerPolicy |
| ComponentInbound | All inbound component operations. | DefaultComponentInboundRetryPolicy |
| ComponentOutbound | All outbound component operations. | DefaultComponentOutboundTimeoutPolicy |
| StatestoreComponentOutbound | All statestore component operations. | DefaultStatestoreComponentOutboundCircuitBreakerPolicy |
| PubsubComponentOutbound | All outbound pubsub (publish) component operations. | DefaultPubsubComponentOutboundRetryPolicy |
| PubsubComponentInbound | All inbound pubsub (subscribe) component operations. | DefaultPubsubComponentInboundTimeoutPolicy |
| BindingComponentOutbound | All outbound binding (invoke) component operations. | DefaultBindingComponentOutboundCircuitBreakerPolicy |
| BindingComponentInbound | All inbound binding (read) component operations. | DefaultBindingComponentInboundRetryPolicy |
| SecretstoreComponentOutbound | All secretstore component operations. | DefaultSecretstoreComponentTimeoutPolicy |
| ConfigurationComponentOutbound | All configuration component operations. | DefaultConfigurationComponentOutboundCircuitBreakerPolicy |
| LockComponentOutbound | All lock component operations. | DefaultLockComponentOutboundRetryPolicy |

Policy hierarchy resolution

Default policies are applied if the operation being executed matches the policy type and if there is no more specific policy targeting it. For each target type (app, actor, and component), the policy with the highest priority is a Named Policy, one that targets that construct specifically.

If none exists, the policies are applied from most specific to most broad.

How default policies and built-in retries work together

In the case of the built-in retries, default policies do not stop the built-in retry policies from running. Both are used together but only under specific circumstances.

For service and actor invocation, the built-in retries deal specifically with issues connecting to the remote sidecar (when needed). As these are important to the stability of the Dapr runtime, they are not disabled unless a named policy is specifically referenced for an operation. In some instances, there may be additional retries from both the built-in retry and the default retry policy, but this prevents an overly weak default policy from reducing the sidecar’s availability/success rate.

Policy resolution hierarchy for applications, from most specific to most broad:

  1. Named Policies in App Targets
  2. Default App Policies / Built-In Service Retries
  3. Default Policies / Built-In Service Retries

Policy resolution hierarchy for actors, from most specific to most broad:

  1. Named Policies in Actor Targets
  2. Default Actor Policies / Built-In Actor Retries
  3. Default Policies / Built-In Actor Retries

Policy resolution hierarchy for components, from most specific to most broad:

  1. Named Policies in Component Targets
  2. Default Component Type + Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
  3. Default Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
  4. Default Component Policies / Built-In Actor Reminder Retries (if applicable)
  5. Default Policies / Built-In Actor Reminder Retries (if applicable)

As an example, take the following solution consisting of three applications, three components and two actor types:

Applications:

  • AppA
  • AppB
  • AppC

Components:

  • Redis Pubsub: pubsub
  • Redis statestore: statestore
  • CosmosDB Statestore: actorstore

Actors:

  • EventActor
  • SummaryActor

Below is a policy that uses both default and named policies, applying them to the targets.

spec:
  policies:
    retries:
      # Global Retry Policy
      DefaultRetryPolicy:
        policy: constant
        duration: 1s
        maxRetries: 3
      
      # Global Retry Policy for Apps
      DefaultAppRetryPolicy:
        policy: constant
        duration: 100ms
        maxRetries: 5

      # Global Retry Policy for Actors
      DefaultActorRetryPolicy:
        policy: exponential
        maxInterval: 15s
        maxRetries: 10

      # Global Retry Policy for Inbound Component operations
      DefaultComponentInboundRetryPolicy:
        policy: constant
        duration: 5s
        maxRetries: 5

      # Global Retry Policy for Statestores
      DefaultStatestoreComponentOutboundRetryPolicy:
        policy: exponential
        maxInterval: 60s
        maxRetries: -1

     # Named policy
      fastRetries:
        policy: constant
        duration: 10ms
        maxRetries: 3

     # Named policy
      retryForever:
        policy: exponential
        maxInterval: 10s
        maxRetries: -1

  targets:
    apps:
      appA:
        retry: fastRetries

      appB:
        retry: retryForever
    
    actors:
      EventActor:
        retry: retryForever

    components:
      actorstore:
        retry: fastRetries

The table below is a breakdown of which policies are applied when attempting to call the various targets in this solution.

| Target | Policy Used |
| --- | --- |
| AppA | fastRetries |
| AppB | retryForever |
| AppC | DefaultAppRetryPolicy / DaprBuiltInActorRetries |
| pubsub - Publish | DefaultRetryPolicy |
| pubsub - Subscribe | DefaultComponentInboundRetryPolicy |
| statestore | DefaultStatestoreComponentOutboundRetryPolicy |
| actorstore | fastRetries |
| EventActor | retryForever |
| SummaryActor | DefaultActorRetryPolicy |

Next steps

Learn how to override default retry policies.

Try out one of the Resiliency quickstarts:

6.3 - Targets

Apply resiliency policies to targets including apps, components and actors

Targets

Named policies are applied to targets. Dapr supports three target types that apply to all Dapr building block APIs:

  • apps
  • components
  • actors

Apps

With the apps target, you can apply retry, timeout, and circuitBreaker policies to service invocation calls between Dapr apps. Under targets/apps, policies are applied to each target service’s app-id. The policies are invoked when a failure occurs in communication between sidecars, as shown in the diagram below.

Dapr provides built-in service invocation retries, so any applied retry policies are additional.

Diagram showing service invocation resiliency

Example of policies to a target app with the app-id “appB”:

spec:
  targets:
    apps:
      appB: # app-id of the target service
        timeout: general
        retry: general
        circuitBreaker: general

Components

With the components target, you can apply retry, timeout and circuitBreaker policies to component operations.

Policies can be applied for outbound operations (calls to the Dapr sidecar) and/or inbound (the sidecar calling your app).

Outbound

outbound operations are calls from the sidecar to a component, such as:

  • Persisting or retrieving state.
  • Publishing a message on a PubSub component.
  • Invoking an output binding.

Some components may have built-in retry capabilities and are configured on a per-component basis.

Diagram showing service invocation resiliency
spec:
  targets:
    components:
      myStateStore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
Inbound

inbound operations are calls from the sidecar to your application, such as:

  • PubSub subscriptions when delivering a message.
  • Input bindings.

Some components may have built-in retry capabilities and are configured on a per-component basis.

Diagram showing service invocation resiliency
spec:
  targets:
    components:
      myInputBinding:
        inbound: 
          timeout: general
          retry: general
          circuitBreaker: general
PubSub

In a PubSub target/component, you can specify both inbound and outbound operations.

Diagram showing service invocation resiliency
spec:
  targets:
    components:
      myPubsub:
        outbound:
          retry: pubsubRetry
          circuitBreaker: pubsubCB
        inbound: # inbound only applies to delivery from sidecar to app
          timeout: general
          retry: general
          circuitBreaker: general

Actors

With the actors target, you can apply retry, timeout, and circuitBreaker policies to actor operations.

When using a circuitBreaker policy for the actors target, you can specify how circuit breaking state should be scoped by using circuitBreakerScope:

  • id: an individual actor ID
  • type: all actors of a given actor type
  • both: both of the above

You can also specify a cache size for the number of circuit breakers to keep in memory with the circuitBreakerCacheSize property, providing an integer value, e.g. 5000.

Example

spec:
  targets:
    actors:
      myActorType:
        timeout: general
        retry: general
        circuitBreaker: general
        circuitBreakerScope: both
        circuitBreakerCacheSize: 5000

Next steps

Try out one of the Resiliency quickstarts:

6.4 - Health checks

How to setup health checks for the Dapr sidecar and your application

6.4.1 - App health checks

Reacting to apps’ health status changes

The app health checks feature allows probing for the health of your application and reacting to status changes.

Applications can become unresponsive for a variety of reasons. For example, your application:

  • Could be too busy to accept new work;
  • Could have crashed; or
  • Could be in a deadlock state.

Sometimes the condition can be transitory, for example:

  • If the app is just busy and will resume accepting new work eventually
  • If the application is being restarted for whatever reason and is in its initialization phase

App health checks are disabled by default. Once you enable app health checks, the Dapr runtime (sidecar) periodically polls your application via HTTP or gRPC calls. When it detects a failure in the app’s health, Dapr stops accepting new work on behalf of the application by:

  • Unsubscribing from all pub/sub subscriptions
  • Stopping all input bindings
  • Short-circuiting all service-invocation requests, which terminate in the Dapr runtime and are not forwarded to the application
  • Unregistering Dapr Actor types, thereby causing Actor instances to migrate to a different replica if one is available

These changes are meant to be temporary, and Dapr resumes normal operations once it detects that the application is responsive again.

Diagram showing the app health feature. Running Dapr with app health enabled causes Dapr to periodically probe the app for its health.

App health checks vs platform-level health checks

App health checks in Dapr are meant to be complementary to, and not replace, any platform-level health checks, like liveness probes when running on Kubernetes.

Platform-level health checks (or liveness probes) generally ensure that the application is running, and cause the platform to restart the application in case of failures.

Unlike platform-level health checks, Dapr’s app health checks focus on pausing work to an application that is currently unable to accept it, but is expected to be able to resume accepting work eventually. Goals include:

  • Not bringing more load to an application that is already overloaded.
  • Doing the “polite” thing by not taking messages from queues, bindings, or pub/sub brokers when Dapr knows the application won’t be able to process them.

In this regard, Dapr’s app health checks are “softer”, waiting for an application to be able to process work, rather than terminating the running process in a “hard” way.

Configuring app health checks

App health checks are disabled by default, but can be enabled with either:

  • The --enable-app-health-check CLI flag; or
  • The dapr.io/enable-app-health-check: true annotation when running on Kubernetes.

Adding this flag is both necessary and sufficient to enable app health checks with the default options.

The full list of options are listed in this table:

| CLI flags | Kubernetes deployment annotation | Description | Default value |
| --- | --- | --- | --- |
| --enable-app-health-check | dapr.io/enable-app-health-check | Boolean that enables the health checks | Disabled |
| --app-health-check-path | dapr.io/app-health-check-path | Path that Dapr invokes for health probes when the app channel is HTTP (this value is ignored if the app channel is using gRPC) | /healthz |
| --app-health-probe-interval | dapr.io/app-health-probe-interval | Number of seconds between each health probe | 5 |
| --app-health-probe-timeout | dapr.io/app-health-probe-timeout | Timeout in milliseconds for health probe requests | 500 |
| --app-health-threshold | dapr.io/app-health-threshold | Max number of consecutive failures before the app is considered unhealthy | 3 |

See the full Dapr arguments and annotations reference for all options and how to enable them.

Additionally, app health checks are impacted by the protocol used for the app channel, which is configured with the following flag or annotation:

| CLI flag | Kubernetes deployment annotation | Description | Default value |
| --- | --- | --- | --- |
| --app-protocol | dapr.io/app-protocol | Protocol used for the app channel. Supported values are http, grpc, https, grpcs, and h2c (HTTP/2 Cleartext). | http |

Health check paths

HTTP

When using HTTP (including http, https, and h2c) for app-protocol, Dapr performs health probes by making an HTTP call to the path specified in app-health-check-path, which is /healthz by default.

For your app to be considered healthy, the response must have an HTTP status code in the 200-299 range. Any other status code is considered a failure. Dapr is only concerned with the status code of the response, and ignores any response header or body.
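
For example, here is a minimal sketch, not taken from the Dapr docs, of a Go handler that satisfies the probe; the listen port is a placeholder:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Dapr only looks at the status code of the probe response: any 2xx code means healthy.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// Run any internal checks here and write a 5xx status code instead on failure.
		w.WriteHeader(http.StatusNoContent) // 204 is within the 200-299 range
	})
	log.Fatal(http.ListenAndServe(":7001", nil))
}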

gRPC

When using gRPC for the app channel (app-protocol set to grpc or grpcs), Dapr invokes the method /dapr.proto.runtime.v1.AppCallbackHealthCheck/HealthCheck in your application. Most likely, you will use a Dapr SDK to implement the handler for this method.

While responding to a health probe request, your app may decide to perform additional internal health checks to determine if it’s ready to process work from the Dapr runtime. However, this is not required; it’s a choice that depends on your application’s needs.

Intervals, timeouts, and thresholds

Intervals

By default, when app health checks are enabled, Dapr probes your application every 5 seconds. You can configure the interval, in seconds, with app-health-probe-interval. These probes happen regularly, regardless of whether your application is healthy or not.

Timeouts

When the Dapr runtime (sidecar) is initially started, Dapr waits for a successful health probe before considering the app healthy. This means that pub/sub subscriptions, input bindings, and service invocation requests won’t be enabled for your application until this first health check is complete and successful.

Health probe requests are considered successful if the application sends a successful response (as explained above) within the timeout configured in app-health-probe-timeout. The default value is 500, corresponding to 500 milliseconds (half a second).

Thresholds

Before Dapr considers an app to have entered an unhealthy state, it will wait for app-health-threshold consecutive failures, whose default value is 3. This default value means that your application must fail health probes 3 times in a row to be considered unhealthy.

If you set the threshold to 1, any failure causes Dapr to assume your app is unhealthy and will stop delivering work to it.

A threshold greater than 1 can help exclude transient failures due to external circumstances. The right value for your application depends on your requirements.

Thresholds only apply to failures. A single successful response is enough for Dapr to consider your app to be healthy and resume normal operations.

Example

Use the CLI flags with the dapr run command to enable app health checks:

dapr run \
  --app-id my-app \
  --app-port 7001 \
  --app-protocol http \
  --enable-app-health-check \
  --app-health-check-path=/healthz \
  --app-health-probe-interval 3 \
  --app-health-probe-timeout 200 \
  --app-health-threshold 2 \
  -- \
    <command to execute>

To enable app health checks in Kubernetes, add the relevant annotations to your Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "my-app"
        dapr.io/app-port: "7001"
        dapr.io/app-protocol: "http"
        dapr.io/enable-app-health-check: "true"
        dapr.io/app-health-check-path: "/healthz"
        dapr.io/app-health-probe-interval: "3"
        dapr.io/app-health-probe-timeout: "200"
        dapr.io/app-health-threshold: "2"

Demo

Watch this video for an overview of using app health checks:

6.4.2 - Sidecar health

Dapr sidecar health checks

Dapr provides a way to determine its health using an HTTP /healthz endpoint. With this endpoint, the daprd process, or sidecar, can be:

  • Probed for its overall health
  • Probed for Dapr sidecar readiness from infrastructure platforms
  • Determined for readiness and liveness with Kubernetes

In this guide, you learn how the Dapr /healthz endpoint integrates with health probes from the application hosting platform (for example, Kubernetes) as well as the Dapr SDKs.

The following diagram shows the steps when a Dapr sidecar starts, the healthz endpoint and when the app channel is initialized.

Diagram of Dapr checking outbound health connections.

Outbound health endpoint

As shown by the red boundary lines in the diagram above, the v1.0/healthz/ endpoint is used to wait for when:

  • All components are initialized;
  • The Dapr HTTP port is available; and,
  • The app channel is initialized.

This is used to check the complete initialization of the Dapr sidecar and its health.

Setting the DAPR_HEALTH_TIMEOUT environment variable lets you control the health timeout, which, for example, can be important in different environments with higher latency.

On the other hand, as shown by the green boundary lines in the diagram above, the v1.0/healthz/outbound endpoint returns successfully when:

  • All the components are initialized;
  • The Dapr HTTP port is available; but,
  • The app channel is not yet established.

In the Dapr SDKs, the waitForSidecar/wait_until_ready method (depending on which SDK you use) is used for this specific check with the v1.0/healthz/outbound endpoint. Using this behavior, instead of waiting for the app channel to be available (see: red boundary lines) with the v1.0/healthz/ endpoint, Dapr waits for a successful response from v1.0/healthz/outbound. This approach enables your application to perform calls on the Dapr sidecar APIs before the app channel is initialized - for example, reading secrets with the secrets API.

If you are using the waitForSidecar/wait_until_ready method on the SDKs, then the correct initialization is performed. Otherwise, you can call the v1.0/healthz/outbound endpoint during initialization and, if successful, call the Dapr sidecar APIs.
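
For example, a minimal sketch of polling the outbound endpoint from a startup script before calling other Dapr APIs (3500 is the default Dapr HTTP port in self-hosted mode):

until curl --silent --fail --output /dev/null http://localhost:3500/v1.0/healthz/outbound; do
  echo "waiting for the Dapr sidecar..."
  sleep 1
done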

SDKs supporting outbound health endpoint

Currently, the v1.0/healthz/outbound endpoint is supported in the:

Health endpoint: Integration with Kubernetes

When deploying Dapr to a hosting platform like Kubernetes, the Dapr health endpoint is automatically configured for you.

Kubernetes uses readiness and liveness probes to determine the health of the container.

Liveness

The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock (a running application that is unable to make progress). Restarting a container in such a state can help to make the application more available despite having bugs.

How to configure a liveness probe in Kubernetes

In the pod configuration file, the liveness probe is added in the containers spec section as shown below:

    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3

In the above example, the periodSeconds field specifies that the kubelet should perform a liveness probe every 3 seconds. The initialDelaySeconds field tells the kubelet that it should wait 3 seconds before performing the first probe. To perform a probe, the kubelet sends an HTTP GET request to the server that is running in the container and listening on port 8080. If the handler for the server’s /healthz path returns a success code, the kubelet considers the container to be alive and healthy. If the handler returns a failure code, the kubelet kills the container and restarts it.

Any HTTP status code between 200 and 399 indicates success; any other status code indicates failure.

Readiness

The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A pod is considered ready when all of its containers are ready. One use of this readiness signal is to control which pods are used as backends for Kubernetes services. When a pod is not ready, it is removed from Kubernetes service load balancers.

How to configure a readiness probe in Kubernetes

Readiness probes are configured similarly to liveness probes. The only difference is that you use the readinessProbe field instead of the livenessProbe field:

    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3

Sidecar Injector

When integrating with Kubernetes, the Dapr sidecar is injected with a Kubernetes probe configuration telling it to use the Dapr healthz endpoint. This is done by the “Sidecar Injector” system service. The integration with the kubelet is shown in the diagram below.

Diagram of Dapr services interacting

How the Dapr sidecar health endpoint is configured with Kubernetes

As mentioned above, this configuration is done automatically by the Sidecar Injector service. This section describes the specific values that are set on the liveness and readiness probes.

Dapr has its HTTP health endpoint /v1.0/healthz on port 3500. This can be used with Kubernetes for readiness and liveness probes. When the Dapr sidecar is injected, the readiness and liveness probes are configured in the pod configuration file with the following values:

    livenessProbe:
      httpGet:
        path: /v1.0/healthz
        port: 3500
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /v1.0/healthz
        port: 3500
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3

Delay graceful shutdown

Dapr accepts a dapr.io/block-shutdown-duration annotation or --dapr-block-shutdown-duration CLI flag, which delays the full shutdown procedure for the specified duration, or until the app reports as unhealthy, whichever is sooner.

During this period, all subscriptions and input bindings are closed. This is useful for applications that need to use the Dapr APIs as part of their own shutdown procedure.

Applicable annotations or CLI flags include:

  • --dapr-graceful-shutdown-seconds/dapr.io/graceful-shutdown-seconds
  • --dapr-block-shutdown-duration/dapr.io/block-shutdown-duration

Learn more about these and how to use them in the Annotations and arguments guide.
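
For example, a hedged sketch of setting both annotations on a Kubernetes Deployment; the values shown are illustrative assumptions, not recommended defaults:

annotations:
  dapr.io/graceful-shutdown-seconds: "30"
  dapr.io/block-shutdown-duration: "10s"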

7 - Support and versioning

The support and versioning options available for Dapr

7.1 - Versioning policy

Dapr’s versioning policies

Introduction

Dapr is designed for future changes in the runtime, APIs and components with versioning schemes. This topic describes the versioning schemes and strategies for APIs, manifests such as components, and GitHub repositories.

Versioning

Versioning is the process of assigning either unique version names or unique version numbers to unique states of computer software.

  • Versioning provides compatibility, explicit change control and handling changes, in particular breaking changes.
  • Dapr strives to be backwards compatible. If a breaking change is needed it’ll be announced in advance.
  • Feature deprecations are phased in over multiple releases, with both new and deprecated features working side-by-side.

Versioning refers to the following Dapr repos: dapr, CLI, stable language SDKs, dashboard, components-contrib, quickstarts, helm-charts and documentation.

Dapr has the following versioning schemes:

  • Dapr HTTP API versioned with MAJOR.MINOR
  • Dapr gRPC API with MAJOR
  • Releases (GitHub repositories including dapr, CLI, SDKs and Helm Chart) with MAJOR.MINOR.PATCH
  • Documentation and Quickstarts repositories are versioned with the Dapr runtime repository versioning.
  • Dapr Components with MAJOR in components-contrib GitHub repositories.
  • Dapr Manifests with MAJOR.MINOR. These include subscriptions and configurations.

Note that the Dapr APIs, binaries releases (runtime, CLI, SDKs) and components are all independent from one another.

Dapr HTTP API

The Dapr HTTP API is versioned according to these REST API guidelines.

Based on these guidelines:

  • A MAJOR version of the API is incremented when a deprecation is expected of the older version. Any such deprecation will be communicated and an upgrade path made available.
  • A MINOR version may be incremented for any other change. For example, a change to the JSON schema of the message sent to the API. The definition of a breaking change to the API can be viewed here.
  • Experimental APIs include an “alpha” suffix to denote their alpha status. For example v1.0alpha, v2.0alpha, etc.

Dapr runtime

Dapr releases use MAJOR.MINOR.PATCH versioning. For example 1.0.0. Read Supported releases for more on the versioning of releases.

Helm Charts

Helm charts in the helm-charts repo are versioned with the Dapr runtime. The Helm charts are used in the Kubernetes deployment.

Language SDKs, CLI and dashboard

The Dapr language SDKs, CLI and dashboard are versioned independently from the Dapr runtime and can be released on different schedules. See this table for the compatibility between versions of the SDKs, CLI, dashboard and runtime. Each new release of the runtime lists the corresponding supported SDKs, CLI and Dashboard.

SDK, CLI and Dashboard versioning follows a MAJOR.MINOR.PATCH format. A major version is incremented when there’s a non-backwards compatible change in an SDK (for example, changing a parameter on a client method). A minor version is updated for new features and bug fixes, and the patch version is incremented in case of bug or security hot fixes.

Samples and examples in SDKs version with that repo.

Components

Components are implemented in the components-contrib repository and follow a MAJOR versioning scheme. The version for components adheres to major versions (vX), as patches and non-breaking changes are added to the latest major version. The version is incremented when there’s a non-backwards compatible change in a component interface, for example, changing an existing method in the State Store interface.

The components-contrib repo release is a flat version across all components inside. That is, a version for the components-contrib repo release is made up of all the schemas for the components inside it. A new version of Dapr does not mean there is a new release of components-contrib if there are no component changes.

Note: Components have a production usage lifecycle status: Alpha, Beta and Stable. These statuses are not related to their versioning. The tables of supported components shows both their versions and their status.

For more information on component versioning read Version 2 and beyond of a component

Component schemas

Versioning for component YAMLs comes in two forms:

  • Versioning for the component manifest. The apiVersion
  • Version for the component implementation. The .spec.version

A component manifest includes the schema for an implementation in the .spec.metadata field, with the .type field denoting the implementation

See the comments in the example below:

apiVersion: dapr.io/v1alpha1 # <-- This is the version of the component manifest
kind: Component
metadata:
  name: pubsub
spec:
  version: v1 # <-- This is the version of the pubsub.redis schema implementation
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: redis-master:6379
  - name: redisPassword
    value: general-kenobi

Component manifest version

The Component YAML manifest is versioned with dapr.io/v1alpha1.

Component implementation version

The version for a component implementation is determined by the .spec.version field as can be seen in the example above. The .spec.version field is mandatory in a schema instance and the component fails to load if it is not present. For the release of Dapr 1.0.0 all components are marked as v1. The component implementation version is incremented only for non-backward compatible changes.

Component deprecations

Deprecations of components will be announced two (2) releases ahead. Deprecation of a component results in a major version update of the component version. After 2 releases, the component is unregistered from the Dapr runtime, and trying to load it will throw a fatal exception.

Component deprecations and removal are announced in the release notes.

Quickstarts and Samples

Quickstarts in the Quickstarts repo are versioned with the runtime, where a table of corresponding versions is on the front page of the samples repo. Users should only use Quickstarts corresponding to the version of the runtime being run.

Samples in the Samples repo are each versioned on a case by case basis depending on the sample maintainer. Samples that become very out of date with the runtime releases (many versions behind) or have not been maintained for more than 1 year will be removed.

7.2 - Supported runtime and SDK releases

Runtime and SDK release support and upgrade policies

Introduction

This topic details the supported versions of Dapr releases, the upgrade policies and how deprecations and breaking changes are communicated in all Dapr repositories (runtime, CLI, SDKs, etc) at versions 1.x and above.

Dapr releases use MAJOR.MINOR.PATCH versioning. For example, 1.0.0.

| Versioning | Description |
| --- | --- |
| MAJOR | Updated when there’s a non-backward compatible change to the runtime, such as an API change. A MAJOR release can also occur when there is a significant addition or change of functionality that needs to be differentiated from the previous version. |
| MINOR | Updated as part of the regular release cadence, including new features, bug, and security fixes. |
| PATCH | Incremented for a critical issue (P0) and security hot fixes. |

A supported release means:

  • A hotfix patch is released if the release has a critical issue such as a mainline broken scenario or a security issue. Each of these is reviewed on a case-by-case basis.
  • Issues are investigated for the supported releases. If a release is no longer supported, you need to upgrade to a newer release and determine if the issue is still relevant.

From the 1.8.0 release onwards, three (3) versions of Dapr are supported: the current and previous two (2) versions. Typically these are MINOR release updates. This means that there is a rolling window that moves forward for supported releases and it is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr you may have to do intermediate upgrades to get to a supported version.

There will be at least 13 weeks (3 months) between major.minor version releases, giving users at least a 9 month rolling window for upgrading from a non-supported version. For more details on the release process read release cycle and cadence.

Patch support is for supported versions (current and previous).

Build variations

The Dapr sidecar image is published to both GitHub Container Registry and Docker Registry. The default image contains all components. From version 1.11, Dapr also offers a variation of the sidecar image, containing only stable components.

  • Default sidecar images: daprio/daprd:<version> or ghcr.io/dapr/daprd:<version> (for example ghcr.io/dapr/daprd:1.11.1)
  • Sidecar images for stable components: daprio/daprd:<version>-stablecomponents or ghcr.io/dapr/daprd:<version>-stablecomponents (for example ghcr.io/dapr/daprd:1.11.1-stablecomponents)

On Kubernetes, the sidecar image can be overwritten for the application Deployment resource with the dapr.io/sidecar-image annotation. See more about Dapr’s arguments and annotations. The default ‘daprio/daprd:latest’ image is used if not specified.
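
For example, a sketch of pinning the stable-components variation on a Deployment via the annotation mentioned above (the version tag is taken from the example above and is illustrative):

annotations:
  dapr.io/sidecar-image: "daprio/daprd:1.11.1-stablecomponents"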

Learn more about Dapr components’ certification lifecycle.

Supported versions

The table below shows the versions of Dapr releases that have been tested together and form a “packaged” release. Any other combinations of releases are not supported.

Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes
May 5th 2025 | 1.15.5 | 1.15.0 | Java 1.14.1, Go 1.12.0, PHP 1.2.0, Python 1.15.0, .NET 1.15.4, JS 3.5.2, Rust 0.16.1 | 0.15.0 | Supported (current) | v1.15.5 release notes
April 4th 2025 | 1.15.4 | 1.15.0 | Java 1.14.0, Go 1.12.0, PHP 1.2.0, Python 1.15.0, .NET 1.15.4, JS 3.5.2, Rust 0.16.1 | 0.15.0 | Supported (current) | v1.15.4 release notes
March 5th 2025 | 1.15.3 | 1.15.0 | Java 1.14.0, Go 1.12.0, PHP 1.2.0, Python 1.15.0, .NET 1.15.4, JS 3.5.2, Rust 0.16.1 | 0.15.0 | Supported (current) | v1.15.3 release notes
March 3rd 2025 | 1.15.2 | 1.15.0 | Java 1.14.0, Go 1.12.0, PHP 1.2.0, Python 1.15.0, .NET 1.15.0, JS 3.5.0, Rust 0.16 | 0.15.0 | Supported (current) | v1.15.2 release notes
February 28th 2025 | 1.15.1 | 1.15.0 | Java 1.14.0, Go 1.12.0, PHP 1.2.0, Python 1.15.0, .NET 1.15.0, JS 3.5.0, Rust 0.16 | 0.15.0 | Supported (current) | v1.15.1 release notes
February 27th 2025 | 1.15.0 | 1.15.0 | Java 1.14.0, Go 1.12.0, PHP 1.2.0, Python 1.15.0, .NET 1.15.0, JS 3.5.0, Rust 0.16 | 0.15.0 | Supported | v1.15.0 release notes
September 16th 2024 | 1.14.4 | 1.14.1 | Java 1.12.0, Go 1.11.0, PHP 1.2.0, Python 1.14.0, .NET 1.14.0, JS 3.3.1 | 0.15.0 | Supported | v1.14.4 release notes
September 13th 2024 | 1.14.3 | 1.14.1 | Java 1.12.0, Go 1.11.0, PHP 1.2.0, Python 1.14.0, .NET 1.14.0, JS 3.3.1 | 0.15.0 | ⚠ī¸ Recalled | v1.14.3 release notes
September 6th 2024 | 1.14.2 | 1.14.1 | Java 1.12.0, Go 1.11.0, PHP 1.2.0, Python 1.14.0, .NET 1.14.0, JS 3.3.1 | 0.15.0 | Supported | v1.14.2 release notes
August 14th 2024 | 1.14.1 | 1.14.1 | Java 1.12.0, Go 1.11.0, PHP 1.2.0, Python 1.14.0, .NET 1.14.0, JS 3.3.1 | 0.15.0 | Supported | v1.14.1 release notes
August 14th 2024 | 1.14.0 | 1.14.0 | Java 1.12.0, Go 1.11.0, PHP 1.2.0, Python 1.14.0, .NET 1.14.0, JS 3.3.1 | 0.15.0 | Supported | v1.14.0 release notes
May 29th 2024 | 1.13.4 | 1.13.0 | Java 1.11.0, Go 1.10.0, PHP 1.2.0, Python 1.13.0, .NET 1.13.0, JS 3.3.0 | 0.14.0 | Supported | v1.13.4 release notes
May 21st 2024 | 1.13.3 | 1.13.0 | Java 1.11.0, Go 1.10.0, PHP 1.2.0, Python 1.13.0, .NET 1.13.0, JS 3.3.0 | 0.14.0 | Supported | v1.13.3 release notes
April 3rd 2024 | 1.13.2 | 1.13.0 | Java 1.11.0, Go 1.10.0, PHP 1.2.0, Python 1.13.0, .NET 1.13.0, JS 3.3.0 | 0.14.0 | Supported | v1.13.2 release notes
March 26th 2024 | 1.13.1 | 1.13.0 | Java 1.11.0, Go 1.10.0, PHP 1.2.0, Python 1.13.0, .NET 1.13.0, JS 3.3.0 | 0.14.0 | Supported | v1.13.1 release notes
March 6th 2024 | 1.13.0 | 1.13.0 | Java 1.11.0, Go 1.10.0, PHP 1.2.0, Python 1.13.0, .NET 1.13.0, JS 3.3.0 | 0.14.0 | Supported | v1.13.0 release notes
January 17th 2024 | 1.12.4 | 1.12.0 | Java 1.10.0, Go 1.9.1, PHP 1.2.0, Python 1.12.0, .NET 1.12.0, JS 3.2.0 | 0.14.0 | Unsupported | v1.12.4 release notes
January 2nd 2024 | 1.12.3 | 1.12.0 | Java 1.10.0, Go 1.9.1, PHP 1.2.0, Python 1.12.0, .NET 1.12.0, JS 3.2.0 | 0.14.0 | Unsupported | v1.12.3 release notes
November 18th 2023 | 1.12.2 | 1.12.0 | Java 1.10.0, Go 1.9.1, PHP 1.2.0, Python 1.12.0, .NET 1.12.0, JS 3.2.0 | 0.14.0 | Unsupported | v1.12.2 release notes
November 16th 2023 | 1.12.1 | 1.12.0 | Java 1.10.0, Go 1.9.1, PHP 1.2.0, Python 1.12.0, .NET 1.12.0, JS 3.2.0 | 0.14.0 | Unsupported | v1.12.1 release notes
October 11th 2023 | 1.12.0 | 1.12.0 | Java 1.10.0, Go 1.9.0, PHP 1.1.0, Python 1.11.0, .NET 1.12.0, JS 3.1.2 | 0.14.0 | Unsupported | v1.12.0 release notes
November 18th 2023 | 1.11.6 | 1.11.0 | Java 1.9.0, Go 1.8.0, PHP 1.1.0, Python 1.10.0, .NET 1.11.0, JS 3.1.0 | 0.13.0 | Unsupported | v1.11.6 release notes
November 3rd 2023 | 1.11.5 | 1.11.0 | Java 1.9.0, Go 1.8.0, PHP 1.1.0, Python 1.10.0, .NET 1.11.0, JS 3.1.0 | 0.13.0 | Unsupported | v1.11.5 release notes
October 5th 2023 | 1.11.4 | 1.11.0 | Java 1.9.0, Go 1.8.0, PHP 1.1.0, Python 1.10.0, .NET 1.11.0, JS 3.1.0 | 0.13.0 | Unsupported | v1.11.4 release notes
August 31st 2023 | 1.11.3 | 1.11.0 | Java 1.9.0, Go 1.8.0, PHP 1.1.0, Python 1.10.0, .NET 1.11.0, JS 3.1.0 | 0.13.0 | Unsupported | v1.11.3 release notes
July 20th 2023 | 1.11.2 | 1.11.0 | Java 1.9.0, Go 1.8.0, PHP 1.1.0, Python 1.10.0, .NET 1.11.0, JS 3.1.0 | 0.13.0 | Unsupported | v1.11.2 release notes
June 22nd 2023 | 1.11.1 | 1.11.0 | Java 1.9.0, Go 1.8.0, PHP 1.1.0, Python 1.10.0, .NET 1.11.0, JS 3.1.0 | 0.13.0 | Unsupported | v1.11.1 release notes
June 12th 2023 | 1.11.0 | 1.11.0 | Java 1.9.0, Go 1.8.0, PHP 1.1.0, Python 1.10.0, .NET 1.11.0, JS 3.1.0 | 0.13.0 | Unsupported | v1.11.0 release notes
November 18th 2023 | 1.10.10 | 1.10.0 | Java 1.8.0, Go 1.7.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 3.0.0 | 0.11.0 | Unsupported
July 20th 2023 | 1.10.9 | 1.10.0 | Java 1.8.0, Go 1.7.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 3.0.0 | 0.11.0 | Unsupported
June 22nd 2023 | 1.10.8 | 1.10.0 | Java 1.8.0, Go 1.7.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 3.0.0 | 0.11.0 | Unsupported
May 15th 2023 | 1.10.7 | 1.10.0 | Java 1.8.0, Go 1.7.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 3.0.0 | 0.11.0 | Unsupported
May 12th 2023 | 1.10.6 | 1.10.0 | Java 1.8.0, Go 1.7.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 3.0.0 | 0.11.0 | Unsupported
April 13 2023 | 1.10.5 | 1.10.0 | Java 1.8.0, Go 1.6.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 3.0.0 | 0.11.0 | Unsupported
March 16 2023 | 1.10.4 | 1.10.0 | Java 1.8.0, Go 1.6.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 2.5.0 | 0.11.0 | Unsupported
March 14 2023 | 1.10.3 | 1.10.0 | Java 1.8.0, Go 1.6.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 2.5.0 | 0.11.0 | Unsupported
February 24 2023 | 1.10.2 | 1.10.0 | Java 1.8.0, Go 1.6.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 2.5.0 | 0.11.0 | Unsupported
February 20 2023 | 1.10.1 | 1.10.0 | Java 1.8.0, Go 1.6.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 2.5.0 | 0.11.0 | Unsupported
February 14 2023 | 1.10.0 | 1.10.0 | Java 1.8.0, Go 1.6.0, PHP 1.1.0, Python 1.9.0, .NET 1.10.0, JS 2.5.0 | 0.11.0 | Unsupported
December 2nd 2022 | 1.9.5 | 1.9.1 | Java 1.7.0, Go 1.6.0, PHP 1.1.0, Python 1.8.3, .NET 1.9.0, JS 2.4.2 | 0.11.0 | Unsupported
November 17th 2022 | 1.9.4 | 1.9.1 | Java 1.7.0, Go 1.6.0, PHP 1.1.0, Python 1.8.3, .NET 1.9.0, JS 2.4.2 | 0.11.0 | Unsupported
November 4th 2022 | 1.9.3 | 1.9.1 | Java 1.7.0, Go 1.6.0, PHP 1.1.0, Python 1.8.3, .NET 1.9.0, JS 2.4.2 | 0.11.0 | Unsupported
November 1st 2022 | 1.9.2 | 1.9.1 | Java 1.7.0, Go 1.6.0, PHP 1.1.0, Python 1.8.1, .NET 1.9.0, JS 2.4.2 | 0.11.0 | Unsupported
October 26th 2022 | 1.9.1 | 1.9.1 | Java 1.7.0, Go 1.6.0, PHP 1.1.0, Python 1.8.1, .NET 1.9.0, JS 2.4.2 | 0.11.0 | Unsupported
October 13th 2022 | 1.9.0 | 1.9.1 | Java 1.7.0, Go 1.6.0, PHP 1.1.0, Python 1.8.3, .NET 1.9.0, JS 2.4.2 | 0.11.0 | Unsupported
October 26th 2022 | 1.8.6 | 1.8.1 | Java 1.6.0, Go 1.5.0, PHP 1.1.0, Python 1.7.0, .NET 1.8.0, JS 2.3.0 | 0.11.0 | Unsupported
October 13th 2022 | 1.8.5 | 1.8.1 | Java 1.6.0, Go 1.5.0, PHP 1.1.0, Python 1.7.0, .NET 1.8.0, JS 2.3.0 | 0.11.0 | Unsupported
August 10th 2022 | 1.8.4 | 1.8.1 | Java 1.6.0, Go 1.5.0, PHP 1.1.0, Python 1.7.0, .NET 1.8.0, JS 2.3.0 | 0.11.0 | Unsupported
July 29th 2022 | 1.8.3 | 1.8.0 | Java 1.6.0, Go 1.5.0, PHP 1.1.0, Python 1.7.0, .NET 1.8.0, JS 2.3.0 | 0.11.0 | Unsupported
July 21st 2022 | 1.8.2 | 1.8.0 | Java 1.6.0, Go 1.5.0, PHP 1.1.0, Python 1.7.0, .NET 1.8.0, JS 2.3.0 | 0.11.0 | Unsupported
July 20th 2022 | 1.8.1 | 1.8.0 | Java 1.6.0, Go 1.5.0, PHP 1.1.0, Python 1.7.0, .NET 1.8.0, JS 2.3.0 | 0.11.0 | Unsupported
July 7th 2022 | 1.8.0 | 1.8.0 | Java 1.6.0, Go 1.5.0, PHP 1.1.0, Python 1.7.0, .NET 1.8.0, JS 2.3.0 | 0.11.0 | Unsupported
October 26th 2022 | 1.7.5 | 1.7.0 | Java 1.5.0, Go 1.4.0, PHP 1.1.0, Python 1.6.0, .NET 1.7.0, JS 2.2.1 | 0.10.0 | Unsupported
May 31st 2022 | 1.7.4 | 1.7.0 | Java 1.5.0, Go 1.4.0, PHP 1.1.0, Python 1.6.0, .NET 1.7.0, JS 2.2.1 | 0.10.0 | Unsupported
May 17th 2022 | 1.7.3 | 1.7.0 | Java 1.5.0, Go 1.4.0, PHP 1.1.0, Python 1.6.0, .NET 1.7.0, JS 2.2.1 | 0.10.0 | Unsupported
Apr 22nd 2022 | 1.7.2 | 1.7.0 | Java 1.5.0, Go 1.4.0, PHP 1.1.0, Python 1.6.0, .NET 1.7.0, JS 2.1.0 | 0.10.0 | Unsupported
Apr 20th 2022 | 1.7.1 | 1.7.0 | Java 1.5.0, Go 1.4.0, PHP 1.1.0, Python 1.6.0, .NET 1.7.0, JS 2.1.0 | 0.10.0 | Unsupported
Apr 7th 2022 | 1.7.0 | 1.7.0 | Java 1.5.0, Go 1.4.0, PHP 1.1.0, Python 1.6.0, .NET 1.7.0, JS 2.1.0 | 0.10.0 | Unsupported
Apr 20th 2022 | 1.6.2 | 1.6.0 | Java 1.4.0, Go 1.3.1, PHP 1.1.0, Python 1.5.0, .NET 1.6.0, JS 2.0.0 | 0.9.0 | Unsupported
Mar 25th 2022 | 1.6.1 | 1.6.0 | Java 1.4.0, Go 1.3.1, PHP 1.1.0, Python 1.5.0, .NET 1.6.0, JS 2.0.0 | 0.9.0 | Unsupported
Jan 25th 2022 | 1.6.0 | 1.6.0 | Java 1.4.0, Go 1.3.1, PHP 1.1.0, Python 1.5.0, .NET 1.6.0, JS 2.0.0 | 0.9.0 | Unsupported

SDK compatibility

The SDKs and runtime are committed to non-breaking changes other than those required for security issues. All breaking changes are announced if required in the release notes.

SDK and runtime forward compatibility
Newer Dapr SDKs support the latest version of Dapr runtime and two previous versions (N-2).

SDK and runtime backward compatibility
For a new Dapr runtime, the current SDK version and two previous versions (N-2) are supported.

Upgrade paths

After the 1.0 release of the runtime there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example, an upgrade from v1.0 to v1.2 may need to pass through v1.1.

The table below shows the tested upgrade paths for the Dapr runtime. Any other combinations of upgrades have not been tested.

General guidance on upgrading can be found for self hosted mode and Kubernetes deployments. It is best to review the target version release notes for specific guidance.

Current Runtime version | Must upgrade through | Target Runtime version
1.5.0 to 1.5.2 | N/A | 1.6.0
               | 1.6.0 | 1.6.2
               | 1.6.2 | 1.7.5
               | 1.7.5 | 1.8.6
               | 1.8.6 | 1.9.6
               | 1.9.6 | 1.10.7
1.6.0 to 1.6.2 | N/A | 1.7.5
               | 1.7.5 | 1.8.6
               | 1.8.6 | 1.9.6
               | 1.9.6 | 1.10.7
1.7.0 to 1.7.5 | N/A | 1.8.6
               | 1.8.6 | 1.9.6
               | 1.9.6 | 1.10.7
1.8.0 to 1.8.6 | N/A | 1.9.6
1.9.0 to 1.9.6 | N/A | 1.10.8
1.10.0 to 1.10.8 | N/A | 1.11.4
1.11.0 to 1.11.4 | N/A | 1.12.4
1.12.0 to 1.12.4 | N/A | 1.13.5
1.13.0 to 1.13.5 | N/A | 1.14.0
1.14.0 to 1.14.4 | N/A | 1.14.4
1.15.0 | N/A | 1.15.0

Upgrade on Hosting platforms

Dapr can support multiple hosting platforms for production. With the 1.0 release, the two supported platforms are Kubernetes and physical machines. For Kubernetes upgrades, see the Production guidelines on Kubernetes.

Supported versions of dependencies

Below is a list of software that the latest version of Dapr (v1.15.5) has been tested against.

Dependency | Supported Version
Kubernetes | Dapr support for Kubernetes is aligned with the Kubernetes Version Skew Policy
Open Telemetry collector (OTEL) | v0.101.0
Prometheus | v2.28

7.3 - Breaking changes and deprecations

Handling of breaking changes and deprecations

Breaking changes

Breaking changes are defined as changes to any of the following that cause compilation errors or undesirable runtime behavior in an existing 3rd party consumer application or script after upgrading to the next stable minor version of a Dapr artifact (SDK, CLI, runtime, etc.):

  • Code behavior
  • Schema
  • Default configuration value
  • Command line argument
  • Published metric
  • Kubernetes resource template
  • Publicly accessible API
  • Publicly visible SDK interface, method, class, or attribute

Breaking changes can be applied right away to the following cases:

  • Projects that have not reached version 1.0.0 yet
  • Preview feature
  • Alpha API
  • Preview or Alpha interface, class, method or attribute in SDK
  • Dapr Component in Alpha or Beta
  • Interfaces for github.com/dapr/components-contrib
  • URLs in Docs and Blog
  • An exceptional case where it is required to fix a critical bug or security vulnerability.

Process for applying breaking changes

There is a process for applying breaking changes:

  1. A deprecation notice must be posted as part of a release.
  2. The breaking changes are applied two (2) releases after the release in which the deprecation was announced.
    • For example, feature X is announced to be deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.

Deprecations

Deprecations can apply to:

  1. APIs, including alpha APIs
  2. Preview features
  3. Components
  4. CLI
  5. Features that could result in security vulnerabilities

Deprecations appear in release notes under a section named “Deprecations”, which indicates:

  • The point in the future when the now-deprecated feature will no longer be supported (for example, release x.y.z). This is at least two (2) releases after the deprecation announcement.
  • Any steps the user must take to modify their code, operations, etc., if applicable.

After a future breaking change is announced, the change happens two (2) releases or 6 months later, whichever is greater. Deprecated features should respond with a warning but otherwise do nothing.

Announced deprecations

Feature | Deprecation announcement | Removal
GET /v1.0/shutdown API (Users should use POST API instead) | 1.2.0 | 1.4.0
Java domain builder classes deprecated (Users should use setters instead) | Java SDK 1.3.0 | Java SDK 1.5.0
Service invocation will no longer provide a default content type header of application/json when no content-type is specified. You must explicitly set a content-type header for service invocation if your invoked apps rely on this header. | 1.7.0 | 1.9.0
gRPC service invocation using invoke method is deprecated. Use proxy mode service invocation instead. See How-To: Invoke services using gRPC to use the proxy mode. | 1.9.0 | 1.10.0
The CLI flag --app-ssl (in both the Dapr CLI and daprd) has been deprecated in favor of using --app-protocol with values https or grpcs. daprd:6158 cli:1267 | 1.11.0 | 1.13.0
Hazelcast PubSub Component | 1.9.0 | 1.11.0
Twitter Binding Component | 1.10.0 | 1.11.0
NATS Streaming PubSub Component | 1.11.0 | 1.13.0
Workflows API Alpha1 /v1.0-alpha1/workflows being deprecated in favor of Workflow Client | 1.15.0 | 1.17.0

7.4 - Reporting security issues

How to report a security concern or vulnerability to the Dapr maintainers.

The Dapr project and maintainers make security a central focus of how we operate and design our software. From the Dapr binaries to the GitHub release processes, we take numerous steps to ensure user applications and data are secure. For more information on Dapr security features, visit the security page.

Repositories and issues covered

When we say “a security vulnerability in Dapr”, this means a security issue in any repository under the dapr GitHub organization.

This reporting process is intended only for security issues in the Dapr project itself, and doesn’t apply to applications using Dapr or to issues which do not affect security.

If the issue cannot be fixed by a change to one of the covered repositories above, then it’s recommended to create a GitHub issue in the appropriate repo or raise a question in Discord.

If you’re unsure, err on the side of caution and reach out using the reporting process before raising your issue through GitHub, Discord, or another channel.

Explicitly Not Covered: Vulnerability Scanner Reports

We do not accept reports which amount to copy and pasted output from a vulnerability scanning tool unless work has specifically been done to confirm that a vulnerability reported by the tool actually exists in Dapr, including the CLI, Dapr SDKs, the components-contrib repo, or any other repo under the Dapr org.

We make use of these tools ourselves and try to act on the output they produce. We tend to find, however, that when these reports are sent to our security mailing list they almost always represent false positives, since these tools tend to check for the presence of a library without considering how the library is used in context.

If we receive a report which seems to simply be a vulnerability list from a scanner, we reserve the right to ignore it.

This applies especially when tools produce vulnerability identifiers which are not publicly visible or which are proprietary in some way. We can look up CVEs or other publicly-available identifiers for further details, but cannot do the same for proprietary identifiers.

Security Contacts

The people who should have access to read your security report are listed in maintainers.md.

Reporting Process

  1. Describe the issue in English, ideally with some example configuration or code which allows the issue to be reproduced. Explain why you believe this to be a security issue in Dapr.
  2. Put that information into an email. Use a descriptive title.
  3. Send an email to Security (security@dapr.io)

Response

Response times could be affected by weekends, holidays, breaks or time zone differences. That said, the maintainers team endeavours to reply as soon as possible, ideally within 3 working days.

If the team concludes that the reported issue is indeed a security vulnerability in a Dapr project, at least two members of the maintainers team discuss the next steps together as soon as possible, ideally within 24 hours.

As soon as the team decides that the report is of a genuine vulnerability, one of the team responds to the reporter acknowledging the issue and establishing a disclosure timeline, which should be as soon as possible.

Triage, response, patching and announcement should all happen within 30 days.

7.5 - Preview features

List of current preview features

Preview features in Dapr are considered experimental when they are first released.

Runtime preview features require explicit opt-in in order to be used. The runtime opt-in is specified in a preview setting feature in Dapr’s application configuration. See How-To: Enable preview features for more information.
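
As a minimal sketch, assuming the standard features list in the Dapr Configuration spec, opting in to a preview feature (here HotReload from the table below; the resource name featureconfig is illustrative) could look like this:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig
spec:
  features:
    # Opt in to the Component Hot Reloading preview feature
    - name: HotReload
      enabled: true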

For the CLI there is no explicit opt-in, just the version in which the feature was first made available.

Current preview features

Feature | Description | Setting | Documentation | Version introduced
Pluggable components | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | Pluggable components concept | v1.9
Multi-App Run for Kubernetes | Configure multiple Dapr applications from a single configuration file and run from a single command on Kubernetes | dapr run -k -f | Multi-App Run | v1.12
Cryptography | Encrypt or decrypt data without having to manage secrets keys | N/A | Cryptography concept | v1.11
Actor State TTL | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read Actor State Transactions for more information. | ActorStateTTL | Actor State Transactions | v1.11
Component Hot Reloading | Allows for Dapr-loaded components to be “hot reloaded”. A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | HotReload | Hot Reloading | v1.13
Subscription Hot Reloading | Allows for declarative subscriptions to be “hot reloaded”. A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | HotReload | Hot Reloading | v1.14
Scheduler Actor Reminders | Scheduler actor reminders are actor reminders stored in the Scheduler control plane service, as opposed to the Placement control plane service actor reminder system. The SchedulerReminders preview feature defaults to true, but you can disable Scheduler actor reminders by setting it to false. | SchedulerReminders | Scheduler actor reminders | v1.14

7.6 - Alpha and Beta APIs

List of current alpha and beta APIs

Alpha APIs

Building block/API | gRPC | HTTP | Description | Documentation | Version introduced
Query State | Query State proto | v1.0-alpha1/state/statestore/query | The state query API enables you to retrieve, filter, and sort the key/value data stored in state store components. | Query State API | v1.5
Distributed Lock | Lock proto | /v1.0-alpha1/lock | The distributed lock API enables you to take a lock on a resource. | Distributed Lock API | v1.8
Bulk Publish | Bulk publish proto | v1.0-alpha1/publish/bulk | The bulk publish API allows you to publish multiple messages to a topic in a single request. | Bulk Publish and Subscribe API | v1.10
Bulk Subscribe | Bulk subscribe proto | N/A | The bulk subscribe application callback receives multiple messages from a topic in a single call. | Bulk Publish and Subscribe API | v1.10
Cryptography | Crypto proto | v1.0-alpha1/crypto | The cryptography API enables you to perform high level cryptography operations for encrypting and decrypting messages. | Cryptography API | v1.11
Jobs | Jobs proto | v1.0-alpha1/jobs | The jobs API enables you to schedule and orchestrate jobs. | Jobs API | v1.14
Conversation | Conversation proto | v1.0-alpha1/conversation | Converse between different large language models using the conversation API. | Conversation API | v1.15

Beta APIs

No current beta APIs.

Learn more about the Alpha, Beta, and Stable lifecycle stages.

8 - Performance and scalability statistics of Dapr

Benchmarks and guidelines for Dapr building blocks

8.1 - Service invocation performance

This article provides service invocation API performance benchmarks and resource utilization for the components needed to run Dapr in different hosting environments.

System overview

Dapr consists of a data plane, the sidecar that runs next to your app, and a control plane that configures the sidecars and provides capabilities such as cert and identity management.

Self-hosted components

  • Sidecar (data plane)
  • Sentry (optional, control plane)
  • Placement (optional, control plane)

For more information see overview of Dapr in self-hosted mode.

Kubernetes components

  • Sidecar (data plane)
  • Sentry (optional, control plane)
  • Placement (optional, control plane)
  • Operator (control plane)
  • Sidecar Injector (control plane)

For more information see overview of Dapr on Kubernetes.

Performance summary for Dapr v1.0

The service invocation API is a reverse proxy with built-in service discovery to connect to other services. This includes tracing, metrics, mTLS for in-transit encryption of traffic, together with resiliency in the form of retries for network partitions and connection errors.

Using service invocation you can call from HTTP to HTTP, HTTP to gRPC, gRPC to HTTP, and gRPC to gRPC. Dapr does not use HTTP for the communication between sidecars, always using gRPC, while carrying over the semantics of the protocol used when called from the app. Service invocation is the underlying mechanism of communicating with Dapr Actors.

For more information see service invocation overview.

Kubernetes performance test setup

The test was conducted on a 3 node Kubernetes cluster, using commodity hardware running 4 cores and 8GB of RAM, without any network acceleration. The setup included a load tester (Fortio) pod with a Dapr sidecar injected into it that called the service invocation API to reach a pod on a different node.

Test parameters:

  • 1000 requests per second
  • Sidecar limited to 0.5 vCPU
  • Sidecar mTLS enabled
  • Sidecar telemetry enabled (tracing with a sampling rate of 0.1)
  • Payload of 1KB

The baseline test included direct, non-encrypted traffic, without telemetry, directly from the load tester to the target app.

Control plane performance

The Dapr control plane uses a total of 0.009 vCPU and 61.6 Mb when running in non-HA mode, meaning a single replica per system component. When running in a highly available production setup, the Dapr control plane consumes ~0.02 vCPU and 185 Mb.

Component | vCPU | Memory
Operator | 0.001 | 12.5 Mb
Sentry | 0.005 | 13.6 Mb
Sidecar Injector | 0.002 | 14.6 Mb
Placement | 0.001 | 20.9 Mb

A number of factors affect the CPU and memory consumption of each system component. These factors are shown in the table below.

Component | Factor affecting vCPU and Memory
Operator | Number of pods requesting components, configurations and subscriptions
Sentry | Number of certificate requests
Sidecar Injector | Number of admission requests
Placement | Number of actor rebalancing operations (vCPU); number of connected actor hosts (Memory)

Data plane performance

The Dapr sidecar uses 0.48 vCPU and 23Mb per 1000 requests per second. End-to-end, the Dapr sidecars (client and server) add ~1.40 ms to the 90th percentile latency, and ~2.10 ms to the 99th percentile latency. End-to-end here is a call from one app to another app receiving a response. This is shown by steps 1-7 in this diagram.

This performance is on par or better than commonly used service meshes.

Latency

In the test setup, requests went through the Dapr sidecar both on the client side (serving requests from the load tester tool) and the server side (the target app). mTLS and telemetry (tracing with a sampling rate of 0.1) and metrics were enabled on the Dapr test, and disabled for the baseline test.

Latency for 90th percentile
Latency for 99th percentile

8.2 - Actors activation performance

This article provides service invocation API performance benchmarks and resource utilization for actors in Dapr on Kubernetes.

System overview

For applications using actors in Dapr there are two aspects to be considered. First is the routing of actor invocations, which is handled by the Dapr sidecar. Second is the actors runtime, which is implemented and handled on the application side and depends on the SDK. For now, the performance tests use the Java SDK to provide an actors runtime in the application.

Kubernetes components

  • Sidecar (data plane)
  • Placement (required for actors, control plane mapping actor types to hosts)
  • Operator (control plane)
  • Sidecar Injector (control plane)
  • Sentry (optional, control plane)

Performance summary for Dapr v1.0

The actors API in the Dapr sidecar identifies which hosts are registered for a given actor type and routes the request to the appropriate host for a given actor ID. The host runs an instance of the application and uses the Dapr SDK (.NET, Java, Python or PHP) to handle actor requests via HTTP.

This test invokes actors directly via Dapr’s HTTP API.

For more information see actors overview.

Kubernetes performance test setup

The test was conducted on a 3 node Kubernetes cluster, using commodity hardware running 4 cores and 8GB of RAM, without any network acceleration. The setup included a load tester (Fortio) pod with a Dapr sidecar injected into it that called the service invocation API to reach a pod on a different node.

Test parameters:

  • 500 requests per second
  • 1 replica
  • 1 minute duration
  • Sidecar limited to 0.5 vCPU
  • mTLS enabled
  • Sidecar telemetry enabled (tracing with a sampling rate of 0.1)
  • Payload of an empty JSON object: {}

Results

  • The actual throughput was ~500 qps.
  • The tp90 latency was ~3ms.
  • The tp99 latency was ~6.2ms.
  • Dapr app consumed ~523m CPU and ~304.7Mb of Memory
  • Dapr sidecar consumed 2m CPU and ~18.2Mb of Memory
  • No app restarts
  • No sidecar restarts

9 - Debugging and Troubleshooting

Tools, techniques and common problems to help users debug and diagnose issues with Dapr

9.1 - Common issues when running Dapr

Common issues and problems faced when running Dapr applications

This guide covers common issues you may encounter while installing and running Dapr.

Dapr can’t connect to Docker when installing the Dapr CLI

When installing and initializing the Dapr CLI, if you see the following error message after running dapr init:

⌛  Making the jump to hyperspace...
❌  could not connect to docker. docker may not be installed or running

Troubleshoot the error by ensuring:

  1. The correct containers are running.

  2. In Docker Desktop, verify the Allow the default Docker socket to be used (requires password) option is selected.

I don’t see the Dapr sidecar injected to my pod

There could be several reasons to why a sidecar will not be injected into a pod. First, check your deployment or pod YAML file, and check that you have the following annotations in the right place:

annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  dapr.io/app-port: "3000"

Sample deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
  namespace: default
  labels:
    app: node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
    spec:
      containers:
      - name: node
        image: dapriosamples/hello-k8s-node
        ports:
        - containerPort: 3000
        imagePullPolicy: Always

There are some known cases where this might not work properly:

  • If your pod spec template is annotated correctly, and you still don’t see the sidecar injected, make sure Dapr was deployed to the cluster before your deployment or pod were deployed.

    If this is the case, restarting the pods will fix the issue.

  • If you are deploying Dapr on a private GKE cluster, sidecar injection does not work without extra steps. See Setup a Google Kubernetes Engine cluster.

    In order to further diagnose any issue, check the logs of the Dapr sidecar injector:

     kubectl logs -l app=dapr-sidecar-injector -n dapr-system
    

    Note: If you installed Dapr to a different namespace, replace dapr-system above with the desired namespace

  • If you are deploying Dapr on Amazon EKS and using an overlay network such as Calico, you will need to set the hostNetwork parameter to true; this is a limitation of EKS with such CNIs.

    You can set this parameter using Helm values.yaml file:

    helm upgrade --install dapr dapr/dapr \
    --namespace dapr-system \
    --create-namespace \
    --values values.yaml
    

    values.yaml

    dapr_sidecar_injector:
      hostNetwork: true
    

    or using command line:

    helm upgrade --install dapr dapr/dapr \
    --namespace dapr-system \
    --create-namespace \
    --set dapr_sidecar_injector.hostNetwork=true
    
  • Make sure the kube API server can reach the Dapr webhook services.

    Check with your cluster administrators to set up ingress allow rules to the webhook ports, 4000 and 19443, in the cluster from the kube API servers.

My pod is in CrashLoopBackoff or another failed state due to the daprd sidecar

If the Dapr sidecar (daprd) is taking too long to initialize, this might be surfaced as a failing health check by Kubernetes.

If your pod is in a failed state you should check this:

kubectl describe pod <name-of-pod>

You might see a table like the following at the end of the command output:

  Normal   Created    7m41s (x2 over 8m2s)   kubelet, aks-agentpool-12499885-vmss000000  Created container daprd
  Normal   Started    7m41s (x2 over 8m2s)   kubelet, aks-agentpool-12499885-vmss000000  Started container daprd
  Warning  Unhealthy  7m28s (x5 over 7m58s)  kubelet, aks-agentpool-12499885-vmss000000  Readiness probe failed: Get http://10.244.1.10:3500/v1.0/healthz: dial tcp 10.244.1.10:3500: connect: connection refused
  Warning  Unhealthy  7m25s (x6 over 7m55s)  kubelet, aks-agentpool-12499885-vmss000000  Liveness probe failed: Get http://10.244.1.10:3500/v1.0/healthz: dial tcp 10.244.1.10:3500: connect: connection refused
  Normal   Killing    7m25s (x2 over 7m43s)  kubelet, aks-agentpool-12499885-vmss000000  Container daprd failed liveness probe, will be restarted
  Warning  BackOff    3m2s (x18 over 6m48s)  kubelet, aks-agentpool-12499885-vmss000000  Back-off restarting failed container

The message Container daprd failed liveness probe, will be restarted indicates that the Dapr sidecar has failed its health checks and will be restarted. The messages Readiness probe failed: Get http://10.244.1.10:3500/v1.0/healthz: dial tcp 10.244.1.10:3500: connect: connection refused and Liveness probe failed: Get http://10.244.1.10:3500/v1.0/healthz: dial tcp 10.244.1.10:3500: connect: connection refused show that the health check failed because no connection could be made to the sidecar.

The most common cause of this failure is that a component (such as a state store) is misconfigured and is causing initialization to take too long. When initialization takes a long time, it’s possible that the health check could terminate the sidecar before anything useful is logged by the sidecar.

To diagnose the root cause:

  • Significantly increase the liveness probe delay - link
  • Set the log level of the sidecar to debug - link
  • Watch the logs for meaningful information - link

Remember to configure the liveness check delay and log level back to your desired values after solving the problem.
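
As a sketch of the diagnostic annotations involved (assuming the dapr.io/sidecar-liveness-probe-delay-seconds annotation is available in your Dapr version; the app id and values shown are illustrative), the pod spec template might temporarily look like:

annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  # Give the sidecar more time to initialize before liveness checks start
  dapr.io/sidecar-liveness-probe-delay-seconds: "60"
  # Increase sidecar log verbosity while diagnosing the issue
  dapr.io/log-level: "debug"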

I am unable to save state or get state

Have you installed a Dapr state store in your cluster?

To check, use kubectl to get a list of components:

kubectl get components

If there isn’t a state store component, it means you need to set one up. Visit here for more details.

If everything’s set up correctly, make sure you got the credentials right. Search the Dapr runtime logs and look for any state store errors:

kubectl logs <name-of-pod> daprd

I am unable to publish and receive events

Have you installed a Dapr message bus (pub/sub component) in your cluster?

To check, use kubectl to get a list of components:

kubectl get components

If there isn’t a pub/sub component, it means you need to set one up. Visit here for more details.

If everything is set up correctly, make sure you got the credentials right. Search the Dapr runtime logs and look for any pub/sub errors:

kubectl logs <name-of-pod> daprd

I’m getting 500 Error responses when calling Dapr

This means there is an internal issue inside the Dapr runtime. To diagnose, view the logs of the sidecar:

kubectl logs <name-of-pod> daprd

I’m getting 404 Not Found responses when calling Dapr

This means you’re trying to call a Dapr API endpoint that either doesn’t exist or the URL is malformed. Look at the Dapr API reference here and make sure you’re calling the right endpoint.

I don’t see any incoming events or calls from other services

Have you specified the port your app is listening on? In Kubernetes, make sure the dapr.io/app-port annotation is specified:

annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  dapr.io/app-port: "3000"

If using Dapr Standalone and the Dapr CLI, make sure you pass the --app-port flag to the dapr run command.

My Dapr-enabled app isn’t behaving correctly

The first thing to do is inspect the HTTP error code returned from the Dapr API, if any. If you still can’t find the issue, try enabling debug log levels for the Dapr runtime. See here how to do so.

You might also want to look at error logs from your own process. If running on Kubernetes, find the pod containing your app, and execute the following:

kubectl logs <pod-name> <name-of-your-container>

If running in Standalone mode, you should see the stderr and stdout outputs from your app displayed in the main console session.

I’m getting timeout/connection errors when running Actors locally

Each Dapr instance reports its host address to the placement service. The placement service then distributes a table of nodes and their addresses to all Dapr instances. If that host address is unreachable, you are likely to encounter socket timeout errors or other variants of failing request errors.

Unless the host name has been specified by setting an environment variable named DAPR_HOST_IP to a reachable, pingable address, Dapr will loop over the network interfaces and select the first non-loopback address it finds.

As described above, to tell Dapr which host name to use, simply set the DAPR_HOST_IP environment variable.

The following example shows how to set the Host IP env var to 127.0.0.1:

Note: for versions <= 0.4.0 use HOST_IP

export DAPR_HOST_IP=127.0.0.1

None of my components are getting loaded when my application starts. I keep getting “Error component X cannot be found”

This is usually due to one of the following issues:

  • You may have defined the NAMESPACE environment variable locally or deployed your components into a different namespace in Kubernetes. Check which namespace your app and the components are deployed to. Read scoping components to one or more applications for more information.
  • You may have not provided a --resources-path with the Dapr run commands or not placed your components into the default components folder for your OS. Read define a component for more information.
  • You may have a syntax issue in component YAML file. Check your component YAML with the component YAML samples.

Service invocation is failing and my Dapr service is missing an appId (macOS)

Some organizations will implement software that filters out all UDP traffic, which is what mDNS is based on. Most commonly, on macOS, Microsoft Content Filter is the culprit.

In order for mDNS to function properly, ensure Microsoft Content Filter is inactive.

  • Open a terminal shell.
  • Type mdatp system-extension network-filter disable and hit enter.
  • Enter your account password.

Microsoft Content Filter is disabled when the output is “Success”.

Some organizations will re-enable the filter from time to time. If you repeatedly encounter app-id values missing, first check to see if the filter has been re-enabled before doing more extensive troubleshooting.

Admission webhook denied the request

You may encounter an error similar to the one below because the admission webhook has an allowlist of service accounts that are permitted to create or modify resources.

root:[dapr]$ kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh
Error from server: admission webhook "sidecar-injector.dapr.io" denied the request: service account 'user-xdd5l' not on the list of allowed controller accounts

To resolve this error, you should create a clusterrolebinding for the current user:

kubectl create clusterrolebinding dapr-<name-of-user> --clusterrole=dapr-operator-admin --user <name-of-user>

You can run the below command to get all users in your cluster:

kubectl config get-users

You may learn more about webhooks here.

Ports not available during dapr init

You might encounter the following error on Windows after attempting to execute dapr init:

PS C:\Users\You> dapr init
Making the jump to hyperspace…
Container images will be pulled from Docker Hub
Installing runtime version 1.14.4
Downloading binaries and setting up components…
docker: Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:52379 -> 0.0.0.0:0: listen tcp4 0.0.0.0:52379: bind: An attempt was made to access a socket in a way forbidden by its access permissions.

To resolve this error, open a command prompt in an elevated terminal and run:

net stop winnat
dapr init
net start winnat

9.2 - Configure and view Dapr Logs

Understand how logging works in Dapr and how to configure and view logs

This section will assist you in understanding how logging works in Dapr, and how to configure and view logs.

Overview

Logs have different, configurable verbosity levels. The levels outlined below are the same for both system components and the Dapr sidecar process/container:

  1. error
  2. warn
  3. info
  4. debug

error produces the minimum amount of output, where debug produces the maximum amount. The default level is info, which provides a balanced amount of information for operating Dapr in normal conditions.

To set the output level, you can use the --log-level command-line option. For example:

./daprd --log-level error
./placement --log-level debug

This will start the Dapr runtime binary with a log level of error and the Dapr Actor Placement Service with a log level of debug.

Logs in stand-alone mode

To set the log level when running your app with the Dapr CLI, pass the log-level param:

dapr run --log-level warn node myapp.js

As outlined above, every Dapr binary takes a --log-level argument. For example, to launch the placement service with a log level of warning:

./placement --log-level warn

Viewing Logs on Standalone Mode

When running Dapr with the Dapr CLI, both your app’s log output and the runtime’s output will be redirected to the same session, for easy debugging. For example, this is the output when running Dapr:

dapr run node myapp.js
â„šī¸  Starting Dapr with id Trackgreat-Lancer on port 56730
✅  You are up and running! Both Dapr and your app logs will appear here.

== APP == App listening on port 3000!
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="starting Dapr Runtime -- version 0.3.0-alpha -- commit b6f2810-dirty"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="log level set to: info"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="standalone mode configured"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="app id: Trackgreat-Lancer"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="loaded component statestore (state.redis)"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="loaded component messagebus (pubsub.redis)"
== DAPR == 2019/09/05 12:26:43 redis: connecting to localhost:6379
== DAPR == 2019/09/05 12:26:43 redis: connected to localhost:6379 (localAddr: [::1]:56734, remAddr: [::1]:6379)
== DAPR == time="2019-09-05T12:26:43-07:00" level=warn msg="failed to init input bindings: app channel not initialized"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="actor runtime started. actor idle timeout: 1h0m0s. actor scan interval: 30s"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="actors: starting connection attempt to placement service at localhost:50005"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="http server is running on port 56730"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="gRPC server is running on port 56731"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 8.772922000000001ms"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="actors: established connection to placement service at localhost:50005"

Logs in Kubernetes mode

Learn how to debug daprd on Kubernetes.

You can set the log level individually for every sidecar by providing the following annotation in your pod spec template:

annotations:
  dapr.io/log-level: "debug"

Setting system pods log level

When deploying Dapr to your cluster using Helm 3.x, you can individually set the log level for every Dapr system component:

helm install dapr dapr/dapr --namespace dapr-system --set <COMPONENT>.logLevel=<LEVEL>

Components:

  • dapr_operator
  • dapr_placement
  • dapr_sidecar_injector

Example:

helm install dapr dapr/dapr --namespace dapr-system --set dapr_operator.logLevel=error

Viewing Logs on Kubernetes

Dapr logs are written to stdout and stderr. This section will guide you on how to view logs for Dapr system components as well as the Dapr sidecar.

Sidecar Logs

When deployed in Kubernetes, the Dapr sidecar injector will inject a Dapr container named daprd into your annotated pod. In order to view logs for the sidecar, simply find the pod in question by running kubectl get pods:

NAME                                        READY     STATUS    RESTARTS   AGE
addapp-74b57fb78c-67zm6                     2/2       Running   0          40h

Next, get the logs for the Dapr sidecar container:

kubectl logs addapp-74b57fb78c-67zm6 -c daprd

time="2019-09-04T02:52:27Z" level=info msg="starting Dapr Runtime -- version 0.3.0-alpha -- commit b6f2810-dirty"
time="2019-09-04T02:52:27Z" level=info msg="log level set to: info"
time="2019-09-04T02:52:27Z" level=info msg="kubernetes mode configured"
time="2019-09-04T02:52:27Z" level=info msg="app id: addapp"
time="2019-09-04T02:52:27Z" level=info msg="application protocol: http. waiting on port 6000"
time="2019-09-04T02:52:27Z" level=info msg="application discovered on port 6000"
time="2019-09-04T02:52:27Z" level=info msg="actor runtime started. actor idle timeout: 1h0m0s. actor scan interval: 30s"
time="2019-09-04T02:52:27Z" level=info msg="actors: starting connection attempt to placement service at dapr-placement.dapr-system.svc.cluster.local:80"
time="2019-09-04T02:52:27Z" level=info msg="http server is running on port 3500"
time="2019-09-04T02:52:27Z" level=info msg="gRPC server is running on port 50001"
time="2019-09-04T02:52:27Z" level=info msg="dapr initialized. Status: Running. Init Elapsed 64.234049ms"
time="2019-09-04T02:52:27Z" level=info msg="actors: established connection to placement service at dapr-placement.dapr-system.svc.cluster.local:80"

System Logs

Dapr runs the following system pods:

  • Dapr operator
  • Dapr sidecar injector
  • Dapr placement service

Operator Logs

kubectl logs -l app=dapr-operator -n dapr-system

I1207 06:01:02.891031 1 leaderelection.go:243] attempting to acquire leader lease dapr-system/operator.dapr.io...
I1207 06:01:02.913696 1 leaderelection.go:253] successfully acquired lease dapr-system/operator.dapr.io
time="2021-12-07T06:01:03.092529085Z" level=info msg="getting tls certificates" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
time="2021-12-07T06:01:03.092703283Z" level=info msg="tls certificates loaded successfully" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
time="2021-12-07T06:01:03.093062379Z" level=info msg="starting gRPC server" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator.api type=log ver=unknown
time="2021-12-07T06:01:03.093123778Z" level=info msg="Healthz server is listening on :8080" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
time="2021-12-07T06:01:03.497889776Z" level=info msg="starting webhooks" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
I1207 06:01:03.497944 1 leaderelection.go:243] attempting to acquire leader lease dapr-system/webhooks.dapr.io...
I1207 06:01:03.516641 1 leaderelection.go:253] successfully acquired lease dapr-system/webhooks.dapr.io
time="2021-12-07T06:01:03.526202227Z" level=info msg="Successfully patched webhook in CRD "subscriptions.dapr.io"" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown

Note: If Dapr is installed to a different namespace than dapr-system, simply replace the namespace with the desired one in the command above

Sidecar Injector Logs

kubectl logs -l app=dapr-sidecar-injector -n dapr-system

time="2021-12-07T06:01:01.554859058Z" level=info msg="log level set to: info" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.injector type=log ver=unknown
time="2021-12-07T06:01:01.555114755Z" level=info msg="metrics server started on :9090/" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.metrics type=log ver=unknown
time="2021-12-07T06:01:01.555233253Z" level=info msg="starting Dapr Sidecar Injector -- version 1.5.1 -- commit c6daae8e9b11b3e241a9cb84c33e5aa740d74368" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.injector type=log ver=unknown
time="2021-12-07T06:01:01.557646524Z" level=info msg="Healthz server is listening on :8080" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.injector type=log ver=unknown
time="2021-12-07T06:01:01.621291968Z" level=info msg="Sidecar injector is listening on :4000, patching Dapr-enabled pods" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.injector type=log ver=unknown

Note: If Dapr is installed to a different namespace than dapr-system, simply replace the namespace with the desired one in the command above

Viewing Placement Service Logs

kubectl logs -l app=dapr-placement-server -n dapr-system

time="2021-12-04T05:08:05.733416791Z" level=info msg="starting Dapr Placement Service -- version 1.5.0 -- commit 83fe579f5dc93bef1ce3b464d3167a225a3aff3a" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=unknown
time="2021-12-04T05:08:05.733469491Z" level=info msg="log level set to: info" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
time="2021-12-04T05:08:05.733512692Z" level=info msg="metrics server started on :9090/" instance=dapr-placement-server-0 scope=dapr.metrics type=log ver=1.5.0
time="2021-12-04T05:08:05.735207095Z" level=info msg="Raft server is starting on 127.0.0.1:8201..." instance=dapr-placement-server-0 scope=dapr.placement.raft type=log ver=1.5.0
time="2021-12-04T05:08:05.735221195Z" level=info msg="mTLS enabled, getting tls certificates" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
time="2021-12-04T05:08:05.735265696Z" level=info msg="tls certificates loaded successfully" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
time="2021-12-04T05:08:05.735276396Z" level=info msg="placement service started on port 50005" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
time="2021-12-04T05:08:05.735553696Z" level=info msg="Healthz server is listening on :8080" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
time="2021-12-04T05:08:07.036850257Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
time="2021-12-04T05:08:07.036909357Z" level=info msg="leader is established." instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0

Note: If Dapr is installed to a different namespace than dapr-system, simply replace the namespace with the desired one in the command above

Non Kubernetes Environments

The examples above are specific to Kubernetes, but the principle is the same for any kind of container-based environment: simply grab the container ID of the Dapr sidecar and/or system component (if applicable) and view its logs.

9.3 - Dapr API Logs

Understand how API logging works in Dapr and how to view logs

API logging enables you to see the API calls your application makes to the Dapr sidecar. This is useful to monitor your application’s behavior or for other debugging purposes. You can also combine Dapr API logging with Dapr log events (see configure and view Dapr Logs) if you want to use the logging capabilities together.

Overview

API logging is disabled by default.

To enable API logging, you can use the --enable-api-logging command-line option when starting the daprd process. For example:

./daprd --enable-api-logging

Configuring API logging in self-hosted mode

To enable API logging when running your app with the Dapr CLI, pass the --enable-api-logging flag:

dapr run \
  --enable-api-logging \
  -- node myapp.js

Viewing API logs in self-hosted mode

When running Dapr with the Dapr CLI, both your app’s log output and the Dapr runtime log output are redirected to the same session, for easy debugging.

The example below shows some API logs:

$ dapr run --enable-api-logging -- node myapp.js

â„šī¸  Starting Dapr with id order-processor on port 56730
✅  You are up and running! Both Dapr and your app logs will appear here.
.....
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="POST /v1.0/state/mystate" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
== APP == INFO:root:Saving Order: {'orderId': '483'}
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="GET /v1.0/state/mystate/key123" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
== APP == INFO:root:Getting Order: {'orderId': '483'}
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="DELETE /v1.0/state/mystate" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
== APP == INFO:root:Deleted Order: {'orderId': '483'}
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="PUT /v1.0/metadata/cliPID" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge

Configuring API logging in Kubernetes

You can enable the API logs for a sidecar by adding the following annotation in your pod spec template:

annotations:
  dapr.io/enable-api-logging: "true"

Viewing API logs on Kubernetes

Dapr API logs are written to stdout and stderr and you can view API logs on Kubernetes.

View the Kubernetes API logs by executing the command below.

kubectl logs <pod_name> daprd -n <name_space>

The example below shows info-level API logging in Kubernetes (with URL obfuscation enabled).

time="2022-03-16T18:32:02.487041454Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
time="2022-03-16T18:32:02.698387866Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
time="2022-03-16T18:32:02.917629403Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
time="2022-03-16T18:32:03.137830112Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
time="2022-03-16T18:32:03.359097916Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge

API logging configuration

Using the Dapr Configuration spec, you can configure the default behavior of API logging in Dapr runtimes.

Enable API logging by default

Using the Dapr Configuration spec, you can set the default value for the --enable-api-logging flag (and the corresponding annotation when running on Kubernetes), with the logging.apiLogging.enabled option. This value applies to all Dapr runtimes that reference the Configuration document or resource in which it’s defined.

  • If logging.apiLogging.enabled is set to false, the default value, API logging is disabled for Dapr runtimes unless --enable-api-logging is set to true (or the dapr.io/enable-api-logging: true annotation is added).
  • When logging.apiLogging.enabled is true, Dapr runtimes have API logging enabled by default, and it can be disabled by setting --enable-api-logging=false or with the dapr.io/enable-api-logging: false annotation.

For example:

logging:
  apiLogging:
    enabled: true

Obfuscate URLs in HTTP API logging

By default, logs for API calls in the HTTP endpoints include the full URL being invoked (for example, POST /v1.0/invoke/directory/method/user-123), which could contain Personal Identifiable Information (PII).

To reduce the risk of PII being accidentally included in API logs (when enabled), Dapr can instead log the abstract route being invoked (for example, POST /v1.0/invoke/{id}/method/{method:*}). This can help ensure compliance with privacy regulations such as GDPR.

To enable obfuscation of URLs in Dapr’s HTTP API logs, set logging.apiLogging.obfuscateURLs to true. For example:

logging:
  apiLogging:
    obfuscateURLs: true

Logs emitted by the Dapr gRPC APIs are not impacted by this configuration option, as they only include the name of the method invoked and no arguments.

Omit health checks from API logging

When API logging is enabled, all calls to the Dapr API server are logged, including those to health check endpoints (e.g. /v1.0/healthz). Depending on your environment, this may generate multiple log lines per minute and could create unwanted noise.

You can configure Dapr to not log calls to health check endpoints when API logging is enabled using the Dapr Configuration spec, by setting logging.apiLogging.omitHealthChecks: true. The default value is false, which means that health check calls are logged in the API logs.

For example:

logging:
  apiLogging:
    omitHealthChecks: true
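
Putting the options above together, a minimal sketch of a complete Configuration resource (the resource name appconfig is illustrative) could look like the following:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  logging:
    apiLogging:
      # Enable API logging by default for sidecars referencing this Configuration
      enabled: true
      # Log abstract routes instead of full URLs to avoid PII
      obfuscateURLs: true
      # Skip logging calls to health check endpoints
      omitHealthChecks: true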

9.4 - Profiling & Debugging

Discover problems and issues such as concurrency, performance, cpu and memory usage through a profiling session

In any real-world scenario, an app might start exhibiting undesirable behavior in terms of resource spikes, and CPU/memory spikes are not uncommon.

Dapr allows users to start an on-demand profiling session using pprof through its profiling server endpoint and start an instrumentation session to discover problems and issues such as concurrency, performance, cpu and memory usage.

Enable profiling

Dapr allows you to enable profiling in both Kubernetes and stand-alone modes.

Stand-alone

To enable profiling in Standalone mode, pass the --enable-profiling and --profile-port flags to the Dapr CLI. Note that --profile-port is not required; if not provided, Dapr will pick an available port.

dapr run --enable-profiling --profile-port 7777 python myapp.py

Kubernetes

To enable profiling in Kubernetes, simply add the dapr.io/enable-profiling annotation to your Dapr annotated pod:

   annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "rust-app"
    dapr.io/enable-profiling: "true"

Debug a profiling session

After profiling is enabled, we can start a profiling session to investigate what’s going on with the Dapr runtime.

Stand-alone

For Standalone mode, locate the Dapr instance that you want to profile:

dapr list
APP ID           DAPR PORT     APP PORT  COMMAND      AGE  CREATED              PID
node-subscriber  3500          3000      node app.js  12s  2019-09-09 15:11.24  896

Grab the DAPR PORT, and if profiling has been enabled as described above, you can now start using pprof to profile Dapr. Look at the Kubernetes examples below for some useful commands to profile Dapr.

More info on pprof can be found here.

Kubernetes

First, find the pod containing the Dapr runtime. If you don’t already know the pod name, type kubectl get pods:

NAME                                        READY     STATUS    RESTARTS   AGE
divideapp-6dddf7dc74-6sq4l                  2/2       Running   0          2d23h

If profiling has been enabled successfully, the runtime logs should show the following: time="2019-09-09T20:56:21Z" level=info msg="starting profiling server on port 7777"

In this case, we want to start a session with the Dapr runtime inside of pod divideapp-6dddf7dc74-6sq4l.

We can do so by connecting to the pod via port forwarding:

kubectl port-forward divideapp-6dddf7dc74-6sq4l 7777:7777
Forwarding from 127.0.0.1:7777 -> 7777
Forwarding from [::1]:7777 -> 7777
Handling connection for 7777

Now that the connection has been established, we can use pprof to profile the Dapr runtime.

The following example will create a cpu.pprof file containing samples from a profile session that lasts 120 seconds:

curl "http://localhost:7777/debug/pprof/profile?seconds=120" > cpu.pprof

Analyze the file with pprof:

pprof cpu.pprof

You can also save the results in a visualized way inside a PDF:

go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/profile?seconds=120 > profile.pdf

For memory related issues, you can profile the heap:

go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/heap > heap.pdf


Profiling allocated objects:

go tool pprof http://localhost:7777/debug/pprof/heap
> exit

Saved profile in /Users/myusername/pprof/pprof.daprd.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz

To analyze, grab the file path above (it’s a dynamic file path, so pay attention not to paste this one), and execute:

go tool pprof -alloc_objects --pdf /Users/myusername/pprof/pprof.daprd.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz > alloc-objects.pdf
