Dapr components reference

Detailed information and specifications on Dapr components

1 - Pub/sub brokers component specs

The supported pub/sub brokers that interface with Dapr

The following table lists publish and subscribe brokers supported by the Dapr pub/sub building block. Learn how to set up different brokers for Dapr publish and subscribe.

Table headers to note:

| Header | Description | Example |
| ------ | ----------- | ------- |
| Status | Component certification status | Alpha, Beta, Stable |
| Component version | The version of the component | v1 |
| Since runtime version | The version of the Dapr runtime when the component status was set or updated | 1.11 |

Generic

| Component | Status | Component version | Since runtime version |
| --------- | ------ | ----------------- | --------------------- |
| Apache Kafka | Stable | v1 | 1.5 |
| In-memory | Stable | v1 | 1.7 |
| JetStream | Beta | v1 | 1.10 |
| KubeMQ | Beta | v1 | 1.10 |
| MQTT3 | Stable | v1 | 1.7 |
| Pulsar | Stable | v1 | 1.10 |
| RabbitMQ | Stable | v1 | 1.7 |
| Redis Streams | Stable | v1 | 1.0 |
| RocketMQ | Alpha | v1 | 1.8 |
| Solace-AMQP | Beta | v1 | 1.10 |

Amazon Web Services (AWS)

| Component | Status | Component version | Since runtime version |
| --------- | ------ | ----------------- | --------------------- |
| AWS SNS/SQS | Stable | v1 | 1.10 |

Google Cloud Platform (GCP)

| Component | Status | Component version | Since runtime version |
| --------- | ------ | ----------------- | --------------------- |
| GCP Pub/Sub | Stable | v1 | 1.11 |

Microsoft Azure

| Component | Status | Component version | Since runtime version |
| --------- | ------ | ----------------- | --------------------- |
| Azure Event Hubs | Stable | v1 | 1.8 |
| Azure Service Bus Queues | Beta | v1 | 1.10 |
| Azure Service Bus Topics | Stable | v1 | 1.0 |

1.1 - Apache Kafka

Detailed documentation on the Apache Kafka pubsub component

Component format

To set up Apache Kafka pub/sub, create a component of type pubsub.kafka. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

All component metadata field values can carry templated metadata values, which are resolved on Dapr sidecar startup. For example, you can choose to use {namespace} as the consumerGroup to enable using the same appId in different namespaces using the same topics as described in this article.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "{namespace}"
  - name: consumerID # Optional. If not supplied, runtime will create one.
    value: "channel1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "password"
  - name: saslUsername # Required if authType is `password`.
    value: "adminuser"
  - name: saslPassword # Required if authType is `password`.
    secretKeyRef:
      name: kafka-secrets
      key: saslPasswordSecret
  - name: saslMechanism
    value: "SHA-512"
  - name: maxMessageBytes # Optional.
    value: 1024
  - name: consumeRetryInterval # Optional.
    value: 200ms
  - name: heartbeatInterval # Optional.
    value: 5s
  - name: sessionTimeout # Optional.
    value: 15s
  - name: version # Optional.
    value: 2.0.0
  - name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
    value: "true"
  - name: consumerFetchMin # Optional. Advanced setting. The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available.
    value: 1
  - name: consumerFetchDefault # Optional. Advanced setting. The default number of message bytes to fetch from the broker in each request.
    value: 2097152
  - name: channelBufferSize # Optional. Advanced setting. The number of events to buffer in internal and external channels.
    value: 512
  - name: consumerGroupRebalanceStrategy # Optional. Advanced setting. The strategy to use for consumer group rebalancing.
    value: sticky
  - name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
    value: http://localhost:8081
  - name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
    value: XYAXXAZ
  - name: schemaRegistryAPISecret # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret.
    value: "ABCDEFGMEADFF"
  - name: schemaCachingEnabled # Optional. When using Schema Registry Avro serialization/deserialization. Enables caching for schemas.
    value: true
  - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
    value: 5m
  - name: useAvroJson # Optional. Enables Avro JSON schema for serialization as opposed to Standard JSON default. Only applicable when the subscription uses valueSchemaType=Avro
    value: "true"
  - name: escapeHeaders # Optional.
    value: false
  

For details on using secretKeyRef, see the guide on how to reference secrets in components.

Spec metadata fields

| Field | Required | Details | Example |
| ----- | -------- | ------- | ------- |
| brokers | Y | A comma-separated list of Kafka brokers. | "localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093" |
| consumerGroup | N | A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. If a value for consumerGroup is provided, any value for consumerID is ignored - a combination of the consumer group and a random unique identifier will be set for the consumerID instead. | "group1" |
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. If a value for consumerGroup is provided, any value for consumerID is ignored - a combination of the consumer group and a random unique identifier will be set for the consumerID instead. | Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of the template tags you can use in your component metadata. |
| clientID | N | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. Defaults to "namespace.appID" for Kubernetes mode or "appID" for Self-Hosted mode. | "my-namespace.my-dapr-app", "my-dapr-app" |
| authRequired | N | Deprecated. Enable SASL authentication with the Kafka brokers. | "true", "false" |
| authType | Y | Configure or disable authentication. Supported values: none, password, mtls, oidc or awsiam | "password", "none" |
| saslUsername | N | The SASL username used for authentication. Only required if authType is set to "password". | "adminuser" |
| saslPassword | N | The SASL password used for authentication. Can be secretKeyRef to use a secret reference. Only required if authType is set to "password". | "", "KeFg23!" |
| saslMechanism | N | The SASL Authentication Mechanism you wish to use. Only required if authType is set to "password". Defaults to PLAINTEXT | "SHA-512", "SHA-256", "PLAINTEXT" |
| initialOffset | N | The initial offset to use if no offset was previously committed. Should be "newest" or "oldest". Defaults to "newest". | "oldest" |
| maxMessageBytes | N | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | 2048 |
| consumeRetryInterval | N | The interval between retries when attempting to consume topics. Treats numbers without suffix as milliseconds. Defaults to 100ms. | 200ms |
| consumeRetryEnabled | N | Disable consume retry by setting "false" | "true", "false" |
| version | N | Kafka cluster version. Defaults to 2.0.0. Note that this must be set to 1.0.0 if you are using Azure EventHubs with Kafka. | 0.10.2.0 |
| caCert | N | Certificate authority certificate, required for using TLS. Can be secretKeyRef to use a secret reference | "-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
| clientCert | N | Client certificate, required for authType mtls. Can be secretKeyRef to use a secret reference | "-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----" |
| clientKey | N | Client key, required for authType mtls. Can be secretKeyRef to use a secret reference | "-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----" |
| skipVerify | N | Skip TLS verification; this is not recommended for use in production. Defaults to "false" | "true", "false" |
| disableTls | N | Disable TLS for transport security. To disable, you're not required to set value to "true". This is not recommended for use in production. Defaults to "false". | "true", "false" |
| oidcTokenEndpoint | N | Full URL to an OAuth2 identity provider access token endpoint. Required when authType is set to oidc | "https://identity.example.com/v1/token" |
| oidcClientID | N | The OAuth2 client ID that has been provisioned in the identity provider. Required when authType is set to oidc | dapr-kafka |
| oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider. Required when authType is set to oidc | "KeFg23!" |
| oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when authType is set to oidc. Defaults to "openid" | "openid,kafka-prod" |
| oidcExtensions | N | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | {"cluster":"kafka","poolid":"kafkapool"} |
| awsRegion | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS region where the Kafka cluster is deployed to. Required when authType is set to awsiam | us-west-1 |
| awsAccessKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account. | "accessKey" |
| awsSecretKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key. | "secretKey" |
| awsSessionToken | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | "sessionToken" |
| awsIamRoleArn | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'assumeRoleArn' instead. IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | "arn:aws:iam::123456789:role/mskRole" |
| awsStsSessionName | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionName' instead. Represents the session name for assuming a role. | "DaprDefaultSession" |
| schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | http://localhost:8081 |
| schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | XYAXXAZ |
| schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | ABCDEFGMEADFF |
| schemaCachingEnabled | N | When using Schema Registry Avro serialization/deserialization. Enables caching for schemas. Default is true | true |
| schemaLatestVersionCacheTTL | N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | 5m |
| useAvroJson | N | Enables Avro JSON schema for serialization as opposed to Standard JSON default. Only applicable when the subscription uses valueSchemaType=Avro. Default is "false" | "true" |
| clientConnectionTopicMetadataRefreshInterval | N | The interval for the client connection's topic metadata to be refreshed with the broker as a Go duration. Defaults to 9m. | "4m" |
| clientConnectionKeepAliveInterval | N | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | "4m" |
| consumerFetchMin | N | The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available. The default is 1, as 0 causes the consumer to spin when no messages are available. Equivalent to the JVM's fetch.min.bytes. | "2" |
| consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is "1048576" bytes. | "2097152" |
| channelBufferSize | N | The number of events to buffer in internal and external channels. This permits the producer and consumer to continue processing some messages in the background while user code is working, greatly improving throughput. Defaults to 256. | "512" |
| heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to 1/3 of the sessionTimeout value. Defaults to "3s". | "5s" |
| sessionTimeout | N | The timeout used to detect client failures when using Kafka's group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | "20s" |
| consumerGroupRebalanceStrategy | N | The strategy to use for consumer group rebalancing. Supported values: range, sticky, roundrobin. Default is range | "sticky" |
| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is false. | true |

The secretKeyRef above references a Kubernetes secret store to access the TLS information. Visit here to learn more about how to configure a secret store component.
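
For illustration, in Kubernetes mode the reference above could point at a standard Kubernetes Secret holding the TLS material. The following is a minimal sketch, assuming a secret named kafka-tls with caCert, clientCert, and clientKey keys to match the component examples; the names and placeholder contents are illustrative, not required values:

apiVersion: v1
kind: Secret
metadata:
  name: kafka-tls   # assumed name, referenced by secretKeyRef in the component above
type: Opaque
stringData:
  caCert: |
    -----BEGIN CERTIFICATE-----
    <PEM content>
    -----END CERTIFICATE-----
  clientCert: |
    -----BEGIN CERTIFICATE-----
    <PEM content>
    -----END CERTIFICATE-----
  clientKey: |
    -----BEGIN RSA PRIVATE KEY-----
    <PEM content>
    -----END RSA PRIVATE KEY-----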

Note

The metadata version must be set to 1.0.0 when using Azure EventHubs with Kafka.

Authentication

Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. With the added authentication methods, the authRequired field has been deprecated from the v1.6 release and instead the authType field should be used. If authRequired is set to true, Dapr will attempt to configure authType correctly based on the value of saslPassword. The valid values for authType are:

  • none
  • password
  • certificate
  • mtls
  • oidc
  • awsiam

None

Setting authType to none will disable any authentication. This is NOT recommended in production.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-noauth
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "none"
  - name: maxMessageBytes # Optional.
    value: 1024
  - name: consumeRetryInterval # Optional.
    value: 200ms
  - name: heartbeatInterval # Optional.
    value: 5s
  - name: sessionTimeout # Optional.
    value: 15s
  - name: version # Optional.
    value: 0.10.2.0
  - name: disableTls
    value: "true"

SASL Password

Setting authType to password enables SASL authentication. This requires setting the saslUsername and saslPassword fields.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-sasl
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "password"
  - name: saslUsername # Required if authType is `password`.
    value: "adminuser"
  - name: saslPassword # Required if authType is `password`.
    secretKeyRef:
      name: kafka-secrets
      key: saslPasswordSecret
  - name: saslMechanism
    value: "SHA-512"
  - name: maxMessageBytes # Optional.
    value: 1024
  - name: consumeRetryInterval # Optional.
    value: 200ms
  - name: heartbeatInterval # Optional.
    value: 5s
  - name: sessionTimeout # Optional.
    value: 15s
  - name: version # Optional.
    value: 0.10.2.0
  - name: caCert
    secretKeyRef:
      name: kafka-tls
      key: caCert

Mutual TLS

Setting authType to mtls uses an x509 client certificate (the clientCert field) and key (the clientKey field) to authenticate. Note that mTLS as an authentication mechanism is distinct from using TLS to secure the transport layer via encryption. mTLS requires TLS transport (meaning disableTls must be false), but securing the transport layer does not require using mTLS. See Communication using TLS for configuring underlying TLS transport.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-mtls
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "mtls"
  - name: caCert
    secretKeyRef:
      name: kafka-tls
      key: caCert
  - name: clientCert
    secretKeyRef:
      name: kafka-tls
      key: clientCert
  - name: clientKey
    secretKeyRef:
      name: kafka-tls
      key: clientKey
  - name: maxMessageBytes # Optional.
    value: 1024
  - name: consumeRetryInterval # Optional.
    value: 200ms
  - name: heartbeatInterval # Optional.
    value: 5s
  - name: sessionTimeout # Optional.
    value: 15s
  - name: version # Optional.
    value: 0.10.2.0

OAuth2 or OpenID Connect

Setting authType to oidc enables SASL authentication via the OAUTHBEARER mechanism. This supports specifying a bearer token from an external OAuth2 or OIDC identity provider. Currently, only the client_credentials grant is supported.

Configure oidcTokenEndpoint to the full URL for the identity provider access token endpoint.

Set oidcClientID and oidcClientSecret to the client credentials provisioned in the identity provider.

If caCert is specified in the component configuration, the certificate is appended to the system CA trust for verifying the identity provider certificate. Similarly, if skipVerify is specified in the component configuration, verification will also be skipped when accessing the identity provider.

By default, the only scope requested for the token is openid; it is highly recommended that additional scopes be specified via oidcScopes in a comma-separated list and validated by the Kafka broker. If additional scopes are not used to narrow the validity of the access token, a compromised Kafka broker could replay the token to access other services as the Dapr clientID.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "oidc"
  - name: oidcTokenEndpoint # Required if authType is `oidc`.
    value: "https://identity.example.com/v1/token"
  - name: oidcClientID      # Required if authType is `oidc`.
    value: "dapr-myapp"
  - name: oidcClientSecret  # Required if authType is `oidc`.
    secretKeyRef:
      name: kafka-secrets
      key: oidcClientSecret
  - name: oidcScopes        # Recommended if authType is `oidc`.
    value: "openid,kafka-dev"
  - name: caCert            # Also applied to verifying OIDC provider certificate
    secretKeyRef:
      name: kafka-tls
      key: caCert
  - name: maxMessageBytes # Optional.
    value: 1024
  - name: consumeRetryInterval # Optional.
    value: 200ms
  - name: heartbeatInterval # Optional.
    value: 5s
  - name: sessionTimeout # Optional.
    value: 15s
  - name: version # Optional.
    value: 0.10.2.0

AWS IAM

Authenticating with AWS IAM is supported with MSK. Setting authType to awsiam uses the AWS SDK to generate auth tokens to authenticate.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-awsiam
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "awsiam"
  - name: region # Required.
    value: "us-west-1"
  - name: accessKey # Optional.
    value: <AWS_ACCESS_KEY>
  - name: secretKey # Optional.
    value: <AWS_SECRET_KEY>
  - name: sessionToken # Optional.
    value: <AWS_SESSION_KEY>
  - name: assumeRoleArn # Optional.
    value: "arn:aws:iam::123456789:role/mskRole"
  - name: sessionName # Optional.
    value: "DaprDefaultSession"

Communication using TLS

By default TLS is enabled to secure the transport layer to Kafka. To disable TLS, set disableTls to true. When TLS is enabled, you can control server certificate verification using skipVerify to disable verification (NOT recommended in production environments) and caCert to specify a trusted TLS certificate authority (CA). If no caCert is specified, the system CA trust will be used. To also configure mTLS authentication, see the section under Authentication. Below is an example of a Kafka pubsub component configured to use transport layer TLS:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "certificate"
  - name: consumeRetryInterval # Optional.
    value: 200ms
  - name: heartbeatInterval # Optional.
    value: 5s
  - name: sessionTimeout # Optional.
    value: 15s
  - name: version # Optional.
    value: 0.10.2.0
  - name: maxMessageBytes # Optional.
    value: 1024
  - name: caCert # Certificate authority certificate.
    secretKeyRef:
      name: kafka-tls
      key: caCert
auth:
  secretStore: <SECRET_STORE_NAME>

Consuming from multiple topics

When consuming from multiple topics using a single pub/sub component, there is no guarantee about how the consumers in your consumer group are balanced across the topic partitions.

For instance, let’s say you are subscribing to two topics with 10 partitions per topic and you have 20 replicas of your service consuming from the two topics. There is no guarantee that 10 will be assigned to the first topic and 10 to the second topic. Instead, the partitions could be divided unequally, with more than 10 assigned to the first topic and the rest assigned to the second topic.

This can result in idle consumers listening to the first topic and over-extended consumers on the second topic, or vice versa. This same behavior can be observed when using auto-scalers such as HPA or KEDA.

If you run into this particular issue, it is recommended that you configure a single pub/sub component per topic with uniquely defined consumer groups per component. This guarantees that all replicas of your service are fully allocated to the unique consumer group, where each consumer group targets one specific topic.

For example, you may define two Dapr components with the following configuration:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-topic-one
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: consumerGroup
    value: "{appID}-topic-one"
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-topic-two
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: consumerGroup
    value: "{appID}-topic-two"

Sending and receiving multiple messages

The Apache Kafka component supports sending and receiving multiple messages in a single operation using the bulk pub/sub API.

Configuring bulk subscribe

When subscribing to a topic, you can configure bulkSubscribe options. Refer to Subscribing messages in bulk for more details. Learn more about the bulk subscribe API.

Apache Kafka supports the following bulk metadata options:

| Configuration | Default |
| ------------- | ------- |
| maxAwaitDurationMs | 10000 (10s) |
| maxMessagesCount | 80 |
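
For example, bulk subscribe is enabled per subscription. Below is a minimal sketch of a declarative subscription using these options; the topic, route, and scope names are illustrative, and the values shown are simply the defaults from the table above:

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: kafka-pubsub
  bulkSubscribe:
    enabled: true
    maxMessagesCount: 80
    maxAwaitDurationMs: 10000
scopes:
- orderprocessing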

Per-call metadata fields

Partition Key

When invoking the Kafka pub/sub, it's possible to provide an optional partition key by using the metadata query parameter in the request URL.

The parameter name can be either partitionKey or __key.

Example:

curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partitionKey=key1 \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        }
      }'

Message headers

All other metadata key/value pairs (that are not partitionKey or __key) are set as headers in the Kafka message. Here is an example setting a correlationId for the message.

curl -X POST 'http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1' \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        }
      }'

Kafka Pubsub special message headers received on consumer side

When consuming messages, special message metadata is automatically passed as headers. These are:

  • __key: the message key if available
  • __topic: the topic for the message
  • __partition: the partition number for the message
  • __offset: the offset of the message in the partition
  • __timestamp: the timestamp for the message

You can access them within the consumer endpoint as follows:

from typing import Annotated

from fastapi import APIRouter, Body, FastAPI, Header, Response, status

app = FastAPI()

router = APIRouter()


@router.get('/dapr/subscribe')
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'my-topic',
                      'route': 'my_topic_subscriber',
                      }]
    return subscriptions

@router.post('/my_topic_subscriber')
def my_topic_subscriber(
      key: Annotated[str, Header(alias="__key")],
      offset: Annotated[int, Header(alias="__offset")],
      event_data=Body()):
    print(f"key={key} - offset={offset} - data={event_data}", flush=True)
    return Response(status_code=status.HTTP_200_OK)

app.include_router(router)

Receiving message headers with special characters

The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors. HTTP header values must follow specifications, making some characters not allowed. Learn more about the protocols. In this case, you can enable the escapeHeaders configuration setting, which uses URL escaping to encode header values on the consumer side.

Set escapeHeaders to true to URL-escape header values.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-escape-headers
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "none"
  - name: escapeHeaders
    value: "true"

Avro Schema Registry serialization/deserialization

You can configure pub/sub to publish or consume data encoded using Avro binary serialization, leveraging an Apache Schema Registry (for example, Confluent Schema Registry, Apicurio).

Configuration

When configuring the Kafka pub/sub component metadata, you must define:

  • The schema registry URL
  • The API key/secret, if applicable

Schema subjects are automatically derived from topic names, using the standard naming convention. For example, for a topic named my-topic, the schema subject will be my-topic-value. When interacting with the message payload within the service, it is in JSON format. The payload is transparently serialized/deserialized within the Dapr component. Date/Datetime fields must be passed as their Epoch Unix timestamp equivalent (rather than typical ISO 8601). For example:

  • 2024-01-10T04:36:05.986Z should be passed as 1704861365986 (the number of milliseconds since Jan 1st, 1970)
  • 2024-01-10 should be passed as 19732 (the number of days since Jan 1st, 1970)

Publishing Avro messages

To indicate to the Kafka pub/sub component that the message should be serialized using Avro, the valueSchemaType metadata must be set to Avro.

curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/my-topic?metadata.rawPayload=true&metadata.valueSchemaType=Avro -H "Content-Type: application/json" -d '{"order_number": "345", "created_date": 1704861365986}'
from dapr.clients import DaprClient

with DaprClient() as d:
    req_data = {
        'order_number': '345',
        'created_date': 1704861365986
    }
    # Create a typed message with content type and body
    resp = d.publish_event(
        pubsub_name='pubsub',
        topic_name='my-topic',
        data=json.dumps(req_data),
        publish_metadata={'rawPayload': 'true', 'valueSchemaType': 'Avro'}
    )
    # Print the request
    print(req_data, flush=True)

Subscribing to Avro topics

In order to indicate to the Kafka pub/sub component that the message should be deserialized using Avro, the valueSchemaType metadata must be set to Avro in the subscription metadata.

from fastapi import APIRouter, Body, FastAPI, Response, status

app = FastAPI()

router = APIRouter()


@router.get('/dapr/subscribe')
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'my-topic',
                      'route': 'my_topic_subscriber',
                      'metadata': {
                          'valueSchemaType': 'Avro',
                      } }]
    return subscriptions

@router.post('/my_topic_subscriber')
def my_topic_subscriber(event_data=Body()):
    print(event_data, flush=True)
    return Response(status_code=status.HTTP_200_OK)

app.include_router(router)

Overriding default consumer group rebalancing

In Kafka, rebalancing strategies determine how partitions are assigned to consumers within a consumer group. The default strategy is “range”, but “roundrobin” and “sticky” are also available.

  • Range: Partitions are assigned to consumers based on their lexicographical order. If you have three partitions (0, 1, 2) and two consumers (A, B), consumer A might get partitions 0 and 1, while consumer B gets partition 2.
  • RoundRobin: Partitions are assigned to consumers in a round-robin fashion. With the same example above, consumer A might get partitions 0 and 2, while consumer B gets partition 1.
  • Sticky: This strategy aims to preserve previous assignments as much as possible while still maintaining a balanced distribution. If a consumer leaves or joins the group, only the affected partitions are reassigned, minimizing disruption.

Choosing a Strategy:

  • Range: Simple to understand and implement, but can lead to uneven distribution if partition sizes vary significantly.
  • RoundRobin: Provides a good balance in many cases, but might not be optimal if message keys are unevenly distributed.
  • Sticky: Generally preferred for its ability to minimize disruption during rebalances, especially when dealing with a large number of partitions or frequent consumer group changes.
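
To override the default, set the consumerGroupRebalanceStrategy metadata field on the component, as documented in the spec metadata table above. A minimal sketch (the broker address and component name are illustrative):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-sticky
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: authType
    value: "none"
  - name: consumerGroupRebalanceStrategy # Overrides the default "range" strategy
    value: "sticky"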

Create a Kafka instance

You can run Kafka locally using this Docker image. To run without Docker, see the getting started guide here.

To run Kafka on Kubernetes, you can use any Kafka operator, such as Strimzi.

1.2 - AWS SNS/SQS

Detailed documentation on the AWS SNS/SQS pubsub component

Component format

To set up AWS SNS/SQS pub/sub, create a component of type pubsub.aws.snssqs.

By default, the AWS SNS/SQS component:

  • Generates the SNS topics
  • Provisions the SQS queues
  • Configures a subscription of the queues to the topics
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: snssqs-pubsub
spec:
  type: pubsub.aws.snssqs
  version: v1
  metadata:
    - name: accessKey
      value: "AKIAIOSFODNN7EXAMPLE"
    - name: secretKey
      value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    - name: region
      value: "us-east-1"
    # - name: consumerID # Optional. If not supplied, runtime will create one.
    #   value: "channel1"
    # - name: endpoint # Optional. 
    #   value: "http://localhost:4566"
    # - name: sessionToken  # Optional (mandatory if using AssignedRole; for example, temporary accessKey and secretKey)
    #   value: "TOKEN"
    # - name: messageVisibilityTimeout # Optional
    #   value: 10
    # - name: messageRetryLimit # Optional
    #   value: 10
    # - name: messageReceiveLimit # Optional
    #   value: 10
    # - name: sqsDeadLettersQueueName # Optional
    #   value: "myapp-dlq"
    # - name: messageWaitTimeSeconds # Optional
    #   value: 1
    # - name: messageMaxNumber # Optional
    #   value: 10
    # - name: fifo # Optional
    #   value: "true"
    # - name: fifoMessageGroupID # Optional
    #   value: "app1-mgi"
    # - name: disableEntityManagement # Optional
    #   value: "false"
    # - name: disableDeleteOnRetryLimit # Optional
    #   value: "false"
    # - name: assetsManagementTimeoutSeconds # Optional
    #   value: 5
    # - name: concurrencyMode # Optional
    #   value: "single"
    # - name: concurrencyLimit # Optional
    #   value: "0"

Spec metadata fields

| Field | Required | Details | Example |
| ----- | -------- | ------- | ------- |
| accessKey | Y | ID of the AWS account/role with appropriate permissions to SNS and SQS (see below) | "AKIAIOSFODNN7EXAMPLE" |
| secretKey | Y | Secret for the AWS user/role. If using an AssumeRole access, you will also need to provide a sessionToken | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
| region | Y | The AWS region where the SNS/SQS assets are located in or will be created in. See this page for valid regions. Ensure that SNS and SQS are available in that region | "us-east-1" |
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. See the pub/sub broker component file to learn how ConsumerID is automatically generated. | Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of the template tags you can use in your component metadata. |
| endpoint | N | AWS endpoint for the component to use. Only used for local development with, for example, localstack. The endpoint is unnecessary when running against production AWS | "http://localhost:4566" |
| sessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials | "TOKEN" |
| messageReceiveLimit | N | Number of times a message is received, after processing of that message fails, that once reached, results in removing of that message from the queue. If sqsDeadLettersQueueName is specified, messageReceiveLimit is the number of times a message is received, after processing of that message fails, that once reached, results in moving of the message to the SQS dead-letters queue. Default: 10 | 10 |
| sqsDeadLettersQueueName | N | Name of the dead letters queue for this application | "myapp-dlq" |
| messageVisibilityTimeout | N | Amount of time in seconds that a message is hidden from receive requests after it is sent to a subscriber. Default: 10 | 10 |
| messageRetryLimit | N | Number of times to resend a message after processing of that message fails before removing that message from the queue. Default: 10 | 10 |
| messageWaitTimeSeconds | N | The duration (in seconds) for which the call waits for a message to arrive in the queue before returning. If a message is available, the call returns sooner than messageWaitTimeSeconds. If no messages are available and the wait time expires, the call returns successfully with an empty list of messages. Default: 1 | 1 |
| messageMaxNumber | N | Maximum number of messages to receive from the queue at a time. Default: 10, Maximum: 10 | 10 |
| fifo | N | Use SQS FIFO queue to provide message ordering and deduplication. Default: "false". See further details about SQS FIFO | "true", "false" |
| fifoMessageGroupID | N | If fifo is enabled, instructs Dapr to use a custom Message Group ID for the pubsub deployment. This is not mandatory as Dapr creates a custom Message Group ID for each producer, thus ensuring ordering of messages per a Dapr producer. Default: "" | "app1-mgi" |
| disableEntityManagement | N | When set to true, SNS topics, SQS queues and the SQS subscriptions to SNS do not get created automatically. Default: "false" | "true", "false" |
| disableDeleteOnRetryLimit | N | When set to true, after retrying and failing of messageRetryLimit times processing a message, reset the message visibility timeout so that other consumers can try processing, instead of deleting the message from SQS (the default behavior). Default: "false" | "true", "false" |
| assetsManagementTimeoutSeconds | N | Amount of time in seconds, for an AWS asset management operation, before it times out and is cancelled. Asset management operations are any operations performed on STS, SNS and SQS, except message publish and consume operations that implement the default Dapr component retry behavior. The value can be set to any non-negative float/integer. Default: 5 | 0.5, 10 |
| concurrencyMode | N | When messages are received in bulk from SQS, call the subscriber sequentially ("single" message at a time), or concurrently (in "parallel"). Default: "parallel" | "single", "parallel" |
| concurrencyLimit | N | Defines the maximum number of concurrent workers handling messages. This value is ignored when concurrencyMode is set to "single". To avoid limiting the number of concurrent workers, set this to 0. Default: 0 | 100 |

Additional info

Conforming with AWS specifications

Dapr-created SNS topic and SQS queue names conform to AWS specifications. By default, Dapr creates an SQS queue name based on the consumer app-id, therefore Dapr might perform name standardization to meet AWS specifications.

SNS/SQS component behavior

When the pub/sub SNS/SQS component provisions SNS topics, the SQS queues and the subscription behave differently when the component operates only on behalf of a message producer (with no subscriber app deployed) than when a subscriber app is present (with no publisher deployed).

Due to how SNS works without an SQS subscription in a publisher-only setup, the system behaves as a “classic” pub/sub that relies on subscribers listening to topic messages. Without those subscribers, messages:

  • Cannot be passed onwards and are effectively dropped
  • Are not available for future subscribers (no replay of message when the subscriber finally subscribes)

SQS FIFO

Using SQS FIFO (fifo metadata field set to "true") per AWS specifications provides message ordering and deduplication, but incurs a lower SQS processing throughput, among other caveats.

Specifying fifoMessageGroupID limits the number of concurrent consumers of the FIFO queue used to only one but guarantees global ordering of messages published by the app’s Dapr sidecars. See this AWS blog post to better understand the topic of Message Group IDs and FIFO queues.

To avoid losing the order of messages delivered to consumers, the FIFO configuration for the SQS Component requires the concurrencyMode metadata field set to "single".
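
Putting these FIFO-related fields together, a minimal component sketch could look like the following; the credentials, region, and Message Group ID values are placeholders:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: snssqs-pubsub-fifo
spec:
  type: pubsub.aws.snssqs
  version: v1
  metadata:
    - name: accessKey
      value: "AKIAIOSFODNN7EXAMPLE"
    - name: secretKey
      value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    - name: region
      value: "us-east-1"
    - name: fifo               # Use SQS FIFO queues for ordering and deduplication
      value: "true"
    - name: fifoMessageGroupID # Optional custom Message Group ID
      value: "app1-mgi"
    - name: concurrencyMode    # Required as "single" to preserve message order on the consumer
      value: "single"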

Default parallel concurrencyMode

Since v1.8.0, the component supports the "parallel" concurrencyMode as its default mode. In prior versions, the component default behavior was calling the subscriber a single message at a time and waiting for its response.

SQS dead-letter Queues

When configuring the PubSub component with SQS dead-letter queues, the metadata fields messageReceiveLimit and sqsDeadLettersQueueName must both be set to a value. For messageReceiveLimit, the value must be greater than 0 and the sqsDeadLettersQueueName must not be empty string.
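
For example, a dead-letter configuration sketch combining the two fields; the credentials, queue name, and limit are placeholders:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: snssqs-pubsub-dlq
spec:
  type: pubsub.aws.snssqs
  version: v1
  metadata:
    - name: accessKey
      value: "AKIAIOSFODNN7EXAMPLE"
    - name: secretKey
      value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    - name: region
      value: "us-east-1"
    - name: sqsDeadLettersQueueName # Messages that exceed messageReceiveLimit move here
      value: "myapp-dlq"
    - name: messageReceiveLimit     # Must be greater than 0 when a dead-letters queue is set
      value: 10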

SNS/SQS Contention with Dapr

Fundamentally, SNS aggregates messages from multiple publisher topics into a single SQS queue by creating SQS subscriptions to those topics. As a subscriber, the SNS/SQS pub/sub component consumes messages from that sole SQS queue.

However, like any SQS consumer, the component cannot selectively retrieve the messages published to the SNS topics to which it is specifically subscribed. This can result in the component receiving messages originating from topics without associated handlers. Typically, this occurs during:

  • Component initialization: If infrastructure subscriptions are ready before component subscription handlers, or
  • Shutdown: If component handlers are removed before infrastructure subscriptions.

Since this issue affects any SQS consumer of multiple SNS topics, the component cannot prevent consuming messages from topics lacking handlers. When this happens, the component logs an error indicating such messages were erroneously retrieved.

In these situations, the unhandled messages would reappear in SQS with their receive count decremented after each pull. Thus, there is a risk that an unhandled message could exceed its messageReceiveLimit and be lost.

Create an SNS/SQS instance

For local development, the localstack project is used to integrate AWS SNS/SQS. Follow these instructions to run localstack.

To run localstack locally from the command line using Docker, run the following command:

docker run --rm -it -p 4566:4566 -p 4571:4571 -e SERVICES="sts,sns,sqs" -e AWS_DEFAULT_REGION="us-east-1" localstack/localstack

In order to use localstack with your pub/sub component, you need to provide the endpoint configuration in the component metadata. The endpoint is unnecessary when running against production AWS.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: snssqs-pubsub
spec:
  type: pubsub.aws.snssqs
  version: v1
  metadata:
    - name: accessKey
      value: "anyString"
    - name: secretKey
      value: "anyString"
    - name: endpoint
      value: http://localhost:4566
    # Use us-east-1 or any other region if provided to localstack as defined by "AWS_DEFAULT_REGION" envvar
    - name: region
      value: us-east-1

To run localstack on Kubernetes, you can apply the configuration below. Localstack is then reachable at the DNS name http://localstack.default.svc.cluster.local:4566 (assuming this was applied to the default namespace), which should be used as the endpoint.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: localstack
spec:
  # using the selector, we will expose the running deployments
  # this is how Kubernetes knows, that a given service belongs to a deployment
  selector:
    matchLabels:
      app: localstack
  replicas: 1
  template:
    metadata:
      labels:
        app: localstack
    spec:
      containers:
      - name: localstack
        image: localstack/localstack:latest
        ports:
          # Expose the edge endpoint
          - containerPort: 4566
---
kind: Service
apiVersion: v1
metadata:
  name: localstack
  labels:
    app: localstack
spec:
  selector:
    app: localstack
  ports:
  - protocol: TCP
    port: 4566
    targetPort: 4566
  type: LoadBalancer

In order to run in AWS, create or assign an IAM user with permissions to the SNS and SQS services, with a policy like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "YOUR_POLICY_NAME",
      "Effect": "Allow",
      "Action": [
        "sns:CreateTopic",
        "sns:GetTopicAttributes",
        "sns:ListSubscriptionsByTopic",
        "sns:Publish",
        "sns:Subscribe",
        "sns:TagResource",
        "sqs:ChangeMessageVisibility",
        "sqs:CreateQueue",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl",
        "sqs:ReceiveMessage",
        "sqs:SetQueueAttributes",
        "sqs:TagQueue"
      ],
      "Resource": [
        "arn:aws:sns:AWS_REGION:AWS_ACCOUNT_ID:*",
        "arn:aws:sqs:AWS_REGION:AWS_ACCOUNT_ID:*"
      ]
    }
  ]
}

Plug the AWS account ID and AWS account secret into the accessKey and secretKey in the component metadata, using Kubernetes secrets and secretKeyRef.
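
A minimal sketch of what this could look like, assuming a Kubernetes secret named aws-secret with accessKey and secretKey keys; the secret name and secret store name are illustrative:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: snssqs-pubsub
spec:
  type: pubsub.aws.snssqs
  version: v1
  metadata:
    - name: accessKey
      secretKeyRef:
        name: aws-secret   # assumed Kubernetes secret name
        key: accessKey
    - name: secretKey
      secretKeyRef:
        name: aws-secret
        key: secretKey
    - name: region
      value: "us-east-1"
auth:
  secretStore: <SECRET_STORE_NAME>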

Alternatively, let’s say you want to provision the SNS and SQS assets using your own tool of choice (for example, Terraform) while preventing Dapr from doing so dynamically. You need to enable disableEntityManagement and assign your Dapr-using application with an IAM Role, with a policy like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "YOUR_POLICY_NAME",
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage",
        "sqs:ChangeMessageVisibility",
        "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes",
        "sns:Publish",
        "sns:ListSubscriptionsByTopic",
        "sns:GetTopicAttributes"

      ],
      "Resource": [
        "arn:aws:sns:AWS_REGION:AWS_ACCOUNT_ID:APP_TOPIC_NAME",
        "arn:aws:sqs:AWS_REGION:AWS_ACCOUNT_ID:APP_ID"
      ]
    }
  ]
}

In the above example, you are running your applications on an EKS cluster with dynamic assets creation (the default Dapr behavior).

1.3 - Azure Event Hubs

Detailed documentation on the Azure Event Hubs pubsub component

Component format

To set up an Azure Event Hubs pub/sub, create a component of type pubsub.azure.eventhubs. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

Apart from the configuration metadata fields shown below, Azure Event Hubs also supports Azure Authentication mechanisms.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhubs-pubsub
spec:
  type: pubsub.azure.eventhubs
  version: v1
  metadata:
    # Either connectionString or eventHubNamespace is required
    # Use connectionString when *not* using Microsoft Entra ID
    - name: connectionString
      value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
    # Use eventHubNamespace when using Microsoft Entra ID
    - name: eventHubNamespace
      value: "namespace"
    - name: consumerID # Optional. If not supplied, the runtime will create one.
      value: "channel1"
    - name: enableEntityManagement
      value: "false"
    - name: enableInOrderMessageDelivery
      value: "false"
    # The following four properties are needed only if enableEntityManagement is set to true
    - name: resourceGroupName
      value: "test-rg"
    - name: subscriptionID
      value: "value of Azure subscription ID"
    - name: partitionCount
      value: "1"
    - name: messageRetentionInDays
      value: "3"
    # Checkpoint store attributes
    - name: storageAccountName
      value: "myeventhubstorage"
    - name: storageAccountKey
      value: "112233445566778899"
    - name: storageContainerName
      value: "myeventhubstoragecontainer"
    # Alternative to passing storageAccountKey
    - name: storageConnectionString
      value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"

Spec metadata fields

| Field | Required | Details | Example |
| ----- | -------- | ------- | ------- |
| connectionString | Y* | Connection string for the Event Hub or the Event Hub namespace. Mutually exclusive with the eventHubNamespace field. Required when not using Microsoft Entra ID Authentication | "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}" or "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}" |
| eventHubNamespace | Y* | The Event Hub Namespace name. Mutually exclusive with the connectionString field. Required when using Microsoft Entra ID Authentication | "namespace" |
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. | Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of the template tags you can use in your component metadata. |
| enableEntityManagement | N | Boolean value to allow management of the EventHub namespace and storage account. Default: false | "true", "false" |
| enableInOrderMessageDelivery | N | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes partitionKey is set when publishing or posting to ensure ordering across partitions. Default: false | |
| storageAccountName | Y | Storage account name to use for the checkpoint store. | "myeventhubstorage" |
| storageAccountKey | Y* | Storage account key for the checkpoint store account. When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | "112233445566778899" |
| storageConnectionString | Y* | Connection string for the checkpoint store, alternative to specifying storageAccountKey | "DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>" |
| storageContainerName | Y | Storage container name for the storage account name. | "myeventhubstoragecontainer" |
| resourceGroupName | N | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | "test-rg" |
| subscriptionID | N | Azure subscription ID value. Required when entity management is enabled | "azure subscription id" |
| partitionCount | N | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: "1" | "2" |
| messageRetentionInDays | N | Number of days to retain messages for in the newly created Event Hub namespace. Used only when entity management is enabled. Default: "1" | "90" |

Microsoft Entra ID authentication

The Azure Event Hubs pub/sub component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Example Configuration

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhubs-pubsub
spec:
  type: pubsub.azure.eventhubs
  version: v1
  metadata:
    # Azure Authentication Used
    - name: azureTenantId
      value: "***"
    - name: azureClientId
      value: "***"
    - name: azureClientSecret
      value: "***"
    - name: eventHubNamespace 
      value: "namespace"
    - name: enableEntityManagement
      value: "false"
    # The following four properties are needed only if enableEntityManagement is set to true
    - name: resourceGroupName
      value: "test-rg"
    - name: subscriptionID
      value: "value of Azure subscription ID"
    - name: partitionCount
      value: "1"
    - name: messageRetentionInDays
    # Checkpoint store attributes
    # In this case, we're using Microsoft Entra ID to access the storage account too
    - name: storageAccountName
      value: "myeventhubstorage"
    - name: storageContainerName
      value: "myeventhubstoragecontainer"

Sending and receiving multiple messages

Azure Event Hubs supports sending and receiving multiple messages in a single operation using the bulk pub/sub API.

Configuring bulk publish

To set the metadata for bulk publish operation, set the query parameters on the HTTP request or the gRPC metadata, as documented in the API reference.

| Metadata | Default |
| -------- | ------- |
| metadata.maxBulkPubBytes | 1000000 |

Configuring bulk subscribe

When subscribing to a topic, you can configure bulkSubscribe options. Refer to Subscribing messages in bulk for more details and to learn more about the bulk subscribe API.

| Configuration | Default |
| ------------- | ------- |
| maxMessagesCount | 100 |
| maxAwaitDurationMs | 10000 |
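
As with the Kafka component, bulk subscribe is enabled per subscription. A minimal declarative sketch using the defaults above; the topic, route, and scope names are illustrative:

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: eventhubs-pubsub
  bulkSubscribe:
    enabled: true
    maxMessagesCount: 100
    maxAwaitDurationMs: 10000
scopes:
- orderprocessing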

Configuring checkpoint frequency

When subscribing to a topic, you can configure the checkpointing frequency in a partition by setting the metadata in the HTTP or gRPC subscribe request. This metadata enables checkpointing after the configured number of events within a partition event sequence. Disable checkpointing by setting the frequency to 0.

Learn more about checkpointing.

| Metadata | Default |
| -------- | ------- |
| metadata.checkPointFrequencyPerPartition | 1 |

The following example shows a sample subscription file for a declarative subscription using the checkPointFrequencyPerPartition metadata. Similarly, you can also pass the metadata in programmatic subscriptions.

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes: 
    default: /checkout
  pubsubname: order-pub-sub
  metadata:
    checkPointFrequencyPerPartition: 1
scopes:
- orderprocessing
- checkout

Create an Azure Event Hub

Follow the instructions on the documentation to set up Azure Event Hubs.

Because this component uses Azure Storage as checkpoint store, you will also need an Azure Storage Account. Follow the instructions on the documentation to manage the storage account access keys.

See the documentation on how to get the Event Hubs connection string (note this is not for the Event Hubs namespace).

Create consumer groups for each subscriber

For every Dapr app that wants to subscribe to events, create an Event Hubs consumer group with the name of the Dapr app ID. For example, a Dapr app running on Kubernetes with dapr.io/app-id: "myapp" will need an Event Hubs consumer group named myapp.

Note: Dapr passes the name of the consumer group to the Event Hub, so this is not supplied in the metadata.

Entity Management

When entity management is enabled in the metadata, as long as the application has the right role and permissions to manipulate the Event Hub namespace, Dapr can automatically create the Event Hub and consumer group for you.

The Event Hub name is the topic field in the incoming request to publish or subscribe to, while the consumer group name is the name of the Dapr app which subscribes to a given Event Hub. For example, a Dapr app running on Kubernetes with dapr.io/app-id: "myapp" requires an Event Hubs consumer group named myapp.

Entity management is only possible when using Microsoft Entra ID Authentication and not using a connection string.

Dapr passes the name of the consumer group to the Event Hub, so this is not supplied in the metadata.

Receiving custom properties

By default, Dapr does not forward custom properties. However, by setting the subscription metadata requireAllProperties to "true", you can receive custom properties as HTTP headers.

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes: 
    default: /checkout
  pubsubname: order-pub-sub
  metadata:
    requireAllProperties: "true"

The same can be achieved using the Dapr SDK:

[Topic("order-pub-sub", "orders")]
[TopicMetadata("requireAllProperties", "true")]
[HttpPost("checkout")]
public ActionResult Checkout(Order order, [FromHeader] int priority)
{
    return Ok();
}

Subscribing to Azure IoT Hub Events

Azure IoT Hub provides an endpoint that is compatible with Event Hubs, so the Azure Event Hubs pubsub component can also be used to subscribe to Azure IoT Hub events.

The device-to-cloud events created by Azure IoT Hub devices will contain additional IoT Hub System Properties, and the Azure Event Hubs pubsub component for Dapr will return the following as part of the response metadata:

| System Property Name | Description & Routing Query Keyword |
| -------------------- | ----------------------------------- |
| iothub-connection-auth-generation-id | The connectionDeviceGenerationId of the device that sent the message. See IoT Hub device identity properties. |
| iothub-connection-auth-method | The connectionAuthMethod used to authenticate the device that sent the message. |
| iothub-connection-device-id | The deviceId of the device that sent the message. See IoT Hub device identity properties. |
| iothub-connection-module-id | The moduleId of the device that sent the message. See IoT Hub device identity properties. |
| iothub-enqueuedtime | The enqueuedTime in RFC3339 format that the device-to-cloud message was received by IoT Hub. |
| message-id | The user-settable AMQP messageId. |

For example, the headers of a delivered HTTP subscription message would contain:

{
  'user-agent': 'fasthttp',
  'host': '127.0.0.1:3000',
  'content-type': 'application/json',
  'content-length': '120',
  'iothub-connection-device-id': 'my-test-device',
  'iothub-connection-auth-generation-id': '637618061680407492',
  'iothub-connection-auth-method': '{"scope":"module","type":"sas","issuer":"iothub","acceptingIpFilterRule":null}',
  'iothub-connection-module-id': 'my-test-module-a',
  'iothub-enqueuedtime': '2021-07-13T22:08:09Z',
  'message-id': 'my-custom-message-id',
  'x-opt-sequence-number': '35',
  'x-opt-enqueued-time': '2021-07-13T22:08:09Z',
  'x-opt-offset': '21560',
  'traceparent': '00-4655608164bc48b985b42d39865f3834-ed6cf3697c86e7bd-01'
}

1.4 - Azure Service Bus Queues

Detailed documentation on the Azure Service Bus Queues pubsub component

Component format

To set up Azure Service Bus Queues pub/sub, create a component of type pubsub.azure.servicebus.queues. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

This component uses queues on Azure Service Bus; see the official documentation for the differences between topics and queues. For using topics, see the Azure Service Bus Topics pubsub component.

Connection String Authentication

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: servicebus-pubsub
spec:
  type: pubsub.azure.servicebus.queues
  version: v1
  metadata:
  # Required when not using Microsoft Entra ID Authentication
  - name: connectionString
    value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
  # - name: consumerID # Optional
  #   value: channel1
  # - name: timeoutInSec # Optional
  #   value: 60
  # - name: handlerTimeoutInSec # Optional
  #   value: 60
  # - name: disableEntityManagement # Optional
  #   value: "false"
  # - name: maxDeliveryCount # Optional
  #   value: 3
  # - name: lockDurationInSec # Optional
  #   value: 60
  # - name: lockRenewalInSec # Optional
  #   value: 20
  # - name: maxActiveMessages # Optional
  #   value: 10000
  # - name: maxConcurrentHandlers # Optional
  #   value: 10
  # - name: defaultMessageTimeToLiveInSec # Optional
  #   value: 10
  # - name: autoDeleteOnIdleInSec # Optional
  #   value: 3600
  # - name: minConnectionRecoveryInSec # Optional
  #   value: 2
  # - name: maxConnectionRecoveryInSec # Optional
  #   value: 300
  # - name: maxRetriableErrorsPerSec # Optional
  #   value: 10
  # - name: publishMaxRetries # Optional
  #   value: 5
  # - name: publishInitialRetryIntervalInMs # Optional
  #   value: 500

Spec metadata fields

FieldRequiredDetailsExample
connectionStringYShared access policy connection string for the Service Bus. Required unless using Microsoft Entra ID authentication.See example above
consumerIDNConsumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all the template tags you can use in your component metadata.
namespaceNameNParameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication."namespace.servicebus.windows.net"
timeoutInSecNTimeout for sending messages and for management operations. Default: 6030
handlerTimeoutInSecNTimeout for invoking the app’s handler. Default: 6030
lockRenewalInSecNDefines the frequency at which buffered message locks will be renewed. Default: 20.20
maxActiveMessagesNDefines the maximum number of messages being processed or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: 10002000
maxConcurrentHandlersNDefines the maximum number of concurrent message handlers. Default: 0 (unlimited)10
disableEntityManagementNWhen set to true, queues and subscriptions do not get created automatically. Default: "false""true", "false"
defaultMessageTimeToLiveInSecNDefault message time to live, in seconds. Used during subscription creation only.10
autoDeleteOnIdleInSecNTime in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: 0 (disabled)3600
maxDeliveryCountNDefines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server.10
lockDurationInSecNDefines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server.30
minConnectionRecoveryInSecNMinimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: 25
maxConnectionRecoveryInSecNMaximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: 300 (5 minutes)600
maxRetriableErrorsPerSecNMaximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: 1010
publishMaxRetriesNThe max number of retries for when Azure Service Bus responds with “too busy” in order to throttle messages. Default: 55
publishInitialRetryIntervalInMsNTime in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: 500500

Microsoft Entra ID authentication

The Azure Service Bus Queues pubsub component supports authentication using all Microsoft Entra ID mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Example Configuration

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: servicebus-pubsub
spec:
  type: pubsub.azure.servicebus.queues
  version: v1
  metadata:
  - name: namespaceName
    # Required when using Azure Authentication.
    # Must be a fully-qualified domain name
    value: "servicebusnamespace.servicebus.windows.net"
  - name: azureTenantId
    value: "***"
  - name: azureClientId
    value: "***"
  - name: azureClientSecret
    value: "***"

Message metadata

Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message.

Sending a message with metadata

To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented here.

  • metadata.MessageId
  • metadata.CorrelationId
  • metadata.SessionId
  • metadata.Label
  • metadata.ReplyTo
  • metadata.PartitionKey
  • metadata.To
  • metadata.ContentType
  • metadata.ScheduledEnqueueTimeUtc
  • metadata.ReplyToSessionId
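
For example, to set the MessageId and ContentType on the outgoing Service Bus message when publishing over HTTP, append them as query parameters. This is a sketch; the pubsub name servicebus-pubsub and topic orders are placeholders:

curl -X POST 'http://localhost:3500/v1.0/publish/servicebus-pubsub/orders?metadata.MessageId=order-123&metadata.ContentType=application/json' \
  -H "Content-Type: application/json" \
  -d '{"orderId": "123"}'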

Receiving a message with metadata

When Dapr calls your application, it attaches Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata. In addition to the settable metadata listed above, you can also access the following read-only message metadata.

  • metadata.DeliveryCount
  • metadata.LockedUntilUtc
  • metadata.LockToken
  • metadata.EnqueuedTimeUtc
  • metadata.SequenceNumber

To find out more details on the purpose of any of these metadata properties refer to the official Azure Service Bus documentation.

In addition, all entries of ApplicationProperties from the original Azure Service Bus message are appended as metadata.<application property's name>.

Sending and receiving multiple messages

Azure Service Bus supports sending and receiving multiple messages in a single operation using the bulk pub/sub API.

Configuring bulk publish

To set the metadata for the bulk publish operation, set the query parameters on the HTTP request or the gRPC metadata as documented here. An example request is shown after the table below.

MetadataDefault
metadata.maxBulkPubBytes131072 (128 KiB)
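
As a sketch of a bulk publish request over HTTP, assuming the alpha bulk publish endpoint and placeholder pubsub and topic names, with maxBulkPubBytes lowered via the query string:

curl -X POST 'http://localhost:3500/v1.0-alpha1/publish/bulk/servicebus-pubsub/orders?metadata.maxBulkPubBytes=65536' \
  -H "Content-Type: application/json" \
  -d '[
        { "entryId": "1", "event": { "orderId": "100" }, "contentType": "application/json" },
        { "entryId": "2", "event": { "orderId": "200" }, "contentType": "application/json" }
      ]'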

Configuring bulk subscribe

When subscribing to a topic, you can configure bulkSubscribe options. Refer to Subscribing messages in bulk for more details. Learn more about the bulk subscribe API.

ConfigurationDefault
maxMessagesCount100
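
For example, a declarative subscription can enable bulk subscribe as in the following sketch (maxAwaitDurationMs is an additional bulk subscribe option not listed in the table above; the names used are placeholders):

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: order-pub-sub
  bulkSubscribe:
    enabled: true
    maxMessagesCount: 100
    maxAwaitDurationMs: 40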

Create an Azure Service Bus broker for queues

Follow the instructions here on setting up Azure Service Bus Queues.

Retry policy and dead-letter queues

By default, an Azure Service Bus Queue has a dead-letter queue. Messages are retried the number of times specified by maxDeliveryCount, which defaults to 10 but can be set up to 2000. These retries happen very rapidly, and the message is put in the dead-letter queue if no success is returned.

Dapr Pub/sub offers its own dead-letter queue concept that lets you control the retry policy and subscribe to the dead-letter queue through Dapr.

  1. Set up a separate queue as the dead-letter queue in the Azure Service Bus namespace, and a resiliency policy that defines how to retry.
  2. Subscribe to the topic to get the failed messages and deal with them.

For example, setting up a dead-letter queue orders-dlq in the subscription and a resiliency policy lets you subscribe to the topic orders-dlq to handle failed messages.
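
A sketch of that setup, using a declarative subscription with a deadLetterTopic and a resiliency policy that retries inbound delivery for the servicebus-pubsub component (all names are placeholders):

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: servicebus-pubsub
  deadLetterTopic: orders-dlq
---
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: pubsub-resiliency
spec:
  policies:
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 3
  targets:
    components:
      servicebus-pubsub:
        inbound:
          retry: pubsubRetry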

For more details on setting up dead-letter queues, see the dead-letter article.

1.5 - Azure Service Bus Topics

Detailed documentation on the Azure Service Bus Topics pubsub component

Component format

To set up Azure Service Bus Topics pub/sub, create a component of type pubsub.azure.servicebus.topics. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

This component uses topics on Azure Service Bus; see the official documentation for the differences between topics and queues.
For using queues, see the Azure Service Bus Queues pubsub component.

Connection String Authentication

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: servicebus-pubsub
spec:
  type: pubsub.azure.servicebus.topics
  version: v1
  metadata:
  # Required when not using Microsoft Entra ID Authentication
  - name: connectionString
    value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
  # - name: consumerID # Optional: defaults to the app's own ID
  #   value: channel1 
  # - name: timeoutInSec # Optional
  #   value: 60
  # - name: handlerTimeoutInSec # Optional
  #   value: 60
  # - name: disableEntityManagement # Optional
  #   value: "false"
  # - name: maxDeliveryCount # Optional
  #   value: 3
  # - name: lockDurationInSec # Optional
  #   value: 60
  # - name: lockRenewalInSec # Optional
  #   value: 20
  # - name: maxActiveMessages # Optional
  #   value: 10000
  # - name: maxConcurrentHandlers # Optional
  #   value: 10
  # - name: defaultMessageTimeToLiveInSec # Optional
  #   value: 10
  # - name: autoDeleteOnIdleInSec # Optional
  #   value: 3600
  # - name: minConnectionRecoveryInSec # Optional
  #   value: 2
  # - name: maxConnectionRecoveryInSec # Optional
  #   value: 300
  # - name: maxRetriableErrorsPerSec # Optional
  #   value: 10
  # - name: publishMaxRetries # Optional
  #   value: 5
  # - name: publishInitialRetryIntervalInMs # Optional
  #   value: 500

NOTE: The above settings are shared across all topics that use this component.

Spec metadata fields

FieldRequiredDetailsExample
connectionStringYShared access policy connection string for the Service Bus. Required unless using Microsoft Entra ID authentication.See example above
namespaceNameNParameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication."namespace.servicebus.windows.net"
consumerIDNConsumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all the template tags you can use in your component metadata.
timeoutInSecNTimeout for sending messages and for management operations. Default: 6030
handlerTimeoutInSecNTimeout for invoking the app’s handler. Default: 6030
lockRenewalInSecNDefines the frequency at which buffered message locks will be renewed. Default: 20.20
maxActiveMessagesNDefines the maximum number of messages being processed or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: 10002000
maxConcurrentHandlersNDefines the maximum number of concurrent message handlers. Default: 0 (unlimited)10
disableEntityManagementNWhen set to true, queues and subscriptions do not get created automatically. Default: "false""true", "false"
defaultMessageTimeToLiveInSecNDefault message time to live, in seconds. Used during subscription creation only.10
autoDeleteOnIdleInSecNTime in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: 0 (disabled)3600
maxDeliveryCountNDefines the number of attempts the server makes to deliver a message. Used during subscription creation only. Default set by server.10
lockDurationInSecNDefines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server.30
minConnectionRecoveryInSecNMinimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: 25
maxConnectionRecoveryInSecNMaximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: 300 (5 minutes)600
maxRetriableErrorsPerSecNMaximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: 1010
publishMaxRetriesNThe max number of retries for when Azure Service Bus responds with “too busy” in order to throttle messages. Default: 55
publishInitialRetryIntervalInMsNTime in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: 500500

Microsoft Entra ID authentication

The Azure Service Bus Topics pubsub component supports authentication using all Microsoft Entra ID mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Example Configuration

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: servicebus-pubsub
spec:
  type: pubsub.azure.servicebus.topics
  version: v1
  metadata:
  - name: namespaceName
    # Required when using Azure Authentication.
    # Must be a fully-qualified domain name
    value: "servicebusnamespace.servicebus.windows.net"
  - name: azureTenantId
    value: "***"
  - name: azureClientId
    value: "***"
  - name: azureClientSecret
    value: "***"

Message metadata

Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message.

Sending a message with metadata

To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented here.

  • metadata.MessageId
  • metadata.CorrelationId
  • metadata.SessionId
  • metadata.Label
  • metadata.ReplyTo
  • metadata.PartitionKey
  • metadata.To
  • metadata.ContentType
  • metadata.ScheduledEnqueueTimeUtc
  • metadata.ReplyToSessionId

Note: The metadata.MessageId property does not set the id property of the cloud event returned by Dapr and should be treated in isolation.

NOTE: If the metadata.SessionId property is not set but the topic requires sessions, then an empty session ID will be used.

NOTE: The metadata.ScheduledEnqueueTimeUtc property supports the RFC1123 and RFC3339 timestamp formats.

Receiving a message with metadata

When Dapr calls your application, it will attach Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata. In addition to the settable metadata listed above, you can also access the following read-only message metadata.

  • metadata.DeliveryCount
  • metadata.LockedUntilUtc
  • metadata.LockToken
  • metadata.EnqueuedTimeUtc
  • metadata.SequenceNumber

To find out more details on the purpose of any of these metadata properties refer to the official Azure Service Bus documentation.

In addition, all entries of ApplicationProperties from the original Azure Service Bus message are appended as metadata.<application property's name>.

Note that all times are populated by the server and are not adjusted for clock skew.

Subscribe to a session enabled topic

To subscribe to a topic that has sessions enabled, you can provide the following properties in the subscription metadata, as shown in the sketch after this list.

  • requireSessions (default: false)
  • sessionIdleTimeoutInSec (default: 60)
  • maxConcurrentSessions (default: 8)
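
For example, a declarative subscription for a session-enabled topic might look like the following sketch (names are placeholders):

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: order-pub-sub
  metadata:
    requireSessions: "true"
    sessionIdleTimeoutInSec: "30"
    maxConcurrentSessions: "4"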

Create an Azure Service Bus broker for topics

Follow the instructions here on setting up Azure Service Bus Topics.

1.6 - GCP

Detailed documentation on the GCP Pub/Sub component

Create a Dapr component

To set up GCP pub/sub, create a component of type pubsub.gcp.pubsub. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcp-pubsub
spec:
  type: pubsub.gcp.pubsub
  version: v1
  metadata:
  - name: type
    value: service_account
  - name: projectId
    value: <PROJECT_ID> # replace
  - name: endpoint # Optional.
    value: "http://localhost:8085"
  - name: consumerID # Optional - defaults to the app's own ID
    value: <CONSUMER_ID>
  - name: identityProjectId
    value: <IDENTITY_PROJECT_ID> # replace
  - name: privateKeyId
    value: <PRIVATE_KEY_ID> #replace
  - name: clientEmail
    value: <CLIENT_EMAIL> #replace
  - name: clientId
    value: <CLIENT_ID> # replace
  - name: authUri
    value: https://accounts.google.com/o/oauth2/auth
  - name: tokenUri
    value: https://oauth2.googleapis.com/token
  - name: authProviderX509CertUrl
    value: https://www.googleapis.com/oauth2/v1/certs
  - name: clientX509CertUrl
    value: https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com #replace PROJECT_NAME
  - name: privateKey
    value: <PRIVATE_KEY> # replace x509 cert
  - name: disableEntityManagement
    value: "false"
  - name: enableMessageOrdering
    value: "false"
  - name: orderingKey # Optional
    value: <ORDERING_KEY>
  - name: maxReconnectionAttempts # Optional
    value: 30
  - name: connectionRecoveryInSec # Optional
    value: 2
  - name: deadLetterTopic # Optional
    value: <EXISTING_PUBSUB_TOPIC>
  - name: maxDeliveryAttempts # Optional
    value: 5
  - name: maxOutstandingMessages # Optional
    value: 1000
  - name: maxOutstandingBytes # Optional
    value: 1000000000
  - name: maxConcurrentConnections # Optional
    value: 10

Spec metadata fields

FieldRequiredDetailsExample
projectIdYGCP project IDmyproject-123
endpointNGCP endpoint for the component to use. Only used for local development, for example with the GCP Pub/Sub Emulator. The endpoint is unnecessary when running against the GCP production API."http://localhost:8085"
consumerIDNThe Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value. The consumerID, along with the topic provided as part of the request, is used to build the Pub/Sub subscription IDCan be set to string value (such as "channel1") or string format value (such as "{podName}", etc.). See all the template tags you can use in your component metadata.
identityProjectIdNIf the GCP pubsub project is different from the identity project, specify the identity project using this attribute"myproject-123"
privateKeyIdNIf using explicit credentials, this field should contain the private_key_id field from the service account json document"my-private-key"
privateKeyNIf using explicit credentials, this field should contain the private_key field from the service account json-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B
clientEmailNIf using explicit credentials, this field should contain the client_email field from the service account json"myservice@myproject-123.iam.gserviceaccount.com"
clientIdNIf using explicit credentials, this field should contain the client_id field from the service account json106234234234
authUriNIf using explicit credentials, this field should contain the auth_uri field from the service account jsonhttps://accounts.google.com/o/oauth2/auth
tokenUriNIf using explicit credentials, this field should contain the token_uri field from the service account jsonhttps://oauth2.googleapis.com/token
authProviderX509CertUrlNIf using explicit credentials, this field should contain the auth_provider_x509_cert_url field from the service account jsonhttps://www.googleapis.com/oauth2/v1/certs
clientX509CertUrlNIf using explicit credentials, this field should contain the client_x509_cert_url field from the service account jsonhttps://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com
disableEntityManagementNWhen set to "true", topics and subscriptions do not get created automatically. Default: "false""true", "false"
enableMessageOrderingNWhen set to "true", subscribed messages will be received in order, depending on publishing and permissions configuration."true", "false"
orderingKeyNThe key provided in the request. It’s used when enableMessageOrdering is set to true to order messages based on that key.“my-orderingkey”
maxReconnectionAttemptsNDefines the maximum number of reconnect attempts. Default: 3030
connectionRecoveryInSecNTime in seconds to wait between connection recovery attempts. Default: 22
deadLetterTopicNName of the GCP Pub/Sub Topic. This topic must exist before using this component."myapp-dlq"
maxDeliveryAttemptsNMaximum number of attempts to deliver the message. If deadLetterTopic is specified, maxDeliveryAttempts is the maximum number of attempts for failed processing of messages. Once that number is reached, the message will be moved to the dead-letter topic. Default: 55
typeNDEPRECATED GCP credentials type. Only service_account is supported. Defaults to service_accountservice_account
maxOutstandingMessagesNMaximum number of outstanding messages a given streaming-pull connection can have. Default: 100050
maxOutstandingBytesNMaximum number of outstanding bytes a given streaming-pull connection can have. Default: 10000000001000000000
maxConcurrentConnectionsNMaximum number of concurrent streaming-pull connections to be maintained. Default: 102
ackDeadlineNMessage acknowledgement duration deadline. Default: 20s1m

GCP Credentials

Since the GCP Pub/Sub component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained further in the Authenticate to GCP Cloud services using client libraries guide.
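
For local development, you can typically establish Application Default Credentials with the gcloud CLI; in a cluster you would more commonly rely on the attached service account or Workload Identity:

# Writes Application Default Credentials to the well-known location
# that the GCP client libraries (and therefore this component) pick up.
gcloud auth application-default login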

Create a GCP Pub/Sub

For local development, the GCP Pub/Sub Emulator is used to test the GCP Pub/Sub Component. Follow these instructions to run the GCP Pub/Sub Emulator.

To run the GCP Pub/Sub Emulator locally using Docker, use the following docker-compose.yaml:

version: '3'
services:
  pubsub:
    image: gcr.io/google.com/cloudsdktool/cloud-sdk:422.0.0-emulators
    ports:
      - "8085:8085"
    container_name: gcp-pubsub
    entrypoint: gcloud beta emulators pubsub start --project local-test-prj --host-port 0.0.0.0:8085

In order to use the GCP Pub/Sub Emulator with your pub/sub component, you need to provide the endpoint configuration in the component metadata. The endpoint is unnecessary when running against the GCP production API.

The projectId attribute must match the --project used in either the docker-compose.yaml or Docker command.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcp-pubsub
spec:
  type: pubsub.gcp.pubsub
  version: v1
  metadata:
  - name: projectId
    value: "local-test-prj"
  - name: consumerID
    value: "testConsumer"
  - name: endpoint
    value: "localhost:8085"

You can use either “explicit” or “implicit” credentials to configure access to your GCP pubsub instance. If using explicit, most fields are required. Implicit relies on Dapr running under a Kubernetes service account (KSA) mapped to a Google service account (GSA) that has the necessary permissions to access pubsub. In implicit mode, only the projectId attribute is needed; all others are optional.
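
As a minimal sketch of implicit mode, the component only needs the project ID; credentials come from the environment (for example, a KSA-to-GSA mapping via Workload Identity):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcp-pubsub
spec:
  type: pubsub.gcp.pubsub
  version: v1
  metadata:
  - name: projectId
    value: "myproject-123"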

Follow the instructions here on setting up a Google Cloud Pub/Sub system.

1.7 - In-memory

Detailed documentation on the In Memory pubsub component

The in-memory pub/sub component operates within a single Dapr sidecar. This is primarily meant for development purposes. State is not replicated across multiple sidecars and is lost when the Dapr sidecar is restarted.

Component format

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.in-memory
  version: v1
  metadata: []

Note: in-memory does not require any specific metadata for the component to work; however, spec.metadata is a required field.

1.8 - JetStream

Detailed documentation on the NATS JetStream component

Component format

To set up JetStream pub/sub, create a component of type pubsub.jetstream. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: jetstream-pubsub
spec:
  type: pubsub.jetstream
  version: v1
  metadata:
  - name: natsURL
    value: "nats://localhost:4222"
  - name: jwt # Optional. Used for decentralized JWT authentication.
    value: "eyJhbGciOiJ...6yJV_adQssw5c"
  - name: seedKey # Optional. Used for decentralized JWT authentication.
    value: "SUACS34K232O...5Z3POU7BNIL4Y"
  - name: tls_client_cert # Optional. Used for TLS Client authentication.
    value: "/path/to/tls.crt"
  - name: tls_client_key # Optional. Used for TLS Client authentication.
    value: "/path/to/tls.key"
  - name: token # Optional. Used for token based authentication.
    value: "my-token"
  - name: name
    value: "my-conn-name"
  - name: streamName
    value: "my-stream"
  - name: durableName 
    value: "my-durable-subscription"
  - name: queueGroupName
    value: "my-queue-group"
  - name: startSequence
    value: 1
  - name: startTime # In Unix format
    value: 1630349391
  - name: flowControl
    value: false
  - name: ackWait
    value: 10s
  - name: maxDeliver
    value: 5
  - name: backOff
    value: "50ms, 1s, 10s"
  - name: maxAckPending
    value: 5000
  - name: replicas
    value: 1
  - name: memoryStorage
    value: false
  - name: rateLimit
    value: 1024
  - name: heartbeat
    value: 15s
  - name: ackPolicy
    value: explicit
  - name: deliverPolicy
    value: all
  - name: domain
    value: hub
  - name: apiPrefix
    value: PREFIX

Spec metadata fields

FieldRequiredDetailsExample
natsURLYNATS server address URL"nats://localhost:4222"
jwtNNATS decentralized authentication JWT"eyJhbGciOiJ...6yJV_adQssw5c"
seedKeyNNATS decentralized authentication seed key"SUACS34K232O...5Z3POU7BNIL4Y"
tls_client_certNNATS TLS Client Authentication Certificate"/path/to/tls.crt"
tls_client_keyNNATS TLS Client Authentication Key"/path/to/tls.key"
tokenNNATS token based authentication"my-token"
nameNNATS connection name"my-conn-name"
streamNameNName of the JetStream Stream to bind to"my-stream"
durableNameNDurable name"my-durable"
queueGroupNameNQueue group name"my-queue"
startSequenceNStart Sequence1
startTimeNStart Time in Unix format1630349391
flowControlNFlow Controltrue
ackWaitNAck Wait10s
maxDeliverNMax Deliver15
backOffNBackOff"50ms, 1s, 5s, 10s"
maxAckPendingNMax Ack Pending5000
replicasNReplicas3
memoryStorageNMemory Storagefalse
rateLimitNRate Limit1024
heartbeatNHeartbeat10s
ackPolicyNAck Policyexplicit
deliverPolicyNOne of: all, last, new, sequence, timeall
domainN[JetStream Leafnodes]HUB
apiPrefixN[JetStream Leafnodes]PREFIX

Create a NATS server

You can run a NATS Server with JetStream enabled locally using Docker:

docker run -d -p 4222:4222 nats:latest -js

You can then interact with the server using the client port: localhost:4222.

Install NATS JetStream on Kubernetes by using Helm:

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install --set nats.jetstream.enabled=true my-nats nats/nats

This installs a single NATS server into the default namespace. To interact with NATS, find the service with:

kubectl get svc my-nats

For more information on helm chart settings, see the Helm chart documentation.

Create JetStream

It is essential to create a NATS JetStream stream for a specific subject. For example, for a NATS server running locally use:

nats -s localhost:4222 stream add myStream --subjects mySubject

Example: Competing consumers pattern

Let’s say you’d like each message to be processed by only one application or pod with the same app-id. Typically, the consumerID metadata spec helps you define competing consumers.

Since consumerID is not supported in NATS JetStream, you need to specify durableName and queueGroupName to achieve the competing consumers pattern. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.jetstream
  version: v1
  metadata:
  - name: name
    value: "my-conn-name"
  - name: streamName
    value: "my-stream"
  - name: durableName 
    value: "my-durable-subscription"
  - name: queueGroupName
    value: "my-queue-group"

1.9 - KubeMQ

Detailed documentation on the KubeMQ pubsub component

Component format

To set up KubeMQ pub/sub, create a component of type pubsub.kubemq. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kubemq-pubsub
spec:
  type: pubsub.kubemq
  version: v1
  metadata:
    - name: address
      value: localhost:50000
    - name: store
      value: false
    - name: consumerID
      value: channel1

Spec metadata fields

FieldRequiredDetailsExample
addressYAddress of the KubeMQ server"localhost:50000"
storeNtype of pubsub, true: pubsub persisted (EventsStore), false: pubsub in-memory (Events)true or false (default is false)
consumerIDNConsumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all the template tags you can use in your component metadata.
clientIDNName for client id connectionsub-client-12345
authTokenNAuth JWT token for the connection. Check out KubeMQ Authenticationew...
groupNSubscriber group for load balancingg1
disableReDeliveryNSet whether the message should be re-delivered in case of an error coming from the applicationtrue or false (default is false)

Create a KubeMQ broker

  1. Obtain KubeMQ Key.
  2. Wait for an email confirmation with your Key

You can run a KubeMQ broker with Docker:

docker run -d -p 8080:8080 -p 50000:50000 -p 9090:9090 -e KUBEMQ_TOKEN=<your-key> kubemq/kubemq

You can then interact with the server using the client port: localhost:50000

  1. Obtain KubeMQ Key.
  2. Wait for an email confirmation with your Key

Then run the following kubectl commands:

kubectl apply -f https://deploy.kubemq.io/init
kubectl apply -f https://deploy.kubemq.io/key/<your-key>

Install KubeMQ CLI

Go to KubeMQ CLI and download the latest version of the CLI.

Browse KubeMQ Dashboard

Open a browser and navigate to http://localhost:8080

With the KubeMQ CLI (kubemqctl) installed, run the following command:

kubemqctl get dashboard

Or, with kubectl installed, run the port-forward command:

kubectl port-forward svc/kubemq-cluster-api -n kubemq 8080:8080

KubeMQ Documentation

Visit KubeMQ Documentation for more information.

1.10 - MQTT

Detailed documentation on the MQTT pubsub component

Component format

To set up MQTT pub/sub, create a component of type pubsub.mqtt. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt
  version: v1
  metadata:
  - name: url
    value: "tcp://[username][:password]@host.domain[:port]"
  - name: qos
    value: 1
  - name: retain
    value: "false"
  - name: cleanSession
    value: "false"
  - name: consumerID
    value: "channel1"

Spec metadata fields

FieldRequiredDetailsExample
urlYAddress of the MQTT broker. Can be secretKeyRef to use a secret reference.
Use the tcp:// URI scheme for non-TLS communication.
Use the ssl:// URI scheme for TLS communication.
"tcp://[username][:password]@host.domain[:port]"
consumerIDNThe client ID used to connect to the MQTT broker for the consumer connection. Defaults to the Dapr app ID.
Note: if producerID is not set, -consumer is appended to this value for the consumer connection
Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all the template tags you can use in your component metadata.
producerIDNThe client ID used to connect to the MQTT broker for the producer connection. Defaults to {consumerID}-producer."myMqttProducerApp"
qosNIndicates the Quality of Service Level (QoS) of the message (more info). Defaults to 1.0, 1, 2
retainNDefines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to "false"."true", "false"
cleanSessionNSets the clean_session flag in the connection message to the MQTT broker if "true" (more info). Defaults to "false"."true", "false"
caCertRequired for using TLSCertificate Authority (CA) certificate in PEM format for verifying server TLS certificates."-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"
clientCertRequired for using TLSTLS client certificate in PEM format. Must be used with clientKey."-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"
clientKeyRequired for using TLSTLS client key in PEM format. Must be used with clientCert. Can be secretKeyRef to use a secret reference."-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"

Enabling message delivery retries

The MQTT pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. If the service marks the message as not processed, the message won’t be acknowledged back to the broker. Only if the broker resends the message will it be retried.

To make Dapr use more sophisticated retry policies, you can apply a retry resiliency policy to the MQTT pub/sub component.

There is a crucial difference between the two retry approaches:

  1. Re-delivery of unacknowledged messages is completely dependent on the broker. Dapr does not guarantee it. Some brokers, like emqx and vernemq, support it, but it is not part of the MQTT3 spec.

  2. Using a retry resiliency policy makes the same Dapr sidecar retry delivering the message, so it is the same Dapr sidecar and the same app receiving the same message.

Communication using TLS

To configure communication using TLS, ensure that the MQTT broker (for example, mosquitto) is configured to support certificates and provide the caCert, clientCert, clientKey metadata in the component configuration. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt
  version: v1
  metadata:
  - name: url
    value: "ssl://host.domain[:port]"
  - name: qos
    value: 1
  - name: retain
    value: "false"
  - name: cleanSession
    value: "false"
  - name: caCert
    value: ${{ myLoadedCACert }}
  - name: clientCert
    value: ${{ myLoadedClientCert }}
  - name: clientKey
    secretKeyRef:
      name: myMqttClientKey
      key: myMqttClientKey
auth:
  secretStore: <SECRET_STORE_NAME>

Note that while the caCert and clientCert values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
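
For reference, a minimal mosquitto listener configuration that requires client certificates might look like the following sketch (the paths are placeholders; consult the mosquitto documentation for your deployment):

# mosquitto.conf (sketch)
listener 8883
cafile /mosquitto/certs/ca.crt
certfile /mosquitto/certs/server.crt
keyfile /mosquitto/certs/server.key
require_certificate true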

Consuming a shared topic

When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, invoking each dapr run with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, multiple instances of an application pod will share the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component’s consumerID metadata with a {uuid} tag, which will give each instance a randomly generated consumerID value on start up. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt
  version: v1
  metadata:
    - name: consumerID
      value: "{uuid}"
    - name: url
      value: "tcp://admin:public@localhost:1883"
    - name: qos
      value: 1
    - name: retain
      value: "false"
    - name: cleanSession
      value: "true"

Note that in this case, the value of the consumer ID is random every time Dapr restarts, so we are setting cleanSession to true as well.

Create an MQTT broker

You can run an MQTT broker locally using Docker:

docker run -d -p 1883:1883 -p 9001:9001 --name mqtt eclipse-mosquitto:1.6

You can then interact with the server using the client port: mqtt://localhost:1883

You can run an MQTT broker in Kubernetes using the following YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app-name: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app-name: mqtt-broker
  template:
    metadata:
      labels:
        app-name: mqtt-broker
    spec:
      containers:
        - name: mqtt
          image: eclipse-mosquitto:1.6
          imagePullPolicy: IfNotPresent
          ports:
            - name: default
              containerPort: 1883
              protocol: TCP
            - name: websocket
              containerPort: 9001
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app-name: mqtt-broker
spec:
  type: ClusterIP
  selector:
    app-name: mqtt-broker
  ports:
    - port: 1883
      targetPort: default
      name: default
      protocol: TCP
    - port: 9001
      targetPort: websocket
      name: websocket
      protocol: TCP

You can then interact with the server using the client port: tcp://mqtt-broker.default.svc.cluster.local:1883

1.11 - MQTT3

Detailed documentation on the MQTT3 pubsub component

Component format

To set up an MQTT3 pub/sub, create a component of type pubsub.mqtt3. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt3
  version: v1
  metadata:
    - name: url
      value: "tcp://[username][:password]@host.domain[:port]"
    # Optional
    - name: retain
      value: "false"
    - name: cleanSession
      value: "false"
    - name: qos
      value: "1"
    - name: consumerID
      value: "channel1"

Spec metadata fields

FieldRequiredDetailsExample
urlYAddress of the MQTT broker. Can be secretKeyRef to use a secret reference.
Use the tcp:// URI scheme for non-TLS communication.
Use the ssl:// URI scheme for TLS communication.
"tcp://[username][:password]@host.domain[:port]"
consumerIDNThe client ID used to connect to the MQTT broker. Defaults to the Dapr app ID.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all the template tags you can use in your component metadata.
retainNDefines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to "false"."true", "false"
cleanSessionNSets the clean_session flag in the connection message to the MQTT broker if "true" (more info). Defaults to "false"."true", "false"
caCertRequired for using TLSCertificate Authority (CA) certificate in PEM format for verifying server TLS certificates.See example below
clientCertRequired for using TLSTLS client certificate in PEM format. Must be used with clientKey.See example below
clientKeyRequired for using TLSTLS client key in PEM format. Must be used with clientCert. Can be secretKeyRef to use a secret reference.See example below
qosNIndicates the Quality of Service Level (QoS) of the message (more info). Defaults to 1.0, 1, 2

Communication using TLS

To configure communication using TLS, ensure that the MQTT broker (for example, emqx) is configured to support certificates and provide the caCert, clientCert, clientKey metadata in the component configuration. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt3
  version: v1
  metadata:
    - name: url
      value: "ssl://host.domain[:port]"
  # TLS configuration
    - name: caCert
      value: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - name: clientCert
      value: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - name: clientKey
      secretKeyRef:
        name: myMqttClientKey
        key: myMqttClientKey
    # Optional
    - name: retain
      value: "false"
    - name: cleanSession
      value: "false"
    - name: qos
      value: 1

Note that while the caCert and clientCert values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.

Consuming a shared topic

When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, invoking each dapr run with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, multiple instances of an application pod will share the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component’s consumerID metadata with a {uuid} tag (which will give each instance a randomly generated value on start up) or {podName} (which will use the Pod’s name on Kubernetes). For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
spec:
  type: pubsub.mqtt3
  version: v1
  metadata:
    - name: consumerID
      value: "{uuid}"
    - name: cleanSession
      value: "true"
    - name: url
      value: "tcp://admin:public@localhost:1883"
    - name: qos
      value: 1
    - name: retain
      value: "false"

Note that in this case, the value of the consumer ID is random every time Dapr restarts, so you should set cleanSession to true as well.

It is recommended to use StatefulSets with shared subscriptions.

Create an MQTT3 broker

You can run an MQTT broker like emqx locally using Docker:

docker run -d -p 1883:1883 --name mqtt emqx:latest

You can then interact with the server using the client port: tcp://localhost:1883

You can run an MQTT3 broker in Kubernetes using the following YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app-name: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app-name: mqtt-broker
  template:
    metadata:
      labels:
        app-name: mqtt-broker
    spec:
      containers:
        - name: mqtt
          image: emqx:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: default
              containerPort: 1883
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app-name: mqtt-broker
spec:
  type: ClusterIP
  selector:
    app-name: mqtt-broker
  ports:
    - port: 1883
      targetPort: default
      name: default
      protocol: TCP

You can then interact with the server using the client port: tcp://mqtt-broker.default.svc.cluster.local:1883

1.12 - Pulsar

Detailed documentation on the Pulsar pubsub component

Component format

To set up Apache Pulsar pub/sub, create a component of type pubsub.pulsar. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

For more information on Apache Pulsar, read the official docs.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pulsar-pubsub
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "localhost:6650"
  - name: enableTLS
    value: "false"
  - name: tenant
    value: "public"
  - name: token
    value: "eyJrZXlJZCI6InB1bHNhci1wajU0cXd3ZHB6NGIiLCJhbGciOiJIUzI1NiJ9.eyJzd"
  - name: consumerID
    value: "channel1"
  - name: namespace
    value: "default"
  - name: persistent
    value: "true"
  - name: disableBatching
    value: "false"
  - name: receiverQueueSize
    value: "1000"
  - name: <topic-name>.jsonschema # sets a json schema validation for the configured topic
    value: |
      {
        "type": "record",
        "name": "Example",
        "namespace": "test",
        "fields": [
          {"name": "ID","type": "int"},
          {"name": "Name","type": "string"}
        ]
      }
  - name: <topic-name>.avroschema # sets an avro schema validation for the configured topic
    value: |
      {
        "type": "record",
        "name": "Example",
        "namespace": "test",
        "fields": [
          {"name": "ID","type": "int"},
          {"name": "Name","type": "string"}
        ]
      }

Spec metadata fields

FieldRequiredDetailsExample
hostYAddress of the Pulsar broker. Default is "localhost:6650""localhost:6650" OR "http://pulsar-pj54qwwdpz4b-pulsar.ap-sg.public.pulsar.com:8080"
enableTLSNEnable TLS. Default: "false""true", "false"
tenantNThe topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and are spread across clusters. Default: "public""public"
consumerIDNUsed to set the subscription name or consumer ID.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all the template tags you can use in your component metadata.
namespaceNThe administrative unit of the topic, which acts as a grouping mechanism for related topics. Default: "default""default"
persistentNPulsar supports two kinds of topics: persistent and non-persistent. With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
disableBatchingNDisable batching. When batching is enabled, the default batch delay is 10 ms and the default batch size is 1000 messages. Setting disableBatching: true makes the producer send messages individually. Default: "false""true", "false"
receiverQueueSizeNSets the size of the consumer receiver queue. Controls how many messages can be accumulated by the consumer before it is explicitly called to read messages by Dapr. Default: "1000""1000"
batchingMaxPublishDelayNbatchingMaxPublishDelay sets the time period within which the messages sent will be batched, if batching is enabled. If set to a non-zero value, messages will be queued until this time interval elapses, or until batchingMaxMessages (see below) or batchingMaxSize (see below) is reached. There are two valid formats: a fraction with a unit suffix, or a pure digital format that is processed as milliseconds. Valid time units are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”. Default: "10ms""10ms", "10"
batchingMaxMessagesNbatchingMaxMessages sets the maximum number of messages permitted in a batch. If set to a value greater than 1, messages will be queued until this threshold is reached, batchingMaxSize (see below) has been reached, or the batch interval has elapsed. Default: "1000""1000"
batchingMaxSizeNbatchingMaxSize sets the maximum number of bytes permitted in a batch. If set to a value greater than 1, messages will be queued until this threshold is reached or batchingMaxMessages (see above) has been reached or the batch interval has elapsed. Default: "128KB""131072"
.jsonschemaNEnforces JSON schema validation for the configured topic.
.avroschemaNEnforces Avro schema validation for the configured topic.
publicKeyNA public key to be used for publisher and consumer encryption. Value can be one of two options: file path for a local PEM cert, or the cert data string value
privateKeyNA private key to be used for consumer encryption. Value can be one of two options: file path for a local PEM cert, or the cert data string value
keysNA comma delimited string containing names of Pulsar session keys. Used in conjunction with publicKey for publisher encryption
processModeNEnable processing multiple messages at once. Default: "async""async", "sync"
subscribeTypeNPulsar supports four kinds of subscription types. Default: "shared""shared", "exclusive", "failover", "key_shared"
subscribeInitialPositionNSubscription position is the initial position to which the cursor is set when consumption starts. Default: "latest""latest", "earliest"
subscribeModeNSubscription mode indicates the cursor persistence; a durable subscription retains messages and persists the current position. Default: "durable""durable", "non_durable"
partitionKeyNSets the key of the message for routing policy. Default: ""
maxConcurrentHandlersNDefines the maximum number of concurrent message handlers. Default: 10010
replicateSubscriptionStateNEnable replication of subscription state across geo-replicated Pulsar clusters. Default: "false""true", "false"

Authenticate using Token

To authenticate to Pulsar using a static JWT token, you can use the following metadata field:

FieldRequiredDetailsExample
tokenNToken used for authentication.How to create Pulsar token
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "pulsar.example.com:6650"
  - name: token
    secretKeyRef:
      name: pulsar
      key:  token

Authenticate using OIDC

Since v3.0, Pulsar supports OIDC authentication. To enable OIDC authentication, you need to provide the following OAuth2 parameters to the component spec. OAuth2 authentication cannot be used in combination with token authentication. It is recommended that you use a secret reference for the client secret. The Pulsar OAuth2 authenticator is not specifically compliant with OIDC, so it is your responsibility to ensure fields are compliant. For example, the issuer URL must use the https protocol, the requested scopes include openid, etc. If the oauth2TokenCAPEM field is omitted, the system’s certificate pool is used for connecting to the OAuth2 issuer if using https.

FieldRequiredDetailsExample
oauth2TokenURLNURL to request the OIDC client_credentials token from. Must not be empty."https://oauth.example.com/o/oauth2/token"
oauth2TokenCAPEMNCA PEM certificate bundle to connect to the OAuth2 issuer. If not defined, the system’s certificate pool will be used."---BEGIN CERTIFICATE---\n...\n---END CERTIFICATE---"
oauth2ClientIDNOIDC client ID. Must not be empty."my-client-id"
oauth2ClientSecretNOIDC client secret. Must not be empty."my-client-secret"
oauth2AudiencesNComma separated list of audiences to request for. Must not be empty."my-audience-1,my-audience-2"
oauth2ScopesNComma separated list of scopes to request. Must not be empty."openid,profile,email"
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "pulsar.example.com:6650"
  - name: oauth2TokenURL
    value: https://oauth.example.com/o/oauth2/token
  - name: oauth2TokenCAPEM
    value: "---BEGIN CERTIFICATE---\n...\n---END CERTIFICATE---"
  - name: oauth2ClientID
    value: my-client-id
  - name: oauth2ClientSecret
    secretKeyRef:
      name: pulsar-oauth2
      key:  my-client-secret
  - name: oauth2Audiences
    value: "my.pulsar.example.com,another.pulsar.example.com"
  - name: oauth2Scopes
    value: "openid,profile,email"

Enabling message delivery retries

The Pulsar pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once; it is not retried in case of failures. To make Dapr use more sophisticated retry policies, you can apply a retry resiliency policy to the Pulsar pub/sub component. Note that it will be the same Dapr sidecar retrying the redelivery of the message to the same app instance, not other instances.

Delay queue

When invoking the Pulsar pub/sub, it’s possible to provide an optional delay queue by using the metadata query parameters in the request URL.

These optional parameter names are metadata.deliverAt or metadata.deliverAfter:

  • deliverAt: Delay message to deliver at a specified time (RFC3339 format); for example, "2021-09-01T10:00:00Z"
  • deliverAfter: Delay message to deliver after a specified amount of time; for example, "4h5m3s"

Examples:

curl -X POST http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.deliverAt='2021-09-01T10:00:00Z' \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        }
      }'

Or

curl -X POST http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.deliverAfter='4h5m3s' \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        }
      }'

E2E Encryption

Dapr supports setting public and private key pairs to enable Pulsar’s end-to-end encryption feature.

Enabling publisher encryption from file certs

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "localhost:6650"
  - name: publicKey
    value: ./public.key
  - name: keys
    value: myapp.key

Enabling consumer encryption from file certs

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "localhost:6650"
  - name: publicKey
    value: ./public.key
  - name: privateKey
    value: ./private.key

Enabling publisher encryption from value

Note: It is recommended to reference the public key from a secret.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "localhost:6650"
  - name: publicKey
    value:  "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1KDAM4L8RtJ+nLaXBrBh\nzVpvTemsKVZoAct8A+ShepOHT9lgHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDd\nruXSflvSdmYeFAw3Ypphc1A5oM53wSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/\na3golb36GYFrY0MLFTv7wZ87pmMIPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eK\njpwcg35gccvR6o/UhbKAuc60V1J9Wof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0Qi\nCdpIrXvYtANq0Id6gP8zJvUEdPIgNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ\n3QIDAQAB\n-----END PUBLIC KEY-----\n"
  - name: keys
    value: myapp.key

Enabling consumer encryption from value

Note: It is recommended to reference the public and private keys from a secret.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "localhost:6650"
  - name: publicKey
    value: "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1KDAM4L8RtJ+nLaXBrBh\nzVpvTemsKVZoAct8A+ShepOHT9lgHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDd\nruXSflvSdmYeFAw3Ypphc1A5oM53wSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/\na3golb36GYFrY0MLFTv7wZ87pmMIPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eK\njpwcg35gccvR6o/UhbKAuc60V1J9Wof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0Qi\nCdpIrXvYtANq0Id6gP8zJvUEdPIgNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ\n3QIDAQAB\n-----END PUBLIC KEY-----\n"
  - name: privateKey
    value: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA1KDAM4L8RtJ+nLaXBrBhzVpvTemsKVZoAct8A+ShepOHT9lg\nHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDdruXSflvSdmYeFAw3Ypphc1A5oM53\nwSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/a3golb36GYFrY0MLFTv7wZ87pmMI\nPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eKjpwcg35gccvR6o/UhbKAuc60V1J9\nWof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0QiCdpIrXvYtANq0Id6gP8zJvUEdPIg\nNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ3QIDAQABAoIBAQCKuHnM4ac/eXM7\nQPDVX1vfgyHc3hgBPCtNCHnXfGFRvFBqavKGxIElBvGOcBS0CWQ+Rg1Ca5kMx3TQ\njSweSYhH5A7pe3Sa5FK5V6MGxJvRhMSkQi/lJZUBjzaIBJA9jln7pXzdHx8ekE16\nBMPONr6g2dr4nuI9o67xKrtfViwRDGaG6eh7jIMlEqMMc6WqyhvI67rlVDSTHFKX\njlMcozJ3IT8BtTzKg2Tpy7ReVuJEpehum8yn1ZVdAnotBDJxI07DC1cbOP4M2fHM\ngfgPYWmchauZuTeTFu4hrlY5jg0/WLs6by8r/81+vX3QTNvejX9UdTHMSIfQdX82\nAfkCKUVhAoGBAOvGv+YXeTlPRcYC642x5iOyLQm+BiSX4jKtnyJiTU2s/qvvKkIu\nxAOk3OtniT9NaUAHEZE9tI71dDN6IgTLQlAcPCzkVh6Sc5eG0MObqOO7WOMCWBkI\nlaAKKBbd6cGDJkwGCJKnx0pxC9f8R4dw3fmXWgWAr8ENiekMuvjSfjZ5AoGBAObd\ns2L5uiUPTtpyh8WZ7rEvrun3djBhzi+d7rgxEGdditeiLQGKyZbDPMSMBuus/5wH\nwfi0xUq50RtYDbzQQdC3T/C20oHmZbjWK5mDaLRVzWS89YG/NT2Q8eZLBstKqxkx\ngoT77zoUDfRy+CWs1xvXzgxagD5Yg8/OrCuXOqWFAoGAPIw3r6ELknoXEvihASxU\nS4pwInZYIYGXpygLG8teyrnIVOMAWSqlT8JAsXtPNaBtjPHDwyazfZrvEmEk51JD\nX0tA8M5ah1NYt+r5JaKNxp3P/8wUT6lyszyoeubWJsnFRfSusuq/NRC+1+KDg/aq\nKnSBu7QGbm9JoT2RrmBv5RECgYBRn8Lj1I1muvHTNDkiuRj2VniOSirkUkA2/6y+\nPMKi+SS0tqcY63v4rNCYYTW1L7Yz8V44U5mJoQb4lvpMbolGhPljjxAAU3hVkItb\nvGVRlSCIZHKczADD4rJUDOS7DYxO3P1bjUN4kkyYx+lKUMDBHFzCa2D6Kgt4dobS\n5qYajQKBgQC7u7MFPkkEMqNqNGu5erytQkBq1v1Ipmf9rCi3iIj4XJLopxMgw0fx\n6jwcwNInl72KzoUBLnGQ9PKGVeBcgEgdI+a+tq+1TJo6Ta+hZSx+4AYiKY18eRKG\neNuER9NOcSVJ7Eqkcw4viCGyYDm2vgNV9HJ0VlAo3RDh8x5spEN+mg==\n-----END RSA PRIVATE KEY-----\n"

Partition Key

When invoking the Pulsar pub/sub, it’s possible to provide an optional partition key by using the metadata query parameter in the request url.

The parameter name is partitionKey.

Example:

curl -X POST http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.partitionKey=key1 \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        }
      }'

Message headers

All other metadata key/value pairs (that are not partitionKey) are set as headers in the Pulsar message. For example, set a correlationId for the message:

curl -X POST "http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1" \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        }
      }'

Order guarantee

To ensure that messages arrive in order for each consumer subscribed to a specific key, three conditions must be met.

  1. subscribeType should be set to key_shared.
  2. partitionKey must be set.
  3. processMode should be set to sync.
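
A minimal sketch of the component side of this setup (assuming the subscribeType and processMode metadata fields named above) is shown below; the partitionKey is then supplied per message as described in the Partition Key section:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: host
    value: "pulsar.example.com:6650"
  - name: subscribeType
    value: "key_shared"
  - name: processMode
    value: "sync"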

Create a Pulsar instance

docker run -it \
  -p 6650:6650 \
  -p 8080:8080 \
  --mount source=pulsardata,target=/pulsar/data \
  --mount source=pulsarconf,target=/pulsar/conf \
  apachepulsar/pulsar:2.5.1 \
  bin/pulsar standalone

Refer to the following Helm chart Documentation.

1.13 - RabbitMQ

Detailed documentation on the RabbitMQ pubsub component

Component format

To set up RabbitMQ pub/sub, create a component of type pubsub.rabbitmq. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: rabbitmq-pubsub
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  - name: connectionString
    value: "amqp://localhost:5672"
  - name: protocol
    value: amqp  
  - name: hostname
    value: localhost 
  - name: username
    value: username
  - name: password
    value: password  
  - name: consumerID
    value: channel1
  - name: durable
    value: false
  - name: deletedWhenUnused
    value: false
  - name: autoAck
    value: false
  - name: deliveryMode
    value: 0
  - name: requeueInFailure
    value: false
  - name: prefetchCount
    value: 0
  - name: reconnectWait
    value: 0
  - name: concurrencyMode
    value: parallel
  - name: publisherConfirm
    value: false
  - name: enableDeadLetter # Optional. Enable dead letter or not.
    value: true
  - name: maxLen # Optional max message count in a queue
    value: 3000
  - name: maxLenBytes # Optional maximum length in bytes of a queue.
    value: 10485760
  - name: exchangeKind
    value: fanout
  - name: saslExternal
    value: false
  - name: ttlInSeconds
    value: 60
  - name: clientName
    value: "{podName}"
  - name: heartBeat
    value: 10s
  - name: publishMessagePropertiesToMetadata
    value: "true"

Spec metadata fields

FieldRequiredDetailsExample
connectionStringY*The RabbitMQ connection string. *Mutually exclusive with the protocol, hostname, username, password fieldsamqp://user:pass@localhost:5672
protocolN*The RabbitMQ protocol. *Mutually exclusive with the connectionString fieldamqp
hostnameN*The RabbitMQ hostname. *Mutually exclusive with the connectionString fieldlocalhost
usernameN*The RabbitMQ username. *Mutually exclusive with the connectionString fieldusername
passwordN*The RabbitMQ password. *Mutually exclusive with the connectionString fieldpassword
consumerIDNConsumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of template tags you can use in your component metadata.
durableNWhether or not to use durable queues. Defaults to "false""true", "false"
deletedWhenUnusedNWhether or not the queue should be configured to auto-delete. Defaults to "true""true", "false"
autoAckNWhether or not the queue consumer should auto-ack messages. Defaults to "false""true", "false"
deliveryModeNPersistence mode when publishing messages. Defaults to "0". RabbitMQ treats "2" as persistent, all other numbers as non-persistent"0", "2"
requeueInFailureNWhether or not to requeue when sending a negative acknowledgement in case of a failure. Defaults to "false""true", "false"
prefetchCountNNumber of messages to prefetch. Consider changing this to a non-zero value for production environments. Defaults to "0", which means that all available messages will be pre-fetched."2"
publisherConfirmNIf enabled, client waits for publisher confirms after publishing a message. Defaults to "false""true", "false"
reconnectWaitNHow long to wait (in seconds) before reconnecting if a connection failure occurs"0"
concurrencyModeNparallel is the default, and allows processing multiple messages in parallel (limited by the app-max-concurrency annotation, if configured). Set to single to disable parallel processing. In most situations there’s no reason to change this.parallel, single
enableDeadLetterNEnable forwarding messages that cannot be handled to a dead-letter topic. Defaults to "false""true", "false"
maxLenNThe maximum number of messages of a queue and its dead letter queue (if dead letter enabled). If both maxLen and maxLenBytes are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit."1000"
maxLenBytesNMaximum length in bytes of a queue and its dead letter queue (if dead letter enabled). If both maxLen and maxLenBytes are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit."1048576"
exchangeKindNExchange kind of the rabbitmq exchange. Defaults to "fanout"."fanout","topic"
saslExternalNWith TLS, should the username be taken from an additional field (for example, CN). See RabbitMQ Authentication Mechanisms. Defaults to "false"."true", "false"
ttlInSecondsNSet message TTL at the component level, which can be overwritten by message level TTL per request."60"
caCertRequired for using TLSCertificate Authority (CA) certificate in PEM format for verifying server TLS certificates."-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"
clientCertRequired for using TLSTLS client certificate in PEM format. Must be used with clientKey."-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"
clientKeyRequired for using TLSTLS client key in PEM format. Must be used with clientCert. Can be secretKeyRef to use a secret reference."-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"
clientNameNThis RabbitMQ client-provided connection name is a custom identifier. If set, the identifier is mentioned in RabbitMQ server log entries and management UI. Can be set to {uuid}, {podName}, or {appID}, which is replaced by Dapr runtime to the real value."app1", {uuid}, {podName}, {appID}
heartBeatNDefines the heartbeat interval with the server, detecting the aliveness of the peer TCP connection with the RabbitMQ server. Defaults to "10s"."10s"
publishMessagePropertiesToMetadataNWhether to publish AMQP message properties (headers, message ID, etc.) to the metadata."true", "false"

Communication using TLS

To configure communication using TLS, ensure that the RabbitMQ nodes have TLS enabled and provide the caCert, clientCert, clientKey metadata in the component configuration. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: rabbitmq-pubsub
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  - name: host
    value: "amqps://localhost:5671"
  - name: consumerID
    value: myapp
  - name: durable
    value: false
  - name: deletedWhenUnused
    value: false
  - name: autoAck
    value: false
  - name: deliveryMode
    value: 0
  - name: requeueInFailure
    value: false
  - name: prefetchCount
    value: 0
  - name: reconnectWait
    value: 0
  - name: concurrencyMode
    value: parallel
  - name: publisherConfirm
    value: false
  - name: enableDeadLetter # Optional. Enable dead letter or not.
    value: true
  - name: maxLen # Optional max message count in a queue
    value: 3000
  - name: maxLenBytes # Optional maximum length in bytes of a queue.
    value: 10485760
  - name: exchangeKind
    value: fanout
  - name: saslExternal
    value: false
  - name: caCert
    value: ${{ myLoadedCACert }}
  - name: clientCert
    value: ${{ myLoadedClientCert }}
  - name: clientKey
    secretKeyRef:
      name: myRabbitMQClientKey
      key: myRabbitMQClientKey

Note that while the caCert and clientCert values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.

Enabling message delivery retries

The RabbitMQ pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. When the service returns a result, the message will be marked as consumed regardless of whether it was processed correctly or not. Note that this is common among all Dapr PubSub components and not just RabbitMQ. Dapr can try redelivering a message a second time, when autoAck is set to false and requeueInFailure is set to true.

To make Dapr use more sophisticated retry policies, you can apply a retry resiliency policy to the RabbitMQ pub/sub component.

There is a crucial difference between the two ways to retry messages:

  1. When using autoAck = false and requeueInFailure = true, RabbitMQ is the one responsible for re-delivering messages and any subscriber can get the redelivered message. If you have more than one instance of your consumer, then it’s possible that another consumer will get it. This is usually the better approach because if there’s a transient failure, it’s more likely that a different worker will be in a better position to successfully process the message.
  2. Using Resiliency makes the same Dapr sidecar retry redelivering the messages. So it will be the same Dapr sidecar and the same app receiving the same message.
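
As a minimal sketch of the first approach, the relevant component metadata would look like the following (connection details are placeholders):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: rabbitmq-pubsub
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  - name: connectionString
    value: "amqp://localhost:5672"
  - name: autoAck
    value: "false"
  - name: requeueInFailure
    value: "true"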

Create a RabbitMQ server

You can run a RabbitMQ server locally using Docker:

docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3

You can then interact with the server using the client port: localhost:5672.

The easiest way to install RabbitMQ on Kubernetes is by using the Helm chart:

helm install rabbitmq stable/rabbitmq

Look at the chart output and get the username and password.

This will install RabbitMQ into the default namespace. To interact with RabbitMQ, find the service with: kubectl get svc rabbitmq.

For example, if installing using the example above, the RabbitMQ server client address would be:

rabbitmq.default.svc.cluster.local:5672

Use topic exchange to route messages

Setting exchangeKind to "topic" uses topic exchanges, which are commonly used for multicast routing of messages. To route messages using a topic exchange, you must set the following metadata:

  • routingKey:
    Messages with a routing key are routed to one or many queues based on the routing key defined in the metadata when subscribing.

  • queueName:
    If you don’t set the queueName, only one queue is created, and all routing keys will route to that queue. This means all subscribers will bind to that queue, which won’t give the desired results.

For example, if an app is configured with a routing key keyA and queueName of queue-A:

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: orderspubsub
spec:
  topic: B
  routes: 
    default: /B
  pubsubname: pubsub
  metadata:
    routingKey: keyA
    queueName: queue-A

It will receive messages with routing key keyA, and messages with other routing keys are not received.

// publish messages with routing key `keyA`, and these will be received by the above example.
client.PublishEvent(context.Background(), "pubsub", "B", []byte("this is a message"), dapr.PublishEventWithMetadata(map[string]string{"routingKey": "keyA"}))
// publish messages with routing key `keyB`, and these will not be received by the above example.
client.PublishEvent(context.Background(), "pubsub", "B", []byte("this is another message"), dapr.PublishEventWithMetadata(map[string]string{"routingKey": "keyB"}))

Bind multiple routingKey

Multiple routing keys can be separated by commas.
The example below binds three routing keys: keyA, keyB, and "" (the empty key). Note how the trailing comma binds the empty key.

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: orderspubsub
spec:
  topic: B
  routes: 
    default: /B
  pubsubname: pubsub
  metadata:
    routingKey: keyA,keyB,

For more information see rabbitmq exchanges.

Use priority queues

Dapr supports RabbitMQ priority queues. To set a priority for a queue, use the maxPriority topic subscription metadata.

Declarative priority queue example

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: pubsub
spec:
  topic: checkout
  routes: 
    default: /orders
  pubsubname: order-pub-sub
  metadata:
    maxPriority: 3

Programmatic priority queue example

@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    subscriptions = [
      {
        'pubsubname': 'pubsub',
        'topic': 'checkout',
        'routes': {
          'default': '/orders'
        },
        'metadata': {'maxPriority': '3'}
      }
    ]
    return jsonify(subscriptions)

const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));

const port = 3000

app.get('/dapr/subscribe', (req, res) => {
  res.json([
    {
      pubsubname: "pubsub",
      topic: "checkout",
      routes: {
        default: '/orders'
      },
      metadata: {
        maxPriority: '3'
      }
    }
  ]);
})

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

const appPort = 3000

type subscription struct {
	PubsubName string            `json:"pubsubname"`
	Topic      string            `json:"topic"`
	Metadata   map[string]string `json:"metadata,omitempty"`
	Routes     routes            `json:"routes"`
}

type routes struct {
	Rules   []rule `json:"rules,omitempty"`
	Default string `json:"default,omitempty"`
}

// rule follows the Dapr programmatic subscription schema for routing rules.
type rule struct {
	Match string `json:"match"`
	Path  string `json:"path"`
}

// This handles /dapr/subscribe
func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) {
	t := []subscription{
		{
			PubsubName: "pubsub",
			Topic:      "checkout",
			Routes: routes{
				Default: "/orders",
			},
			Metadata: map[string]string{
				"maxPriority": "3",
			},
		},
	}

	w.WriteHeader(http.StatusOK)
	json.NewEncoder(w).Encode(t)
}

func main() {
	http.HandleFunc("/dapr/subscribe", configureSubscribeHandler)
	http.ListenAndServe(fmt.Sprintf(":%d", appPort), nil)
}

Setting a priority when publishing a message

To set a priority on a message, add the publish metadata key priority to the publish endpoint or SDK method.

curl -X POST http://localhost:3601/v1.0/publish/order-pub-sub/orders?metadata.priority=3 -H "Content-Type: application/json" -d '{"orderId": "100"}'

with DaprClient() as client:
    result = client.publish_event(
        pubsub_name=PUBSUB_NAME,
        topic_name=TOPIC_NAME,
        data=json.dumps(orderId),
        data_content_type='application/json',
        metadata={'priority': '3'})

await client.pubsub.publish(PUBSUB_NAME, TOPIC_NAME, orderId, { 'priority': '3' });

client.PublishEvent(ctx, PUBSUB_NAME, TOPIC_NAME, []byte(strconv.Itoa(orderId)), map[string]string{"priority": "3"})

Use quorum queues

By default, Dapr creates classic queues. To create quorum queues, add the following metadata to your pub/sub subscription:

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: pubsub
spec:
  topic: checkout
  routes: 
    default: /orders
  pubsubname: order-pub-sub
  metadata:
    queueType: quorum

Time-to-live

You can set a time-to-live (TTL) value at either the message or component level. Set the default component-level TTL using the ttlInSeconds field in your component spec.
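
For example, a message-level TTL can be supplied per request through the ttlInSeconds publish metadata; the sketch below reuses the order-pub-sub component name from the earlier examples:

curl -X POST "http://localhost:3601/v1.0/publish/order-pub-sub/orders?metadata.ttlInSeconds=120" \
  -H "Content-Type: application/json" \
  -d '{"orderId": "100"}'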

Single Active Consumer

The RabbitMQ Single Active Consumer setup ensures that only one consumer at a time processes messages from a queue and switches to another registered consumer if the active one is canceled or fails. This approach might be required when it is crucial for messages to be consumed in the exact order they arrive in the queue and if distributed processing with multiple instances is not supported. When this option is enabled on a queue by Dapr, an instance of the Dapr runtime will be the single active consumer. To allow another application instance to take over in case of failure, Dapr runtime must probe the application’s health and unsubscribe from the pub/sub component.

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: pubsub
spec:
  topic: orders
  routes:
    default: /orders
  pubsubname: order-pub-sub
  metadata:
    singleActiveConsumer: "true"

Publishing message properties to metadata

To enable message properties being published in the metadata, set the publishMessagePropertiesToMetadata field to "true" in the component spec. This will include properties such as message ID, timestamp, and headers in the metadata of the published message.

1.14 - Redis Streams

Detailed documentation on the Redis Streams pubsub component

Component format

To set up Redis Streams pub/sub, create a component of type pubsub.redis. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: redis-pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: "KeFg23!"
  - name: consumerID
    value: "channel1"
  - name: useEntraID
    value: "true"
  - name: enableTLS
    value: "false"

Spec metadata fields

FieldRequiredDetailsExample
redisHostYConnection string for the Redis host. If "redisType" is "cluster" it can be multiple hosts separated by commas or just a single hostlocalhost:6379, redis-master.default.svc.cluster.local:6379
redisPasswordNPassword for Redis host. No Default. Can be secretKeyRef to use a secret reference"", "KeFg23!"
redisUsernameNUsername for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rules correctly."", "default"
consumerIDNThe consumer group ID.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of template tags you can use in your component metadata.
useEntraIDNImplements EntraID support for Azure Cache for Redis. Before enabling this:
  • The redisHost name must be specified in the form of "server:port"
  • TLS must be enabled
Learn more about this setting under Create a Redis instance > Azure Cache for Redis
"true", "false"
enableTLSNIf the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to "false""true", "false"
clientCertNThe content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here"----BEGIN CERTIFICATE-----\nMIIC..."
clientKeyNThe content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here"----BEGIN PRIVATE KEY-----\nMIIE..."
redeliverIntervalNThe interval between checking for pending messages to redeliver. Can be either a Go duration string (for example “ms”, “s”, “m”) or a number of milliseconds. Defaults to "60s". "0" disables redelivery."30s", "5000"
processingTimeoutNThe amount of time that a message must be pending before attempting to redeliver it. Can be either a Go duration string (for example “ms”, “s”, “m”) or a number of milliseconds. Defaults to "15s". "0" disables redelivery."60s", "600000"
queueDepthNThe size of the message queue for processing. Defaults to "100"."1000"
concurrencyNThe number of concurrent workers that are processing messages. Defaults to "10"."15"
redisTypeNThe type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node"."cluster"
redisDBNDatabase selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0"."0"
redisMaxRetriesNMaximum number of times to retry commands before giving up. Default is to not retry failed commands."5"
redisMinRetryIntervalNMinimum backoff for redis commands between each retry. Default is "8ms"; "-1" disables backoff."8ms"
redisMaxRetryIntervalNMaximum backoff for redis commands between each retry. Default is "512ms";"-1" disables backoff."5s"
dialTimeoutNDial timeout for establishing new connections. Defaults to "5s"."5s"
readTimeoutNTimeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout."3s"
writeTimeoutNTimeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Defaults is readTimeout."3s"
poolSizeNMaximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU."20"
poolTimeoutNAmount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second."5s"
maxConnAgeNConnection age at which the client retires (closes) the connection. Default is to not close aged connections."30m"
minIdleConnsNMinimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0"."2"
idleCheckFrequencyNFrequency of idle checks made by idle connections reaper. Default is "1m". "-1" disables idle connections reaper."-1"
idleTimeoutNAmount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m". "-1" disables idle timeout check."10m"
failoverNProperty to enable failover configuration. Needs sentinelMasterName to be set. Defaults to "false""true", "false"
sentinelMasterNameNThe sentinel master name. See Redis Sentinel Documentation"", "mymaster"
sentinelUsernameNUsername for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled"username"
sentinelPasswordNPassword for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled"password"
maxLenApproxNMaximum number of items inside a stream. The old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. Defaults to unlimited."10000"
streamTTLNTTL duration for stream entries. Entries older than this duration will be evicted. This is an approximate value, as it’s implemented using Redis stream’s MINID trimming with the ‘~’ modifier. The actual retention may include slightly more entries than strictly defined by the TTL, as Redis optimizes the trimming operation for efficiency by potentially keeping some additional entries."30d"

Create a Redis instance

Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.x or 6.x.

The Dapr CLI will automatically create and set up a Redis Streams instance for you. The Redis instance is installed via Docker when you run dapr init, and the component file is created in the default components directory: $HOME/.dapr/components on Mac/Linux or %USERPROFILE%\.dapr\components on Windows.

You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires Installing Helm.

  1. Install Redis into your cluster.

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install redis bitnami/redis --set image.tag=6.2
    
  2. Run kubectl get pods to see the Redis containers now running in your cluster.

  3. Add redis-master:6379 as the redisHost in your redis.yaml file. For example:

        metadata:
        - name: redisHost
          value: redis-master:6379
    
  4. Next, we’ll get our Redis password, which is slightly different depending on the OS we’re using:

    • Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which will create a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your redis password in a text file called password.txt. Copy the password and delete the two files.

    • Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the outputted password.

    Add this password as the redisPassword value in your redis.yaml file. For example:

        - name: redisPassword
          value: "lhDOkwTlp0"
    
  1. Create an Azure Cache for Redis instance using the official Microsoft documentation.

  2. Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.

    • For the Host name:
      • Navigate to the resource’s Overview page.
      • Copy the Host name value.
    • For your access key:
      • Navigate to Settings > Access Keys.
      • Copy and save your key.
  3. Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.

    • If you’re running a sample, add the host and key to the provided redis.yaml.
    • If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
  4. Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.

    Note: In a production-grade application, follow secret management instructions to securely manage your secrets.

  5. Enable EntraID support:

    • Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
    • Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
  6. Set enableTLS to "true" to support TLS.

Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.
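
For example, a minimal sketch of a component using Entra ID with a user-assigned managed identity (the host and client ID values are placeholders):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: redis-pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: "my-cache.redis.cache.windows.net:6380"
  - name: enableTLS
    value: "true"
  - name: useEntraID
    value: "true"
  - name: azureClientID
    value: "<CLIENT_ID_OF_USER_ASSIGNED_IDENTITY>"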

1.15 - RocketMQ

Detailed documentation on the RocketMQ pubsub component

Component format

To set up RocketMQ pub/sub, create a component of type pubsub.rocketmq. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: rocketmq-pubsub
spec:
  type: pubsub.rocketmq
  version: v1
  metadata:
    - name: instanceName
      value: dapr-rocketmq-test
    - name: consumerGroup
      value: dapr-rocketmq-test-g-c
    - name: producerGroup 
      value: dapr-rocketmq-test-g-p
    - name: consumerID
      value: channel1
    - name: nameSpace
      value: dapr-test
    - name: nameServer
      value: "127.0.0.1:9876,127.0.0.2:9876"
    - name: retries
      value: 3
    - name: consumerModel
      value: "clustering"
    - name: consumeOrderly
      value: false

Spec metadata fields

FieldRequiredDetailsdefaultExample
instanceNameNInstance nametime.Now().String()dapr-rocketmq-test
consumerGroupNConsumer group name. Recommended. If producerGroup is null, groupName is used.dapr-rocketmq-test-g-c
producerGroup (consumerID)NProducer group name. Recommended. If producerGroup is null, consumerID is used. If consumerID is also null, groupName is used.dapr-rocketmq-test-g-p
consumerIDNConsumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of template tags you can use in your component metadata.
groupNameNConsumer/Producer group name. Deprecated.dapr-rocketmq-test-g
nameSpaceNRocketMQ namespacedapr-rocketmq
nameServerDomainNRocketMQ name server domainhttps://my-app.net:8080/nsaddr
nameServerNRocketMQ name server, separated by “,” or “;”127.0.0.1:9876;127.0.0.2:9877,127.0.0.3:9877
accessKeyNAccess Key (Username)"admin"
secretKeyNSecret Key (Password)"password"
securityTokenNSecurity Token
retriesNNumber of retries to send a message to broker33
producerQueueSelector (queueSelector)NProducer Queue selector. There are five implementations of queue selector: hash, random, manual, roundRobin, dapr.daprhash
consumerModelNMessage model that defines how messages are delivered to each consumer client. RocketMQ supports two message models: clustering and broadcasting.clusteringbroadcasting , clustering
fromWhere (consumeFromWhere)NConsuming point on consumer booting. There are three consuming points: CONSUME_FROM_LAST_OFFSET, CONSUME_FROM_FIRST_OFFSET, CONSUME_FROM_TIMESTAMPCONSUME_FROM_LAST_OFFSETCONSUME_FROM_LAST_OFFSET
consumeTimestampNBacktracks consumption time with second precision. Time format is yyyymmddhhmmss. For example, 20131223171201 implies the time of 17:12:01 and date of December 23, 2013time.Now().Add(time.Minute * (-30)).Format("20060102150405")20131223171201
consumeOrderlyNDetermines if it’s an ordered message using FIFO order.falsefalse
consumeMessageBatchMaxSizeNBatch consumption size out of range [1, 1024]51210
consumeConcurrentlyMaxSpanNConcurrently max span offset. This has no effect on sequential consumption. Range: [1, 65535]10001000
maxReconsumeTimesNMax re-consume times. -1 means 16 times. If messages are re-consumed more than {@link maxReconsumeTimes} before success, they’ll be directed to a deletion queue.Orderly message is MaxInt32; Concurrently message is 1616
autoCommitNEnable auto committruefalse
consumeTimeoutNMaximum amount of time a message may block the consuming thread. Time unit: Minute1515
consumerPullTimeoutNThe socket timeout in milliseconds
pullIntervalNMessage pull interval100100
pullBatchSizeNThe number of messages pulled from the broker at a time. If pullBatchSize is null, use ConsumerBatchSize. pullBatchSize out of range [1, 1024]3210
pullThresholdForQueueNFlow control threshold on queue level. Each message queue will cache a maximum of 1000 messages by default. Consider the PullBatchSize - the instantaneous value may exceed the limit. Range: [1, 65535]10241000
pullThresholdForTopicNFlow control threshold on topic level. The value of pullThresholdForQueue will be overwritten and calculated based on pullThresholdForTopic if it isn’t unlimited. For example, if the value of pullThresholdForTopic is 1000 and 10 message queues are assigned to this consumer, then pullThresholdForQueue will be set to 100. Range: [1, 6553500]-1(Unlimited)10
pullThresholdSizeForQueueNLimit the cached message size on queue level. Consider the pullBatchSize - the instantaneous value may exceed the limit. The size of a message is only measured by message body, so it’s not accurate. Range: [1, 1024]100100
pullThresholdSizeForTopicNLimit the cached message size on topic level. The value of pullThresholdSizeForQueue will be overwritten and calculated based on pullThresholdSizeForTopic if it isn’t unlimited. For example, if the value of pullThresholdSizeForTopic is 1000 MiB and 10 message queues are assigned to this consumer, then pullThresholdSizeForQueue will be set to 100 MiB. Range: [1, 102400]-1100
content-typeNMessage content type."text/plain""application/cloudevents+json; charset=utf-8", "application/octet-stream"
logLevelNLog levelwarninfo
sendTimeOutNSend message timeout to connect RocketMQ’s broker, measured in nanoseconds. Deprecated.3 seconds10000000000
sendTimeOutSecNTimeout duration for publishing a message in seconds. If sendTimeOutSec is null, sendTimeOut is used.3 seconds3
mspPropertiesNThe RocketMQ message properties in this collection are passed to the APP in Data. Separate multiple properties with “,”key,mkey

For backwards-compatibility reasons, the following values in the metadata are supported, although their use is discouraged.

Field (supported but deprecated)RequiredDetailsExample
groupNameNProducer group name for RocketMQ publishers"my_unique_group_name"
sendTimeOutNTimeout duration for publishing a message in nanoseconds0
consumerBatchSizeNThe number of messages pulled from the broker at a time32

Setup RocketMQ

See https://rocketmq.apache.org/docs/quick-start/ to set up a local RocketMQ instance.

Per-call metadata fields

Partition Key

When invoking the RocketMQ pub/sub, it’s possible to provide an optional partition key by using the metadata query param in the request url.

You can specify rocketmq-tag, rocketmq-key, rocketmq-shardingkey, and rocketmq-queue in the metadata.

Example:

curl -X POST "http://localhost:3500/v1.0/publish/myRocketMQ/myTopic?metadata.rocketmq-tag=?&metadata.rocketmq-key=?&metadata.rocketmq-shardingkey=key&metadata.rocketmq-queue=1" \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        }
      }'

QueueSelector

The RocketMQ component contains a total of five queue selectors. The RocketMQ client provides the following queue selectors:

  • HashQueueSelector
  • RandomQueueSelector
  • RoundRobinQueueSelector
  • ManualQueueSelector

To learn more about these RocketMQ client queue selectors, read the RocketMQ documentation.

The Dapr RocketMQ component implements the following queue selector:

  • DaprQueueSelector

This article focuses on the design of DaprQueueSelector.

DaprQueueSelector

DaprQueueSelector integrates three queue selectors:

  • HashQueueSelector
  • RoundRobinQueueSelector
  • ManualQueueSelector

DaprQueueSelector gets the queue id from the request parameter. You can set the queue id by running the following:

http://localhost:3500/v1.0/publish/myRocketMQ/myTopic?metadata.rocketmq-queue=1

The ManualQueueSelector is implemented using the method above.

Next, the DaprQueueSelector tries to:

  • Get a ShardingKey
  • Hash the ShardingKey to determine the queue id.

You can set the ShardingKey by doing the following:

http://localhost:3500/v1.0/publish/myRocketMQ/myTopic?metadata.rocketmq-shardingkey=key

If the ShardingKey does not exist, the RoundRobin algorithm is used to determine the queue id.

1.16 - Solace-AMQP

Detailed documentation on the Solace-AMQP pubsub component

Component format

To set up Solace-AMQP pub/sub, create a component of type pubsub.solace.amqp. See the pub/sub broker component file to learn how ConsumerID is automatically generated. Read the How-to: Publish and Subscribe guide on how to create and apply a pub/sub configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: solace
spec:
  type: pubsub.solace.amqp
  version: v1
  metadata:
    - name: url
      value: 'amqp://localhost:5672'
    - name: username
      value: 'default'
    - name: password
      value: 'default'
    - name: consumerID
      value: 'channel1'

Spec metadata fields

FieldRequiredDetailsExample
urlYAddress of the AMQP broker. Can be secretKeyRef to use a secret reference.
Use the amqp:// URI scheme for non-TLS communication.
Use the amqps:// URI scheme for TLS communication.
"amqp://host.domain[:port]"
usernameYThe username to connect to the broker. Only required if anonymous is not specified or set to false.default
passwordYThe password to connect to the broker. Only required if anonymous is not specified or set to false.default
consumerIDNConsumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumerID is not provided, the Dapr runtime sets it to the Dapr application ID (appID) value.Can be set to string value (such as "channel1" in the example above) or string format value (such as "{podName}", etc.). See all of template tags you can use in your component metadata.
anonymousNTo connect to the broker without credential validation. Only works if enabled on the broker. A username and password would not be required if this is set to true.true
caCertRequired for using TLSCertificate Authority (CA) certificate in PEM format for verifying server TLS certificates."-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"
clientCertRequired for using TLSTLS client certificate in PEM format. Must be used with clientKey."-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"
clientKeyRequired for using TLSTLS client key in PEM format. Must be used with clientCert. Can be secretKeyRef to use a secret reference."-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"

Communication using TLS

To configure communication using TLS:

  1. Ensure that the Solace broker is configured to support certificates.
  2. Provide the caCert, clientCert, and clientKey metadata in the component configuration.

For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: solace
spec:
  type: pubsub.solace.amqp
  version: v1
  metadata:
  - name: url
    value: "amqps://host.domain[:port]"
  - name: username
    value: 'default'
  - name: password
    value: 'default'
  - name: caCert
    value: ${{ myLoadedCACert }}
  - name: clientCert
    value: ${{ myLoadedClientCert }}
  - name: clientKey
    secretKeyRef:
      name: mySolaceClientKey
      key: mySolaceClientKey
auth:
  secretStore: <SECRET_STORE_NAME>

While the caCert and clientCert values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.

Publishing/subscribing to topics and queues

By default, messages are published and subscribed over topics. If you would like your destination to be a queue, prefix the topic with queue: and the Solace AMQP component will connect to a queue.
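
For example, a declarative subscription that consumes from a queue instead of a topic might look like the following sketch (the queue, route, and subscription names are placeholders):

apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-subscription
spec:
  topic: "queue:orders"
  routes:
    default: /orders
  pubsubname: solace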

Create a Solace broker

You can run a Solace broker locally using Docker:

docker run -d -p 8080:8080 -p 55554:55555 -p 8008:8008 -p 1883:1883 -p 8000:8000 -p 5672:5672 -p 9000:9000 -p 2222:2222 --shm-size=2g --env username_admin_globalaccesslevel=admin --env username_admin_password=admin --name=solace solace/solace-pubsub-standard

You can then interact with the server using the client port: amqp://localhost:5672

You can also sign up for a free SaaS broker on Solace Cloud.

2 - Bindings component specs

The supported external bindings that interface with Dapr

The following table lists input and output bindings supported by the Dapr bindings building block. Learn how to set up different input and output binding components for Dapr bindings.

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

Every binding component has its own set of properties. Click the name link to see the component specification for each binding.

Generic

ComponentInput BindingOutput BindingStatusComponent versionSince runtime version
Apple Push Notifications (APN)Input binding not supportedAlphav11.0
commercetools GraphQLInput binding not supportedAlphav11.8
Cron (Scheduler)Output binding not supportedStablev11.10
GraphQLInput binding not supportedAlphav11.0
HTTPInput binding not supportedStablev11.0
Huawei OBSInput binding not supportedAlphav11.8
InfluxDBInput binding not supportedBetav11.7
KafkaStablev11.8
KitexInput binding not supportedAlphav11.11
KubeMQBetav11.10
Kubernetes EventsOutput binding not supportedAlphav11.0
Local StorageInput binding not supportedStablev11.9
MQTT3Betav11.7
MySQL & MariaDBInput binding not supportedAlphav11.0
PostgreSQLInput binding not supportedStablev11.9
PostmarkInput binding not supportedAlphav11.0
RabbitMQStablev11.9
RedisInput binding not supportedStablev11.9
RethinkDBOutput binding not supportedBetav11.9
SendGridInput binding not supportedAlphav11.0
SFTPInput binding not supportedAlphav11.15
SMTPInput binding not supportedAlphav11.0
TwilioInput binding not supportedAlphav11.0
WasmInput binding not supportedAlphav11.11

Alibaba Cloud

ComponentInput BindingOutput BindingStatusComponent versionSince runtime version
Alibaba Cloud DingTalkAlphav11.2
Alibaba Cloud OSSInput binding not supportedAlphav11.0
Alibaba Cloud SLSInput binding not supportedAlphav11.9
Alibaba Cloud TablestoreInput binding not supportedAlphav11.5

Amazon Web Services (AWS)

ComponentInput BindingOutput BindingStatusComponent versionSince runtime version
AWS DynamoDBInput binding not supportedAlphav11.0
AWS KinesisAlphav11.0
AWS S3Input binding not supportedStablev11.11
AWS SESInput binding not supportedAlphav11.4
AWS SNSInput binding not supportedAlphav11.0
AWS SQSAlphav11.0

Cloudflare

ComponentInput BindingOutput BindingStatusComponent versionSince runtime version
Cloudflare QueuesInput binding not supportedAlphav11.10

Google Cloud Platform (GCP)

ComponentInput BindingOutput BindingStatusComponent versionSince runtime version
GCP Cloud Pub/SubAlphav11.0
GCP Storage BucketInput binding not supportedAlphav11.0

Microsoft Azure

ComponentInput BindingOutput BindingStatusComponent versionSince runtime version
Azure Blob StorageInput binding not supportedStablev11.0
Azure Cosmos DB (Gremlin API)Input binding not supportedAlphav11.5
Azure CosmosDBInput binding not supportedStablev11.7
Azure Event GridBetav11.7
Azure Event HubsStablev11.8
Azure OpenAIAlphav11.11
Azure Service Bus QueuesStablev11.7
Azure SignalRInput binding not supportedAlphav11.0
Azure Storage QueuesStablev11.0

Zeebe (Camunda Cloud)

ComponentInput BindingOutput BindingStatusComponent versionSince runtime version
Zeebe CommandInput binding not supportedStablev11.2
Zeebe Job WorkerOutput binding not supportedStablev11.2

2.1 - Alibaba Cloud DingTalk binding spec

Detailed documentation on the Alibaba Cloud DingTalk binding component

Setup Dapr component

To set up an Alibaba Cloud DingTalk binding, create a component of type bindings.dingtalk.webhook. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.dingtalk.webhook
  version: v1
  metadata:
  - name: id
    value: "test_webhook_id"
  - name: url
    value: "https://oapi.dingtalk.com/robot/send?access_token=******"
  - name: secret
    value: "****************"
  - name: direction
    value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
idYInput/OutputUnique id"test_webhook_id"
urlYInput/OutputDingTalk’s Webhook url"https://oapi.dingtalk.com/robot/send?access_token=******"
secretNInput/OutputThe secret of DingTalk’s Webhook"****************"
directionNInput/OutputThe direction of the binding"input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create
  • get

Specifying a partition key

Example: Follow the instructions here on setting the data of payload

curl -X POST http://localhost:3500/v1.0/bindings/myDingTalk \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "msgtype": "text",
          "text": {
            "content": "Hi"
          }
        },
        "operation": "create"
      }'
curl -X POST http://localhost:3500/v1.0/bindings/myDingTalk \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "msgtype": "text",
          "text": {
            "content": "Hi"
          }
        },
        "operation": "get"
      }'

2.2 - Alibaba Cloud Log Storage Service binding spec

Detailed documentation on the Alibaba Cloud Log Storage binding component

Component format

To set up an Alibaba Cloud SLS binding, create a component of type bindings.alicloud.sls. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: alicloud.sls
spec:
  type: bindings.alicloud.sls
  version: v1
  metadata:
  - name: AccessKeyID
    value: "[accessKey-id]"
  - name: AccessKeySecret
    value: "[accessKey-secret]"
  - name: Endpoint
    value: "[endpoint]"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
AccessKeyIDYOutputAccess key ID credential.
AccessKeySecretYOutputAccess key credential secret
EndpointYOutputAlicloud SLS endpoint.

Binding support

This component supports output binding with the following operations:

Request format

To perform a log store operation, invoke the binding with a POST method and the following JSON body:

{
    "metadata":{
        "project":"your-sls-project-name",
        "logstore":"your-sls-logstore-name",
        "topic":"your-sls-topic-name",
        "source":"your-sls-source"
    },
    "data":{
        "custome-log-filed":"any other log info"
    },
    "operation":"create"
}

Example

curl -X POST -H "Content-Type: application/json" -d "{\"metadata\":{\"project\":\"project-name\",\"logstore\":\"logstore-name\",\"topic\":\"topic-name\",\"source\":\"source-name\"},\"data\":{\"log-filed\":\"log info\"}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -X POST -H "Content-Type: application/json" -d '{"metadata":{"project":"project-name","logstore":"logstore-name","topic":"topic-name","source":"source-name"},"data":{"log-filed":"log info"}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response format

Because the Alibaba Cloud SLS producer API is asynchronous, there is no response for this binding: there is no callback interface to receive a success or failure response, and failures are only recorded in the console log.

2.3 - Alibaba Cloud Object Storage Service binding spec

Detailed documentation on the Alibaba Cloud Object Storage binding component

Component format

To set up an Alibaba Cloud Object Storage binding, create a component of type bindings.alicloud.oss. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: alicloudobjectstorage
spec:
  type: bindings.alicloud.oss
  version: v1
  metadata:
  - name: endpoint
    value: "[endpoint]"
  - name: accessKeyID
    value: "[key-id]"
  - name: accessKey
    value: "[access-key]"
  - name: bucket
    value: "[bucket]"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
endpointYOutputAlicloud OSS endpoint.https://oss-cn-hangzhou.aliyuncs.com
accessKeyIDYOutputAccess key ID credential.
accessKeyYOutputAccess key credential.
bucketYOutputName of the storage bucket.

Binding support

This component supports output binding with the following operations:

Create object

To perform a create object operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

Example

Saving to a randomly generated UUID file

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Saving to a specific file

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-key\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-key" } }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Metadata information

Object key

By default, the Alicloud OSS output binding will auto-generate a UUID as the object key. You can set the key with the following metadata:

{
    "data": "file content",
    "metadata": {
        "key": "my-key"
    },
    "operation": "create"
}

2.4 - Alibaba Cloud Tablestore binding spec

Detailed documentation on the Alibaba Tablestore binding component

Component format

To set up an Alibaba Cloud Tablestore binding, create a component of type bindings.alicloud.tablestore. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mytablestore
spec:
  type: bindings.alicloud.tablestore
  version: v1
  metadata:
  - name: endpoint
    value: "[endpoint]"
  - name: accessKeyID
    value: "[key-id]"
  - name: accessKey
    value: "[access-key]"
  - name: instanceName
    value: "[instance]"
  - name: tableName
    value: "[table]"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
endpointYOutputAlicloud Tablestore endpoint.https://tablestore-cn-hangzhou.aliyuncs.com
accessKeyIDYOutputAccess key ID credential.
accessKeyYOutputAccess key credential.
instanceNameYOutputName of the instance.
tableNameYOutputName of the table.

Binding support

This component supports output binding with the following operations:

Create object

To perform a create object operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "create",
  "data": "YOUR_CONTENT",
  "metadata": {
    "primaryKeys": "pk1"
  }
} 

Delete object

To perform a delete object operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
   "primaryKeys": "pk1",
   "columnToGet": "name,age,date"
  },
  "data": {
    "pk1": "data1"
  }
} 

List objects

To perform a list objects operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "primaryKeys": "pk1",
    "columnToGet": "name,age,date"
  },
  "data": {
    "pk1": "data1",
    "pk2": "data2"
  }
} 

Get object

To perform a get object operation, invoke the binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "primaryKeys": "pk1"
  },
  "data": {
    "pk1": "data1"
  }
} 

2.5 - Apple Push Notification Service binding spec

Detailed documentation on the Apple Push Notification Service binding component

Component format

To set up an Apple Push Notifications binding, create a component of type bindings.apns. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.apns
  version: v1
  metadata:
    - name: development
      value: "<bool>"
    - name: key-id
      value: "<APPLE_KEY_ID>"
    - name: team-id
      value: "<APPLE_TEAM_ID>"
    - name: private-key
      secretKeyRef:
        name: <SECRET>
        key: "<SECRET-KEY-NAME>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
developmentYOutputTells the binding which APNs service to use. Set to "true" to use the development service or "false" to use the production service. Default: "true""true"
key-idYOutputThe identifier for the private key from the Apple Developer Portal"private-key-id"
team-idYOutputThe identifier for the organization or author from the Apple Developer Portal"team-id"
private-keyYOutputA PKCS #8-formatted private key. It is intended that the private key is stored in the secret store and not exposed directly in the configuration. See here for more details."pem file"

Private key

The APNS binding needs a cryptographic private key in order to generate authentication tokens for the APNS service. The private key can be generated from the Apple Developer Portal and is provided as a PKCS #8 file with the private key stored in PEM format. The private key should be stored in the Dapr secret store and not stored directly in the binding’s configuration file.

A sample configuration file for the APNS binding is shown below:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: apns
spec:
  type: bindings.apns
  metadata:
  - name: development
    value: false
  - name: key-id
    value: PUT-KEY-ID-HERE
  - name: team-id
    value: PUT-APPLE-TEAM-ID-HERE
  - name: private-key
    secretKeyRef:
      name: apns-secrets
      key: private-key

If using Kubernetes, a sample secret configuration may look like this:

apiVersion: v1
kind: Secret
metadata:
    name: apns-secrets
stringData:
    private-key: |
        -----BEGIN PRIVATE KEY-----
        KEY-DATA-GOES-HERE
        -----END PRIVATE KEY-----

Binding support

This component supports output binding with the following operations:

  • create

Push notification format

The APNS binding is a pass-through wrapper over the Apple Push Notification Service. The APNS binding will send the request directly to the APNS service without any translation. It is therefore important to understand the payload for push notifications expected by the APNS service. The payload format is documented here.

Request format

{
    "data": {
        "aps": {
            "alert": {
                "title": "New Updates!",
                "body": "There are new updates for your review"
            }
        }
    },
    "metadata": {
        "device-token": "PUT-DEVICE-TOKEN-HERE",
        "apns-push-type": "alert",
        "apns-priority": "10",
        "apns-topic": "com.example.helloworld"
    },
    "operation": "create"
}

The data object contains a complete push notification specification as described in the Apple documentation. The data object will be sent directly to the APNs service.

Besides the device-token value, the HTTP headers specified in the Apple documentation can be sent as metadata fields and will be included in the HTTP request to the APNs service.
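
For illustration, the request above can be posted to the binding over HTTP in the same way as the other bindings in this document (the Linux curl convention and the <dapr-port>/<binding-name> placeholders are the ones used elsewhere in these docs; the device token and topic are placeholders you must replace):

curl -X POST http://localhost:<dapr-port>/v1.0/bindings/<binding-name> \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "data": {
          "aps": {
            "alert": {
              "title": "New Updates!",
              "body": "There are new updates for your review"
            }
          }
        },
        "metadata": {
          "device-token": "PUT-DEVICE-TOKEN-HERE",
          "apns-push-type": "alert",
          "apns-priority": "10",
          "apns-topic": "com.example.helloworld"
        }
      }'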

Response format

{
    "messageID": "UNIQUE-ID-FOR-NOTIFICATION"
}

2.6 - AWS DynamoDB binding spec

Detailed documentation on the AWS DynamoDB binding component

Component format

To set up the AWS DynamoDB binding, create a component of type bindings.aws.dynamodb. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.dynamodb
  version: v1
  metadata:
  - name: table
    value: "items"
  - name: region
    value: "us-west-2"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "*****************"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
tableYOutputThe DynamoDB table name"items"
regionYOutputThe specific AWS region the AWS DynamoDB instance is deployed in"us-east-1"
accessKeyYOutputThe AWS Access Key to access this resource"key"
secretKeyYOutputThe AWS Secret Access Key to access this resource"secretAccessKey"
sessionTokenNOutputThe AWS session token to use"sessionToken"

Binding support

This component supports output binding with the following operations:

  • create
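
As an illustrative sketch (not taken verbatim from the binding reference), a create request writes the contents of data as an item to the configured table; the attribute names below are assumptions for the example and must match your table's key schema:

{
  "operation": "create",
  "data": {
    "id": "1",
    "name": "test item"
  }
}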

2.7 - AWS Kinesis binding spec

Detailed documentation on the AWS Kinesis binding component

Component format

To set up the AWS Kinesis binding, create a component of type bindings.aws.kinesis. See this guide on how to create and apply a binding configuration.

See this for instructions on how to set up AWS Kinesis data streams. See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.kinesis
  version: v1
  metadata:
  - name: streamName
    value: "KINESIS_STREAM_NAME" # Kinesis stream name
  - name: consumerName
    value: "KINESIS_CONSUMER_NAME" # Kinesis consumer name
  - name: mode
    value: "shared" # shared - Shared throughput or extended - Extended/Enhanced fanout
  - name: region
    value: "AWS_REGION" #replace
  - name: accessKey
    value: "AWS_ACCESS_KEY" # replace
  - name: secretKey
    value: "AWS_SECRET_KEY" #replace
  - name: sessionToken
    value: "*****************"
  - name: direction
    value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
modeNInputThe Kinesis stream mode. shared - Shared throughput, extended - Extended/Enhanced fanout methods. More details are here. Defaults to "shared""shared", "extended"
streamNameYInput/OutputThe AWS Kinesis Stream Name"stream"
consumerNameYInputThe AWS Kinesis Consumer Name"myconsumer"
regionYOutputThe specific AWS region the AWS Kinesis instance is deployed in"us-east-1"
accessKeyYOutputThe AWS Access Key to access this resource"key"
secretKeyYOutputThe AWS Secret Access Key to access this resource"secretAccessKey"
sessionTokenNOutputThe AWS session token to use"sessionToken"
directionNInput/OutputThe direction of the binding"input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create
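
As a hedged example, a create request publishes the contents of data as a record to the configured stream; the payload shown is arbitrary:

{
  "operation": "create",
  "data": {
    "message": "hello world"
  }
}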

2.8 - AWS S3 binding spec

Detailed documentation on the AWS S3 binding component

Component format

To set up an AWS S3 binding, create a component of type bindings.aws.s3. This binding works with other S3-compatible services, such as Minio. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"
  - name: region
    value: "us-west-2"
  - name: endpoint
    value: "s3.us-west-2.amazonaws.com"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "mysession"
  - name: decodeBase64
    value: "<bool>"
  - name: encodeBase64
    value: "<bool>"
  - name: forcePathStyle
    value: "<bool>"
  - name: disableSSL
    value: "<bool>"
  - name: insecureSSL
    value: "<bool>"
  - name: storageClass
    value: "<string>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
bucketYOutputThe name of the S3 bucket to write to"bucket"
regionYOutputThe specific AWS region"us-east-1"
endpointNOutputThe specific AWS endpoint"s3.us-east-1.amazonaws.com"
accessKeyYOutputThe AWS Access Key to access this resource"key"
secretKeyYOutputThe AWS Secret Access Key to access this resource"secretAccessKey"
sessionTokenNOutputThe AWS session token to use"sessionToken"
forcePathStyleNOutputCurrently Amazon S3 SDK supports virtual hosted-style and path-style access. "true" is path-style format like "https://<endpoint>/<your bucket>/<key>". "false" is hosted-style format like "https://<your bucket>.<endpoint>/<key>". Defaults to "false""true", "false"
decodeBase64NOutputConfiguration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). "true" is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to false"true", "false"
encodeBase64NOutputConfiguration to encode base64 file content before return the content. (In case of opening a file with binary content). "true" is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to "false""true", "false"
disableSSLNOutputAllows to connect to non https:// endpoints. Defaults to "false""true", "false"
insecureSSLNOutputWhen connecting to https:// endpoints, accepts invalid or self-signed certificates. Defaults to "false""true", "false"
storageClassNOutputThe desired storage class for objects during the create operation. Valid aws storage class types can be found hereSTANDARD_IA

S3 Bucket Creation

Using with Minio

Minio is a service that exposes local storage as S3-compatible object storage, and it’s a popular alternative to S3, especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks:

  1. Set endpoint to the address of the Minio server, including protocol (http:// or https://) and the optional port at the end. For example, http://minio.local:9000 (the values depend on your environment).
  2. forcePathStyle must be set to true
  3. The value for region is not important; you can set it to us-east-1.
  4. Depending on your environment, you may need to set disableSSL to true if you’re connecting to Minio using a non-secure connection (using the http:// protocol). If you are using a secure connection (https:// protocol) but with a self-signed certificate, you may need to set insecureSSL to true.

For local development, the LocalStack project is used to integrate AWS S3. Follow these instructions to run LocalStack.

To run LocalStack locally from the command line using Docker, use a docker-compose.yaml similar to the following:

version: "3.8"

services:
  localstack:
    container_name: "cont-aws-s3"
    image: localstack/localstack:1.4.0
    ports:
      - "127.0.0.1:4566:4566"
    environment:
      - DEBUG=1
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "<PATH>/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh"  # init hook
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

To use the S3 component, you need to use an existing bucket. The example above uses a LocalStack Initialization Hook to set up the bucket.
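
The contents of init-aws.sh are not shown here; a minimal sketch, assuming the awslocal CLI that ships with the LocalStack image and the bucket name used in the component below, could be:

#!/bin/bash
# Create the bucket that the S3 binding component expects to exist
awslocal s3 mb s3://conformance-test-docker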

To use LocalStack with your S3 binding, you need to provide the endpoint configuration in the component metadata. The endpoint is unnecessary when running against production AWS.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
    name: aws-s3
    namespace: default
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
    - name: bucket
      value: conformance-test-docker
    - name: endpoint
      value: "http://localhost:4566"
    - name: accessKey
      value: "my-access"
    - name: secretKey
      value: "my-secret"
    - name: region
      value: "us-east-1"

To use the S3 component, you need to use an existing bucket. Follow the AWS documentation for creating a bucket.

Binding support

This component supports output binding with the following operations, described below: create, presign, get, delete, and list.

Create object

To perform a create operation, invoke the AWS S3 binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated as the object key. See the metadata support below to set the name.

{
  "operation": "create",
  "data": "YOUR_CONTENT",
  "metadata": { 
    "storageClass": "STANDARD_IA",
    "tags": "project=sashimi,year=2024",
  }
}

For example, you can provide a storage class or tags while using the create operation with a Linux curl command:

curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "storageClass": "STANDARD_IA", "project=sashimi,year=2024" } }' /
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Share object with a presigned URL

To presign an object with a specified time-to-live, use the presignTTL metadata key on a create request. Valid values for presignTTL are Go duration strings.

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"presignTTL\": \"15m\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "presignTTL": "15m" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response

The response body contains the following example JSON:

{
    "location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>",
    "versionID":"<version ID if Bucket Versioning is enabled>",
    "presignURL": "https://<your bucket>.s3.<your region>.amazonaws.com/image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJWZ7B6WCRGMKFGQ%2F20180210%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20180210T171315Z&X-Amz-Expires=1800&X-Amz-Signature=12b74b0788aa036bc7c3d03b3f20c61f1f91cc9ad8873e3314255dc479a25351&X-Amz-SignedHeaders=host"
}

Examples

Save text to a random generated UUID file

On Windows, use the command prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a file to an object

To upload a file, encode it as Base64 and let the Binding know to deserialize it:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"
  - name: region
    value: "us-west-2"
  - name: endpoint
    value: "s3.us-west-2.amazonaws.com"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "mysession"
  - name: decodeBase64
    value: "<bool>"
  - name: forcePathStyle
    value: "<bool>"

Then you can upload it as you would normally:

curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "key": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Upload from file path

To upload a file from a supplied path (relative or absolute), use the filePath metadata key on a create request with an empty data field.

curl -d '{ \"operation\": \"create\", \"metadata\": { \"filePath\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "metadata": { "filePath": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body will contain the following JSON:

{
    "location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>",
    "versionID":"<version ID if Bucket Versioning is enabled"
}

Presign an existing object

To presign an existing S3 object with a specified time-to-live, use the presignTTL and key metadata keys on a presign request. Valid values for presignTTL are Go duration strings.

curl -d "{ \"operation\": \"presign\", \"metadata\": { \"presignTTL\": \"15m\", \"key\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "presign", "metadata": { "presignTTL": "15m", "key": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Response

The response body contains the following example JSON:

{
    "presignURL": "https://<your bucket>.s3.<your region>.amazonaws.com/image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJWZ7B6WCRGMKFGQ%2F20180210%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20180210T171315Z&X-Amz-Expires=1800&X-Amz-Signature=12b74b0788aa036bc7c3d03b3f20c61f1f91cc9ad8873e3314255dc479a25351&X-Amz-SignedHeaders=host"
}

Get object

To perform a get file operation, invoke the AWS S3 binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Example

curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the object.

Delete object

To perform a delete object operation, invoke the AWS S3 binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Examples

Delete object
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body will be returned if successful.

List objects

To perform a list object operation, invoke the S3 binding with a POST method and the following JSON body:

{
  "operation": "list",
  "data": {
    "maxResults": 10,
    "prefix": "file",
    "marker": "hvlcCQFSOD5TD",
    "delimiter": "i0FvxAn2EOEL6"
  }
}

The data parameters are:

  • maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
  • prefix - (optional) limits the response to keys that begin with the specified prefix.
  • marker - (optional) marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. Marker can be any key in the bucket. The marker value may then be used in a subsequent call to request the next set of list items.
  • delimiter - (optional) A delimiter is a character you use to group keys.
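
The list request can be posted the same way as the other operations shown above, for example:

curl -d '{ "operation": "list", "data": { "maxResults": 10, "prefix": "file" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>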

Response

The response body contains the list of found objects.

The list of objects will be returned as a JSON array in the following form:

{
	"CommonPrefixes": null,
	"Contents": [
		{
			"ETag": "\"7e94cc9b0f5226557b05a7c2565dd09f\"",
			"Key": "hpNdFUxruNuwm",
			"LastModified": "2021-08-16T06:44:14Z",
			"Owner": {
				"DisplayName": "owner name",
				"ID": "owner id"
			},
			"Size": 6916,
			"StorageClass": "STANDARD"
		}
	],
	"Delimiter": "",
	"EncodingType": null,
	"IsTruncated": true,
	"Marker": "hvlcCQFSOD5TD",
	"MaxKeys": 1,
	"Name": "mybucketdapr",
	"NextMarker": "hzaUPWjmvyi9W",
	"Prefix": ""
}

2.9 - AWS SES binding spec

Detailed documentation on the AWS SES binding component

Component format

To set up the AWS SES binding, create a component of type bindings.aws.ses. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ses
spec:
  type: bindings.aws.ses
  version: v1
  metadata:
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: region
    value: "eu-west-1"
  - name: sessionToken
    value: "mysession"
  - name: emailFrom
    value: "sender@example.com"
  - name: emailTo
    value: "receiver@example.com"
  - name: emailCc
    value: "cc@example.com"
  - name: emailBcc
    value: "bcc@example.com"
  - name: subject
    value: "subject"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
regionNOutputThe specific AWS region"eu-west-1"
accessKeyNOutputThe AWS Access Key to access this resource"key"
secretKeyNOutputThe AWS Secret Access Key to access this resource"secretAccessKey"
sessionTokenNOutputThe AWS session token to use"sessionToken"
emailFromNOutputIf set, this specifies the email address of the sender. See also"me@example.com"
emailToNOutputIf set, this specifies the email address of the receiver. See also"me@example.com"
emailCcNOutputIf set, this specifies the email address to CC in. See also"me@example.com"
emailBccNOutputIf set, this specifies email address to BCC in. See also"me@example.com"
subjectNOutputIf set, this specifies the subject of the email message. See also"subject of mail"

Binding support

This component supports output binding with the following operations:

  • create

Example request

You can specify any of the following optional metadata properties with each request:

  • emailFrom
  • emailTo
  • emailCc
  • emailBcc
  • subject

When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the emailFrom, emailTo, emailCc, emailBcc and subject fields.

The emailTo, emailCc and emailBcc fields can contain multiple email addresses separated by a semicolon.

Example:

{
  "operation": "create",
  "metadata": {
    "emailTo": "dapr-smtp-binding@example.net",
    "emailCc": "cc1@example.net",
    "subject": "Email subject"
  },
  "data": "Testing Dapr SMTP Binding"
}


2.10 - AWS SNS binding spec

Detailed documentation on the AWS SNS binding component

Component format

To set up the AWS SNS binding, create a component of type bindings.aws.sns. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.sns
  version: v1
  metadata:
  - name: topicArn
    value: "mytopic"
  - name: region
    value: "us-west-2"
  - name: endpoint
    value: "sns.us-west-2.amazonaws.com"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "*****************"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
topicArnYOutputThe SNS topic name"arn:::topicarn"
regionYOutputThe specific AWS region"us-east-1"
endpointNOutputThe specific AWS endpoint"sns.us-east-1.amazonaws.com"
accessKeyYOutputThe AWS Access Key to access this resource"key"
secretKeyYOutputThe AWS Secret Access Key to access this resource"secretAccessKey"
sessionTokenNOutputThe AWS session token to use"sessionToken"

Binding support

This component supports output binding with the following operations:

  • create
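
As a hedged example, a create request publishes the contents of data as a message to the configured SNS topic; the payload shown is arbitrary:

{
  "operation": "create",
  "data": {
    "message": "hello from Dapr"
  }
}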

2.11 - AWS SQS binding spec

Detailed documentation on the AWS SQS binding component

Component format

To set up the AWS SQS binding, create a component of type bindings.aws.sqs. See this guide on how to create and apply a binding configuration.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.aws.sqs
  version: v1
  metadata:
  - name: queueName
    value: "items"
  - name: region
    value: "us-west-2"
  - name: accessKey
    value: "*****************"
  - name: secretKey
    value: "*****************"
  - name: sessionToken
    value: "*****************"
  - name: direction 
    value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
queueNameYInput/OutputThe SQS queue name"myqueue"
regionYInput/OutputThe specific AWS region"us-east-1"
accessKeyYInput/OutputThe AWS Access Key to access this resource"key"
secretKeyYInput/OutputThe AWS Secret Access Key to access this resource"secretAccessKey"
sessionTokenNInput/OutputThe AWS session token to use"sessionToken"
directionNInput/OutputThe direction of the binding"input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create
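
As a hedged example, a create request sends the contents of data as a message to the configured SQS queue; the payload shown is arbitrary:

{
  "operation": "create",
  "data": "hello from Dapr"
}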

2.12 - Azure Blob Storage binding spec

Detailed documentation on the Azure Blob Storage binding component

Component format

To set up the Azure Blob Storage binding, create a component of type bindings.azure.blobstorage. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: myStorageAccountName
  - name: accountKey
    value: "***********"
  - name: containerName
    value: container1
# - name: decodeBase64
#   value: <bool>
# - name: getBlobRetryCount
#   value: <integer>
# - name: publicAccessLevel
#   value: <publicAccessLevel>

Spec metadata fields

FieldRequiredBinding supportDetailsExample
accountNameYInput/OutputThe name of the Azure Storage account"myexampleaccount"
accountKeyY*Input/OutputThe access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication."access-key"
containerNameYOutputThe name of the Blob Storage container to write tomyexamplecontainer
endpointNInput/OutputOptional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port."http://127.0.0.1:10000"
decodeBase64NOutputConfiguration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). Defaults to falsetrue, false
getBlobRetryCountNOutputSpecifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader Defaults to 101, 2
publicAccessLevelNOutputSpecifies whether data in the container may be accessed publicly and the level of access (only used if the container is created by Dapr). Defaults to noneblob, container, none

Microsoft Entra ID authentication

The Azure Blob Storage binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Binding support

This component supports output binding with the following operations, described below: create, get, delete, and list.

The Blob storage component’s input binding triggers and pushes events using Azure Event Grid.

Refer to the Reacting to Blob storage events guide for setup details and more information.

Create blob

To perform a create blob operation, invoke the Azure Blob Storage binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated as the blob name. See the Metadata information section below to set the name.

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

Examples

Save text to a random generated UUID blob

On Windows, use the command prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific blob
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"blobName\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "blobName": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a file to a blob

To upload a file, encode it as Base64 and let the Binding know to deserialize it:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: myStorageAccountName
  - name: accountKey
    value: "***********"
  - name: containerName
    value: container1
  - name: decodeBase64
    value: true

Then you can upload it as you would normally:

curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"blobName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "blobName": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body will contain the following JSON:

{
   "blobURL": "https://<your account name>. blob.core.windows.net/<your container name>/<filename>"
}

Get blob

To perform a get blob operation, invoke the Azure Blob Storage binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "blobName": "myblob",
    "includeMetadata": "true"
  }
}

The metadata parameters are:

  • blobName - the name of the blob
  • includeMetadata - (optional) defines whether the user-defined metadata should be returned or not; defaults to: false

Example

curl -d '{ \"operation\": \"get\", \"metadata\": { \"blobName\": \"myblob\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "blobName": "myblob" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the blob object. If enabled, the user defined metadata will be returned as HTTP headers in the form:

Metadata.key1: value1
Metadata.key2: value2

Delete blob

To perform a delete blob operation, invoke the Azure Blob Storage binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "blobName": "myblob"
  }
}

The metadata parameters are:

  • blobName - the name of the blob
  • deleteSnapshots - (optional) required if the blob has associated snapshots. Specify one of the following two options:
    • include: Delete the base blob and all of its snapshots
    • only: Delete only the blob’s snapshots and not the blob itself

Examples

Delete blob
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Delete blob snapshots only
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"only\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "only" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Delete blob including snapshots
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"include\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "include" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body will be returned if successful.

List blobs

To perform a list blobs operation, invoke the Azure Blob Storage binding with a POST method and the following JSON body:

{
  "operation": "list",
  "data": {
    "maxResults": 10,
    "prefix": "file",
    "marker": "2!108!MDAwMDM1IWZpbGUtMDgtMDctMjAyMS0wOS0zOC01NS03NzgtMjEudHh0ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--",
    "include": {
      "snapshots": false,
      "metadata": true,
      "uncommittedBlobs": false,
      "copy": false,
      "deleted": false
    }
  }
}

The data parameters are:

  • maxResults - (optional) specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxresults the server will return up to 5,000 items.
  • prefix - (optional) filters the results to return only blobs whose names begin with the specified prefix.
  • marker - (optional) a string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items.
  • include - (optional) Specifies one or more datasets to include in the response:
    • snapshots: Specifies that snapshots should be included in the enumeration. Snapshots are listed from oldest to newest in the response. Defaults to: false
    • metadata: Specifies that blob metadata be returned in the response. Defaults to: false
    • uncommittedBlobs: Specifies that blobs for which blocks have been uploaded, but which have not been committed using Put Block List, be included in the response. Defaults to: false
    • copy: Version 2012-02-12 and newer. Specifies that metadata related to any current or previous Copy Blob operation should be included in the response. Defaults to: false
    • deleted: Version 2017-07-29 and newer. Specifies that soft deleted blobs should be included in the response. Defaults to: false

Response

The response body contains the list of found blobs as well as the following HTTP headers:

Metadata.marker: 2!108!MDAwMDM1IWZpbGUtMDgtMDctMjAyMS0wOS0zOC0zNC04NjctMTEudHh0ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--
Metadata.number: 10

  • marker - the next marker which can be used in a subsequent call to request the next set of list items. See the marker description on the data property of the binding input.
  • number - the number of found blobs

The list of blobs will be returned as a JSON array in the following form:

[
  {
    "XMLName": {
      "Space": "",
      "Local": "Blob"
    },
    "Name": "file-08-07-2021-09-38-13-776-1.txt",
    "Deleted": false,
    "Snapshot": "",
    "Properties": {
      "XMLName": {
        "Space": "",
        "Local": "Properties"
      },
      "CreationTime": "2021-07-08T07:38:16Z",
      "LastModified": "2021-07-08T07:38:16Z",
      "Etag": "0x8D941E3593C6573",
      "ContentLength": 1,
      "ContentType": "application/octet-stream",
      "ContentEncoding": "",
      "ContentLanguage": "",
      "ContentMD5": "xMpCOKC5I4INzFCab3WEmw==",
      "ContentDisposition": "",
      "CacheControl": "",
      "BlobSequenceNumber": null,
      "BlobType": "BlockBlob",
      "LeaseStatus": "unlocked",
      "LeaseState": "available",
      "LeaseDuration": "",
      "CopyID": null,
      "CopyStatus": "",
      "CopySource": null,
      "CopyProgress": null,
      "CopyCompletionTime": null,
      "CopyStatusDescription": null,
      "ServerEncrypted": true,
      "IncrementalCopy": null,
      "DestinationSnapshot": null,
      "DeletedTime": null,
      "RemainingRetentionDays": null,
      "AccessTier": "Hot",
      "AccessTierInferred": true,
      "ArchiveStatus": "",
      "CustomerProvidedKeySha256": null,
      "AccessTierChangeTime": null
    },
    "Metadata": null
  }
]

Metadata information

By default, the Azure Blob Storage output binding auto-generates a UUID as the blob filename and does not assign any system or custom metadata to it. This is configurable via the metadata property of the message (all fields are optional).

Applications publishing to an Azure Blob Storage output binding should send a message with the following format:

{
    "data": "file content",
    "metadata": {
        "blobName"           : "filename.txt",
        "contentType"        : "text/plain",
        "contentMD5"         : "vZGKbMRDAnMs4BIwlXaRvQ==",
        "contentEncoding"    : "UTF-8",
        "contentLanguage"    : "en-us",
        "contentDisposition" : "attachment",
        "cacheControl"       : "no-cache",
        "custom"             : "hello-world"
    },
    "operation": "create"
}

2.13 - Azure Cosmos DB (Gremlin API) binding spec

Detailed documentation on the Azure Cosmos DB (Gremlin API) binding component

Component format

To set up an Azure Cosmos DB (Gremlin API) binding, create a component of type bindings.azure.cosmosdb.gremlinapi. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.cosmosdb.gremlinapi
  version: v1
  metadata:
  - name: url
    value: "wss://******.gremlin.cosmos.azure.com:443/"
  - name: masterKey
    value: "*****"
  - name: username
    value: "*****"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
urlYOutputThe Cosmos DB url for Gremlin APIs"wss://******.gremlin.cosmos.azure.com:443/"
masterKeyYOutputThe Cosmos DB account master key"masterKey"
usernameYOutputThe username of the Cosmos DB database"/dbs/<database_name>/colls/<graph_name>"

For more information see Quickstart: Azure Cosmos Graph DB using Gremlin.

Binding support

This component supports output binding with the following operations:

  • query

Request payload sample

{
  "data": {
    "gremlin": "g.V().count()"
    },
  "operation": "query"
}

2.14 - Azure Cosmos DB (SQL API) binding spec

Detailed documentation on the Azure Cosmos DB (SQL API) binding component

Component format

To set up the Azure Cosmos DB binding, create a component of type bindings.azure.cosmosdb. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: "https://******.documents.azure.com:443/"
  - name: masterKey
    value: "*****"
  - name: database
    value: "OrderDb"
  - name: collection
    value: "Orders"
  - name: partitionKey
    value: "<message>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
urlYOutputThe Cosmos DB url"https://******.documents.azure.com:443/"
masterKeyYOutputThe Cosmos DB account master key"master-key"
databaseYOutputThe name of the Cosmos DB database"OrderDb"
collectionYOutputThe name of the container inside the database."Orders"
partitionKeyYOutputThe name of the key to extract from the payload (document to be created) that is used as the partition key. This name must match the partition key specified upon creation of the Cosmos DB container."OrderId", "message"

For more information see Azure Cosmos DB resource model.

Microsoft Entra ID authentication

The Azure Cosmos DB binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

You can read additional information for setting up Cosmos DB with Azure AD authentication in the section below.

Binding support

This component supports output binding with the following operations:

  • create

Best Practices for Production Use

Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the Cosmos DB documentation)

Therefore several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:

  • Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by scoping your components to specific applications.
  • Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
  • Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
  • Increase the initTimeout value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization for up to 5 minutes. The default value is 5s and should be increased. When using Kubernetes, increasing this value may also require an update to your Readiness and Liveness probes.
spec:
  type: bindings.azure.cosmosdb
  version: v1
  initTimeout: 5m
  metadata:

Data format

The output binding create operation requires the following keys to exist in the payload of every document to be created:

  • id: a unique ID for the document to be created
  • <partitionKey>: the name of the partition key specified via the spec.partitionKey in the component definition. This must also match the partition key specified upon creation of the Cosmos DB container.
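
For example, assuming the component's partitionKey is set to "OrderId" and the container's partition key path is /OrderId, a create request could look like the following (field names other than id and OrderId are illustrative):

{
  "operation": "create",
  "data": {
    "id": "order-1234",
    "OrderId": "1234",
    "description": "A sample order document"
  }
}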

Setting up Cosmos DB for authenticating with Azure AD

When using the Dapr Cosmos DB binding and authenticating with Azure AD, you need to perform a few additional steps to set up your environment.

Prerequisites:

  • You need a Service Principal created as per the instructions in the authenticating to Azure page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for azureClientId in the metadata).
  • Azure CLI
  • jq
  • The scripts below are optimized for a bash or zsh shell

When using the Cosmos DB binding, you don’t need to create stored procedures as you do in the case of the Cosmos DB state store.

Granting your Azure AD application access to Cosmos DB

You can find more information on the official documentation, including instructions to assign more granular permissions.

In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you’re going to use a built-in role, “Cosmos DB Built-in Data Contributor”, which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.

# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"

az cosmosdb sql role assignment create \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --scope "/" \
  --principal-id "$PRINCIPAL_ID" \
  --role-definition-id "$ROLE_ID"

2.15 - Azure Event Grid binding spec

Detailed documentation on the Azure Event Grid binding component

Component format

To set up an Azure Event Grid binding, create a component of type bindings.azure.eventgrid. See this guide on how to create and apply a binding configuration.

See this for the Azure Event Grid documentation.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.azure.eventgrid
  version: v1
  metadata:
  # Required Output Binding Metadata
  - name: accessKey
    value: "[AccessKey]"
  - name: topicEndpoint
    value: "[TopicEndpoint]"
  # Required Input Binding Metadata
  - name: azureTenantId
    value: "[AzureTenantId]"
  - name: azureSubscriptionId
    value: "[AzureSubscriptionId]"
  - name: azureClientId
    value: "[ClientId]"
  - name: azureClientSecret
    value: "[ClientSecret]"
  - name: subscriberEndpoint
    value: "[SubscriberEndpoint]"
  - name: handshakePort
    # Make sure to pass this as a string, with quotes around the value
    value: "[HandshakePort]"
  - name: scope
    value: "[Scope]"
  # Optional Input Binding Metadata
  - name: eventSubscriptionName
    value: "[EventSubscriptionName]"
  # Optional metadata
  - name: direction
    value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
accessKeyYOutputThe Access Key to be used for publishing an Event Grid Event to a custom topic"accessKey"
topicEndpointYOutputThe topic endpoint in which this output binding should publish events"topic-endpoint"
azureTenantIdYInputThe Azure tenant ID of the Event Grid resource"tenantId"
azureSubscriptionIdYInputThe Azure subscription ID of the Event Grid resource"subscriptionId"
azureClientIdYInputThe client ID that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages"clientId"
azureClientSecretYInputThe client secret that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages"clientSecret"
subscriberEndpointYInputThe HTTPS endpoint of the webhook Event Grid sends events (formatted as Cloud Events) to. If you’re not re-writing URLs on ingress, it should be in the form of: "https://[YOUR HOSTNAME]/<path>"
If testing on your local machine, you can use something like ngrok to create a public endpoint.
"https://[YOUR HOSTNAME]/<path>"
handshakePortYInputThe container port that the input binding listens on when receiving events on the webhook"9000"
scopeYInputThe identifier of the resource to which the event subscription needs to be created or updated. See the scope section for more details"/subscriptions/{subscriptionId}/"
eventSubscriptionNameNInputThe name of the event subscription. Event subscription names must be between 3 and 64 characters long and should use alphanumeric letters only"name"
directionNInput/OutputThe direction of the binding"input", "output", "input, output"

Scope

Scope is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, a resource group, a top-level resource belonging to a resource provider namespace, or an Event Grid topic. For example:

  • /subscriptions/{subscriptionId}/ for a subscription
  • /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName} for a resource group
  • /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName} for a resource
  • /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName} for an Event Grid topic

Values in braces {} should be replaced with actual values.

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create: publishes a message on the Event Grid topic
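
As an illustrative sketch, the output binding can be invoked over HTTP as shown below; the data payload is arbitrary and the placeholders follow the convention used elsewhere in this document:

curl -d '{ "operation": "create", "data": { "message": "Hello from Dapr" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>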

Receiving events

You can use the Event Grid binding to receive events from a variety of sources and actions. Learn more about all of the available event sources and handlers that work with Event Grid.

In the following table, you can find the list of Dapr components that can raise events.

Event sourcesDapr components
Azure Blob StorageAzure Blob Storage binding
Azure Blob Storage state store
Azure Cache for RedisRedis binding
Redis pub/sub
Azure Event HubsAzure Event Hubs pub/sub
Azure Event Hubs binding
Azure IoT HubAzure Event Hubs pub/sub
Azure Event Hubs binding
Azure Service BusAzure Service Bus binding
Azure Service Bus pub/sub topics and queues
Azure SignalR ServiceSignalR binding

Microsoft Entra ID credentials

The Azure Event Grid binding requires a Microsoft Entra ID application and service principal for two reasons:

  • Creating an event subscription when Dapr is started (and updating it if the Dapr configuration changes)
  • Authenticating messages delivered by Event Grid to your application.

Requirements:

For the first purpose, you will need to create an Azure Service Principal. After creating it, take note of the Microsoft Entra ID application’s clientID (a UUID), and run the following script with the Azure CLI:

# Set the client ID of the app you created
CLIENT_ID="..."
# Scope of the resource, usually in the format:
# `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}`
SCOPE="..."

# First ensure that Azure Resource Manager provider is registered for Event Grid
az provider register --namespace "Microsoft.EventGrid"
az provider show --namespace "Microsoft.EventGrid" --query "registrationState"
# Give the SP needed permissions so that it can create event subscriptions to Event Grid
az role assignment create --assignee "$CLIENT_ID" --role "EventGrid EventSubscription Contributor" --scopes "$SCOPE"

For the second purpose, first download a script:

curl -LO "https://raw.githubusercontent.com/dapr/components-contrib/master/.github/infrastructure/conformance/azure/setup-eventgrid-sp.ps1"

Then, using PowerShell (pwsh), run:

# Set the client ID of the app you created
$clientId = "..."

# Authenticate with the Microsoft Graph
# You may need to add the -TenantId flag to the next command if needed
Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
./setup-eventgrid-sp.ps1 $clientId

Note: if your directory does not have a Service Principal for the application “Microsoft.EventGrid”, you may need to run the command Connect-MgGraph and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant’s admin to sign in and run this PowerShell command: New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7" (the UUID is a constant)

Testing locally

  • Install ngrok
  • Run locally using a custom port, for example 9000, for handshakes
# Using port 9000 as an example
ngrok http --host-header=localhost 9000
  • Configure the ngrok HTTPS endpoint and the custom port in the input binding metadata
  • Run Dapr
# Using default ports for .NET core web api and Dapr as an example
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run

Testing on Kubernetes

Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren’t accepted. In order to enable traffic from the public internet to your app’s Dapr sidecar you need an ingress controller enabled with Dapr. There’s a good article on this topic: Kubernetes NGINX ingress controller with Dapr.

To get started, first create a dapr-annotations.yaml file for Dapr annotations:

controller:
  podAnnotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nginx-ingress"
    dapr.io/app-port: "80"

Then install the NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx -f ./dapr-annotations.yaml -n default
# Get the public IP for the ingress controller
kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}'

If deploying to Azure Kubernetes Service, you can follow the official Microsoft documentation for the rest of the steps:

  • Add an A record to your DNS zone
  • Install cert-manager
  • Create a CA cluster issuer

The final step for enabling communication between Event Grid and Dapr is to define the HTTP and custom ports for your app’s service, plus an ingress, in Kubernetes. This example uses a .NET Core web API, the Dapr default ports, and custom port 9000 for handshakes.

# dotnetwebapi.yaml
kind: Service
apiVersion: v1
metadata:
  name: dotnetwebapi
  labels:
    app: dotnetwebapi
spec:
  selector:
    app: dotnetwebapi
  ports:
    - name: webapi
      protocol: TCP
      port: 80
      targetPort: 80
    - name: dapr-eventgrid
      protocol: TCP
      port: 9000
      targetPort: 9000
  type: ClusterIP

---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: eventgrid-input-rule
    annotations:
      kubernetes.io/ingress.class: nginx
      cert-manager.io/cluster-issuer: letsencrypt
  spec:
    tls:
      - hosts:
        - dapr.<your custom domain>
        secretName: dapr-tls
    rules:
      - host: dapr.<your custom domain>
        http:
          paths:
            - path: /api/events
              backend:
                serviceName: dotnetwebapi
                servicePort: 9000

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnetwebapi
  labels:
    app: dotnetwebapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dotnetwebapi
  template:
    metadata:
      labels:
        app: dotnetwebapi
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "dotnetwebapi"
        dapr.io/app-port: "5000"
    spec:
      containers:
      - name: webapi
        image: <your container image>
        ports:
        - containerPort: 5000
        imagePullPolicy: Always

Deploy the binding and app (including ingress) to Kubernetes

# Deploy Dapr components
kubectl apply -f eventgrid.yaml
# Deploy your app and Nginx ingress
kubectl apply -f dotnetwebapi.yaml

Note: This manifest deploys everything to Kubernetes’ default namespace.

Troubleshooting possible issues with Nginx controller

After the initial deployment, the “Daprized” Nginx controller can malfunction. To check the logs and fix the issue (if it exists), follow these steps.

$ kubectl get pods -l app=nginx-ingress

NAME                                                   READY   STATUS    RESTARTS   AGE
nginx-nginx-ingress-controller-649df94867-fp6mg        2/2     Running   0          51m
nginx-nginx-ingress-default-backend-6d96c457f6-4nbj5   1/1     Running   0          55m

$ kubectl logs nginx-nginx-ingress-controller-649df94867-fp6mg nginx-ingress-controller

# If you see 503s logged from calls to webhook endpoint '/api/events' restart the pod
# .."OPTIONS /api/events HTTP/1.1" 503..

$ kubectl delete pod nginx-nginx-ingress-controller-649df94867-fp6mg

# Check the logs again - it should start returning 200
# .."OPTIONS /api/events HTTP/1.1" 200..

2.16 - Azure Event Hubs binding spec

Detailed documentation on the Azure Event Hubs binding component

Component format

To set up an Azure Event Hubs binding, create a component of type bindings.azure.eventhubs. See this guide on how to create and apply a binding configuration.

See this for instructions on how to set up an Event Hub.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.eventhubs
  version: v1
  metadata:
    # Hub name ("topic")
    - name: eventHub
      value: "mytopic"
    - name: consumerGroup
      value: "myapp"
    # Either connectionString or eventHubNamespace is required
    # Use connectionString when *not* using Microsoft Entra ID
    - name: connectionString
      value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
    # Use eventHubNamespace when using Microsoft Entra ID
    - name: eventHubNamespace
      value: "namespace"
    - name: enableEntityManagement
      value: "false"
    - name: enableInOrderMessageDelivery
      value: "false"
    # The following four properties are needed only if enableEntityManagement is set to true
    - name: resourceGroupName
      value: "test-rg"
    - name: subscriptionID
      value: "value of Azure subscription ID"
    - name: partitionCount
      value: "1"
    - name: messageRetentionInDays
      value: "3"
    # Checkpoint store attributes
    - name: storageAccountName
      value: "myeventhubstorage"
    - name: storageAccountKey
      value: "112233445566778899"
    - name: storageContainerName
      value: "myeventhubstoragecontainer"
    # Alternative to passing storageAccountKey
    - name: storageConnectionString
      value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"
    # Optional metadata
    - name: getAllMessageProperties
      value: "true"
    - name: direction
      value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
eventHubY*Input/OutputThe name of the Event Hubs hub (“topic”). Required if using Microsoft Entra ID authentication or if the connection string doesn’t contain an EntityPath valuemytopic
connectionStringY*Input/OutputConnection string for the Event Hub or the Event Hub namespace.
* Mutually exclusive with eventHubNamespace field.
* Required when not using Microsoft Entra ID Authentication
"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}" or "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"
eventHubNamespaceY*Input/OutputThe Event Hub Namespace name.
* Mutually exclusive with connectionString field.
* Required when using Microsoft Entra ID Authentication
"namespace"
enableEntityManagementNInput/OutputBoolean value to allow management of the EventHub namespace and storage account. Default: false"true", "false"
enableInOrderMessageDeliveryNInput/OutputBoolean value to allow messages to be delivered in the order in which they were posted. This assumes partitionKey is set when publishing or posting to ensure ordering across partitions. Default: false"true", "false"
resourceGroupNameNInput/OutputName of the resource group the Event Hub namespace is part of. Required when entity management is enabled"test-rg"
subscriptionIDNInput/OutputAzure subscription ID value. Required when entity management is enabled"azure subscription id"
partitionCountNInput/OutputNumber of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: "1""2"
messageRetentionInDaysNInput/OutputNumber of days to retain messages for in the newly created Event Hub namespace. Used only when entity management is enabled. Default: "1""90"
consumerGroupYInputThe name of the Event Hubs Consumer Group to listen on"group1"
storageAccountNameYInputStorage account name to use for the checkpoint store."myeventhubstorage"
storageAccountKeyY*InputStorage account key for the checkpoint store account.
* When using Microsoft Entra ID, it’s possible to omit this if the service principal has access to the storage account too.
"112233445566778899"
storageConnectionStringY*InputConnection string for the checkpoint store, alternative to specifying storageAccountKey"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"
storageContainerNameYInputStorage container name for the storage account name."myeventhubstoragecontainer"
getAllMessagePropertiesNInputWhen set to true, retrieves all user/app/custom properties from the Event Hub message and forwards them in the returned event metadata. Default setting is "false"."true", "false"
directionNInput/OutputThe direction of the binding."input", "output", "input, output"

Microsoft Entra ID authentication

The Azure Event Hubs binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Binding support

This component supports output binding with the following operations:

  • create: publishes a new message to Azure Event Hubs
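
For illustration, here is a minimal, hedged sketch of invoking the create operation over the Dapr HTTP API. The binding name my-eventhubs-binding and the default Dapr HTTP port 3500 are placeholders for your own setup:

# Placeholder binding name and port; replace with your component name and Dapr HTTP port
curl -X POST http://localhost:3500/v1.0/bindings/my-eventhubs-binding \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "operation": "create"
      }'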

Input Binding to Azure IoT Hub Events

Azure IoT Hub provides an endpoint that is compatible with Event Hubs, so Dapr apps can create input bindings to read Azure IoT Hub events using the Event Hubs bindings component.

The device-to-cloud events created by Azure IoT Hub devices will contain additional IoT Hub System Properties, and the Azure Event Hubs binding for Dapr will return the following as part of the response metadata:

System Property NameDescription & Routing Query Keyword
iothub-connection-auth-generation-idThe connectionDeviceGenerationId of the device that sent the message. See IoT Hub device identity properties.
iothub-connection-auth-methodThe connectionAuthMethod used to authenticate the device that sent the message.
iothub-connection-device-idThe deviceId of the device that sent the message. See IoT Hub device identity properties.
iothub-connection-module-idThe moduleId of the device that sent the message. See IoT Hub device identity properties.
iothub-enqueuedtimeThe enqueuedTime in RFC3339 format that the device-to-cloud message was received by IoT Hub.
message-idThe user-settable AMQP messageId.

For example, the headers of an HTTP Read() response would contain:

{
  'user-agent': 'fasthttp',
  'host': '127.0.0.1:3000',
  'content-type': 'application/json',
  'content-length': '120',
  'iothub-connection-device-id': 'my-test-device',
  'iothub-connection-auth-generation-id': '637618061680407492',
  'iothub-connection-auth-method': '{"scope":"module","type":"sas","issuer":"iothub","acceptingIpFilterRule":null}',
  'iothub-connection-module-id': 'my-test-module-a',
  'iothub-enqueuedtime': '2021-07-13T22:08:09Z',
  'message-id': 'my-custom-message-id',
  'x-opt-sequence-number': '35',
  'x-opt-enqueued-time': '2021-07-13T22:08:09Z',
  'x-opt-offset': '21560',
  'traceparent': '00-4655608164bc48b985b42d39865f3834-ed6cf3697c86e7bd-01'
}

2.17 - Azure OpenAI binding spec

Detailed documentation on the Azure OpenAI binding component

Component format

To set up an Azure OpenAI binding, create a component of type bindings.azure.openai. See this guide on how to create and apply a binding configuration. See the Azure OpenAI Service documentation for more information.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.openai
  version: v1
  metadata:
  - name: apiKey # Required
    value: "1234567890abcdef"
  - name: endpoint # Required
    value: "https://myopenai.openai.azure.com"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
endpointYOutputAzure OpenAI service endpoint URL."https://myopenai.openai.azure.com"
apiKeyY*OutputThe access key of the Azure OpenAI service. Only required when not using Microsoft Entra ID authentication."1234567890abcdef"
azureTenantIdY*InputThe tenant ID of the Azure OpenAI resource. Only required when apiKey is not provided."tenantID"
azureClientIdY*InputThe client ID that should be used by the binding to create or update the Azure OpenAI Subscription and to authenticate incoming messages. Only required when apiKey is not provided."clientId"
azureClientSecretY*InputThe client secret that should be used by the binding to create or update the Azure OpenAI Subscription and to authenticate incoming messages. Only required when apiKey is not provided."clientSecret"

Microsoft Entra ID authentication

The Azure OpenAI binding component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Example Configuration

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.openai
  version: v1
  metadata:
  - name: endpoint
    value: "https://myopenai.openai.azure.com"
  - name: azureTenantId
    value: "***"
  - name: azureClientId
    value: "***"
  - name: azureClientSecret
    value: "***"

Binding support

This component supports output binding with the following operations:

Completion API

To call the completion API with a prompt, invoke the Azure OpenAI binding with a POST method and the following JSON body:

{
  "operation": "completion",
  "data": {
    "deploymentId": "my-model",
    "prompt": "A dog is",
    "maxTokens":5
    }
}

The data parameters are:

  • deploymentId - string that specifies the model deployment ID to use.
  • prompt - string that specifies the prompt to generate completions for.
  • maxTokens - (optional) defines the max number of tokens to generate. Defaults to 16 for completion API.
  • temperature - (optional) defines the sampling temperature between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1.0 for completion API.
  • topP - (optional) defines nucleus sampling, an alternative to sampling with temperature. Defaults to 1.0 for completion API.
  • n - (optional) defines the number of completions to generate. Defaults to 1 for completion API.
  • presencePenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.0 for completion API.
  • frequencyPenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.0 for completion API.

Read more about the importance and usage of these parameters in the Azure OpenAI API documentation.

Examples

curl -d '{ "data": {"deploymentId: "my-model" , "prompt": "A dog is ", "maxTokens":15}, "operation": "completion" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the following JSON:

[
  {
    "finish_reason": "length",
    "index": 0,
    "text": " a pig in a dress.\n\nSun, Oct 20, 2013"
  },
  {
    "finish_reason": "length",
    "index": 1,
    "text": " the only thing on earth that loves you\n\nmore than he loves himself.\"\n\n"
  }
]

Chat Completion API

To perform a chat-completion operation, invoke the Azure OpenAI binding with a POST method and the following JSON body:

{
    "operation": "chat-completion",
    "data": {
        "deploymentId": "my-model",
        "messages": [
            {
                "role": "system",
                "message": "You are a bot that gives really short replies"
            },
            {
                "role": "user",
                "message": "Tell me a joke"
            }
        ],
        "n": 2,
        "maxTokens": 30,
        "temperature": 1.2
    }
}

The data parameters are:

  • deploymentId - string that specifies the model deployment ID to use.
  • messages - array of messages that will be used to generate chat completions. Each message is of the form:
    • role - string that specifies the role of the message. Can be either user, system or assistant.
    • message - string that specifies the conversation message for the role.
  • maxTokens - (optional) defines the max number of tokens to generate. Defaults to 16 for the chat completion API.
  • temperature - (optional) defines the sampling temperature between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. Defaults to 1.0 for the chat completion API.
  • topP - (optional) defines nucleus sampling, an alternative to sampling with temperature. Defaults to 1.0 for the chat completion API.
  • n - (optional) defines the number of completions to generate. Defaults to 1 for the chat completion API.
  • presencePenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Defaults to 0.0 for the chat completion API.
  • frequencyPenalty - (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.0 for the chat completion API.

Example

curl -d '{
  "data": {
      "deploymentId": "my-model",
      "messages": [
          {
              "role": "system",
              "message": "You are a bot that gives really short replies"
          },
          {
              "role": "user",
              "message": "Tell me a joke"
          }
      ],
      "n": 2,
      "maxTokens": 30,
      "temperature": 1.2
  },
  "operation": "chat-completion"
}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the following JSON:

[
  {
    "finish_reason": "stop",
    "index": 0,
    "message": {
      "content": "Why was the math book sad? Because it had too many problems.",
      "role": "assistant"
    }
  },
  {
    "finish_reason": "stop",
    "index": 1,
    "message": {
      "content": "Why did the tomato turn red? Because it saw the salad dressing!",
      "role": "assistant"
    }
  }
]

Get Embedding API

The get-embedding operation returns a vector representation of a given input that can be easily consumed by machine learning models and other algorithms. To perform a get-embedding operation, invoke the Azure OpenAI binding with a POST method and the following JSON body:

{
    "operation": "get-embedding",
    "data": {
        "deploymentId": "my-model",
        "message": "The capital of France is Paris."
    }
}

The data parameters are:

  • deploymentId - string that specifies the model deployment ID to use.
  • message - string that specifies the text to embed.

Example

curl -d '{
  "data": {
      "deploymentId": "embeddings",
      "message": "The capital of France is Paris."
  },
  "operation": "get-embedding"
}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the following JSON:

[0.018574921,-0.00023652936,-0.0057790717,.... (1536 floats total for ada)]

Learn more about the Azure OpenAI output binding

Watch the following Community Call presentation to learn more about the Azure OpenAI output binding.

2.18 - Azure Service Bus Queues binding spec

Detailed documentation on the Azure Service Bus Queues binding component

Component format

To set up the Azure Service Bus Queues binding, create a component of type bindings.azure.servicebusqueues. See this guide on how to create and apply a binding configuration.

Connection String Authentication

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: connectionString # Required when not using Azure Authentication.
    value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
  - name: queueName
    value: "queue1"
  # - name: timeoutInSec # Optional
  #   value: "60"
  # - name: handlerTimeoutInSec # Optional
  #   value: "60"
  # - name: disableEntityManagement # Optional
  #   value: "false"
  # - name: maxDeliveryCount # Optional
  #   value: "3"
  # - name: lockDurationInSec # Optional
  #   value: "60"
  # - name: lockRenewalInSec # Optional
  #   value: "20"
  # - name: maxActiveMessages # Optional
  #   value: "10000"
  # - name: maxConcurrentHandlers # Optional
  #   value: "10"
  # - name: defaultMessageTimeToLiveInSec # Optional
  #   value: "10"
  # - name: autoDeleteOnIdleInSec # Optional
  #   value: "3600"
  # - name: minConnectionRecoveryInSec # Optional
  #   value: "2"
  # - name: maxConnectionRecoveryInSec # Optional
  #   value: "300"
  # - name: maxRetriableErrorsPerSec # Optional
  #   value: "10"
  # - name: publishMaxRetries # Optional
  #   value: "5"
  # - name: publishInitialRetryIntervalInMs # Optional
  #   value: "500"
  # - name: direction
  #   value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
connectionStringYInput/OutputThe Service Bus connection string. Required unless using Microsoft Entra ID authentication."Endpoint=sb://************"
queueNameYInput/OutputThe Service Bus queue name. Queue names are case-insensitive and will always be forced to lowercase."queuename"
timeoutInSecNInput/OutputTimeout for all invocations to the Azure Service Bus endpoint, in seconds. Note that this option impacts network calls and is unrelated to the TTL applied to messages. Default: "60""60"
namespaceNameNInput/OutputParameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication."namespace.servicebus.windows.net"
disableEntityManagementNInput/OutputWhen set to true, queues and subscriptions do not get created automatically. Default: "false""true", "false"
lockDurationInSecNInput/OutputDefines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server."30"
autoDeleteOnIdleInSecNInput/OutputTime in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: "0" (disabled)"3600"
defaultMessageTimeToLiveInSecNInput/OutputDefault message time to live, in seconds. Used during subscription creation only."10"
maxDeliveryCountNInput/OutputDefines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server."10"
minConnectionRecoveryInSecNInput/OutputMinimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: "2""5"
maxConnectionRecoveryInSecNInput/OutputMaximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: "300" (5 minutes)"600"
handlerTimeoutInSecNInputTimeout for invoking the app’s handler. Default: "0" (no timeout)"30"
minConnectionRecoveryInSecNInputMinimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: "2""5"
maxConnectionRecoveryInSecNInputMaximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the binding waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: "300" (5 minutes)"600"
lockRenewalInSecNInputDefines the frequency at which buffered message locks will be renewed. Default: "20"."20"
maxActiveMessagesNInputDefines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: "1""2000"
maxConcurrentHandlersNInputDefines the maximum number of concurrent message handlers; set to 0 for unlimited. Default: "1""10"
maxRetriableErrorsPerSecNInputMaximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: "10""10"
publishMaxRetriesNOutputThe max number of retries for when Azure Service Bus responds with “too busy” in order to throttle messages. Default: "5""5"
publishInitialRetryIntervalInMsNOutputTime in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: "500""500"
directionNInput/OutputThe direction of the binding"input", "output", "input, output"

Microsoft Entra ID authentication

The Azure Service Bus Queues binding component supports authentication using all Microsoft Entra ID mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

Example Configuration

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: azureTenantId
    value: "***"
  - name: azureClientId
    value: "***"
  - name: azureClientSecret
    value: "***"
  - name: namespaceName
    # Required when using Azure Authentication.
    # Must be a fully-qualified domain name
    value: "servicebusnamespace.servicebus.windows.net"
  - name: queueName
    value: queue1
  - name: ttlInSeconds
    value: 60

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create: publishes a message to the specified queue

Message metadata

Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message through the invoke binding call with the create operation.

Sending a message with metadata

To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented here.

  • metadata.MessageId
  • metadata.CorrelationId
  • metadata.SessionId
  • metadata.Label
  • metadata.ReplyTo
  • metadata.PartitionKey
  • metadata.To
  • metadata.ContentType
  • metadata.ScheduledEnqueueTimeUtc
  • metadata.ReplyToSessionId
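
For example, here is a hedged sketch of attaching a couple of these fields when publishing through the myServiceBusQueue binding used in the examples below. It assumes the keys are passed in the request body's metadata section without the metadata. prefix, mirroring the ttlInSeconds and ScheduledEnqueueTimeUtc examples later in this section:

# Assumes body metadata keys without the "metadata." prefix, as in the TTL example below
curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "MessageId": "order-123",
          "CorrelationId": "purchase-flow-1"
        },
        "operation": "create"
      }'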

Receiving a message with metadata

When Dapr calls your application, it attaches Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata. In addition to the settable metadata listed above, you can also access the following read-only message metadata.

  • metadata.DeliveryCount
  • metadata.LockedUntilUtc
  • metadata.LockToken
  • metadata.EnqueuedTimeUtc
  • metadata.SequenceNumber

To find out more details on the purpose of any of these metadata properties refer to the official Azure Service Bus documentation.

In addition, all entries of ApplicationProperties from the original Azure Service Bus message are appended as metadata.<application property's name>.

Specifying a TTL per message

Time to live can be defined on a per-queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at the queue level.

To set time to live at message level use the metadata section in the request body during the binding invocation: the field name is ttlInSeconds.

curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        },
        "operation": "create"
      }'

Schedule a message

A message can be scheduled for delayed processing.

To schedule a message, use the metadata section in the request body during the binding invocation: the field name is ScheduledEnqueueTimeUtc.

The supported timestamp formats are RFC1123 and RFC3339.

curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ScheduledEnqueueTimeUtc": "Tue, 02 Jan 2024 15:04:05 GMT"
        },
        "operation": "create"
      }'

2.19 - Azure SignalR binding spec

Detailed documentation on the Azure SignalR binding component

Component format

To set up the Azure SignalR binding, create a component of type bindings.azure.signalr. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.signalr
  version: v1
  metadata:
  - name: connectionString
    value: "Endpoint=https://<your-azure-signalr>.service.signalr.net;AccessKey=<your-access-key>;Version=1.0;"
  - name: hub  # Optional
    value: "<hub name>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
connectionStringYOutputThe Azure SignalR connection string"Endpoint=https://<your-azure-signalr>.service.signalr.net;AccessKey=<your-access-key>;Version=1.0;"
hubNOutputDefines the hub in which the message will be sent. The hub can be dynamically defined as a metadata value when publishing to an output binding (key is “hub”)"myhub"
endpointNOutputEndpoint of Azure SignalR; required if not included in the connectionString or if using Microsoft Entra ID"https://<your-azure-signalr>.service.signalr.net"
accessKeyNOutputAccess key"your-access-key"

Microsoft Entra ID authentication

The Azure SignalR binding component supports authentication using all Microsoft Entra ID mechanisms. See the docs for authenticating to Azure to learn more about the relevant component metadata fields based on your choice of Microsoft Entra ID authentication mechanism.

You have two options to authenticate this component with Microsoft Entra ID:

  • Pass individual metadata keys:
    • endpoint for the endpoint
    • If needed: azureClientId, azureTenantId and azureClientSecret
  • Pass a connection string with AuthType=aad specified:
    • System-assigned managed identity: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;Version=1.0;
    • User-assigned managed identity: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;ClientId=<clientid>;Version=1.0;
    • Microsoft Entra ID application: Endpoint=https://<servicename>.service.signalr.net;AuthType=aad;ClientId=<clientid>;ClientSecret=<clientsecret>;TenantId=<tenantid>;Version=1.0;
      Note that you cannot use a connection string if your application’s ClientSecret contains a ; character.

Binding support

This component supports output binding with the following operations:

  • create

Additional information

By default the Azure SignalR output binding will broadcast messages to all connected users. To narrow the audience there are two options, both configurable in the Metadata property of the message:

  • group: Sends the message to a specific Azure SignalR group
  • user: Sends the message to a specific Azure SignalR user

Applications publishing to an Azure SignalR output binding should send a message with the following contract:

{
    "data": {
        "Target": "<enter message name>",
        "Arguments": [
            {
                "sender": "dapr",
                "text": "Message from dapr output binding"
            }
        ]
    },
    "metadata": {
        "group": "chat123"
    },
    "operation": "create"
}
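
As a hedged sketch, that payload could be published through the Dapr HTTP API as follows; the binding name mySignalRBinding and the Target value chatMessage are placeholders for your own component name and message name:

# Placeholder binding name and Target value; replace with your own
curl -X POST http://localhost:3500/v1.0/bindings/mySignalRBinding \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "Target": "chatMessage",
          "Arguments": [
            {
              "sender": "dapr",
              "text": "Message from dapr output binding"
            }
          ]
        },
        "metadata": {
          "group": "chat123"
        },
        "operation": "create"
      }'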

For more information on integrating Azure SignalR into a solution, check the documentation

2.20 - Azure Storage Queues binding spec

Detailed documentation on the Azure Storage Queues binding component

Component format

To set up the Azure Storage Queues binding, create a component of type bindings.azure.storagequeues. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.azure.storagequeues
  version: v1
  metadata:
  - name: accountName
    value: "account1"
  - name: accountKey
    value: "***********"
  - name: queueName
    value: "myqueue"
# - name: pollingInterval
#   value: "30s"
# - name: ttlInSeconds
#   value: "60"
# - name: decodeBase64
#   value: "false"
# - name: encodeBase64
#   value: "false"
# - name: endpoint
#   value: "http://127.0.0.1:10001"
# - name: visibilityTimeout
#   value: "30s"
# - name: initialVisibilityDelay
#   value: "30s"
# - name: direction 
#   value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
accountNameYInput/OutputThe name of the Azure Storage account"account1"
accountKeyY*Input/OutputThe access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication."access-key"
queueNameYInput/OutputThe name of the Azure Storage queue"myqueue"
pollingIntervalNOutputSet the interval to poll Azure Storage Queues for new messages, as a Go duration value. Default: "10s""30s"
ttlInSecondsNOutputParameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. See also"60"
decodeBase64NInputConfiguration to decode base64 content received from the Storage Queue into a string. Defaults to falsetrue, false
encodeBase64NOutputIf enabled base64 encodes the data payload before uploading to Azure storage queues. Default false.true, false
endpointNInput/OutputOptional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port."http://127.0.0.1:10001" or "https://accountName.queue.example.com"
initialVisibilityDelayNInputSets a delay before a message becomes visible in the queue after being added. It can also be specified per message by setting the initialVisibilityDelay property in the invocation request’s metadata. Defaults to 0 seconds."30s"
visibilityTimeoutNInputAllows setting a custom queue visibility timeout to avoid immediate retrying of recently failed messages. Defaults to 30 seconds."100s"
directionNInput/OutputDirection of the binding."input", "output", "input, output"

Microsoft Entra ID authentication

The Azure Storage Queue binding component supports authentication using all Microsoft Entra ID mechanisms. See the docs for authenticating to Azure to learn more about the relevant component metadata fields based on your choice of Microsoft Entra ID authentication mechanism.

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create

Specifying a TTL per message

Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.

To set time to live at message level use the metadata section in the request body during the binding invocation.

The field name is ttlInSeconds.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        },
        "operation": "create"
      }'

Specifying an initial visibility delay per message

An initial visibility delay can be defined on queue level or at the message level. The value defined at message level overwrites any value set at a queue level.

To set an initial visibility delay value at the message level, use the metadata section in the request body during the binding invocation.

The field name is initialVisibilityDelay.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "initialVisbilityDelay": "30"
        },
        "operation": "create"
      }'

2.21 - Cloudflare Queues bindings spec

Detailed documentation on the Cloudflare Queues component

Component format

This output binding for Dapr allows interacting with Cloudflare Queues to publish new messages. It is currently not possible to consume messages from a Queue using Dapr.

To set up a Cloudflare Queues binding, create a component of type bindings.cloudflare.queues. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.cloudflare.queues
  version: v1
  # Increase the initTimeout if Dapr is managing the Worker for you
  initTimeout: "120s"
  metadata:
    # Name of the existing Cloudflare Queue (required)
    - name: queueName
      value: ""
    # Name of the Worker (required)
    - name: workerName
      value: ""
    # PEM-encoded private Ed25519 key (required)
    - name: key
      value: |
        -----BEGIN PRIVATE KEY-----
        MC4CAQ...
        -----END PRIVATE KEY-----
    # Cloudflare account ID (required to have Dapr manage the Worker)
    - name: cfAccountID
      value: ""
    # API token for Cloudflare (required to have Dapr manage the Worker)
    - name: cfAPIToken
      value: ""
    # URL of the Worker (required if the Worker has been pre-created outside of Dapr)
    - name: workerUrl
      value: ""

Spec metadata fields

FieldRequiredBinding supportDetailsExample
queueNameYOutputName of the existing Cloudflare Queue"mydaprqueue"
keyYOutputEd25519 private key, PEM-encodedSee example above
cfAccountIDY/NOutputCloudflare account ID. Required to have Dapr manage the Worker."456789abcdef8b5588f3d134f74acdef"
cfAPITokenY/NOutputAPI token for Cloudflare. Required to have Dapr manage the Worker."secret-key"
workerUrlY/NOutputURL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr."https://mydaprqueue.mydomain.workers.dev"

When you configure Dapr to create your Worker for you, you may need to set a longer value for the initTimeout property of the component, to allow enough time for the Worker script to be deployed. For example: initTimeout: "120s"

Binding support

This component supports output binding with the following operations:

  • publish (alias: create): Publish a message to the Queue.
    The data passed to the binding is used as-is for the body of the message published to the Queue.
    This operation does not accept any metadata property.
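
For illustration, a hedged sketch of a publish invocation over the Dapr HTTP API, assuming a component named mycfqueue (the request data is delivered to the Queue as-is):

# Placeholder component name; replace with the name of your Cloudflare Queues binding
curl -X POST http://localhost:3500/v1.0/bindings/mycfqueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": "Hello from Dapr",
        "operation": "publish"
      }'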

Create a Cloudflare Queue

To use this component, you must have a Cloudflare Queue created in your Cloudflare account.

You can create a new Queue in one of two ways:

  • Using the Cloudflare dashboard

  • Using the Wrangler CLI:

    # Authenticate if needed with `npx wrangler login` first
    npx wrangler queues create <NAME>
    # For example: `npx wrangler queues create myqueue`
    

Configuring the Worker

Because Cloudflare Queues can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Queue.

Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on workerd.

If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:

  • workerName: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the “workers.dev” domain configured for your Cloudflare account is mydomain.workers.dev and you set workerName to mydaprqueue, the Worker that Dapr deploys will be available at https://mydaprqueue.mydomain.workers.dev.
  • cfAccountID: ID of your Cloudflare account. You can find this in your browser’s URL bar after logging into the Cloudflare dashboard, with the ID being the hex string right after dash.cloudflare.com. For example, if the URL is https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef, the value for cfAccountID is 456789abcdef8b5588f3d134f74acdef.
  • cfAPIToken: API token with permission to create and edit Workers. You can create it from the “API Tokens” page in the “My Profile” section in the Cloudflare dashboard:
    1. Click on “Create token”.
    2. Select the “Edit Cloudflare Workers” template.
    3. Follow the on-screen instructions to generate a new API token.

When Dapr is configured to manage the Worker for you, the Dapr runtime checks at startup that the Worker exists and is up to date. If the Worker doesn’t exist, or if it’s using an outdated version, Dapr creates or upgrades it for you automatically.

If you’d rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.

To manually provision a Worker script, you will need to have Node.js installed on your local machine.

  1. Create a new folder where you’ll place the source code of the Worker, for example: daprworker.
  2. If you haven’t already, authenticate with Wrangler (the Cloudflare Workers CLI) using: npx wrangler login.
  3. Inside the newly-created folder, create a new wrangler.toml file with the contents below, filling in the missing information as appropriate:
# Name of your Worker, for example "mydaprqueue"
name = ""

# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"

[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprqueue".
TOKEN_AUDIENCE = ""

# Set the next two values to the name of your Queue, for example "myqueue".
# Note that they will both be set to the same value.
[[queues.producers]]
queue = ""
binding = ""

Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the public part of the key when deploying a Worker!

  4. Copy the (pre-compiled and minified) code of the Worker into the worker.js file. You can do that with this command:
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-1.15"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
  5. Deploy the Worker using Wrangler:
npx wrangler publish

Once your Worker has been deployed, you will need to initialize the component with these two metadata options:

  • workerName: Name of the Worker script. This is the value you set in the name property in the wrangler.toml file.
  • workerUrl: URL of the deployed Worker. The npx wrangler command will show the full URL to you, for example https://mydaprqueue.mydomain.workers.dev.

Generate an Ed25519 key pair

All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Cloudflare Queue). These include industry-standard measures such as:

  • All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
  • All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
  • The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).

To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.

Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you’re using an older version of OpenSSL.

Note for Mac users: on macOS, the “openssl” binary that is shipped by Apple is actually based on LibreSSL, which as of writing doesn’t support Ed25519 keys. If you’re using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using brew install openssl@3 then replacing openssl in the commands below with $(brew --prefix)/opt/openssl@3/bin/openssl.

You can generate a new Ed25519 key pair with OpenSSL using:

openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem

On macOS, using openssl@3 from Homebrew:

$(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
$(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem

If you don’t have the step CLI already, install it following the official instructions.

Next, you can generate a new Ed25519 key pair with the step CLI using:

step crypto keypair \
  public.pem private.pem \
  --kty OKP --curve Ed25519 \
  --insecure --no-password

Regardless of how you generated your key pair, with the instructions above you’ll have two files:

  • private.pem contains the private part of the key; use the contents of this file for the key property of the component’s metadata.
  • public.pem contains the public part of the key, which you’ll need only if you’re deploying a Worker manually (as per the instructions in the previous section).

2.22 - commercetools GraphQL binding spec

Detailed documentation on the commercetools GraphQL binding component

Component format

To set up the commercetools GraphQL binding, create a component of type bindings.commercetools. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.commercetools
  version: v1
  metadata:
  - name: region # required.
    value: "region"
  - name: provider # required.
    value: "gcp"
  - name: projectKey # required.
    value: "<project-key>"
  - name: clientID # required.
    value: "*****************"
  - name: clientSecret # required.
    value: "*****************"
  - name: scopes # required.
    value: "<project-scopes>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
regionYOutputThe region of the commercetools project"europe-west1"
providerYOutputThe cloud provider, either gcp or aws"gcp", "aws"
projectKeyYOutputThe commercetools project key
clientIDYOutputThe commercetools client ID for the project
clientSecretYOutputThe commercetools client secret for the project
scopesYOutputThe commercetools scopes for the project"manage_project:project-key"

For more information see commercetools - Creating an API Client and commercetools - Regions.

Binding support

This component supports output binding with the following operations:

  • create

2.23 - Cron binding spec

Detailed documentation on the cron binding component

Component format

To set up the cron binding, create a component of type bindings.cron. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.cron
  version: v1
  metadata:
  - name: schedule
    value: "@every 15m" # valid cron schedule
  - name: direction
    value: "input"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
scheduleYInputThe valid cron schedule to use. See this for more details"@every 15m"
directionNInputThe direction of the binding"input"

Schedule Format

The Dapr cron binding supports the following formats:

CharacterDescriptorAcceptable values
1Second0 to 59, or *
2Minute0 to 59, or *
3Hour0 to 23, or * (UTC)
4Day of the month1 to 31, or *
5Month1 to 12, or *
6Day of the week0 to 7 (where 0 and 7 represent Sunday), or *

For example:

  • 30 * * * * * - every 30 seconds
  • 0 */15 * * * * - every 15 minutes
  • 0 30 3-6,20-23 * * * - every hour on the half hour in the range 3-6am, 8-11pm
  • CRON_TZ=America/New_York 0 30 04 * * * - every day at 4:30am New York time

You can learn more about cron and the supported formats here

For ease of use, the Dapr cron binding also supports a few shortcuts:

  • @every 15s where s is seconds, m minutes, and h hours
  • @daily or @hourly which runs at that period from the time the binding is initialized

Listen to the cron binding

After setting up the cron binding, all you need to do is listen on an endpoint that matches the name of your component. Assuming the component name [NAME] is scheduled, the Dapr sidecar makes an HTTP POST request to that endpoint on the configured schedule. The example below shows how a simple Node.js Express application can receive calls on the /scheduled endpoint and write a message to the console.

app.post('/scheduled', async function(req, res){
    console.log("scheduled endpoint called", req.body)
    res.status(200).send()
});

When running this code, note that the /scheduled endpoint is called every fifteen minutes by the Dapr sidecar.

Binding support

This component supports the input binding interface.

2.24 - GCP Pub/Sub binding spec

Detailed documentation on the GCP Pub/Sub binding component

Component format

To set up the GCP Pub/Sub binding, create a component of type bindings.gcp.pubsub. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.gcp.pubsub
  version: v1
  metadata:
  - name: topic
    value: "topic1"
  - name: subscription
    value: "subscription1"
  - name: type
    value: "service_account"
  - name: project_id
    value: "project_111"
  - name: private_key_id
    value: "*************"
  - name: client_email
    value: "name@domain.com"
  - name: client_id
    value: "1111111111111111"
  - name: auth_uri
    value: "https://accounts.google.com/o/oauth2/auth"
  - name: token_uri
    value: "https://oauth2.googleapis.com/token"
  - name: auth_provider_x509_cert_url
    value: "https://www.googleapis.com/oauth2/v1/certs"
  - name: client_x509_cert_url
    value: "https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com"
  - name: private_key
    value: "PRIVATE KEY"
  - name: direction
    value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
topicYOutputGCP Pub/Sub topic name"topic1"
subscriptionNGCP Pub/Sub subscription name"name1"
typeYOutputGCP credentials typeservice_account
project_idYOutputGCP project idprojectId
private_key_idNOutputGCP private key id"privateKeyId"
private_keyYOutputGCP credentials private key. Replace with x509 cert12345-12345
client_emailYOutputGCP client email"client@email.com"
client_idNOutputGCP client id0123456789-0123456789
auth_uriNOutputGoogle account OAuth endpointhttps://accounts.google.com/o/oauth2/auth
token_uriNOutputGoogle account token urihttps://oauth2.googleapis.com/token
auth_provider_x509_cert_urlNOutputGCP credentials cert urlhttps://www.googleapis.com/oauth2/v1/certs
client_x509_cert_urlNOutputGCP credentials project x509 cert urlhttps://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com
directionNInput/OutputThe direction of the binding."input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create
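
For example, a minimal, hedged sketch of publishing a message to the configured topic through the Dapr HTTP API; the binding name myGCPPubSub and the port are placeholders for your own setup:

# Placeholder binding name and port; replace with your component name and Dapr HTTP port
curl -X POST http://localhost:3500/v1.0/bindings/myGCPPubSub \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "operation": "create"
      }'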

2.25 - GCP Storage Bucket binding spec

Detailed documentation on the GCP Storage Bucket binding component

Component format

To set up the GCP Storage Bucket binding, create a component of type bindings.gcp.bucket. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.gcp.bucket
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"
  - name: type
    value: "service_account"
  - name: project_id
    value: "project_111"
  - name: private_key_id
    value: "*************"
  - name: client_email
    value: "name@domain.com"
  - name: client_id
    value: "1111111111111111"
  - name: auth_uri
    value: "https://accounts.google.com/o/oauth2/auth"
  - name: token_uri
    value: "https://oauth2.googleapis.com/token"
  - name: auth_provider_x509_cert_url
    value: "https://www.googleapis.com/oauth2/v1/certs"
  - name: client_x509_cert_url
    value: "https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com"
  - name: private_key
    value: "PRIVATE KEY"
  - name: decodeBase64
    value: "<bool>"
  - name: encodeBase64
    value: "<bool>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
bucketYOutputThe bucket name"mybucket"
project_idYOutputGCP project IDprojectId
typeNOutputThe GCP credentials type"service_account"
private_key_idNOutputIf using explicit credentials, this field should contain the private_key_id field from the service account json document"privateKeyId"
private_keyNOutputIf using explicit credentials, this field should contain the private_key field from the service account json. Replace with x509 cert12345-12345
client_emailNOutputIf using explicit credentials, this field should contain the client_email field from the service account json"client@email.com"
client_idNOutputIf using explicit credentials, this field should contain the client_id field from the service account json0123456789-0123456789
auth_uriNOutputIf using explicit credentials, this field should contain the auth_uri field from the service account jsonhttps://accounts.google.com/o/oauth2/auth
token_uriNOutputIf using explicit credentials, this field should contain the token_uri field from the service account jsonhttps://oauth2.googleapis.com/token
auth_provider_x509_cert_urlNOutputIf using explicit credentials, this field should contain the auth_provider_x509_cert_url field from the service account jsonhttps://www.googleapis.com/oauth2/v1/certs
client_x509_cert_urlNOutputIf using explicit credentials, this field should contain the client_x509_cert_url field from the service account jsonhttps://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com
decodeBase64NOutputConfiguration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). true is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to falsetrue, false
encodeBase64NOutputConfiguration to encode base64 file content before return the content. (In case of opening a file with binary content). true is the only allowed positive value. Other positive variations like "True", "1" are not acceptable. Defaults to falsetrue, false

GCP Credentials

Since the GCP Storage Bucket component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained further in the Authenticate to GCP Cloud services using client libraries guide. Also, see how to Set up Application Default Credentials.

Binding support

This component supports output binding with the following operations:

Create file

To perform a create operation, invoke the GCP Storage Bucket binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated as the object name. See the metadata parameters below for how to set a specific name.

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

The metadata parameters are:

  • key - (optional) the name of the object
  • decodeBase64 - (optional) configuration to decode base64 file content before saving to storage

Examples

Save text to a randomly generated UUID file

On Windows, use the command prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Upload a file

To upload a file, pass the file contents as the data payload; you may want to encode this in e.g. Base64 for binary content.

Then you can upload it as you would normally:

curl -d "{ \"operation\": \"create\", \"data\": \"(YOUR_FILE_CONTENTS)\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "$(cat my-test-file.jpg)", "metadata": { "key": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body will contain the following JSON:

{
    "objectURL":"https://storage.googleapis.com/<your bucket>/<key>",
}

Get object

To perform a get file operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object
  • encodeBase64 - (optional) configuration to encode base64 file content before return the content.

Example

curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the object.

Bulk get objects

To perform a bulk get operation that retrieves all bucket files at once, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "bulkGet",
}

The metadata parameters are:

  • encodeBase64 - (optional) configuration to encode base64 file content before return the content for all files

Example

curl -d '{ \"operation\": \"bulkget\"}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "bulkget"}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains an array of objects, where each object represents a file in the bucket with the following structure:

[
  {
    "name": "file1.txt",
    "data": "content of file1",
    "attrs": {
      "bucket": "mybucket",
      "name": "file1.txt",
      "size": 1234,
      ...
    }
  },
  {
    "name": "file2.txt",
    "data": "content of file2",
    "attrs": {
      "bucket": "mybucket",
      "name": "file2.txt",
      "size": 5678,
      ...
    }
  }
]

Each object in the array contains:

  • name: The name of the file
  • data: The content of the file
  • attrs: Object attributes from GCP Storage including metadata like creation time, size, content type, etc.

Delete object

To perform a delete object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Examples

Delete object
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) response with an empty body is returned if the operation is successful.

List objects

To perform a list object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "list",
  "data": {
    "maxResults": 10,
    "prefix": "file",
    "delimiter": "i0FvxAn2EOEL6"
  }
}

The data parameters are:

  • maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
  • prefix - (optional) it can be used to filter objects starting with prefix.
  • delimiter - (optional) it can be used to restrict the results to only the objects in the given “directory”. Without the delimiter, the entire tree under the prefix is returned
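
As with the other operations above, the list request can be sent through the Dapr HTTP API; a hedged sketch:

curl -d '{ "operation": "list", "data": { "maxResults": 10, "prefix": "file" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>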

Response

The response body contains the list of found objects.

The list of objects will be returned as JSON array in the following form:

[
	{
		"Bucket": "<your bucket>",
		"Name": "02WGzEdsUWNlQ",
		"ContentType": "image/png",
		"ContentLanguage": "",
		"CacheControl": "",
		"EventBasedHold": false,
		"TemporaryHold": false,
		"RetentionExpirationTime": "0001-01-01T00:00:00Z",
		"ACL": null,
		"PredefinedACL": "",
		"Owner": "",
		"Size": 5187,
		"ContentEncoding": "",
		"ContentDisposition": "",
		"MD5": "aQdLBCYV0BxA51jUaxc3pQ==",
		"CRC32C": 1058633505,
		"MediaLink": "https://storage.googleapis.com/download/storage/v1/b/<your bucket>/o/02WGzEdsUWNlQ?generation=1631553155678071&alt=media",
		"Metadata": null,
		"Generation": 1631553155678071,
		"Metageneration": 1,
		"StorageClass": "STANDARD",
		"Created": "2021-09-13T17:12:35.679Z",
		"Deleted": "0001-01-01T00:00:00Z",
		"Updated": "2021-09-13T17:12:35.679Z",
		"CustomerKeySHA256": "",
		"KMSKeyName": "",
		"Prefix": "",
		"Etag": "CPf+mpK5/PICEAE="
	}
]

Copy objects

To perform a copy object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "copy",
  "metadata": {
    "destinationBucket": "destination-bucket-name",
  }
}

The metadata parameters are:

  • destinationBucket - the name of the destination bucket (required)
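
A hedged example of invoking the copy operation through the Dapr HTTP API, mirroring the JSON body shown above (placeholders as in the earlier examples):

curl -d '{ "operation": "copy", "metadata": { "destinationBucket": "destination-bucket-name" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>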

Move objects

To perform a move object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "move",
  "metadata": {
    "destinationBucket": "destination-bucket-name",
  }
}

The metadata parameters are:

  • destinationBucket - the name of the destination bucket (required)

Rename objects

To perform a rename object operation, invoke the GCP bucket binding with a POST method and the following JSON body:

{
  "operation": "rename",
  "metadata": {
    "newName": "object-new-name",
  }
}

The metadata parameters are:

  • newName - the new name of the object (required)

2.26 - GraphQL binding spec

Detailed documentation on the GraphQL binding component

Component format

To set up the GraphQL binding, create a component of type bindings.graphql. See this guide on how to create and apply a binding configuration. To separate normal config settings (e.g. endpoint) from headers, “header:” is used as a prefix on the header names.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: example.bindings.graphql
spec:
  type: bindings.graphql
  version: v1
  metadata:
    - name: endpoint
      value: "http://localhost:8080/v1/graphql"
    - name: header:x-hasura-access-key
      value: "adminkey"
    - name: header:Cache-Control
      value: "no-cache"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
endpointYOutputGraphQL endpoint string See here for more details"http://localhost:4000/graphql/graphql"
header:[HEADERKEY]NOutputGraphQL header. Specify the header key in the name, and the header value in the value."no-cache" (see above)
variable:[VARIABLEKEY]NOutputGraphQL query variable. Specify the variable name in the name, and the variable value in the value."123" (see below)

Endpoint and Header format

The GraphQL binding uses a GraphQL client internally.

Binding support

This component supports output binding with the following operations:

  • query
  • mutation

query

The query operation is used for query statements, which return the metadata along with the data in the form of an array of row values.

Request

in := &dapr.InvokeBindingRequest{
  Name:      "example.bindings.graphql",
  Operation: "query",
  Metadata:  map[string]string{"query": `query { users { name } }`},
}

To use a query that requires query variables, add a key-value pair to the metadata map, wherein every key corresponding to a query variable is the variable name prefixed with variable:

in := &dapr.InvokeBindingRequest{
  Name:      "example.bindings.graphql",
  Operation: "query",
  Metadata: map[string]string{
    "query":            `query HeroNameAndFriends($episode: string!) { hero(episode: $episode) { name } }`,
    "variable:episode": "JEDI",
  },
}
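
A mutation can be invoked in the same way. As a hedged sketch over the Dapr HTTP API, this assumes the mutation statement is passed under a mutation metadata key, mirroring how the query operation uses the query key; the addUser mutation is purely illustrative:

# Assumption: the mutation text goes in the "mutation" metadata key, analogous to "query" above;
# the addUser mutation is a hypothetical example for your own schema
curl -d '{ "operation": "mutation", "metadata": { "mutation": "mutation { addUser(name: \"Carol\") { id } }" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/example.bindings.graphql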

2.27 - HTTP binding spec

Detailed documentation on the HTTP binding component

Alternative

The service invocation API allows invoking non-Dapr HTTP endpoints and is the recommended approach. Read “How-To: Invoke Non-Dapr Endpoints using HTTP” for more information.

Setup Dapr component

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.http
  version: v1
  metadata:
    - name: url
      value: "http://something.com"
    #- name: maxResponseBodySize
    #  value: "100Mi" # OPTIONAL maximum amount of data to read from a response
    #- name: MTLSRootCA
    #  value: "/Users/somepath/root.pem" # OPTIONAL path to root CA or PEM-encoded string
    #- name: MTLSClientCert
    #  value: "/Users/somepath/client.pem" # OPTIONAL path to client cert or PEM-encoded string
    #- name: MTLSClientKey
    #  value: "/Users/somepath/client.key" # OPTIONAL path to client key or PEM-encoded string
    #- name: MTLSRenegotiation
    #  value: "RenegotiateOnceAsClient" # OPTIONAL one of: RenegotiateNever, RenegotiateOnceAsClient, RenegotiateFreelyAsClient
    #- name: securityToken # OPTIONAL <token to include as a header on HTTP requests>
    #  secretKeyRef:
    #    name: mysecret
    #    key: "mytoken"
    #- name: securityTokenHeader
    #  value: "Authorization: Bearer" # OPTIONAL <header name for the security token>
    #- name: errorIfNot2XX
    #  value: "false" # OPTIONAL

Spec metadata fields

FieldRequiredBinding supportDetailsExample
urlYOutputThe base URL of the HTTP endpoint to invokehttp://host:port/path, http://myservice:8000/customers
maxResponseBodySizeNOutputMaximum length of the response to read. A whole number is interpreted as bytes; units such as Ki, Mi, Gi (SI) or k, M, G (decimal) can also be used. Defaults to 100Mi"1Gi", "100Mi", 1000000 (bytes)
MTLSRootCANOutputPath to root CA certificate or PEM-encoded string
MTLSClientCertNOutputPath to client certificate or PEM-encoded string
MTLSClientKeyNOutputPath to client private key or PEM-encoded string
MTLSRenegotiationNOutputType of mTLS renegotiation to be usedRenegotiateOnceAsClient
securityTokenNOutputThe value of a token to be added to a HTTP request as a header. Used together with securityTokenHeader
securityTokenHeaderNOutputThe name of the header for securityToken on a HTTP request
errorIfNot2XXNOutputIf a binding error should be thrown when the response is not in the 2xx range. Defaults to true

The values for MTLSRootCA, MTLSClientCert and MTLSClientKey can be provided in three ways:

  • Secret store reference:

    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: <NAME>
    spec:
      type: bindings.http
      version: v1
      metadata:
      - name: url
        value: http://something.com
      - name: MTLSRootCA
        secretKeyRef:
          name: mysecret
          key: myrootca
    auth:
      secretStore: <NAME_OF_SECRET_STORE_COMPONENT>
    
  • Path to the file: the absolute path to the file can be provided as a value for the field.

  • PEM encoded string: the PEM-encoded string can also be provided as a value for the field.
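
As a sketch, providing these values as a file path or inline as PEM-encoded strings looks like the following (certificate contents elided; the path is a placeholder):

    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: <NAME>
    spec:
      type: bindings.http
      version: v1
      metadata:
      - name: url
        value: https://something.com
      - name: MTLSRootCA
        value: "/path/to/root.pem" # placeholder path to the CA certificate file
      - name: MTLSClientCert
        value: |
          -----BEGIN CERTIFICATE-----
          ...
          -----END CERTIFICATE-----
      - name: MTLSClientKey
        value: |
          -----BEGIN PRIVATE KEY-----
          ...
          -----END PRIVATE KEY-----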

Binding support

This component supports output binding with the following HTTP methods/verbs:

  • create : For backward compatibility and treated like a post
  • get : Read data/records
  • head : Identical to get except that the server does not return a response body
  • post : Typically used to create records or send commands
  • put : Update data/records
  • patch : Sometimes used to update a subset of fields of a record
  • delete : Delete a data/record
  • options : Requests for information about the communication options available (not commonly used)
  • trace : Used to invoke a remote, application-layer loop-back of the request message (not commonly used)

Request

Operation metadata fields

All of the operations above support the following metadata fields

FieldRequiredDetailsExample
pathNThe path to append to the base URL. Used for accessing specific URIs."/1234", "/search?lastName=Jones"
Field with a capitalized first letterNAny fields that have a capital first letter are sent as request headers"Content-Type", "Accept"

Retrieving data

To retrieve data from the HTTP endpoint, invoke the HTTP binding with a GET method and the following JSON body:

{
  "operation": "get"
}

Optionally, a path can be specified to interact with resource URIs:

{
  "operation": "get",
  "metadata": {
    "path": "/things/1234"
  }
}

Response

The response body contains the data returned by the HTTP endpoint. The data field contains the HTTP response body as a byte slice (Base64 encoded via curl). The metadata field contains:

FieldRequiredDetailsExample
statusCodeYThe HTTP status code200, 404, 503
statusYThe status description"200 OK", "201 Created"
Field with a capitalized first letterNAny fields that have a capital first letter are returned as response headers"Content-Type"

Example

Requesting the base URL

curl -d "{ \"operation\": \"get\" }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Requesting a specific path

curl -d "{ \"operation\": \"get\", \"metadata\": { \"path\": \"/things/1234\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "path": "/things/1234" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Sending and updating data

To send data to the HTTP endpoint, invoke the HTTP binding with a POST, PUT, or PATCH method and the following JSON body:

{
  "operation": "post",
  "data": "content (default is JSON)",
  "metadata": {
    "path": "/things",
    "Content-Type": "application/json; charset=utf-8"
  }
}

Example

Posting a new record

curl -d "{ \"operation\": \"post\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"path\": \"/things\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "post", "data": "YOUR_BASE_64_CONTENT", "metadata": { "path": "/things" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Using HTTPS

The HTTP binding can also be used with HTTPS endpoints by configuring the Dapr sidecar to trust the server’s SSL certificate.

  1. Update the binding URL to use https instead of http.
  2. If you need to add a custom TLS certificate, refer to How-To: Install certificates in the Dapr sidecar to install the TLS certificates in the sidecar.

Example

Update the binding component

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.http
  version: v1
  metadata:
  - name: url
    value: https://my-secured-website.com # Use HTTPS

Install the TLS certificate in the sidecar

When the sidecar is not running inside a container, the TLS certificate can be directly installed on the host operating system.

Below is an example when the sidecar is running as a container. The SSL certificate is located on the host computer at /tmp/ssl/cert.pem.

version: '3'
services:
  my-app:
    # ...
  dapr-sidecar:
    image: "daprio/daprd:1.8.0"
    command: [
      "./daprd",
      "-app-id", "myapp",
      "-app-port", "3000"
      ]
    volumes:
        - "./components/:/components"
        - "/tmp/ssl/:/certificates" # Mount the certificates folder to the sidecar container at /certificates
    environment:
      - "SSL_CERT_DIR=/certificates" # Set the environment variable to the path of the certificates folder
    depends_on:
      - my-app

The sidecar can read the TLS certificate from a variety of sources. See How-to: Mount Pod volumes to the Dapr sidecar for more. In this example, we store the TLS certificate as a Kubernetes secret.

kubectl create secret generic myapp-cert --from-file /tmp/ssl/cert.pem

The YAML below is an example of the Kubernetes deployment that mounts the above secret to the sidecar and sets SSL_CERT_DIR to install the certificates.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "8000"
        dapr.io/volume-mounts: "cert-vol:/certificates" # Mount the certificates folder to the sidecar container at /certificates
        dapr.io/env: "SSL_CERT_DIR=/certificates" # Set the environment variable to the path of the certificates folder
    spec:
      volumes:
        - name: cert-vol
          secret:
            secretName: myapp-cert
...

Invoke the binding securely

curl -d "{ \"operation\": \"get\" }" \
      https://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get" }' \
      https://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Using mTLS or enabling client TLS authentication along with HTTPS

You can configure the HTTP binding to use mTLS or client TLS authentication along with HTTPS by providing the MTLSRootCA, MTLSClientCert, and MTLSClientKey metadata fields in the binding component.

These fields can be passed as a file path or as a PEM-encoded string:

  • If the file path is provided, the file is read and the contents are used.
  • If the PEM-encoded string is provided, the string is used as is.

When these fields are configured, the Dapr sidecar uses the provided certificate to authenticate itself with the server during the TLS handshake process.

If the remote server is enforcing TLS renegotiation, you also need to set the metadata field MTLSRenegotiation. This field accepts one of following options:

  • RenegotiateNever
  • RenegotiateOnceAsClient
  • RenegotiateFreelyAsClient

For more details see the Go RenegotiationSupport documentation.

You can use this when the server with which the HTTP binding is configured to communicate requires mTLS or client TLS authentication.

2.28 - Huawei OBS binding spec

Detailed documentation on the Huawei OBS binding component

Component format

To set up a Huawei Object Storage Service (OBS) output binding, create a component of type bindings.huawei.obs. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.huawei.obs
  version: v1
  metadata:
  - name: bucket
    value: "<your-bucket-name>"
  - name: endpoint
    value: "<obs-bucket-endpoint>"
  - name: accessKey
    value: "<your-access-key>"
  - name: secretKey
    value: "<your-secret-key>"
  # optional fields
  - name: region
    value: "<your-bucket-region>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
bucketYOutputThe name of the Huawei OBS bucket to write to"My-OBS-Bucket"
endpointYOutputThe specific Huawei OBS endpoint"obs.cn-north-4.myhuaweicloud.com"
accessKeyYOutputThe Huawei Access Key (AK) to access this resource"************"
secretKeyYOutputThe Huawei Secret Key (SK) to access this resource"************"
regionNOutputThe specific Huawei region of the bucket"cn-north-4"

Binding support

This component supports output binding with the following operations:

Create file

To perform a create operation, invoke the Huawei OBS binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated. See the metadata support below to set the destination file name.

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

Examples

Save text to a random generated UUID file

On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response JSON body contains the statusCode and the versionId fields. The versionId will have a value returned only if the bucket versioning is enabled and an empty string otherwise.

Upload file

To upload a binary file (for example, .jpg, .zip), invoke the Huawei OBS binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated if you don’t specify the key. See the example below for metadata support to set the destination file name. This API can also be used to upload a regular file, such as a plain text file.

{
  "operation": "upload",
  "metadata": {
     "key": "DESTINATION_FILE_NAME"
   },
  "data": {
     "sourceFile": "PATH_TO_YOUR_SOURCE_FILE"
   }
}

Example

curl -d "{ \"operation\": \"upload\", \"data\": { \"sourceFile\": \".\my-test-file.jpg\" }, \"metadata\": { \"key\": \"my-test-file.jpg\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "upload", "data": { "sourceFile": "./my-test-file.jpg" }, "metadata": { "key": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response JSON body contains the statusCode and the versionId fields. The versionId will have a value returned only if the bucket versioning is enabled and an empty string otherwise.

Get object

To perform a get file operation, invoke the Huawei OBS binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Example

curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the object.

Delete object

To perform a delete object operation, invoke the Huawei OBS binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "key": "my-test-file.txt"
  }
}

The metadata parameters are:

  • key - the name of the object

Examples

Delete object
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body are returned if successful.

List objects

To perform a list object operation, invoke the Huawei OBS binding with a POST method and the following JSON body:

{
  "operation": "list",
  "data": {
    "maxResults": 5,
    "prefix": "dapr-",
    "marker": "obstest",
    "delimiter": "jpg"
  }
}

The data parameters are:

  • maxResults - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
  • prefix - (optional) limits the response to keys that begin with the specified prefix.
  • marker - (optional) marker is where you want Huawei OBS to start listing from. Huawei OBS starts listing after this specified key. Marker can be any key in the bucket. The marker value may then be used in a subsequent call to request the next set of list items.
  • delimiter - (optional) A delimiter is a character you use to group keys. It returns the objects/files whose object keys are not matched by the specified delimiter pattern.

Example

curl -d '{ \"operation\": \"list\", \"data\": { \"maxResults\": 5, \"prefix\": \"dapr-\", \"marker\": \"obstest\", \"delimiter\": \"jpg\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "list", "data": { "maxResults": 5, "prefix": "dapr-", "marker": "obstest", "delimiter": "jpg" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the list of found objects.

2.29 - InfluxDB binding spec

Detailed documentation on the InfluxDB binding component

Component format

To set up an InfluxDB binding, create a component of type bindings.influx. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.influx
  version: v1
  metadata:
  - name: url # Required
    value: "<INFLUX-DB-URL>"
  - name: token # Required
    value: "<TOKEN>"
  - name: org # Required
    value: "<ORG>"
  - name: bucket # Required
    value: "<BUCKET>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
urlYOutputThe URL for the InfluxDB instance"http://localhost:8086"
tokenYOutputThe authorization token for InfluxDB"mytoken"
orgYOutputThe InfluxDB organization"myorg"
bucketYOutputBucket name to write to"mybucket"

Binding support

This component supports output binding with the following operations:

  • create
  • query

Query

In order to query InfluxDB, use a query operation along with a raw key in the call’s metadata, with the query as the value:

curl -X POST http://localhost:3500/v1.0/bindings/myInfluxBinding \
  -H "Content-Type: application/json" \
  -d "{
        \"metadata\": {
          \"raw\": "SELECT * FROM 'sith_lords'"
        },
        \"operation\": \"query\"
      }"

2.30 - Kafka binding spec

Detailed documentation on the Kafka binding component

Component format

To set up a Kafka binding, create a component of type bindings.kafka. See this guide on how to create and apply a binding configuration. For details on using secretKeyRef, see the guide on how to reference secrets in components.

All component metadata field values can carry templated metadata values, which are resolved on Dapr sidecar startup. For example, you can choose to use {namespace} as the consumerGroup, to enable using the same appId in different namespaces using the same topics as described in this article.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-binding
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: topics # Optional. Used for input bindings.
    value: "topic1,topic2"
  - name: brokers # Required.
    value: "localhost:9092,localhost:9093"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: publishTopic # Optional. Used for output bindings.
    value: "topic3"
  - name: authRequired # Required.
    value: "true"
  - name: saslUsername # Required if authRequired is `true`.
    value: "user"
  - name: saslPassword # Required if authRequired is `true`.
    secretKeyRef:
      name: kafka-secrets
      key: "saslPasswordSecret"
  - name: saslMechanism
    value: "SHA-512"
  - name: initialOffset # Optional. Used for input bindings.
    value: "newest"
  - name: maxMessageBytes # Optional.
    value: "1024"
  - name: heartbeatInterval # Optional.
    value: 5s
  - name: sessionTimeout # Optional.
    value: 15s
  - name: version # Optional.
    value: "2.0.0"
  - name: direction
    value: "input, output"
  - name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
    value: http://localhost:8081
  - name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
    value: XYAXXAZ
  - name: schemaRegistryAPISecret # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret.
    value: "ABCDEFGMEADFF"
  - name: schemaCachingEnabled # Optional. When using Schema Registry Avro serialization/deserialization. Enables caching for schemas.
    value: true
  - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
    value: 5m
  - name: escapeHeaders # Optional.
    value: false

Spec metadata fields

FieldRequiredBinding supportDetailsExample
topicsNInputA comma-separated string of topics."mytopic1,topic2"
brokersYInput/OutputA comma-separated string of Kafka brokers."localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093"
clientIDNInput/OutputA user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes."my-dapr-app"
consumerGroupNInputA kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic."group1"
consumeRetryEnabledNInput/OutputEnable consume retry by setting to "true". Default to false in Kafka binding component."true", "false"
publishTopicYOutputThe topic to publish to."mytopic"
authRequiredNDeprecatedEnable SASL authentication with the Kafka brokers."true", "false"
authTypeYInput/OutputConfigure or disable authentication. Supported values: none, password, mtls, or oidc"password", "none"
saslUsernameNInput/OutputThe SASL username used for authentication. Only required if authRequired is set to "true"."adminuser"
saslPasswordNInput/OutputThe SASL password used for authentication. Can be secretKeyRef to use a secret reference. Only required if authRequired is set to "true"."", "KeFg23!"
saslMechanismNInput/OutputThe SASL authentication mechanism you’d like to use. Only required if authtype is set to "password". If not provided, defaults to PLAINTEXT, which could cause a break for some services, like Amazon Managed Service for Kafka."SHA-512", "SHA-256", "PLAINTEXT"
initialOffsetNInputThe initial offset to use if no offset was previously committed. Should be “newest” or “oldest”. Defaults to “newest”."oldest"
maxMessageBytesNInput/OutputThe maximum size in bytes allowed for a single Kafka message. Defaults to 1024."2048"
oidcTokenEndpointNInput/OutputFull URL to an OAuth2 identity provider access token endpoint. Required when authType is set to oidc"https://identity.example.com/v1/token"
oidcClientIDNInput/OutputThe OAuth2 client ID that has been provisioned in the identity provider. Required when authType is set to oidc"dapr-kafka"
oidcClientSecretNInput/OutputThe OAuth2 client secret that has been provisioned in the identity provider: Required when authType is set to oidc"KeFg23!"
oidcScopesNInput/OutputComma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when authType is set to oidc. Defaults to "openid""openid,kafka-prod"
versionNInput/OutputKafka cluster version. Defaults to 2.0.0. Please note that this needs to be mandatorily set to 1.0.0 for EventHubs with Kafka."1.0.0"
directionNInput/OutputThe direction of the binding."input", "output", "input, output"
oidcExtensionsNInput/OutputString containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token{"cluster":"kafka","poolid":"kafkapool"}
schemaRegistryURLNRequired when using Schema Registry Avro serialization/deserialization. The Schema Registry URL.http://localhost:8081
schemaRegistryAPIKeyNWhen using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key.XYAXXAZ
schemaRegistryAPISecretNWhen using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret.ABCDEFGMEADFF
schemaCachingEnabledNWhen using Schema Registry Avro serialization/deserialization. Enables caching for schemas. Default is truetrue
schemaLatestVersionCacheTTLNWhen using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min5m
clientConnectionTopicMetadataRefreshIntervalNInput/OutputThe interval for the client connection’s topic metadata to be refreshed with the broker as a Go duration. Defaults to 9m."4m"
clientConnectionKeepAliveIntervalNInput/OutputThe maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely."4m"
consumerFetchDefaultNInput/OutputThe default number of message bytes to fetch from the broker in each request. Default is "1048576" bytes."2097152"
heartbeatIntervalNInputThe interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the sessionTimeout value. Defaults to "3s"."5s"
sessionTimeoutNInputThe timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s"."20s"
escapeHeadersNInputEnables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is false.true

Note

The metadata version must be set to 1.0.0 when using Azure EventHubs with Kafka.

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create

Authentication

Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. Learn more about Kafka’s authentication method for both the Kafka binding and Kafka pub/sub components.

Specifying a partition key

When invoking the Kafka binding, it’s possible to provide an optional partition key by using the metadata section in the request body.

The field name is partitionKey.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "partitionKey": "key1"
        },
        "operation": "create"
      }'

Response

An HTTP 204 (No Content) and empty body will be returned if successful.

2.31 - Kitex

Detailed documentation on the Kitex binding component

Overview

The binding for Kitex mainly utilizes the generic-call feature in Kitex. Learn more from the official documentation around Kitex generic-call. Currently, Kitex only supports Thrift generic calls. The implementation integrated into components-contrib adopts binary generic calls.

Component format

To set up a Kitex binding, create a component of type bindings.kitex. See the How-to: Use output bindings to interface with external resources guide on creating and applying a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: bindings.kitex
spec:
  type: bindings.kitex
  version: v1
  metadata: 
  - name: hostPorts
    value: "127.0.0.1:8888"
  - name: destService
    value: "echo"
  - name: methodName
    value: "echo"
  - name: version
    value: "0.5.0"

Spec metadata fields

The InvokeRequest.Metadata for bindings.kitex requires the client to fill in four required items when making a call:

  • hostPorts
  • destService
  • methodName
  • version
FieldRequiredBinding supportDetailsExample
hostPortsYOutputIP address and port information of the Kitex server (Thrift)"127.0.0.1:8888"
destServiceYOutputService name of the Kitex server (Thrift)"echo"
methodNameYOutputMethod name under a specific service name of the Kitex server (Thrift)"echo"
versionYOutputKitex version"0.5.0"

Binding support

This component supports output binding with the following operations:

  • get

Example

When using Kitex binding:

  • The client needs to pass in the correct Thrift-encoded binary
  • The server needs to be a Thrift Server.

The kitex_output_test can be used as a reference. For example, the variable reqData needs to be encoded by the Thrift protocol before sending, and the returned data needs to be decoded by the Thrift protocol.

Request

{
  "operation": "get",
  "metadata": {
    "hostPorts": "127.0.0.1:8888",
    "destService": "echo",
    "methodName": "echo",
    "version":"0.5.0"
  },
  "data": reqdata
}

2.32 - KubeMQ binding spec

Detailed documentation on the KubeMQ binding component

Component format

To set up a KubeMQ binding, create a component of type bindings.kubemq. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: binding-topic
spec:
  type: bindings.kubemq
  version: v1
  metadata:
    - name: address
      value: "localhost:50000"
    - name: channel
      value: "queue1"
    - name: direction
      value: "input, output"

Spec metadata fields

FieldRequiredDetailsExample
addressYAddress of the KubeMQ server"localhost:50000"
channelYThe Queue channel name"queue1"
authTokenNAuth JWT token for connection. Check out KubeMQ Authentication"ew..."
autoAcknowledgedNSets if received queue message is automatically acknowledged"true" or "false" (default is "false")
pollMaxItemsNSets the number of messages to poll on every connection"1"
pollTimeoutSecondsNSets the time in seconds for each poll interval"3600"
directionNThe direction of the binding"input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

Create a KubeMQ broker

  1. Obtain KubeMQ Key.
  2. Wait for an email confirmation with your Key

You can run a KubeMQ broker with Docker:

docker run -d -p 8080:8080 -p 50000:50000 -p 9090:9090 -e KUBEMQ_TOKEN=<your-key> kubemq/kubemq

You can then interact with the server using the client port: localhost:50000

  1. Obtain KubeMQ Key.
  2. Wait for an email confirmation with your Key

Then run the following kubectl commands:

kubectl apply -f https://deploy.kubemq.io/init
kubectl apply -f https://deploy.kubemq.io/key/<your-key>

Install KubeMQ CLI

Go to KubeMQ CLI and download the latest version of the CLI.

Browse KubeMQ Dashboard

Open a browser and navigate to http://localhost:8080

With KubeMQCTL installed, run the following command:

kubemqctl get dashboard

Or, with kubectl installed, run port-forward command:

kubectl port-forward svc/kubemq-cluster-api -n kubemq 8080:8080

KubeMQ Documentation

Visit KubeMQ Documentation for more information.

2.33 - Kubernetes Events binding spec

Detailed documentation on the Kubernetes Events binding component

Component format

To set up a Kubernetes Events binding, create a component of type bindings.kubernetes. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.kubernetes
  version: v1
  metadata:
  - name: namespace
    value: "<NAMESPACE>"
  - name: resyncPeriodInSec
    value: "<seconds>"
  - name: direction
    value: "input"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
namespaceYInputThe Kubernetes namespace to read events from"default"
resyncPeriodInSecNInputThe period of time to refresh event list from Kubernetes API server. Defaults to "10""15"
directionNInputThe direction of the binding"input"
kubeconfigPathNInputThe path to the kubeconfig file. If not specified, the binding uses the default in-cluster config value"/path/to/kubeconfig"

Binding support

This component supports input binding interface.

Output format

Output received from the binding is of format bindings.ReadResponse with the Data field populated with the following structure:

 {
   "event": "",
   "oldVal": {
     "metadata": {
       "name": "hello-node.162c2661c524d095",
       "namespace": "kube-events",
       "selfLink": "/api/v1/namespaces/kube-events/events/hello-node.162c2661c524d095",
       ...
     },
     "involvedObject": {
       "kind": "Deployment",
       "namespace": "kube-events",
       ...
     },
     "reason": "ScalingReplicaSet",
     "message": "Scaled up replica set hello-node-7bf657c596 to 1",
     ...
   },
   "newVal": {
     "metadata": { "creationTimestamp": "null" },
     "involvedObject": {},
     "source": {},
     "firstTimestamp": "null",
     "lastTimestamp": "null",
     "eventTime": "null",
     ...
   }
 }

Three different event types are available:

  • Add : Only the newVal field is populated, oldVal field is an empty v1.Event, event is add
  • Delete : Only the oldVal field is populated, newVal field is an empty v1.Event, event is delete
  • Update : Both the oldVal and newVal fields are populated, event is update

Required permissions

To consume events from Kubernetes, permissions need to be assigned to a User/Group/ServiceAccount using the Kubernetes RBAC authorization mechanism.

Role

One of the rules needs to be of the form below to grant permissions to get, watch, and list events. API groups can be as restrictive as needed.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: <ROLENAME>
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "watch", "list"]

RoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <NAME>
subjects:
- kind: ServiceAccount
  name: default # or as need be, can be changed
roleRef:
  kind: Role
  name: <ROLENAME> # same as the one above
  apiGroup: ""

2.34 - Local Storage binding spec

Detailed documentation on the Local Storage binding component

Component format

To set up the Local Storage binding, create a component of type bindings.localstorage. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.localstorage
  version: v1
  metadata:
  - name: rootPath
    value: "<string>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
rootPathYOutputThe root path anchor to which files can be read / saved"/temp/files"

Binding support

This component supports output binding with the following operations:

Create file

To perform a create file operation, invoke the Local Storage binding with a POST method and the following JSON body:

Note: by default, a random UUID is generated. See the metadata support below to set the file name.

{
  "operation": "create",
  "data": "YOUR_CONTENT"
}

Examples

Save text to a random generated UUID file

On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)

curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save text to a specific file
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"fileName\": \"my-test-file.txt\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "fileName": "my-test-file.txt" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
Save a binary file

To upload a file, encode it as Base64. The binding should automatically detect the Base64 encoding.

curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body will contain the following JSON:

{
   "fileName": "<filename>"
}

Get file

To perform a get file operation, invoke the Local Storage binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "fileName": "myfile"
  }
}

Example

curl -d '{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "fileName": "myfile" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the file.

List files

To perform a list files operation, invoke the Local Storage binding with a POST method and the following JSON body:

{
  "operation": "list"
}

If you only want to list the files beneath a particular directory below the rootPath, specify the relative directory name as the fileName in the metadata.

{
  "operation": "list",
  "metadata": {
    "fileName": "my/cool/directory"
  }
}

Example

curl -d '{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response is a JSON array of file names.

Delete file

To perform a delete file operation, invoke the Local Storage binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "fileName": "myfile"
  }
}

Example

curl -d '{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body will be returned if successful.

Metadata information

By default, the Local Storage output binding auto-generates a UUID as the file name. You can configure the file name through the metadata property of the message.

{
    "data": "file content",
    "metadata": {
        "fileName": "filename.txt"
    },
    "operation": "create"
}

2.35 - MQTT3 binding spec

Detailed documentation on the MQTT3 binding component

Component format

To set up an MQTT3 binding, create a component of type bindings.mqtt3. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.mqtt3
  version: v1
  metadata:
    - name: url
      value: "tcp://[username][:password]@host.domain[:port]"
    - name: topic
      value: "mytopic"
    - name: consumerID
      value: "myapp"
    # Optional
    - name: retain
      value: "false"
    - name: cleanSession
      value: "false"
    - name: backOffMaxRetries
      value: "0"
    - name: direction
      value: "input, output"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
urlYInput/OutputAddress of the MQTT broker. Can be secretKeyRef to use a secret reference.
Use the tcp:// URI scheme for non-TLS communication.
Use the ssl:// URI scheme for TLS communication.
"tcp://[username][:password]@host.domain[:port]"
topicYInput/OutputThe topic to listen on or send events to."mytopic"
consumerIDYInput/OutputThe client ID used to connect to the MQTT broker."myMqttClientApp"
retainNInput/OutputDefines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to "false"."true", "false"
cleanSessionNInput/OutputSets the clean_session flag in the connection message to the MQTT broker if "true". Defaults to "false"."true", "false"
caCertRequired for using TLSInput/OutputCertificate Authority (CA) certificate in PEM format for verifying server TLS certificates.See example below
clientCertRequired for using TLSInput/OutputTLS client certificate in PEM format. Must be used with clientKey.See example below
clientKeyRequired for using TLSInput/OutputTLS client key in PEM format. Must be used with clientCert. Can be secretKeyRef to use a secret reference.See example below
backOffMaxRetriesNInputThe maximum number of retries to process the message before returning an error. Defaults to "0", which means that no retries will be attempted. "-1" can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries."3"
directionNInput/OutputThe direction of the binding"input", "output", "input, output"

Communication using TLS

To configure communication using TLS, ensure that the MQTT broker (e.g. emqx) is configured to support certificates and provide the caCert, clientCert, clientKey metadata in the component configuration. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-binding
spec:
  type: bindings.mqtt3
  version: v1
  metadata:
    - name: url
      value: "ssl://host.domain[:port]"
    - name: topic
      value: "topic1"
    - name: consumerID
      value: "myapp"
    # TLS configuration
    - name: caCert
      value: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - name: clientCert
      value: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    - name: clientKey
      secretKeyRef:
        name: myMqttClientKey
        key: myMqttClientKey
    # Optional
    - name: retain
      value: "false"
    - name: cleanSession
      value: "false"
    - name: backOffMaxRetries
      value: "0"

Note that while the caCert and clientCert values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.

Consuming a shared topic

When consuming a shared topic, each consumer must have a unique identifier. If you run multiple instances of an application, configure the component’s consumerID metadata with a {uuid} tag, which gives each instance a randomly generated consumerID value on startup. For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-binding
  namespace: default
spec:
  type: bindings.mqtt3
  version: v1
  metadata:
  - name: consumerID
    value: "{uuid}"
  - name: url
    value: "tcp://admin:public@localhost:1883"
  - name: topic
    value: "topic1"
  - name: retain
    value: "false"
  - name: cleanSession
    value: "true"
  - name: backOffMaxRetries
    value: "0"

In this case, the value of the consumer ID is random every time Dapr restarts, so you should set cleanSession to true as well.

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create: publishes a new message

Set topic per-request

You can override the topic in component metadata on a per-request basis:

{
  "operation": "create",
  "metadata": {
    "topic": "myTopic"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}

Set retain property per-request

You can override the retain property in component metadata on a per-request basis:

{
  "operation": "create",
  "metadata": {
    "retain": "true"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
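
Either of the request bodies above can be sent through the Dapr HTTP API; the binding name myMqttBinding below is a placeholder for your component name:

curl -X POST http://localhost:3500/v1.0/bindings/myMqttBinding \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "topic": "myTopic",
          "retain": "true"
        },
        "data": "Hello MQTT"
      }'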

2.36 - MySQL & MariaDB binding spec

Detailed documentation on the MySQL binding component

Component format

The MySQL binding allows connecting to both MySQL and MariaDB databases. In this document, we refer to “MySQL” to indicate both databases.

To set up a MySQL binding, create a component of type bindings.mysql. See this guide on how to create and apply a binding configuration.

The MySQL binding uses Go-MySQL-Driver internally.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.mysql
  version: v1
  metadata:
    - name: url # Required, define DB connection in DSN format
      value: "<CONNECTION_STRING>"
    - name: pemPath # Optional
      value: "<PEM PATH>"
    - name: maxIdleConns
      value: "<MAX_IDLE_CONNECTIONS>"
    - name: maxOpenConns
      value: "<MAX_OPEN_CONNECTIONS>"
    - name: connMaxLifetime
      value: "<CONNECTION_MAX_LIFE_TIME>"
    - name: connMaxIdleTime
      value: "<CONNECTION_MAX_IDLE_TIME>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
urlYOutputRepresents the DB connection in Data Source Name (DSN) format. See here for SSL details"user:password@tcp(localhost:3306)/dbname"
pemPathYOutputPath to the PEM file. Used with SSL connection"path/to/pem/file"
maxIdleConnsNOutputThe max idle connections. Integer greater than 0"10"
maxOpenConnsNOutputThe max open connections. Integer greater than 0"10"
connMaxLifetimeNOutputThe max connection lifetime. Duration string"12s"
connMaxIdleTimeNOutputThe max connection idle time. Duration string"12s"

SSL connection

If your server requires SSL, your connection string must end with &tls=custom, for example:

"<user>:<password>@tcp(<server>:3306)/<database>?allowNativePasswords=true&tls=custom"

You must replace the <PEM PATH> with a full path to the PEM file. If you are using Azure Database for MySQL see the Azure documentation on SSL database connections, for information on how to download the required certificate. The connection to MySQL requires a minimum TLS version of 1.2.

Multiple statements

By default, the MySQL Go driver only supports one SQL statement per query/command.

To allow multiple statements in one query you need to add multiStatements=true to a query string, for example:

"<user>:<password>@tcp(<server>:3306)/<database>?multiStatements=true"

While this allows batch queries, it also greatly increases the risk of SQL injection. Only the result of the first query is returned; all other results are silently discarded.

Binding support

This component supports output binding with the following operations:

  • exec
  • query
  • close

Parametrized queries

This binding supports parametrized queries, which allow separating the SQL query itself from user-supplied values. The usage of parametrized queries is strongly recommended for security reasons, as they prevent SQL Injection attacks.

For example:

-- ❌ WRONG! Includes values in the query and is vulnerable to SQL Injection attacks.
SELECT * FROM mytable WHERE user_key = 'something';

-- ✅ GOOD! Uses parametrized queries.
-- This will be executed with parameters ["something"]
SELECT * FROM mytable WHERE user_key = ?;

exec

The exec operation can be used for DDL operations (like table creation), as well as INSERT, UPDATE, DELETE operations which return only metadata (e.g. number of affected rows).

The params property is a string containing a JSON-encoded array of parameters.

Request

{
  "operation": "exec",
  "metadata": {
    "sql": "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)",
    "params": "[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"
  }
}
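
For example, this request can be sent through the Dapr HTTP API; the binding name myMySQLBinding below is a placeholder for your component name:

curl -X POST http://localhost:3500/v1.0/bindings/myMySQLBinding \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "exec",
        "metadata": {
          "sql": "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)",
          "params": "[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"
        }
      }'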

Response

{
  "metadata": {
    "operation": "exec",
    "duration": "294µs",
    "start-time": "2020-09-24T11:13:46.405097Z",
    "end-time": "2020-09-24T11:13:46.414519Z",
    "rows-affected": "1",
    "sql": "INSERT INTO foo (id, c1, ts) VALUES (?, ?, ?)"
  }
}

query

The query operation is used for SELECT statements, which returns the metadata along with data in a form of an array of row values.

The params property is a string containing a JSON-encoded array of parameters.

Request

{
  "operation": "query",
  "metadata": {
    "sql": "SELECT * FROM foo WHERE id < $1",
    "params": "[3]"
  }
}

Response

{
  "metadata": {
    "operation": "query",
    "duration": "432µs",
    "start-time": "2020-09-24T11:13:46.405097Z",
    "end-time": "2020-09-24T11:13:46.420566Z",
    "sql": "SELECT * FROM foo WHERE id < ?"
  },
  "data": [
    {column_name: value, column_name: value, ...},
    {column_name: value, column_name: value, ...},
    {column_name: value, column_name: value, ...},
  ]
}

Here, column_name is the name of the column returned by the query, and value is the value of that column. Note that values are returned as strings or numbers (language-specific data types).

close

The close operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn’t have any response.

Request

{
  "operation": "close"
}

2.37 - PostgreSQL binding spec

Detailed documentation on the PostgreSQL binding component

Component format

To set up a PostgreSQL binding, create a component of type bindings.postgresql. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.postgresql
  version: v1
  metadata:
    # Connection string
    - name: connectionString
      value: "<CONNECTION STRING>"

Spec metadata fields

Authenticate using a connection string

The following metadata options are required to authenticate using a PostgreSQL connection string.

FieldRequiredDetailsExample
connectionStringYThe connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string."host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"

Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

FieldRequiredDetailsExample
hostYThe host name or IP address of the PostgreSQL server"localhost"
hostaddrNThe IP address of the PostgreSQL server (alternative to host)"127.0.0.1"
portYThe port number of the PostgreSQL server"5432"
databaseYThe name of the database to connect to"my_db"
userYThe PostgreSQL user to connect as"postgres"
passwordYThe password for the PostgreSQL user"example"
sslRootCertNPath to the SSL root certificate file"/path/to/ca.crt"
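
As a minimal sketch, a component using the individual connection parameters above instead of connectionString could look like the following (all values are placeholders):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.postgresql
  version: v1
  metadata:
    # Placeholder values; replace with your own connection parameters
    - name: host
      value: "localhost"
    - name: port
      value: "5432"
    - name: database
      value: "my_db"
    - name: user
      value: "postgres"
    - name: password
      value: "example"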

Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.

FieldRequiredDetailsExample
useAzureADYMust be set to true to enable the component to retrieve access tokens from Microsoft Entra ID."true"
connectionStringYThe connection string for the PostgreSQL database.
This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password.
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require"
azureTenantIdNID of the Microsoft Entra ID tenant"cd4b2887-304c-…"
azureClientIdNClient ID (application ID)"c7dd251f-811f-…"
azureClientSecretNClient secret (application password)"Ecy3X…"

Authenticate using AWS IAM

Authenticating with AWS IAM is supported with all versions of PostgreSQL-type components. The user specified in the connection string must be an existing user in the DB and an AWS IAM-enabled user granted the rds_iam database role. Authentication is based on the AWS authentication configuration file or the provided AccessKey/SecretKey. The AWS authentication token is dynamically rotated before its expiration time.

FieldRequiredDetailsExample
useAWSIAMYMust be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases."true"
connectionStringYThe connection string for the PostgreSQL database.
This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS.
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
awsRegionNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to."us-east-1"
awsAccessKeyNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account"AKIAIOSFODNN7EXAMPLE"
awsSecretKeyNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
awsSessionTokenNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials."TOKEN"

Other metadata options

FieldRequiredBinding supportDetailsExample
timeoutNOutputTimeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s"30s", 30
maxConnsNOutputMaximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs."4"
connectionMaxIdleTimeNOutputMax idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose."5m"
queryExecModeNOutputControls the default mode for executing queries. By default, Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use exec or simple_protocol."simple_protocol"

URL format

The PostgreSQL binding uses the pgx connection pool internally, so the connectionString parameter can be any valid connection string, either in DSN or URL format:

Example DSN

user=dapr password=secret host=dapr.example.com port=5432 dbname=my_dapr sslmode=verify-ca

Example URL

postgres://dapr:secret@dapr.example.com:5432/my_dapr?sslmode=verify-ca

Both methods also support connection pool configuration variables:

  • pool_min_conns: integer 0 or greater
  • pool_max_conns: integer greater than 0
  • pool_max_conn_lifetime: duration string
  • pool_max_conn_idle_time: duration string
  • pool_health_check_period: duration string
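
For example, the pool settings can be appended to the URL form as query parameters (a sketch using the variables listed above; the values are placeholders):

postgres://dapr:secret@dapr.example.com:5432/my_dapr?sslmode=verify-ca&pool_min_conns=2&pool_max_conns=10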

Binding support

This component supports output binding with the following operations:

  • exec
  • query
  • close

Parametrized queries

This binding supports parametrized queries, which allow separating the SQL query itself from user-supplied values. The usage of parametrized queries is strongly recommended for security reasons, as they prevent SQL Injection attacks.

For example:

-- ❌ WRONG! Includes values in the query and is vulnerable to SQL Injection attacks.
SELECT * FROM mytable WHERE user_key = 'something';

-- ✅ GOOD! Uses parametrized queries.
-- This will be executed with parameters ["something"]
SELECT * FROM mytable WHERE user_key = $1;

exec

The exec operation can be used for DDL operations (like table creation), as well as INSERT, UPDATE, DELETE operations which return only metadata (e.g. number of affected rows).

The params property is a string containing a JSON-encoded array of parameters.

Request

{
  "operation": "exec",
  "metadata": {
    "sql": "INSERT INTO foo (id, c1, ts) VALUES ($1, $2, $3)",
    "params": "[1, \"demo\", \"2020-09-24T11:45:05Z07:00\"]"
  }
}

Response

{
  "metadata": {
    "operation": "exec",
    "duration": "294µs",
    "start-time": "2020-09-24T11:13:46.405097Z",
    "end-time": "2020-09-24T11:13:46.414519Z",
    "rows-affected": "1",
    "sql": "INSERT INTO foo (id, c1, ts) VALUES ($1, $2, $3)"
  }
}

query

The query operation is used for SELECT statements, which returns the metadata along with data in a form of an array of row values.

The params property is a string containing a JSON-encoded array of parameters.

Request

{
  "operation": "query",
  "metadata": {
    "sql": "SELECT * FROM foo WHERE id < $1",
    "params": "[3]"
  }
}

Response

{
  "metadata": {
    "operation": "query",
    "duration": "432µs",
    "start-time": "2020-09-24T11:13:46.405097Z",
    "end-time": "2020-09-24T11:13:46.420566Z",
    "sql": "SELECT * FROM foo WHERE id < $1"
  },
  "data": "[
    [0,\"test-0\",\"2020-09-24T04:13:46Z\"],
    [1,\"test-1\",\"2020-09-24T04:13:46Z\"],
    [2,\"test-2\",\"2020-09-24T04:13:46Z\"]
  ]"
}

close

The close operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn’t have any response.

Request

{
  "operation": "close"
}

2.38 - Postmark binding spec

Detailed documentation on the Postmark binding component

Component format

To set up a Postmark binding, create a component of type bindings.postmark. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: postmark
spec:
  type: bindings.postmark
  version: v1
  metadata:
  - name: accountToken
    value: "YOUR_ACCOUNT_TOKEN" # required, this is your Postmark account token
  - name: serverToken
    value: "YOUR_SERVER_TOKEN" # required, this is your Postmark server token
  - name: emailFrom
    value: "testapp@dapr.io" # optional
  - name: emailTo
    value: "dave@dapr.io" # optional
  - name: subject
    value: "Hello!" # optional

Spec metadata fields

FieldRequiredBinding supportDetailsExample
accountTokenYOutputThe Postmark account token, this should be considered a secret value"account token"
serverTokenYOutputThe Postmark server token, this should be considered a secret value"server token"
emailFromNOutputIf set this specifies the ‘from’ email address of the email message"me@example.com"
emailToNOutputIf set this specifies the ’to’ email address of the email message"me@example.com"
emailCcNOutputIf set this specifies the ‘cc’ email address of the email message"me@example.com"
emailBccNOutputIf set this specifies the ‘bcc’ email address of the email message"me@example.com"
subjectNOutputIf set this specifies the subject of the email message"email subject"

You can specify any of the optional metadata properties on the output binding request too (e.g. emailFrom, emailTo, subject, etc.)

Combined, the optional metadata properties in the component configuration and the request payload should at least contain the emailFrom, emailTo and subject fields, as these are required to send an email successfully.

Binding support

This component supports output binding with the following operations:

  • create

Example request payload

{
  "operation": "create",
  "metadata": {
    "emailTo": "changeme@example.net",
    "subject": "An email from Dapr Postmark binding"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
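
As a minimal sketch, this payload can be sent through the Dapr HTTP bindings API, assuming the component is named postmark (as in the example above) and the Dapr HTTP port is 3500:

curl -X POST http://localhost:3500/v1.0/bindings/postmark \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "emailTo": "changeme@example.net",
          "subject": "An email from Dapr Postmark binding"
        },
        "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
      }'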

2.39 - RabbitMQ binding spec

Detailed documentation on the RabbitMQ binding component

Component format

To set up the RabbitMQ binding, create a component of type bindings.rabbitmq. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.rabbitmq
  version: v1
  metadata:
  - name: queueName
    value: "queue1"
  - name: host
    value: "amqp://[username][:password]@host.domain[:port]"
  - name: durable
    value: "true"
  - name: deleteWhenUnused
    value: "false"
  - name: ttlInSeconds
    value: "60"
  - name: prefetchCount
    value: "0"
  - name: exclusive
    value: "false"
  - name: maxPriority
    value: "5"
  - name: contentType
    value: "text/plain"
  - name: reconnectWaitInSeconds
    value: "5"
  - name: externalSasl
    value: "false"
  - name: caCert
    value: "null"
  - name: clientCert
    value: "null"
  - name: clientKey
    value: "null"
  - name: direction 
    value: "input, output"

Spec metadata fields

When a new RabbitMQ message gets published, all values from the associated metadata are added to the message’s header values.

FieldRequiredBinding supportDetailsExample
queueNameYInput/OutputThe RabbitMQ queue name"myqueue"
hostYInput/OutputThe RabbitMQ host address"amqp://[username][:password]@host.domain[:port]" or with TLS: "amqps://[username][:password]@host.domain[:port]"
durableNOutputTells RabbitMQ to persist message in storage. Defaults to "false""true", "false"
deleteWhenUnusedNInput/OutputEnables or disables auto-delete. Defaults to "false""true", "false"
ttlInSecondsNOutputSet the default message time to live at RabbitMQ queue level. If this parameter is omitted, messages won’t expire, continuing to exist on the queue until processed. See also60
prefetchCountNInputSet the Channel Prefetch Setting (QoS). If this parameter is omitted, QoS is set to 0, meaning no limit0
exclusiveNInput/OutputDetermines whether the topic will be an exclusive topic or not. Defaults to "false""true", "false"
maxPriorityNInput/OutputParameter to set the priority queue. If this parameter is omitted, queue will be created as a general queue instead of a priority queue. Value between 1 and 255. See also"1", "10"
contentTypeNInput/OutputThe content type of the message. Defaults to “text/plain”."text/plain", "application/cloudevent+json" and so on
reconnectWaitInSecondsNInput/OutputRepresents the duration in seconds that the client should wait before attempting to reconnect to the server after a disconnection occurs. Defaults to "5"."5", "10"
externalSaslNInput/OutputWith TLS, should the username be taken from an additional field (e.g. CN.) See RabbitMQ Authentication Mechanisms. Defaults to "false"."true", "false"
caCertNInput/OutputThe CA certificate to use for TLS connection. Defaults to null."-----BEGIN CERTIFICATE-----\nMI..."
clientCertNInput/OutputThe client certificate to use for TLS connection. Defaults to null."-----BEGIN CERTIFICATE-----\nMI..."
clientKeyNInput/OutputThe client key to use for TLS connection. Defaults to null."-----BEGIN PRIVATE KEY-----\nMI..."
directionNInput/OutputThe direction of the binding."input", "output", "input, output"

Binding support

This component supports both input and output binding interfaces.

This component supports output binding with the following operations:

  • create

Specifying a TTL per message

Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.

To set time to live at message level use the metadata section in the request body during the binding invocation.

The field name is ttlInSeconds.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d "{
        \"data\": {
          \"message\": \"Hi\"
        },
        \"metadata\": {
          \"ttlInSeconds\": "60"
        },
        \"operation\": \"create\"
      }"
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        },
        "operation": "create"
      }'

Specifying a priority per message

Priority can be defined at the message level. If maxPriority parameter is set, high priority messages will have priority over other low priority messages.

To set priority at message level use the metadata section in the request body during the binding invocation.

The field name is priority.

Example:

curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d "{
        \"data\": {
          \"message\": \"Hi\"
        },
        \"metadata\": {
          "priority": \"5\"
        },
        \"operation\": \"create\"
      }"
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "priority": "5"
        },
        "operation": "create"
      }'

2.40 - Redis binding spec

Detailed documentation on the Redis binding component

Component format

To set up the Redis binding, create a component of type bindings.redis. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.redis
  version: v1
  metadata:
  - name: redisHost
    value: "<address>:6379"
  - name: redisPassword
    value: "**************"
  - name: useEntraID
    value: "true"
  - name: enableTLS
    value: "<bool>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
redisHostYOutputThe Redis host address"localhost:6379"
redisPasswordNOutputThe Redis password"password"
redisUsernameNOutputUsername for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that you have created the ACL rule correctly."username"
useEntraIDNOutputImplements EntraID support for Azure Cache for Redis. Before enabling this:
  • The redisHost name must be specified in the form of "server:port"
  • TLS must be enabled
Learn more about this setting under Create a Redis instance > Azure Cache for Redis
"true", "false"
enableTLSNOutputIf the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to "false""true", "false"
clientCertNOutputThe content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here"----BEGIN CERTIFICATE-----\nMIIC..."
clientKeyNOutputThe content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here"----BEGIN PRIVATE KEY-----\nMIIE..."
failoverNOutputProperty to enable failover configuration. Needs sentinelMasterName to be set. Defaults to "false""true", "false"
sentinelMasterNameNOutputThe sentinel master name. See Redis Sentinel Documentation"", "mymaster"
sentinelUsernameNOutputUsername for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled"username"
sentinelPasswordNOutputPassword for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled"password"
redeliverIntervalNOutputThe interval between checking for pending messages to redeliver. Defaults to "60s". "0" disables redelivery."30s"
processingTimeoutNOutputThe amount of time a message must be pending before attempting to redeliver it. Defaults to "15s". "0" disables redelivery."30s"
redisTypeNOutputThe type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node"."cluster"
redisDBNOutputDatabase selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0"."0"
redisMaxRetriesNOutputMaximum number of times to retry commands before giving up. Default is to not retry failed commands."5"
redisMinRetryIntervalNOutputMinimum backoff for redis commands between each retry. Default is "8ms"; "-1" disables backoff."8ms"
redisMaxRetryIntervalNOutputMaximum backoff for redis commands between each retry. Default is "512ms";"-1" disables backoff."5s"
dialTimeoutNOutputDial timeout for establishing new connections. Defaults to "5s"."5s"
readTimeoutNOutputTimeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout."3s"
writeTimeoutNOutputTimeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Defaults is readTimeout."3s"
poolSizeNOutputMaximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU."20"
poolTimeoutNOutputAmount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second."5s"
maxConnAgeNOutputConnection age at which the client retires (closes) the connection. Default is to not close aged connections."30m"
minIdleConnsNOutputMinimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0"."2"
idleCheckFrequencyNOutputFrequency of idle checks made by idle connections reaper. Default is "1m". "-1" disables idle connections reaper."-1"
idleTimeoutNOutputAmount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m". "-1" disables idle timeout check."10m"

Binding support

This component supports output binding with the following operations:

  • create
  • get
  • delete

create

You can store a record in Redis using the create operation. This sets a key to hold a value. If the key already exists, the value is overwritten.

Request

{
  "operation": "create",
  "metadata": {
    "key": "key1"
  },
  "data": {
    "Hello": "World",
    "Lorem": "Ipsum"
  }
}

Response

An HTTP 204 (No Content) and empty body is returned if successful.
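
A minimal curl sketch for the create operation, assuming the component is named my-redis-binding (the name is illustrative) and the Dapr HTTP port is 3500:

curl -X POST http://localhost:3500/v1.0/bindings/my-redis-binding \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": { "key": "key1" },
        "data": { "Hello": "World", "Lorem": "Ipsum" }
      }'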

get

You can get a record in Redis using the get operation. This gets a key that was previously set.

This takes an optional parameter delete, which is false by default. When it is set to true, this operation uses the GETDEL operation of Redis: it returns the value that was previously set and then deletes it.

Request

{
  "operation": "get",
  "metadata": {
    "key": "key1"
  },
  "data": {
  }
}

Response

{
  "data": {
    "Hello": "World",
    "Lorem": "Ipsum"
  }
}

Request with delete flag

{
  "operation": "get",
  "metadata": {
    "key": "key1",
    "delete": "true"
  },
  "data": {
  }
}

delete

You can delete a record in Redis using the delete operation. Returns success whether the key exists or not.

Request

{
  "operation": "delete",
  "metadata": {
    "key": "key1"
  }
}

Response

An HTTP 204 (No Content) and empty body is returned if successful.

Create a Redis instance

Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later.

Note: Dapr does not support Redis >= 7. It is recommended to use Redis 6.

The Dapr CLI automatically creates and sets up a Redis Streams instance for you. The Redis instance is installed via Docker when you run dapr init, and the component file is created in the default components directory: $HOME/.dapr/components on Mac/Linux, or %USERPROFILE%\.dapr\components on Windows.

You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires Installing Helm.

  1. Install Redis into your cluster.

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install redis bitnami/redis --set image.tag=6.2
    
  2. Run kubectl get pods to see the Redis containers now running in your cluster.

  3. Add redis-master:6379 as the redisHost in your redis.yaml file. For example:

        metadata:
        - name: redisHost
          value: redis-master:6379
    
  4. Next, we’ll get our Redis password, which is slightly different depending on the OS we’re using:

    • Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which will create a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your redis password in a text file called password.txt. Copy the password and delete the two files.

    • Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the outputted password.

    Add this password as the redisPassword value in your redis.yaml file. For example:

        - name: redisPassword
          value: "lhDOkwTlp0"
    
  1. Create an Azure Cache for Redis instance using the official Microsoft documentation.

  2. Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.

    • For the Host name:
      • Navigate to the resource’s Overview page.
      • Copy the Host name value.
    • For your access key:
      • Navigate to Settings > Access Keys.
      • Copy and save your key.
  3. Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.

    • If you’re running a sample, add the host and key to the provided redis.yaml.
    • If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
  4. Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.

    Note: In a production-grade application, follow secret management instructions to securely manage your secrets.

  5. Enable EntraID support:

    • Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
    • Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
  6. Set enableTLS to "true" to support TLS.

Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.

2.41 - RethinkDB binding spec

Detailed documentation on the RethinkDB binding component

Component format

The RethinkDB state store supports transactions, which means it can be used to back Dapr actors. Dapr persists only the actor’s current state, which doesn’t allow users to track how the actor’s state may have changed over time.

To enable users to track changes to actor state, this binding leverages RethinkDB’s built-in capability to monitor a table and emit change events containing both the old and new state. The binding creates a subscription on the Dapr state table and streams these changes using the Dapr input binding interface.

To set up the RethinkDB statechange binding, create a component of type bindings.rethinkdb.statechange. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: changes
spec:
  type: bindings.rethinkdb.statechange
  version: v1
  metadata:
  - name: address
    value: "<REPLACE-RETHINKDB-ADDRESS>" # Required, e.g. 127.0.0.1:28015 or rethinkdb.default.svc.cluster.local:28015).
  - name: database
    value: "<REPLACE-RETHINKDB-DB-NAME>" # Required, e.g. dapr (alpha-numerics only)
  - name: direction 
    value: "<DIRECTION-OF-RETHINKDB-BINDING>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
addressYInputAddress of the RethinkDB server"127.0.0.1:28015", "rethinkdb.default.svc.cluster.local:28015"
databaseYInputRethinkDB database name"dapr"
directionNInputDirection of the binding"input"

Binding support

This component only supports the input binding interface.

2.42 - SFTP binding spec

Detailed documentation on the Secure File Transfer Protocol (SFTP) binding component

Component format

To set up the SFTP binding, create a component of type bindings.sftp. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.sftp
  version: v1
  metadata:
  - name: rootPath
    value: "<string>"
  - name: address
    value: "<string>"
  - name: username
    value: "<string>"
  - name: password
    value: "*****************"
  - name: privateKey
    value: "*****************"
  - name: privateKeyPassphrase
    value: "*****************"
  - name: hostPublicKey
    value: "*****************"
  - name: knownHostsFile
    value: "<string>"
  - name: insecureIgnoreHostKey
    value: "<bool>"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
rootPathYOutputRoot path for default working directory"/path"
addressYOutputAddress of SFTP server"localhost:22"
usernameYOutputUsername for authentication"username"
passwordNOutputPassword for username/password authentication"password"
privateKeyNOutputPrivate key for public key authentication
"|-
-----BEGIN OPENSSH PRIVATE KEY-----
*****************
-----END OPENSSH PRIVATE KEY-----"
privateKeyPassphraseNOutputPrivate key passphrase for public key authentication"passphrase"
hostPublicKeyNOutputHost public key for host validation"ecdsa-sha2-nistp256 *** root@openssh-server"
knownHostsFileNOutputKnown hosts file for host validation"/path/file"
insecureIgnoreHostKeyNOutputAllows skipping host validation. Defaults to "false""true", "false"

Binding support

This component supports output binding with the following operations:

  • create
  • get
  • list
  • delete

Create file

To perform a create file operation, invoke the SFTP binding with a POST method and the following JSON body:

{
  "operation": "create",
  "data": "<YOUR_BASE_64_CONTENT>",
  "metadata": {
    "fileName": "<filename>",
  }
}

Example

curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the following JSON:

{
   "fileName": "<filename>"
}
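
Since the data field must contain base64-encoded content, a typical flow is to encode the file first. A minimal sketch (assumes GNU coreutils base64; on macOS use base64 -i my-test-file.jpg instead):

# Encode the file, then invoke the create operation with the encoded content
DATA=$(base64 -w0 my-test-file.jpg)
curl -d "{ \"operation\": \"create\", \"data\": \"$DATA\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>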

Get file

To perform a get file operation, invoke the SFTP binding with a POST method and the following JSON body:

{
  "operation": "get",
  "metadata": {
    "fileName": "<filename>"
  }
}

Example

curl -d '{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"filename\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response body contains the value stored in the file.

List files

To perform a list files operation, invoke the SFTP binding with a POST method and the following JSON body:

{
  "operation": "list"
}

If you only want to list the files beneath a particular directory below the rootPath, specify the relative directory name as the fileName in the metadata.

{
  "operation": "list",
  "metadata": {
    "fileName": "my/cool/directory"
  }
}

Example

curl -d '{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

The response is a JSON array of file names.

Delete file

To perform a delete file operation, invoke the SFTP binding with a POST method and the following JSON body:

{
  "operation": "delete",
  "metadata": {
    "fileName": "myfile"
  }
}

Example

curl -d '{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

Response

An HTTP 204 (No Content) and empty body is returned if successful.

2.43 - SMTP binding spec

Detailed documentation on the SMTP binding component

Component format

To set up the SMTP binding, create a component of type bindings.smtp. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: smtp
spec:
  type: bindings.smtp
  version: v1
  metadata:
  - name: host
    value: "smtp host"
  - name: port
    value: "smtp port"
  - name: user
    value: "username"
  - name: password
    value: "password"
  - name: skipTLSVerify
    value: true|false
  - name: emailFrom
    value: "sender@example.com"
  - name: emailTo
    value: "receiver@example.com"
  - name: emailCC
    value: "cc@example.com"
  - name: emailBCC
    value: "bcc@example.com"
  - name: subject
    value: "subject"
  - name: priority
    value: "[value 1-5]"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
hostYOutputThe host where your SMTP server runs"smtphost"
portYOutputThe port your SMTP server listens on"9999"
userYOutputThe user to authenticate against the SMTP server"user"
passwordYOutputThe password of the user"password"
skipTLSVerifyNOutputIf set to true, the SMTP server’s TLS certificate will not be verified. Defaults to "false""true", "false"
emailFromNOutputIf set, this specifies the email address of the sender. See also"me@example.com"
emailToNOutputIf set, this specifies the email address of the receiver. See also"me@example.com"
emailCcNOutputIf set, this specifies the email address to CC in. See also"me@example.com"
emailBccNOutputIf set, this specifies email address to BCC in. See also"me@example.com"
subjectNOutputIf set, this specifies the subject of the email message. See also"subject of mail"
priorityNOutputIf set, this specifies the priority (X-Priority) of the email message, from 1 (lowest) to 5 (highest) (default value: 3). See also"1"

Binding support

This component supports output binding with the following operations:

  • create

Example request

You can specify any of the following optional metadata properties with each request:

  • emailFrom
  • emailTo
  • emailCC
  • emailBCC
  • subject
  • priority

When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the emailFrom, emailTo and subject fields.

The emailTo, emailCC and emailBCC fields can contain multiple email addresses separated by a semicolon.

Example:

{
  "operation": "create",
  "metadata": {
    "emailTo": "dapr-smtp-binding@example.net",
    "emailCC": "cc1@example.net; cc2@example.net",
    "subject": "Email subject",
    "priority: "1"
  },
  "data": "Testing Dapr SMTP Binding"
}
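
A minimal sketch of sending this request through the Dapr HTTP bindings API, assuming the component is named smtp (as above) and the Dapr HTTP port is 3500:

curl -X POST http://localhost:3500/v1.0/bindings/smtp \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "emailTo": "dapr-smtp-binding@example.net",
          "emailCC": "cc1@example.net; cc2@example.net",
          "subject": "Email subject",
          "priority": "1"
        },
        "data": "Testing Dapr SMTP Binding"
      }'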


2.44 - Twilio SendGrid binding spec

Detailed documentation on the Twilio SendGrid binding component

Component format

To set up the Twilio SendGrid binding, create a component of type bindings.twilio.sendgrid. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: sendgrid
spec:
  type: bindings.twilio.sendgrid
  version: v1
  metadata:
  - name: emailFrom
    value: "testapp@dapr.io" # optional
  - name: emailFromName
    value: "test app" # optional
  - name: emailTo
    value: "dave@dapr.io" # optional
  - name: emailToName
    value: "dave" # optional
  - name: subject
    value: "Hello!" # optional
  - name: emailCc
    value: "jill@dapr.io" # optional
  - name: emailBcc
    value: "bob@dapr.io" # optional
  - name: dynamicTemplateId
    value: "d-123456789" # optional
  - name: dynamicTemplateData
    value: '{"customer":{"name":"John Smith"}}' # optional
  - name: apiKey
    value: "YOUR_API_KEY" # required, this is your SendGrid key

Spec metadata fields

FieldRequiredBinding supportDetailsExample
apiKeyYOutputSendGrid API key, this should be considered a secret value"apikey"
emailFromNOutputIf set this specifies the ‘from’ email address of the email message. Only a single email address is allowed. Optional field, see below"me@example.com"
emailFromNameNOutputIf set this specifies the ‘from’ name of the email message. Optional field, see below"me"
emailToNOutputIf set this specifies the ’to’ email address of the email message. Only a single email address is allowed. Optional field, see below"me@example.com"
emailToNameNOutputIf set this specifies the ’to’ name of the email message. Optional field, see below"me"
emailCcNOutputIf set this specifies the ‘cc’ email address of the email message. Only a single email address is allowed. Optional field, see below"me@example.com"
emailBccNOutputIf set this specifies the ‘bcc’ email address of the email message. Only a single email address is allowed. Optional field, see below"me@example.com"
subjectNOutputIf set this specifies the subject of the email message. Optional field, see below"subject of the email"

Binding support

This component supports output binding with the following operations:

  • create

Example request payload

You can specify any of the optional metadata properties on the output binding request too (e.g. emailFrom, emailTo, subject, etc.)

{
  "operation": "create",
  "metadata": {
    "emailTo": "changeme@example.net",
    "subject": "An email from Dapr SendGrid binding"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
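
For instance, a minimal sketch of invoking the binding through the Dapr HTTP bindings API, assuming the component is named sendgrid (as above) and the Dapr HTTP port is 3500:

curl -X POST http://localhost:3500/v1.0/bindings/sendgrid \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "emailTo": "changeme@example.net",
          "subject": "An email from Dapr SendGrid binding"
        },
        "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
      }'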

Dynamic templates

If a dynamic template is used, a dynamicTemplateId needs to be provided and then the dynamicTemplateData is used:

{
  "operation": "create",
  "metadata": {
    "emailTo": "changeme@example.net",
    "subject": "An template email from Dapr SendGrid binding",
    "dynamicTemplateId": "d-123456789",
    "dynamicTemplateData": "{\"customer\":{\"name\":\"John Smith\"}}"
  }
}

2.45 - Twilio SMS binding spec

Detailed documentation on the Twilio SMS binding component

Component format

To set up the Twilio SMS binding, create a component of type bindings.twilio.sms. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.twilio.sms
  version: v1
  metadata:
  - name: toNumber # required.
    value: "111-111-1111"
  - name: fromNumber # required.
    value: "222-222-2222"
  - name: accountSid # required.
    value: "*****************"
  - name: authToken # required.
    value: "*****************"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
toNumberYOutputThe target number to send the sms to"111-111-1111"
fromNumberYOutputThe sender phone number"222-222-2222"
accountSidYOutputThe Twilio account SID"account sid"
authTokenYOutputThe Twilio auth token"auth token"

Binding support

This component supports output binding with the following operations:

  • create
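
As a minimal sketch of a create request via the Dapr HTTP bindings API: the request data is sent as the SMS body to the configured toNumber. The component name twilio-sms and the Dapr HTTP port 3500 are assumptions:

curl -X POST http://localhost:3500/v1.0/bindings/twilio-sms \
  -H "Content-Type: application/json" \
  -d '{ "operation": "create", "data": "Hello from Dapr!" }'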

2.46 - Wasm

Detailed documentation on the WebAssembly binding component

Overview

With WebAssembly, you can safely run code compiled in other languages. Runtimes execute WebAssembly Modules (Wasm), which are most often binaries with a .wasm extension.

The Wasm Binding allows you to invoke a program compiled to Wasm by passing commandline args or environment variables to it, similar to how you would with a normal subprocess. For example, you can satisfy an invocation using Python, even though Dapr is written in Go and is running on a platform that doesn’t have Python installed!

The Wasm binary must be a program compiled with the WebAssembly System Interface (WASI). The binary can be a program you’ve written such as in Go, or an interpreter you use to run inlined scripts, such as Python.

Minimally, you must specify a Wasm binary compiled with the canonical WASI version wasi_snapshot_preview1 (a.k.a. wasip1), often abbreviated to wasi.

Note: If compiling in Go 1.21+, this is GOOS=wasip1 GOARCH=wasm. In TinyGo, Rust, and Zig, this is the target wasm32-wasi.
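
For example, a sketch of such a build for a Go 1.21+ program (the output file name is illustrative):

# Compile the current Go module to a WASI (wasip1) binary
GOOS=wasip1 GOARCH=wasm go build -o hello.wasm .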

You can also re-use an existing binary. For example, Wasm Language Runtimes distributes interpreters (including PHP, Python, and Ruby) already compiled to WASI.

Wasm binaries are loaded from a URL. For example, the URL file://rewrite.wasm loads rewrite.wasm from the current directory of the process. On Kubernetes, see How to: Mount Pod volumes to the Dapr sidecar to configure a filesystem mount that can contain Wasm binaries. It is also possible to fetch the Wasm binary from a remote URL. In this case, the URL must point exactly to one Wasm binary. For example:

  • http://example.com/rewrite.wasm, or
  • https://example.com/rewrite.wasm.

Dapr uses wazero to run these binaries, because it has no dependencies. This allows use of WebAssembly with no installation process except Dapr itself.

The Wasm output binding supports making HTTP client calls using the wasi-http specification. Example code for making HTTP calls is available in a variety of languages.

Component format

To configure a Wasm binding, create a component of type bindings.wasm. See this guide on how to create and apply a binding configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: wasm
spec:
  type: bindings.wasm
  version: v1
  metadata:
    - name: url
      value: "file://uppercase.wasm"

Spec metadata fields

FieldDetailsRequiredExample
urlThe URL of the resource including the Wasm binary to instantiate. The supported schemes include file://, http://, and https://. The path of a file:// URL is relative to the Dapr process unless it begins with /.truefile://hello.wasm, https://example.com/hello.wasm

Binding support

This component supports output binding with the following operations:

  • execute

Example request

The data field, if present, will be the program’s STDIN. You can optionally pass metadata properties with each request:

  • args any CLI arguments, comma-separated. This excludes the program name.

For example, consider binding the url to a Ruby interpreter, such as from webassembly-language-runtimes:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: wasm
spec:
  type: bindings.wasm
  version: v1
  metadata:
  - name: url
    value: "https://github.com/vmware-labs/webassembly-language-runtimes/releases/download/ruby%2F3.2.0%2B20230215-1349da9/ruby-3.2.0-slim.wasm"

Assuming that you wanted to start your Dapr at port 3500 with the Wasm Binding, you’d run:

$ dapr run --app-id wasm --dapr-http-port 3500 --resources-path components

The following request responds Hello "salaboy":

$ curl -X POST http://localhost:3500/v1.0/bindings/wasm -d'
{
  "operation": "execute",
  "metadata": {
    "args": "-ne,print \"Hello \"; print"
  },
  "data": "salaboy"
}'

2.47 - Zeebe command binding spec

Detailed documentation on the Zeebe command binding component

Component format

To set up the Zeebe command binding, create a component of type bindings.zeebe.command. See this guide on how to create and apply a binding configuration.

See the Zeebe documentation for more information.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.zeebe.command
  version: v1
  metadata:
  - name: gatewayAddr
    value: "<host>:<port>"
  - name: gatewayKeepAlive
    value: "45s"
  - name: usePlainTextConnection
    value: "true"
  - name: caCertificatePath
    value: "/path/to/ca-cert"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
gatewayAddrYOutputZeebe gateway address"localhost:26500"
gatewayKeepAliveNOutputSets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds"45s"
usePlainTextConnectionNOutputWhether to use a plain text connection or not"true", "false"
caCertificatePathNOutputThe path to the CA cert"/path/to/ca-cert"

Binding support

This component supports output binding with the following operations:

  • topology
  • deploy-process
  • deploy-resource
  • create-instance
  • cancel-instance
  • set-variables
  • resolve-incident
  • publish-message
  • activate-jobs
  • complete-job
  • fail-job
  • update-job-retries
  • throw-error

Output binding

Zeebe uses gRPC under the hood for the Zeebe client we use in this binding. Please consult the gRPC API reference for more information.

topology

The topology operation obtains the current topology of the cluster the gateway is part of.

To perform a topology operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {},
  "operation": "topology"
}
Response

The binding returns a JSON with the following response:

{
  "brokers": [
    {
      "nodeId": null,
      "host": "172.18.0.5",
      "port": 26501,
      "partitions": [
        {
          "partitionId": 1,
          "role": null,
          "health": null
        }
      ],
      "version": "0.26.0"
    }
  ],
  "clusterSize": 1,
  "partitionsCount": 1,
  "replicationFactor": 1,
  "gatewayVersion": "0.26.0"
}

The response values are:

  • brokers - list of brokers part of this cluster
    • nodeId - unique (within a cluster) node ID for the broker
    • host - hostname of the broker
    • port - port for the broker
    • partitions - list of partitions managed or replicated on this broker
      • partitionId - the unique ID of this partition
      • role - the role of the broker for this partition
      • health - the health of this partition
    • version - broker version
  • clusterSize - how many nodes are in the cluster
  • partitionsCount - how many partitions are spread across the cluster
  • replicationFactor - configured replication factor for this cluster
  • gatewayVersion - gateway version
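
As a minimal sketch, the topology request above can be POSTed to the binding endpoint; the component name zeebe-command and the Dapr HTTP port 3500 are assumptions:

curl -X POST http://localhost:3500/v1.0/bindings/zeebe-command \
  -H "Content-Type: application/json" \
  -d '{ "data": {}, "operation": "topology" }'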

deploy-process

Deprecated alias of ‘deploy-resource’.

deploy-resource

The deploy-resource operation deploys a single resource to Zeebe. A resource can be a process (BPMN) or a decision and a decision requirement (DMN).

To perform a deploy-resource operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": "YOUR_FILE_CONTENT",
  "metadata": {
    "fileName": "products-process.bpmn"
  },
  "operation": "deploy-resource"
}

The metadata parameters are:

  • fileName - the name of the resource file
Response

The binding returns a JSON with the following response:

{
  "key": 2251799813685252,
  "deployments": [
    {
      "Metadata": {
        "Process": {
          "bpmnProcessId": "products-process",
          "version": 2,
          "processDefinitionKey": 2251799813685251,
          "resourceName": "products-process.bpmn"
        }
      }
    }
  ]
}
{
  "key": 2251799813685253,
  "deployments": [
    {
      "Metadata": {
        "Decision": {
          "dmnDecisionId": "products-approval",
          "dmnDecisionName": "Products approval",
          "version": 1,
          "decisionKey": 2251799813685252,
          "dmnDecisionRequirementsId": "Definitions_0c98xne",
          "decisionRequirementsKey": 2251799813685251
        }
      }
    },
    {
      "Metadata": {
        "DecisionRequirements": {
          "dmnDecisionRequirementsId": "Definitions_0c98xne",
          "dmnDecisionRequirementsName": "DRD",
          "version": 1,
          "decisionRequirementsKey": 2251799813685251,
          "resourceName": "products-approval.dmn"
        }
      }
    }
  ]
}

The response values are:

  • key - the unique key identifying the deployment
  • deployments - a list of deployed resources, e.g. processes
    • metadata - deployment metadata, each deployment has only one metadata
      • process- metadata of a deployed process
        • bpmnProcessId - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific process definition
        • version - the assigned process version
        • processDefinitionKey - the assigned key, which acts as a unique identifier for this process
        • resourceName - the resource name from which this process was parsed
      • decision - metadata of a deployed decision
        • dmnDecisionId - the dmn decision ID, as parsed during deployment; together with the versions forms a unique identifier for a specific decision
        • dmnDecisionName - the dmn name of the decision, as parsed during deployment
        • version - the assigned decision version
        • decisionKey - the assigned decision key, which acts as a unique identifier for this decision
        • dmnDecisionRequirementsId - the dmn ID of the decision requirements graph that this decision is part of, as parsed during deployment
        • decisionRequirementsKey - the assigned key of the decision requirements graph that this decision is part of
      • decisionRequirements - metadata of a deployed decision requirements
        • dmnDecisionRequirementsId - the dmn decision requirements ID, as parsed during deployment; together with the versions forms a unique identifier for a specific decision
        • dmnDecisionRequirementsName - the dmn name of the decision requirements, as parsed during deployment
        • version - the assigned decision requirements version
        • decisionRequirementsKey - the assigned decision requirements key, which acts as a unique identifier for this decision requirements
        • resourceName - the resource name from which this decision requirements was parsed
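
Because the data field must carry the raw resource file content as a JSON string, the request is easiest to assemble with a tool that handles JSON escaping. A minimal sketch using jq (1.6+ for --rawfile); the component name zeebe-command and the Dapr HTTP port 3500 are assumptions:

# Embed the BPMN file content into the request body and deploy it
curl -X POST http://localhost:3500/v1.0/bindings/zeebe-command \
  -H "Content-Type: application/json" \
  -d "$(jq -n --rawfile content products-process.bpmn '{data: $content, metadata: {fileName: "products-process.bpmn"}, operation: "deploy-resource"}')"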

create-instance

The create-instance operation creates and starts an instance of the specified process. The process definition to use to create the instance can be specified either using its unique key (as returned by the deploy-process operation), or using the BPMN process ID and a version.

Note that only processes with none start events can be started through this command.

Typically, process creation and execution are decoupled. This means that the command creates a new process instance and immediately responds with the process instance id. The execution of the process occurs after the response is sent. However, there are use cases that need to collect the results of a process when its execution is complete. By defining the withResult property, the command allows you to “synchronously” execute processes and receive the results via a set of variables. The response is sent when the process execution is complete.

For more information please visit the official documentation.

To perform a create-instance operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "bpmnProcessId": "products-process",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "create-instance"
}
{
  "data": {
    "processDefinitionKey": 2251799813685895,
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "create-instance"
}
{
  "data": {
    "bpmnProcessId": "products-process",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    },
    "withResult": true,
    "requestTimeout": "30s",
    "fetchVariables": ["productId"]
  },
  "operation": "create-instance"
}

The data parameters are:

  • bpmnProcessId - the BPMN process ID of the process definition to instantiate
  • processDefinitionKey - the unique key identifying the process definition to instantiate
  • version - (optional, default: latest version) the version of the process to instantiate
  • variables - (optional) JSON document that will instantiate the variables for the root variable scope of the process instance; it must be a JSON object, as variables will be mapped in a key-value fashion. For example, { "a": 1, "b": 2 } creates two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object
  • withResult - (optional, default: false) if set to true, the process will be instantiated and executed synchronously
  • requestTimeout - (optional, only used if withResult=true) the timeout after which the request is closed if the process is not completed. If requestTimeout = 0, the generic requestTimeout configured in the gateway is used.
  • fetchVariables - (optional, only used if withResult=true) list of names of variables to be included in variables property of the response. If empty, all visible variables in the root scope will be returned.
Response

The binding returns a JSON with the following response:

{
  "processDefinitionKey": 2251799813685895,
  "bpmnProcessId": "products-process",
  "version": 3,
  "processInstanceKey": 2251799813687851,
  "variables": "{\"productId\":\"some-product-id\"}"
}

The response values are:

  • processDefinitionKey - the key of the process definition which was used to create the process instance
  • bpmnProcessId - the BPMN process ID of the process definition which was used to create the process instance
  • version - the version of the process definition which was used to create the process instance
  • processInstanceKey - the unique identifier of the created process instance
  • variables - (optional, only if withResult=true was used in the request) JSON document consists of visible variables in the root scope; returned as a serialized JSON document

cancel-instance

The cancel-instance operation cancels a running process instance.

To perform a cancel-instance operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "processInstanceKey": 2251799813687851
  },
  "operation": "cancel-instance"
}

The data parameters are:

  • processInstanceKey - the process instance key
Response

The binding does not return a response body.

set-variables

The set-variables operation creates or updates variables for an element instance (e.g. process instance, flow element instance).

To perform a set-variables operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "elementInstanceKey": 2251799813687880,
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "set-variables"
}

The data parameters are:

  • elementInstanceKey - the unique identifier of a particular element; can be the process instance key (as obtained during instance creation), or a given element, such as a service task (see elementInstanceKey on the job message)
  • local - (optional, default: false) if true, the variables are merged strictly into the local scope (as indicated by elementInstanceKey); this means the variables are not propagated to upper scopes. For example, say there are two scopes, 1 and 2, with effective variables 1 => { "foo" : 2 } and 2 => { "bar" : 1 }. If an update request is sent with elementInstanceKey = 2, variables { "foo" : 5 }, and local set to true, then scope 1 is unchanged and scope 2 becomes { "bar" : 1, "foo" : 5 }. If local were false, scope 1 would become { "foo" : 5 } and scope 2 would remain { "bar" : 1 }
  • variables - a JSON serialized document describing variables as key value pairs; the root of the document must be an object
Response

The binding returns a JSON with the following response:

{
  "key": 2251799813687896
}

The response values are:

  • key - the unique key of the set variables command

resolve-incident

The resolve-incident operation resolves an incident.

To perform a resolve-incident operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "incidentKey": 2251799813686123
  },
  "operation": "resolve-incident"
}

The data parameters are:

  • incidentKey - the unique ID of the incident to resolve
Response

The binding does not return a response body.

publish-message

The publish-message operation publishes a single message. Messages are published to specific partitions computed from their correlation keys.

To perform a publish-message operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "messageName": "product-message",
    "correlationKey": "2",
    "timeToLive": "1m",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "publish-message"
}

The data parameters are:

  • messageName - the name of the message
  • correlationKey - (optional) the correlation key of the message
  • timeToLive - (optional) how long the message should be buffered on the broker
  • messageId - (optional) the unique ID of the message; can be omitted. only useful to ensure only one message with the given ID will ever be published (during its lifetime)
  • variables - (optional) the message variables as a JSON document; to be valid, the root of the document must be an object, e.g. { "a": "foo" }. [ "foo" ] would not be valid
Response

The binding returns a JSON with the following response:

{
  "key": 2251799813688225
}

The response values are:

  • key - the unique ID of the message that was published

activate-jobs

The activate-jobs operation iterates through all known partitions round-robin, activates up to the requested maximum number of jobs, and streams them back to the client as they are activated.

To perform an activate-jobs operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobType": "fetch-products",
    "maxJobsToActivate": 5,
    "timeout": "5m",
    "workerName": "products-worker",
    "fetchVariables": [
      "productId",
      "productName",
      "productKey"
    ],
    "requestTimeout": "30s"
  },
  "operation": "activate-jobs"
}

The data parameters are:

  • jobType - the job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="fetch-products" />)
  • maxJobsToActivate - the maximum jobs to activate by this request
  • timeout - (optional, default: 5 minutes) a job returned after this call will not be activated by another call until the timeout has been reached
  • workerName - (optional, default: default) the name of the worker activating the jobs, mostly used for logging purposes
  • fetchVariables - (optional) a list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned
  • requestTimeout - (optional) the request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated.
Response

The binding returns a JSON with the following response:

[
  {
    "key": 2251799813685267,
    "type": "fetch-products",
    "processInstanceKey": 2251799813685260,
    "bpmnProcessId": "products",
    "processDefinitionVersion": 1,
    "processDefinitionKey": 2251799813685249,
    "elementId": "Activity_test",
    "elementInstanceKey": 2251799813685266,
    "customHeaders": "{\"process-header-1\":\"1\",\"process-header-2\":\"2\"}",
    "worker": "test", 
    "retries": 1,
    "deadline": 1694091934039,
    "variables":"{\"productId\":\"some-product-id\"}"
  }
]

The response values are:

  • key - the key, a unique identifier for the job
  • type - the type of the job (should match what was requested)
  • processInstanceKey - the job’s process instance key
  • bpmnProcessId - the bpmn process ID of the job process definition
  • processDefinitionVersion - the version of the job process definition
  • processDefinitionKey - the key of the job process definition
  • elementId - the associated task element ID
  • elementInstanceKey - the unique key identifying the associated task, unique within the scope of the process instance
  • customHeaders - a set of custom headers defined during modelling; returned as a serialized JSON document
  • worker - the name of the worker which activated this job
  • retries - the amount of retries left to this job (should always be positive)
  • deadline - when the job can be activated again, sent as a UNIX epoch timestamp
  • variables - computed at activation time, consisting of all visible variables to the task scope; returned as a serialized JSON document

complete-job

The complete-job operation completes a job with the given payload, which allows completing the associated service task.

To perform a complete-job operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobKey": 2251799813686172,
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "complete-job"
}

The data parameters are:

  • jobKey - the unique job identifier, as obtained from the activate jobs response
  • variables - (optional) a JSON document representing the variables in the current task scope
Response

The binding does not return a response body.

fail-job

The fail-job operation marks the job as failed; if the retries argument is positive, then the job will be immediately activatable again, and a worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the job will not be activatable until the incident is resolved.

To perform a fail-job operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobKey": 2251799813685739,
    "retries": 5,
    "errorMessage": "some error occurred",
    "retryBackOff": "30s",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "fail-job"
}

The data parameters are:

  • jobKey - the unique job identifier, as obtained when activating the job
  • retries - the amount of retries the job should have left
  • errorMessage - (optional) a message describing why the job failed; this is particularly useful if a job runs out of retries and an incident is raised, as this message can help explain why the incident was raised
  • retryBackOff - (optional) the back-off timeout for the next retry
  • variables - (optional) JSON document that will instantiate the variables at the local scope of the job’s associated task; it must be a JSON object, as variables will be mapped in a key-value fashion. For example, { "a": 1, "b": 2 } creates two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object.
Response

The binding does not return a response body.

update-job-retries

The update-job-retries operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, should the underlying problem be solved.

To perform an update-job-retries operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobKey": 2251799813686172,
    "retries": 10
  },
  "operation": "update-job-retries"
}

The data parameters are:

  • jobKey - the unique job identifier, as obtained through the activate-jobs operation
  • retries - the new amount of retries for the job; must be positive
Response

The binding does not return a response body.

throw-error

The throw-error operation throws an error to indicate that a business error has occurred while processing the job. The error is identified by an error code and is handled by an error catch event in the process with the same error code.

To perform a throw-error operation, invoke the Zeebe command binding with a POST method, and the following JSON body:

{
  "data": {
    "jobKey": 2251799813686172,
    "errorCode": "product-fetch-error",
    "errorMessage": "The product could not be fetched",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "throw-error"
}

The data parameters are:

  • jobKey - the unique job identifier, as obtained when activating the job
  • errorCode - the error code that will be matched with an error catch event
  • errorMessage - (optional) an error message that provides additional context
  • variables - (optional) JSON document that will instantiate the variables at the local scope of the job’s associated task; it must be a JSON object, as variables will be mapped in a key-value fashion. For example, { "a": 1, "b": 2 } creates two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object.
Response

The binding does not return a response body.

2.48 - Zeebe JobWorker binding spec

Detailed documentation on the Zeebe JobWorker binding component

Component format

To set up the Zeebe JobWorker binding, create a component of type bindings.zeebe.jobworker. See this guide on how to create and apply a binding configuration.

See the Zeebe JobWorker documentation for more information.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: bindings.zeebe.jobworker
  version: v1
  metadata:
  - name: gatewayAddr
    value: "<host>:<port>"
  - name: gatewayKeepAlive
    value: "45s"
  - name: usePlainTextConnection
    value: "true"
  - name: caCertificatePath
    value: "/path/to/ca-cert"
  - name: workerName
    value: "products-worker"
  - name: workerTimeout
    value: "5m"
  - name: requestTimeout
    value: "15s"
  - name: jobType
    value: "fetch-products"
  - name: maxJobsActive
    value: "32"
  - name: concurrency
    value: "4"
  - name: pollInterval
    value: "100ms"
  - name: pollThreshold
    value: "0.3"
  - name: fetchVariables
    value: "productId, productName, productKey"
  - name: autocomplete
    value: "true"
  - name: retryBackOff
    value: "30s"
  - name: direction
    value: "input"

Spec metadata fields

FieldRequiredBinding supportDetailsExample
gatewayAddrYInputZeebe gateway address"localhost:26500"
gatewayKeepAliveNInputSets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds"45s"
usePlainTextConnectionNInputWhether to use a plain text connection or not"true", "false"
caCertificatePathNInputThe path to the CA cert"/path/to/ca-cert"
workerNameNInputThe name of the worker activating the jobs, mostly used for logging purposes"products-worker"
workerTimeoutNInputA job returned after this call will not be activated by another call until the timeout has been reached; defaults to 5 minutes"5m"
requestTimeoutNInputThe request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated. Defaults to 10 seconds"30s"
jobTypeYInputThe job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="fetch-products" />)"fetch-products"
maxJobsActiveNInputSet the maximum number of jobs which will be activated for this worker at the same time. Defaults to 32"32"
concurrencyNInputThe maximum number of concurrent spawned goroutines to complete jobs. Defaults to 4"4"
pollIntervalNInputSet the maximal interval between polling for new jobs. Defaults to 100 milliseconds"100ms"
pollThresholdNInputSet the threshold of buffered activated jobs before polling for new jobs, i.e. threshold * maxJobsActive. Defaults to 0.3"0.3"
fetchVariablesNInputA list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned"productId", "productName", "productKey"
autocompleteNInputIndicates if a job should be autocompleted or not. If not set, all jobs will be auto-completed by default. Disable it if the worker should manually complete or fail the job with either a business error or an incident"true", "false"
retryBackOffNInputThe back-off timeout for the next retry if a job fails15s
directionNInputThe direction of the binding"input"

Binding support

This component supports input binding interfaces.

Input binding

Variables

The Zeebe process engine handles the process state as well as process variables, which can be passed on process instantiation or which can be updated or created during process execution. These variables can be passed to a registered job worker by defining the variable names as a comma-separated list in the fetchVariables metadata field. The process engine will then pass these variables with their current values to the job worker implementation.

If the binding registers the three variables productId, productName and productKey, then the worker will be called with the following JSON body:

{
  "productId": "some-product-id",
  "productName": "some-product-name",
  "productKey": "some-product-key"
}

Note: if the fetchVariables metadata field is not set, all process variables will be passed to the worker.

Headers

The Zeebe process engine has the ability to pass custom task headers to a job worker. These headers can be defined for every service task. Task headers will be passed by the binding as metadata (HTTP headers) to the job worker.

The binding will also pass the following job-related variables as metadata. The values are passed as strings. The table also lists the original data type so that the value can be converted back to the equivalent type in the programming language used for the worker. An illustrative request is shown after the table.

MetadataData typeDescription
X-Zeebe-Job-Keyint64The key, a unique identifier for the job
X-Zeebe-Job-TypestringThe type of the job (should match what was requested)
X-Zeebe-Process-Instance-Keyint64The job’s process instance key
X-Zeebe-Bpmn-Process-IdstringThe bpmn process ID of the job process definition
X-Zeebe-Process-Definition-Versionint32The version of the job process definition
X-Zeebe-Process-Definition-Keyint64The key of the job process definition
X-Zeebe-Element-IdstringThe associated task element ID
X-Zeebe-Element-Instance-Keyint64The unique key identifying the associated task, unique within the scope of the process instance
X-Zeebe-WorkerstringThe name of the worker which activated this job
X-Zeebe-Retriesint32The amount of retries left to this job (should always be positive)
X-Zeebe-Deadlineint64When the job can be activated again, sent as a UNIX epoch timestamp
X-Zeebe-AutocompleteboolThe autocomplete status that is defined in the binding metadata
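
As a purely illustrative sketch (the request path depends on the name you gave the binding component, and the job key, process ID, element ID, and other values below are made up), an invocation of the worker application by this binding could look roughly like the following:

POST /<binding-name> HTTP/1.1
Content-Type: application/json
X-Zeebe-Job-Key: 2251799813686172
X-Zeebe-Job-Type: fetch-products
X-Zeebe-Bpmn-Process-Id: order-process
X-Zeebe-Element-Id: fetch-products-task
X-Zeebe-Worker: products-worker
X-Zeebe-Retries: 3
X-Zeebe-Autocomplete: true

{
  "productId": "some-product-id",
  "productName": "some-product-name",
  "productKey": "some-product-key"
}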

3 - State store component specs

The supported state stores that interface with Dapr

The following table lists state stores supported, at various levels, by the Dapr state management building block. Learn how to set up different state stores for Dapr state management.

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

Generic

ComponentCRUDTransactionalETagTTLActorsQueryStatusComponent versionSince runtime version
AerospikeTransactions: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedAlphav11.0
Apache CassandraTransactions: Not supportedETag: Not supportedActors: Not supportedQuery: Not supportedStablev11.9
CockroachDBStablev11.10
CouchbaseTransactions: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedAlphav11.0
etcdQuery: Not supportedBetav21.12
Hashicorp ConsulTransactions: Not supportedETag: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedAlphav11.0
HazelcastTransactions: Not supportedETag: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedAlphav11.0
In-memoryQuery: Not supportedStablev11.9
JetStream KVTransactions: Not supportedETag: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedAlphav11.7
MemcachedTransactions: Not supportedETag: Not supportedActors: Not supportedQuery: Not supportedStablev11.9
MongoDBStablev11.0
MySQL & MariaDBQuery: Not supportedStablev11.10
Oracle DatabaseQuery: Not supportedBetav11.7
PostgreSQL v1Stablev11.0
PostgreSQL v2Query: Not supportedStablev21.13
RedisStablev11.0
RethinkDBTransactions: Not supportedETag: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedBetav11.9
SQLiteQuery: Not supportedStablev11.11
ZookeeperTransactions: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedAlphav11.0

Amazon Web Services (AWS)

ComponentCRUDTransactionalETagTTLActorsQueryStatusComponent versionSince runtime version
AWS DynamoDBQuery: Not supportedStablev11.10

Cloudflare

ComponentCRUDTransactionalETagTTLActorsQueryStatusComponent versionSince runtime version
Cloudflare Workers KVTransactions: Not supportedETag: Not supportedActors: Not supportedQuery: Not supportedBetav11.10

Google Cloud Platform (GCP)

ComponentCRUDTransactionalETagTTLActorsQueryStatusComponent versionSince runtime version
GCP FirestoreTransactions: Not supportedETag: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedStablev11.11

Microsoft Azure

ComponentCRUDTransactionalETagTTLActorsQueryStatusComponent versionSince runtime version
Azure Blob StorageTransactions: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedStablev21.13
Azure Cosmos DBStablev11.0
Azure Table StorageTransactions: Not supportedTTL: Not supportedActors: Not supportedQuery: Not supportedStablev11.9
Microsoft SQL ServerQuery: Not supportedStablev11.5

Oracle Cloud

ComponentCRUDTransactionalETagTTLActorsQueryStatusComponent versionSince runtime version
Autonomous Database (ATP and ADW)Query: Not supportedAlphav11.7
CoherenceTransactions: Not supportedETag: Not supportedActors: Not supportedQuery: Not supportedAlphav11.16
Object StorageTransactions: Not supportedActors: Not supportedQuery: Not supportedAlphav11.6

3.1 - Aerospike

Detailed information on the Aerospike state store component

Component format

To set up the Aerospike state store, create a component of type state.Aerospike. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.Aerospike
  version: v1
  metadata:
  - name: hosts
    value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of hosts. Example: "aerospike:3000,aerospike2:3000"
  - name: namespace
    value: <REPLACE-WITH-NAMESPACE> # Required. The aerospike namespace.
  - name: set
    value: <REPLACE-WITH-SET> # Optional

Spec metadata fields

FieldRequiredDetailsExample
hostsYHost name/port of database server"localhost:3000", "aerospike:3000,aerospike2:3000"
namespaceYThe Aerospike namespace"namespace"
setNThe setName in the database"myset"

Setup Aerospike

You can run Aerospike locally using Docker:

docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike

You can then interact with the server using localhost:3000.

The easiest way to install Aerospike on Kubernetes is by using the Helm chart:

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name my-aerospike --namespace aerospike stable/aerospike

This installs Aerospike into the aerospike namespace. To interact with Aerospike, find the service with: kubectl get svc aerospike -n aerospike.

For example, if installing using the example above, the Aerospike host address would be:

aerospike-my-aerospike.aerospike.svc.cluster.local:3000

3.2 - AWS DynamoDB

Detailed information on the AWS DynamoDB state store component

Component format

To set up a DynamoDB state store, create a component of type state.aws.dynamodb. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.aws.dynamodb
  version: v1
  metadata:
  - name: table
    value: "Contracts"
  - name: accessKey
    value: "AKIAIOSFODNN7EXAMPLE" # Optional
  - name: secretKey
    value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" # Optional
  - name: endpoint
    value: "http://localhost:8080" # Optional
  - name: region
    value: "eu-west-1" # Optional
  - name: sessionToken
    value: "myTOKEN" # Optional
  - name: ttlAttributeName
    value: "expiresAt" # Optional
  - name: partitionKey
    value: "ContractID" # Optional
  # Uncomment this if you wish to use AWS DynamoDB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

Primary Key

In order to use DynamoDB as a Dapr state store, the table must have a primary key named key. See the section Partition Keys for an option to change this behavior.
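
For example, the following AWS CLI command is a minimal sketch of creating such a table (the table name and billing mode are illustrative; adjust them to your environment):

aws dynamodb create-table \
    --table-name Contracts \
    --attribute-definitions AttributeName=key,AttributeType=S \
    --key-schema AttributeName=key,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST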

Spec metadata fields

FieldRequiredDetailsExample
tableYname of the DynamoDB table to use"Contracts"
accessKeyNAWS access key ID with appropriate permissions for DynamoDB. Can be secretKeyRef to use a secret reference"AKIAIOSFODNN7EXAMPLE"
secretKeyNSecret for the AWS user. Can be secretKeyRef to use a secret reference"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
regionNThe AWS region of the instance. See this page for valid regions: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html. Ensure that DynamoDB is available in that region."us-east-1"
endpointNAWS endpoint for the component to use. Only used for local development. The endpoint is unnecessary when running against production AWS"http://localhost:4566"
sessionTokenNAWS session token to use. A session token is only required if you are using temporary security credentials."TOKEN"
ttlAttributeNameNThe table attribute name which should be used for TTL."expiresAt"
partitionKeyNThe table primary key or partition key attribute name. This field is used to replace the default primary key attribute name "key". See the section Partition Keys."ContractID"
actorStateStoreNConsider this state store for actors. Defaults to “false”"true", "false"

Setup AWS DynamoDB

See Authenticating to AWS for information about authentication-related attributes

Time to live (TTL)

In order to use the DynamoDB TTL feature, you must enable TTL on your table and define the attribute name. The attribute name must be defined in the ttlAttributeName field. See the official AWS docs.
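
As a minimal sketch, reusing the Contracts table and the expiresAt attribute from the examples on this page, TTL can be enabled with the AWS CLI as follows:

aws dynamodb update-time-to-live \
    --table-name Contracts \
    --time-to-live-specification "Enabled=true, AttributeName=expiresAt"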

Partition Keys

By default, the DynamoDB state store component uses the table attribute name key as primary/partition key in the DynamoDB table. This can be overridden by specifying a metadata field in the component configuration with a key of partitionKey and a value of the desired attribute name.

To learn more about DynamoDB primary/partition keys, read the AWS DynamoDB Developer Guide.

The following statestore.yaml file shows how to configure the DynamoDB state store component to use the partition key attribute name of ContractID:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.aws.dynamodb
  version: v1
  metadata:
  - name: table
    value: "Contracts"
  - name: partitionKey
    value: "ContractID"

The above component specification assumes the following DynamoDB Table Layout:

{
    "Table": {
        "AttributeDefinitions": [
            {
                "AttributeName": "ContractID",
                "AttributeType": "S"
            }
        ],
        "TableName": "Contracts",
        "KeySchema": [
            {
                "AttributeName": "ContractID",
                "KeyType": "HASH"
            }
        ],
}

The following operation passes "A12345" as the value for key, and based on the component specification provided above, the Dapr runtime will replace the key attribute name with ContractID as the Partition/Primary Key sent to DynamoDB:

$ dapr run --app-id contractsprocessing --app-port ...

$ curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "A12345",
          "value": "Dapr Contract"
        }
      ]'

The following AWS CLI Command displays the contents of the DynamoDB Contracts table:

$ aws dynamodb get-item \
    --table-name Contracts \
    --key '{"ContractID":{"S":"contractsprocessing||A12345"}}' 
{
    "Item": {
        "value": {
            "S": "Dapr Contract"
        },
        "etag": {
            "S": "....."
        },
        "ContractID": {
            "S": "contractsprocessing||A12345"
        }
    }
}

3.3 - Azure Blob Storage

Detailed information on the Azure Blob Store state store component

Component format

To set up the Azure Blob Storage state store, create a component of type state.azure.blobstorage. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.azure.blobstorage
  # Supports v1 and v2. Users should always use v2 by default. There is no
  # migration path from v1 to v2, see `versioning` below.
  version: v2
  metadata:
  - name: accountName
    value: "[your_account_name]"
  - name: accountKey
    value: "[your_account_key]"
  - name: containerName
    value: "[your_container_name]"

Versioning

Dapr has 2 versions of the Azure Blob Storage state store component: v1 and v2. It is recommended to use v2 for all new applications. v1 is considered legacy and is preserved for compatibility with existing applications only.

In v1, a longstanding implementation issue was identified, where the key prefix was incorrectly stripped by the component, essentially behaving as if keyPrefix was always set to none.
The updated v2 of the component fixes the incorrect behavior and makes the state store correctly respect the keyPrefix property.

While v1 and v2 have the same metadata fields, they are otherwise incompatible, with no automatic data migration path for v1 to v2.

If you are already using v1 of this component, continue to use v1; adopt v2 only when creating a new state store, since existing data cannot be migrated.
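
Because v2 respects the keyPrefix property, you can control the prefixing strategy through the component metadata. The following is a minimal sketch that assumes the common keyPrefix state store metadata field (typical values include appid, namespace, and none); verify the supported values in the Dapr state management documentation for your runtime version:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.azure.blobstorage
  version: v2
  metadata:
  - name: accountName
    value: "[your_account_name]"
  - name: accountKey
    value: "[your_account_key]"
  - name: containerName
    value: "[your_container_name]"
  # Assumed key prefix strategy; "appid" is the usual default
  - name: keyPrefix
    value: "namespace"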

Spec metadata fields

FieldRequiredDetailsExample
accountNameYThe storage account name"mystorageaccount".
accountKeyY (unless using Microsoft Entra ID)Primary or secondary storage key"key"
containerNameYThe name of the container to be used for Dapr state. The container will be created for you if it doesn’t exist"container"
azureEnvironmentNOptional name for the Azure environment if using a different Azure cloud"AZUREPUBLICCLOUD" (default value), "AZURECHINACLOUD", "AZUREUSGOVERNMENTCLOUD"
endpointNOptional custom endpoint URL. This is useful when using the Azurite emulator or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (http:// or https://), the IP or FQDN, and optional port."http://127.0.0.1:10000"
ContentTypeNThe blob’s content type"text/plain"
ContentMD5NThe blob’s MD5 hash"vZGKbMRDAnMs4BIwlXaRvQ=="
ContentEncodingNThe blob’s content encoding"UTF-8"
ContentLanguageNThe blob’s content language"en-us"
ContentDispositionNThe blob’s content disposition. Conveys additional information about how to process the response payload"attachment"
CacheControlNThe blob’s cache control"no-cache"

Setup Azure Blob Storage

Follow the instructions from the Azure documentation on how to create an Azure Storage Account.

If you wish to create a container for Dapr to use, you can do so beforehand. However, the Blob Storage state provider will create one for you automatically if it doesn’t exist.

In order to setup Azure Blob Storage as a state store, you will need the following properties:

  • accountName: The storage account name. For example: mystorageaccount.
  • accountKey: Primary or secondary storage account key.
  • containerName: The name of the container to be used for Dapr state. The container will be created for you if it doesn’t exist.

Authenticating with Microsoft Entra ID

This component supports authentication with Microsoft Entra ID as an alternative to using account keys. Whenever possible, it is recommended that you use Microsoft Entra ID for authentication in production systems, to take advantage of better security, fine-tuned access control, and the ability to use managed identities for apps running on Azure.

The following scripts are optimized for a bash or zsh shell and require the Azure CLI and jq to be installed.

You must also be authenticated with Azure in your Azure CLI.

  1. To get started with using Microsoft Entra ID for authenticating the Blob Storage state store component, make sure you’ve created a Microsoft Entra ID application and a Service Principal as explained in the Authenticating to Azure document.
    Once done, set a variable with the ID of the Service Principal that you created:
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
  2. Set the following variables with the name of your Azure Storage Account and the name of the Resource Group where it’s located:
STORAGE_ACCOUNT_NAME="[your_storage_account_name]"
RG_NAME="[your_resource_group_name]"
  3. Using RBAC, assign a role to your Service Principal so it can access data inside the Storage Account.
    In this case, you are assigning the “Storage Blob Data Contributor” role, which has broad access; other more restrictive roles can be used as well, depending on your application.
RG_ID=$(az group show --resource-group ${RG_NAME} | jq -r ".id")
az role assignment create \
  --assignee "${SERVICE_PRINCIPAL_ID}" \
  --role "Storage blob Data Contributor" \
  --scope "${RG_ID}/providers/Microsoft.Storage/storageAccounts/${STORAGE_ACCOUNT_NAME}"

When authenticating your component using Microsoft Entra ID, the accountKey field is not required. Instead, please specify the required credentials in the component’s metadata (if any) according to the Authenticating to Azure document.

For example:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: "[your_account_name]"
  - name: containerName
    value: "[your_container_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureClientSecret
    value : "[your_client_secret]"

Apply the configuration

In Kubernetes

To apply Azure Blob Storage state store to Kubernetes, use the kubectl CLI:

kubectl apply -f azureblob.yaml

Running locally

To run locally, create a components dir containing the YAML file and provide the path to the dapr run command with the flag --resources-path.
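
For example, a minimal sketch of a local run (the app ID and the application start command are placeholders for your own):

# Assumes the component YAML above is saved in ./components
dapr run --app-id myservice --resources-path ./components -- <command to start your app>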

This state store creates a blob file in the container and puts raw state inside it.

For example, the following operation coming from a service called myservice:

curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'

This creates a blob file in the container with the key as the filename and the value as the contents of the file.
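
To verify the result, you can list the blobs in the container, for example with the Azure CLI (a sketch; it assumes you are already authenticated and substitutes your own account and container names). Depending on the key prefix strategy, the blob name may carry the app ID as a prefix:

az storage blob list \
  --account-name "[your_account_name]" \
  --container-name "[your_container_name]" \
  --output table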

Concurrency

Azure Blob Storage state concurrency is achieved by using ETags according to the Azure Blob Storage documentation.

3.4 - Azure Cosmos DB (SQL API)

Detailed information on the Azure Cosmos DB (SQL API) state store component

Component format

To set up the Azure Cosmos DB state store, create a component of type state.azure.cosmosdb. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: <REPLACE-WITH-URL>
  - name: masterKey
    value: <REPLACE-WITH-MASTER-KEY>
  - name: database
    value: <REPLACE-WITH-DATABASE>
  - name: collection
    value: <REPLACE-WITH-COLLECTION>
  # Uncomment this if you wish to use Azure Cosmos DB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

If you wish to use Cosmos DB as an actor store, append the following to the yaml.

  - name: actorStateStore
    value: "true"

Spec metadata fields

FieldRequiredDetailsExample
urlYThe Cosmos DB url"https://******.documents.azure.com:443/".
masterKeyY*The key to authenticate to the Cosmos DB account. Only required when not using Microsoft Entra ID authentication."key"
databaseYThe name of the database"db"
collectionYThe name of the collection (container)"collection"
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

Microsoft Entra ID authentication

The Azure Cosmos DB state store component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

You can read additional information for setting up Cosmos DB with Microsoft Entra ID authentication in the section below.

Setup Azure Cosmos DB

Follow the instructions from the Azure documentation on how to create an Azure Cosmos DB account. The database and collection must be created in Cosmos DB before Dapr can use it.

Important: The partition key for the collection must be named /partitionKey (note: this is case-sensitive).

In order to setup Cosmos DB as a state store, you need the following properties:

  • URL: the Cosmos DB URL. For example: https://******.documents.azure.com:443/
  • Master Key: The key to authenticate to the Cosmos DB account. Skip this if using Microsoft Entra ID authentication.
  • Database: The name of the database
  • Collection: The name of the collection (or container)

TTLs and cleanups

This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to override the default TTL on the Cosmos DB container, indicating when the data should be considered “expired”. Note that this value only takes effect if the container’s DefaultTimeToLive field has a non-NULL value. See the Cosmos DB documentation for more information.
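
For example, the following sketch (reusing the example key and value from the Partition keys section below) stores a record that is considered expired after 120 seconds, provided the container’s DefaultTimeToLive is set:

curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "nihilus",
          "value": "darth",
          "metadata": {
            "ttlInSeconds": "120"
          }
        }
      ]'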

Best Practices for Production Use

Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the Cosmos DB documentation)

Therefore several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:

  • Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by scoping your components to specific applications.
  • Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
  • Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
  • Increase the initTimeout value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization for up to 5 minutes. The default value is 5s and should be increased. When using Kubernetes, increasing this value may also require an update to your Readiness and Liveness probes.
spec:
  type: state.azure.cosmosdb
  version: v1
  initTimeout: 5m
  metadata:

Data format

To use the Cosmos DB state store, your data must be sent to Dapr in JSON-serialized format. Data that is merely JSON-serializable, but not actually serialized to JSON, will not work.

If you are using the Dapr SDKs (for example the .NET SDK), the SDK automatically serializes your data to JSON.

If you want to invoke Dapr’s HTTP endpoint directly, take a look at the examples (using curl) in the Partition keys section below.

Partition keys

For non-actor state operations, the Azure Cosmos DB state store will use the key property provided in the requests to the Dapr API to determine the Cosmos DB partition key. This can be overridden by specifying a metadata field in the request with a key of partitionKey and a value of the desired partition.

The following operation uses nihilus as the partition key value sent to Cosmos DB:

curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'

For non-actor state operations, if you want to control the Cosmos DB partition, you can specify it in metadata. Reusing the example above, here’s how to put it under the mypartition partition:

curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "nihilus",
          "value": "darth",
          "metadata": {
            "partitionKey": "mypartition"
          }
        }
      ]'

For actor state operations, the partition key is generated by Dapr using the appId, the actor type, and the actor id, such that data for the same actor always ends up under the same partition (you do not need to specify it). This is because actor state operations must use transactions, and in Cosmos DB the items in a transaction must be on the same partition.

Setting up Cosmos DB for authenticating with Microsoft Entra ID

When using the Dapr Cosmos DB state store and authenticating with Microsoft Entra ID, you need to perform a few additional steps to set up your environment.

Prerequisites:

  • You need a Service Principal created as per the instructions in the authenticating to Azure page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for azureClientId in the metadata).
  • Azure CLI
  • jq
  • The scripts below are optimized for a bash or zsh shell

Granting your Microsoft Entra ID application access to Cosmos DB

You can find more information on the official documentation, including instructions to assign more granular permissions.

In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you’re going to use a built-in role, “Cosmos DB Built-in Data Contributor”, which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.

# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"

az cosmosdb sql role assignment create \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --scope "/" \
  --principal-id "$PRINCIPAL_ID" \
  --role-definition-id "$ROLE_ID"

Optimizations

Optimizing Cosmos DB for bulk operation write performance

If you are building a system that only ever reads data from Cosmos DB via key (id), which is the default Dapr behavior when using the state management API or actors, there are ways you can optimize Cosmos DB for improved write speeds. This is done by excluding all paths from indexing. By default, Cosmos DB indexes all fields inside of a document. On systems that are write-heavy and run little-to-no queries on values within a document, this indexing policy slows down the time it takes to write or update a document in Cosmos DB. This is exacerbated in high-volume systems.

For example, the default Terraform definition for a Cosmos SQL container indexing reads as follows:

indexing_policy {
  indexing_mode = "consistent"

  included_path {
    path = "/*"
  }
}

It is possible to force Cosmos DB to only index the id and partitionKey fields by excluding all other fields from indexing. This can be done by updating the above to read as follows:

indexing_policy {
  # This could also be set to "none" if you are using the container purely as a key-value store. This may be applicable if your container is only going to be used as a distributed cache.
  indexing_mode = "consistent" 

  # Note that included_path has been replaced with excluded_path
  excluded_path {
    path = "/*"
  }
}

Optimizing Cosmos DB for cost savings

If you intend to use Cosmos DB only as a key-value store, it may be in your interest to consider converting your state object to JSON and compressing it before persisting it to state, and subsequently decompressing it when reading it out of state. This is because Cosmos DB bills your usage based on the maximum number of RU/s used in a given time period (typically each hour). Furthermore, RU usage is calculated as 1 RU per 1 KB of data you read or write. Compression helps by reducing the size of the data stored in Cosmos DB and subsequently reducing RU usage.

These savings are particularly significant for Dapr actors. While the Dapr State Management API does a base64 encoding of your object before saving, Dapr actor state is saved as raw, formatted JSON. This means multiple lines with indentations for formatting. Compressing can significantly reduce the size of actor state objects. For example, if you have an actor state object that is 75KB in size when the actor is hydrated, you will use 75 RU/s to read that object out of state. If you then modify the state object and it grows to 100KB, you will use 100 RU/s to write that object to Cosmos DB, totaling 175 RU/s for the I/O operation. If your actors are concurrently handling 1000 requests per second, you will need at least 175,000 RU/s to meet that load. With effective compression, the size reduction can be in the region of 90%, which means you will only need in the region of 17,500 RU/s to meet the load.

3.5 - Azure Table Storage

Detailed information on the Azure Table Storage state store component which can be used to connect to Cosmos DB Table API and Azure Tables

Component format

To set up the Azure Table Storage state store, create a component of type state.azure.tablestorage. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.azure.tablestorage
  version: v1
  metadata:
  - name: accountName
    value: <REPLACE-WITH-ACCOUNT-NAME>
  - name: accountKey
    value: <REPLACE-WITH-ACCOUNT-KEY>
  - name: tableName
    value: <REPLACE-WITH-TABLE-NAME>
# - name: cosmosDbMode
#   value: false

Spec metadata fields

FieldRequiredDetailsExample
accountNameYThe storage account name"mystorageaccount".
accountKeyYPrimary or secondary storage key"key"
tableNameYThe name of the table to be used for Dapr state. The table will be created for you if it doesn’t exist"table"
cosmosDbModeNIf enabled, connects to Cosmos DB Table API instead of Azure Tables (Storage Accounts). Defaults to false."false"
serviceURLNThe full storage service endpoint URL. Useful for Azure environments other than public cloud."https://mystorageaccount.table.core.windows.net/"
skipCreateTableNSkips the check for and, if necessary, creation of the specified storage table. This is useful when using active directory authentication with minimal privileges. Defaults to false."true"

Microsoft Entra ID authentication

The Azure Table Storage state store component supports authentication using all Microsoft Entra ID mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Microsoft Entra ID authentication mechanism, see the docs for authenticating to Azure.

You can read additional information for setting up Cosmos DB with Microsoft Entra ID authentication in the section below.

Option 1: Setup Azure Table Storage

Follow the instructions from the Azure documentation on how to create an Azure Storage Account.

If you wish to create a table for Dapr to use, you can do so beforehand. However, Table Storage state provider will create one for you automatically if it doesn’t exist, unless the skipCreateTable option is enabled.

In order to setup Azure Table Storage as a state store, you will need the following properties:

  • AccountName: The storage account name. For example: mystorageaccount.
  • AccountKey: Primary or secondary storage key. Skip this if using Microsoft Entra ID authentication.
  • TableName: The name of the table to be used for Dapr state. The table will be created for you if it doesn’t exist, unless the skipCreateTable option is enabled.
  • cosmosDbMode: Set this to false to connect to Azure Tables.

Option 2: Setup Azure Cosmos DB Table API

Follow the instructions from the Azure documentation on creating a Cosmos DB account with Table API.

If you wish to create a table for Dapr to use, you can do so beforehand. However, Table Storage state provider will create one for you automatically if it doesn’t exist, unless the skipCreateTable option is enabled.

In order to setup Azure Cosmos DB Table API as a state store, you will need the following properties:

  • AccountName: The Cosmos DB account name. For example: mycosmosaccount.
  • AccountKey: The Cosmos DB master key. Skip this if using Microsoft Entra ID authentication.
  • TableName: The name of the table to be used for Dapr state. The table will be created for you if it doesn’t exist, unless the skipCreateTable option is enabled.
  • cosmosDbMode: Set this to true to connect to the Cosmos DB Table API.

Partitioning

The Azure Table Storage state store uses the key property provided in the requests to the Dapr API to determine the row key. The service name is used for the partition key. This provides the best performance, as each service type stores state in its own table partition.

This state store creates a column called Value in the table storage and puts raw state inside it.

For example, the following operation coming from a service called myservice

curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'

will create the following record in a table:

PartitionKeyRowKeyValue
myservicenihilusdarth

Concurrency

Azure Table Storage state concurrency is achieved by using ETags according to the official documentation.

3.6 - Cassandra

Detailed information on the Cassandra state store component

Component format

To set up the Cassandra state store, create a component of type state.cassandra. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.cassandra
  version: v1
  metadata:
  - name: hosts
    value: <REPLACE-WITH-COMMA-DELIMITED-HOSTS> # Required. Example: cassandra.cassandra.svc.cluster.local
  - name: username
    value: <REPLACE-WITH-USERNAME> # Optional. default: ""
  - name: password
    value: <REPLACE-WITH-PASSWORD> # Optional. default: ""
  - name: consistency
    value: <REPLACE-WITH-CONSISTENCY> # Optional. default: "All"
  - name: table
    value: <REPLACE-WITH-TABLE> # Optional. default: "items"
  - name: keyspace
    value: <REPLACE-WITH-KEYSPACE> # Optional. default: "dapr"
  - name: protoVersion
    value: <REPLACE-WITH-PROTO-VERSION> # Optional. default: "4"
  - name: replicationFactor
    value: <REPLACE-WITH-REPLICATION-FACTOR> #  Optional. default: "1"

Spec metadata fields

FieldRequiredDetailsExample
hostsYComma separated value of the hosts"cassandra.cassandra.svc.cluster.local".
portNPort for communication. Default "9042""9042"
usernameYThe username of database user. No default"user"
passwordYThe password for the user"password"
consistencyNThe consistency values"All", "Quorum"
tableNTable name. Defaults to "items""items", "tab"
keyspaceNThe cassandra keyspace to use. Defaults to "dapr""dapr"
protoVersionNThe proto version for the client. Defaults to "4""3", "4"
replicationFactorNThe replication factor for the calls. Defaults to "1""3"

Setup Cassandra

You can run Cassandra locally with the Datastax Docker image:

docker run -e DS_LICENSE=accept --memory 4g --name my-dse -d datastax/dse-server -g -s -k

You can then interact with the server using localhost:9042.

The easiest way to install Cassandra on Kubernetes is by using the Helm chart:

kubectl create namespace cassandra
helm install cassandra incubator/cassandra --namespace cassandra

This installs Cassandra into the cassandra namespace by default. To interact with Cassandra, find the service with: kubectl get svc -n cassandra.

For example, if installing using the example above, the Cassandra DNS would be:

cassandra.cassandra.svc.cluster.local

Apache Ignite

Apache Ignite’s integration with Cassandra as a caching layer is not supported by this component.

3.7 - Cloudflare Workers KV

Detailed information on the Cloudflare Workers KV state store component

Create a Dapr component

To set up a Cloudflare Workers KV state store, create a component of type state.cloudflare.workerskv. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.cloudflare.workerskv
  version: v1
  # Increase the initTimeout if Dapr is managing the Worker for you
  initTimeout: "120s"
  metadata:
    # ID of the Workers KV namespace (required)
    - name: kvNamespaceID
      value: ""
    # Name of the Worker (required)
    - name: workerName
      value: ""
    # PEM-encoded private Ed25519 key (required)
    - name: key
      value: |
        -----BEGIN PRIVATE KEY-----
        MC4CAQ...
        -----END PRIVATE KEY-----
    # Cloudflare account ID (required to have Dapr manage the Worker)
    - name: cfAccountID
      value: ""
    # API token for Cloudflare (required to have Dapr manage the Worker)
    - name: cfAPIToken
      value: ""
    # URL of the Worker (required if the Worker has been pre-created outside of Dapr)
    - name: workerUrl
      value: ""

Spec metadata fields

FieldRequiredDetailsExample
kvNamespaceIDYID of the pre-created Workers KV namespace"123456789abcdef8b5588f3d134f74ac"
workerNameYName of the Worker to connect to"mydaprkv"
keyYEd25519 private key, PEM-encodedSee example above
cfAccountIDY/NCloudflare account ID. Required to have Dapr manage the Worker."456789abcdef8b5588f3d134f74acdef"
cfAPITokenY/NAPI token for Cloudflare. Required to have Dapr manage the Worker."secret-key"
workerUrlY/NURL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr."https://mydaprkv.mydomain.workers.dev"

When you configure Dapr to create your Worker for you, you may need to set a longer value for the initTimeout property of the component, to allow enough time for the Worker script to be deployed. For example: initTimeout: "120s"

Create a Workers KV namespace

To use this component, you must have a Workers KV namespace created in your Cloudflare account.

You can create a new Workers KV namespace in one of two ways:

  • Using the Cloudflare dashboard
    Make note of the “ID” of the Workers KV namespace that you can see in the dashboard. This is a hex string (for example 123456789abcdef8b5588f3d134f74ac), not the name you used when you created it!

  • Using the Wrangler CLI:

    # Authenticate if needed with `npx wrangler login` first
    wrangler kv:namespace create <NAME>
    

    The output contains the ID of the namespace, for example:

    { binding = "<NAME>", id = "123456789abcdef8b5588f3d134f74ac" }
    

Configuring the Worker

Because Cloudflare Workers KV namespaces can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Workers KV storage.

Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on workerd.

If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:

  • workerName: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the “workers.dev” domain configured for your Cloudflare account is mydomain.workers.dev and you set workerName to mydaprkv, the Worker that Dapr deploys will be available at https://mydaprkv.mydomain.workers.dev.
  • cfAccountID: ID of your Cloudflare account. You can find this in your browser’s URL bar after logging into the Cloudflare dashboard, with the ID being the hex string right after dash.cloudflare.com. For example, if the URL is https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef, the value for cfAccountID is 456789abcdef8b5588f3d134f74acdef.
  • cfAPIToken: API token with permission to create and edit Workers and Workers KV namespaces. You can create it from the “API Tokens” page in the “My Profile” section in the Cloudflare dashboard:
    1. Click on “Create token”.
    2. Select the “Edit Cloudflare Workers” template.
    3. Follow the on-screen instructions to generate a new API token.

When Dapr is configured to manage the Worker for you, it checks that the Worker exists and is up to date each time the Dapr runtime starts. If the Worker doesn’t exist, or if it’s using an outdated version, Dapr will create or upgrade it for you automatically.

If you’d rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.

To manually provision a Worker script, you will need to have Node.js installed on your local machine.

  1. Create a new folder where you’ll place the source code of the Worker, for example: daprworker.
  2. If you haven’t already, authenticate with Wrangler (the Cloudflare Workers CLI) using: npx wrangler login.
  3. Inside the newly-created folder, create a new wrangler.toml file with the contents below, filling in the missing information as appropriate:
# Name of your Worker, for example "mydaprkv"
name = ""

# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"

[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprkv".
TOKEN_AUDIENCE = ""

[[kv_namespaces]]
# Set the next two values to the ID (not name) of your KV namespace, for example "123456789abcdef8b5588f3d134f74ac".
# Note that they will both be set to the same value.
binding = ""
id = ""

Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the public part of the key when deploying a Worker!

  4. Copy the (pre-compiled and minified) code of the Worker into the worker.js file. You can do that with this command:
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-1.15"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
  5. Deploy the Worker using Wrangler:
npx wrangler publish

Once your Worker has been deployed, you will need to initialize the component with these two metadata options:

  • workerName: Name of the Worker script. This is the value you set in the name property in the wrangler.toml file.
  • workerUrl: URL of the deployed Worker. The npx wrangler command will show the full URL to you, for example https://mydaprkv.mydomain.workers.dev.

Generate an Ed25519 key pair

All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Worker KV namespace). These include industry-standard measures such as:

  • All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
  • All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
  • The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).

To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.

Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you’re using an older version of OpenSSL.

Note for Mac users: on macOS, the “openssl” binary that is shipped by Apple is actually based on LibreSSL, which as of writing doesn’t support Ed25519 keys. If you’re using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using brew install openssl@3 then replacing openssl in the commands below with $(brew --prefix)/opt/openssl@3/bin/openssl.

You can generate a new Ed25519 key pair with OpenSSL using:

openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem

On macOS, using openssl@3 from Homebrew:

$(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
$(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem

If you don’t have the step CLI already, install it following the official instructions.

Next, you can generate a new Ed25519 key pair with the step CLI using:

step crypto keypair \
  public.pem private.pem \
  --kty OKP --curve Ed25519 \
  --insecure --no-password

Regardless of how you generated your key pair, with the instructions above you’ll have two files:

  • private.pem contains the private part of the key; use the contents of this file for the key property of the component’s metadata.
  • public.pem contains the public part of the key, which you’ll need only if you’re deploying a Worker manually (as per the instructions in the previous section).

Additional notes

  • Note that Cloudflare Workers KV doesn’t guarantee strong data consistency. Although changes are visible immediately (usually) for requests made to the same Cloudflare datacenter, it can take a certain amount of time (usually up to one minute) for changes to be replicated across all Cloudflare regions.
  • This state store supports TTLs with Dapr, but the minimum value for the TTL is 1 minute.

3.8 - CockroachDB

Detailed information on the CockroachDB state store component

Create a Dapr component

Create a file called cockroachdb.yaml, paste the following, and replace the <CONNECTION STRING> value with your connection string. The connection string for CockroachDB follows the same standard as PostgreSQL connection strings. For example, "host=localhost user=root port=26257 connect_timeout=10 database=dapr_test". See the CockroachDB documentation on database connections for information on how to define a connection string.

If you want to also configure CockroachDB to store actors, add the actorStateStore option as in the example below.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.cockroachdb
  version: v1
  metadata:
  # Connection string
  - name: connectionString
    value: "<CONNECTION STRING>"
  # Timeout for database operations, in seconds (optional)
  #- name: timeoutInSeconds
  #  value: 20
  # Name of the table where to store the state (optional)
  #- name: tableName
  #  value: "state"
  # Name of the table where to store metadata used by Dapr (optional)
  #- name: metadataTableName
  #  value: "dapr_metadata"
  # Cleanup interval in seconds, to remove expired rows (optional)
  #- name: cleanupIntervalInSeconds
  #  value: 3600
  # Max idle time for connections before they're closed (optional)
  #- name: connectionMaxIdleTime
  #  value: 0
  # Uncomment this if you wish to use CockroachDB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

Spec metadata fields

FieldRequiredDetailsExample
connectionStringYThe connection string for CockroachDB"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"
timeoutInSecondsNTimeout, in seconds, for all database operations. Defaults to 2030
tableNameNName of the table where the data is stored. Defaults to state. Can optionally have the schema name as prefix, such as public.state"state", "public.state"
metadataTableNameNName of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata. Can optionally have the schema name as prefix, such as public.dapr_metadata"dapr_metadata", "public.dapr_metadata"
cleanupIntervalInSecondsNInterval, in seconds, to clean up rows with an expired TTL. Default: 3600 (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup.1800, -1
connectionMaxIdleTimeNMax idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose."5m"
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

Setup CockroachDB

  1. Run an instance of CockroachDB. You can run a local instance of CockroachDB in Docker CE with the following command:

    This example does not describe a production configuration because it sets up a single-node cluster; it is only recommended for local environments.

    docker run --name roach1 -p 26257:26257 cockroachdb/cockroach:v21.2.3 start-single-node --insecure
    
  2. Create a database for state data.

    To create a new database in CockroachDB, run the following SQL command inside the container:

    docker exec -it roach1 ./cockroach sql --insecure -e 'create database dapr_test'
    

The easiest way to install CockroachDB on Kubernetes is by using the CockroachDB Operator.

Advanced

TTLs and cleanups

This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate after how many seconds the data should be considered “expired”.

Because CockroachDB doesn’t have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered “expired”. “Expired” records are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.

You can set the interval for the deletion of expired records with the cleanupIntervalInSeconds metadata property, which defaults to 3600 seconds (that is, 1 hour).

  • Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupIntervalInSeconds to a smaller value - for example, 300 (300 seconds, or 5 minutes).
  • If you do not plan to use TTLs with Dapr and the CockroachDB state store, you should consider setting cleanupIntervalInSeconds to a value <= 0 (e.g. 0 or -1) to disable the periodic cleanup and reduce the load on the database. A minimal configuration sketch follows this list.
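
The following is a minimal sketch of those recommendations (the connection string is a placeholder and the interval value is illustrative): set cleanupIntervalInSeconds to a small value such as "300" for many short-lived records, or to "-1" to disable the periodic cleanup entirely.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.cockroachdb
  version: v1
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
  # Illustrative value: scan for expired rows every 5 minutes ("-1" disables cleanup)
  - name: cleanupIntervalInSeconds
    value: "300"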

3.9 - Coherence

Detailed information on the Coherence state store component

Component format

To set up the Coherence state store, create a component of type state.coherence. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.coherence
  version: v1
  metadata:
  - name: serverAddress
    value: <REPLACE-WITH-GRPC-PROXY-HOST-AND-PORT> # Required. Example: "my-cluster-grpc:1408"
  - name: tlsEnabled
    value: <REPLACE-WITH-BOOLEAN> # Optional
  - name: tlsClientCertPath
    value: <REPLACE-WITH-PATH> # Optional
  - name: tlsClientKey
    value: <REPLACE-WITH-PATH> # Optional
  - name: tlsCertsPath
    value: <REPLACE-WITH-PATH> # Optional
  - name: ignoreInvalidCerts
    value: <REPLACE-WITH-BOOLEAN> # Optional
  - name: scopeName
    value: <REPLACE-WITH-SCOPE> # Optional
  - name: requestTimeout
    value: <REPLACE-WITH-REQUEST-TIMEOUT> # Optional
  - name: nearCacheTTL
    value: <REPLACE-WITH-NEAR-CACHE-TTL> # Optional
  - name: nearCacheUnits
    value: <REPLACE-WITH-NEAR-CACHE-UNITS> # Optional
  - name: nearCacheMemory
    value: <REPLACE-WITH-NEAR-CACHE-MEMORY> # Optional

Spec metadata fields

FieldRequiredDetailsExample
serverAddressYComma delimited endpoints"my-cluster-grpc:1408"
tlsEnabledNIndicates if TLS should be enabled. Defaults to false"true"
tlsClientCertPathNClient certificate path for Coherence. Defaults to “”. Can be secretKeyRef to use a secret reference."-----BEGIN CERTIFICATE-----\nMIIC9TCCA..."
tlsClientKeyNClient key for Coherence. Defaults to “”. Can be secretKeyRef to use a secret reference."-----BEGIN CERTIFICATE-----\nMIIC9TCCA..."
tlsCertsPathNAdditional certificates for Coherence. Defaults to “”. Can be secretKeyRef to use a secret reference."-----BEGIN CERTIFICATE-----\nMIIC9TCCA..."
ignoreInvalidCertsNIndicates whether to ignore self-signed certificates; for testing only, not to be used in production. Defaults to false"false"
scopeNameNA scope name to use for the internal cache. Defaults to """my-scope"
requestTimeoutNTimeout for calls to the cluster. Defaults to "30s""15s"
nearCacheTTLNIf non-zero a near cache is used and the TTL of the near cache is this value. Defaults to 0s"60s"
nearCacheUnitsNIf non-zero a near cache is used and the maximum size of the near cache is this value in units. Defaults to 0"1000"
nearCacheMemoryNIf non-zero a near cache is used and the maximum size of the near cache is this value in bytes. Defaults to 0"4096"

About Using Near Cache TTL

The Coherence state store allows you to specify a near cache to cache frequently accessed data when using the Dapr client. When you access data using Get(ctx context.Context, req *GetRequest), returned entries are stored in the near cache and subsequent data access for keys in the near cache is almost instant, whereas without a near cache each Get() operation results in a network call.

When using the near cache option, Coherence automatically adds a MapListener to the internal cache which listens on all cache events and updates or invalidates entries in the near cache that have been changed or removed on the server.

To manage the amount of memory used by the near cache, the following options are supported when creating one:

  • nearCacheTTL – objects expired after time in near cache, for example 5 minutes
  • nearCacheUnits – maximum number of cache entries in the near cache
  • nearCacheMemory – maximum amount of memory used by cache entries

You can specify either High-Units or Memory and in either case, optionally, a TTL.

The minimum expiry time for a near cache entry is 1/4 second. This is to ensure that expiry of elements is as efficient as possible. You will receive an error if you try to set the TTL to a lower value.
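
For example, a minimal sketch of a component configured with a near cache that holds up to 1000 entries, each expiring after 5 minutes (the component name, server address, and sizing values are illustrative):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: coherence-statestore
spec:
  type: state.coherence
  version: v1
  metadata:
  - name: serverAddress
    value: "my-cluster-grpc:1408"
  - name: nearCacheTTL
    value: "5m"
  - name: nearCacheUnits
    value: "1000"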

Setup Coherence

Run Coherence locally using Docker:

docker run -d -p 1408:1408 -p 30000:30000 ghcr.io/oracle/coherence-ce:25.03.1

You can then interact with the server using localhost:1408.

The easiest way to install Coherence on Kubernetes is by using the Coherence Operator:

Install the Operator:

kubectl apply -f https://github.com/oracle/coherence-operator/releases/download/v3.5.2/coherence-operator.yaml

Note: Change v3.5.2 to the latest release.

This installs the Coherence operator into the coherence namespace.

Create a Coherence cluster YAML file named my-cluster.yaml:

apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: my-cluster
spec:
  coherence:
    management:
      enabled: true
  ports:
    - name: management
    - name: grpc
      port: 1408

Apply the yaml

kubectl apply -f my-cluster.yaml

To interact with Coherence, find the service with: kubectl get svc and look for service named ‘*grpc’.

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                               AGE
kubernetes              ClusterIP   10.96.0.1      <none>        443/TCP                                               9m
my-cluster-grpc         ClusterIP   10.96.225.43   <none>        1408/TCP                                              7m3s
my-cluster-management   ClusterIP   10.96.41.6     <none>        30000/TCP                                             7m3s
my-cluster-sts          ClusterIP   None           <none>        7/TCP,7575/TCP,7574/TCP,6676/TCP,30000/TCP,1408/TCP   7m3s
my-cluster-wka          ClusterIP   None           <none>        7/TCP,7575/TCP,7574/TCP,6676/TCP                      7m3s

For example, if installing using the example above, the Coherence host address would be:

my-cluster-grpc

3.10 - Couchbase

Detailed information on the Couchbase state store component

Component format

To set up a Couchbase state store, create a component of type state.couchbase. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.couchbase
  version: v1
  metadata:
  - name: couchbaseURL
    value: <REPLACE-WITH-URL> # Required. Example: "http://localhost:8091"
  - name: username
    value: <REPLACE-WITH-USERNAME> # Required.
  - name: password
    value: <REPLACE-WITH-PASSWORD> # Required.
  - name: bucketName
    value: <REPLACE-WITH-BUCKET> # Required.

Spec metadata fields

FieldRequiredDetailsExample
couchbaseURLYThe URL of the Couchbase server"http://localhost:8091"
usernameYThe username for the database"user"
passwordYThe password for access"password"
bucketNameYThe bucket name to write to"bucket"

Setup Couchbase

You can run Couchbase locally using Docker:

docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 couchbase

You can then interact with the server using localhost:8091 and start the server setup.

The easiest way to install Couchbase on Kubernetes is by using the Helm chart:

helm repo add couchbase https://couchbase-partners.github.io/helm-charts/
helm install couchbase/couchbase-operator
helm install couchbase/couchbase-cluster

3.11 - Etcd

Detailed information on the Etcd state store component

Component format

To set up an Etcd state store, create a component of type state.etcd. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.etcd
  # Supports v1 and v2. Users should always use v2 by default. There is no
  # migration path from v1 to v2, see `versioning` below.
  version: v2
  metadata:
  - name: endpoints
    value: <CONNECTION STRING> # Required. Example: 192.168.0.1:2379,192.168.0.2:2379,192.168.0.3:2379
  - name: keyPrefixPath
    value: <KEY PREFIX STRING> # Optional. default: "". Example: "dapr"
  - name: tlsEnable
    value: <ENABLE TLS> # Optional. Example: "false"
  - name: ca
    value: <CA> # Optional. Required if tlsEnable is `true`.
  - name: cert
    value: <CERT> # Optional. Required if tlsEnable is `true`.
  - name: key
    value: <KEY> # Optional. Required if tlsEnable is `true`.
  # Uncomment this if you wish to use Etcd as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

Versioning

Dapr has 2 versions of the Etcd state store component: v1 and v2. It is recommended to use v2, as v1 is deprecated.

While v1 and v2 have the same metadata fields, v1 causes data inconsistencies in apps when using Actor TTLs from Dapr v1.12. v1 and v2 are incompatible with no data migration path for v1 to v2 on an existing active Etcd cluster and keyPrefixPath. If you are using v1, you should continue to use v1 until you create a new Etcd cluster or use a different keyPrefixPath.

Spec metadata fields

FieldRequiredDetailsExample
endpointsYConnection string to the Etcd cluster"192.168.0.1:2379,192.168.0.2:2379,192.168.0.3:2379"
keyPrefixPathNKey prefix path in Etcd. Default is no prefix."dapr"
tlsEnableNWhether to enable TLS for connecting to Etcd."false"
caNCA certificate for connecting to Etcd, PEM-encoded. Can be secretKeyRef to use a secret reference."-----BEGIN CERTIFICATE-----\nMIIC9TCCA..."
certNTLS certificate for connecting to Etcd, PEM-encoded. Can be secretKeyRef to use a secret reference."-----BEGIN CERTIFICATE-----\nMIIDUTCC..."
keyNTLS key for connecting to Etcd, PEM-encoded. Can be secretKeyRef to use a secret reference."-----BEGIN PRIVATE KEY-----\nMIIEpAIB..."
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

Setup Etcd

You can run Etcd database locally using Docker Compose. Create a new file called docker-compose.yml and add the following contents as an example:

version: '2'
services:
  etcd:
    image: gcr.io/etcd-development/etcd:v3.4.20
    ports:
      - "2379:2379"
    command: etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379

Save the docker-compose.yml file and run the following command to start the Etcd server:

docker-compose up -d

This starts the Etcd server in the background and exposes the default Etcd port of 2379. You can then interact with the server using the etcdctl command-line client on localhost:2379. For example:

etcdctl --endpoints=localhost:2379 put mykey myvalue

Use Helm to quickly create an Etcd instance in your Kubernetes cluster. This approach requires Installing Helm.

Follow the Bitnami instructions to get started with setting up Etcd in Kubernetes.

3.12 - GCP Firestore (Datastore mode)

Detailed information on the GCP Firestore state store component

Component format

To set up a GCP Firestore state store, create a component of type state.gcp.firestore. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.gcp.firestore
  version: v1
  metadata:
  - name: project_id
    value: <REPLACE-WITH-PROJECT-ID> # Required.
  - name: type 
    value: <REPLACE-WITH-CREDENTIALS-TYPE> # Required.
  - name: endpoint # Optional. 
    value: "http://localhost:8432"
  - name: private_key_id
    value: <REPLACE-WITH-PRIVATE-KEY-ID> # Optional.
  - name: private_key
    value: <REPLACE-WITH-PRIVATE-KEY> # Optional, but Required if `private_key_id` is specified.
  - name: client_email
    value: <REPLACE-WITH-CLIENT-EMAIL> # Optional, but Required if `private_key_id` is specified.
  - name: client_id
    value: <REPLACE-WITH-CLIENT-ID> # Optional, but Required if `private_key_id` is specified.
  - name: auth_uri
    value: <REPLACE-WITH-AUTH-URI> # Optional.
  - name: token_uri
    value: <REPLACE-WITH-TOKEN-URI> # Optional.
  - name: auth_provider_x509_cert_url
    value: <REPLACE-WITH-AUTH-X509-CERT-URL> # Optional.
  - name: client_x509_cert_url
    value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Optional.
  - name: entity_kind
    value: <REPLACE-WITH-ENTITY-KIND> # Optional. default: "DaprState"
  - name: noindex
    value: <REPLACE-WITH-BOOLEAN> # Optional. default: "false"

Spec metadata fields

FieldRequiredDetailsExample
project_idYThe ID of the GCP project to use"project-id"
typeYThe credentials type"service_account"
endpointNGCP endpoint for the component to use. Only used for local development with (for example) GCP Datastore Emulator. The endpoint is unnecessary when running against the GCP production API."localhost:8432"
private_key_idNThe ID of the private key to use"private-key-id"
private_keyNIf using explicit credentials, this field should contain the private_key field from the service account JSON-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B
client_emailNThe email address for the client"example@example.com"
client_idNThe client id value to use for authentication"client-id"
auth_uriNThe authentication URI to use"https://accounts.google.com/o/oauth2/auth"
token_uriNThe token URI to query for Auth token"https://oauth2.googleapis.com/token"
auth_provider_x509_cert_urlNThe auth provider certificate URL"https://www.googleapis.com/oauth2/v1/certs"
client_x509_cert_urlNThe client certificate URL"https://www.googleapis.com/robot/v1/metadata/x509/x"
entity_kindNThe entity name in Firestore. Defaults to "DaprState""DaprState"
noindexNWhether to disable indexing of state entities. Use this setting if you encounter Firestore index size limitations. Defaults to "false""true"

GCP Credentials

Since the GCP Firestore component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained in the Authenticate to GCP Cloud services using client libraries guide.
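
For local development, Application Default Credentials can be provided, for example, through the gcloud CLI (an assumption about your local tooling, not a requirement of the component):

gcloud auth application-default login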

Setup GCP Firestore

You can use the GCP Datastore emulator to run locally using the instructions here.

You can then interact with the server using http://localhost:8432.

Follow the instructions here to get started with setting up Firestore in Google Cloud.

3.13 - HashiCorp Consul

Detailed information on the HashiCorp Consul state store component

Component format

To set up a HashiCorp Consul state store, create a component of type state.consul. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.consul
  version: v1
  metadata:
  - name: datacenter
    value: <REPLACE-WITH-DATA-CENTER> # Required. Example: dc1
  - name: httpAddr
    value: <REPLACE-WITH-CONSUL-HTTP-ADDRESS> # Required. Example: "consul.default.svc.cluster.local:8500"
  - name: aclToken
    value: <REPLACE-WITH-ACL-TOKEN> # Optional. default: ""
  - name: scheme
    value: <REPLACE-WITH-SCHEME> # Optional. default: "http"
  - name: keyPrefixPath
    value: <REPLACE-WITH-TABLE> # Optional. default: ""

Spec metadata fields

FieldRequiredDetailsExample
datacenterYDatacenter to use"dc1"
httpAddrYAddress of the Consul server"consul.default.svc.cluster.local:8500"
aclTokenNPer Request ACL Token. Default is """token"
schemeNScheme is the URI scheme for the Consul server. Default is "http""http"
keyPrefixPathNKey prefix path in Consul. Default is """dapr"

Setup HashiCorp Consul

You can run Consul locally using Docker:

docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul

You can then interact with the server using localhost:8500.

The easiest way to install Consul on Kubernetes is by using the Helm chart:

helm install consul stable/consul

This installs Consul into the default namespace. To interact with Consul, find the service with: kubectl get svc consul.

For example, if installing using the example above, the Consul host address would be:

consul.default.svc.cluster.local:8500

3.14 - Hazelcast

Detailed information on the Hazelcast state store component

Create a Dapr component

To set up a Hazelcast state store, create a component of type state.hazelcast. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.hazelcast
  version: v1
  metadata:
  - name: hazelcastServers
    value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of servers. Example: "hazelcast:3000,hazelcast2:3000"
  - name: hazelcastMap
    value: <REPLACE-WITH-MAP> # Required. Hazelcast map configuration.

Spec metadata fields

FieldRequiredDetailsExample
hazelcastServersYA comma delimited string of servers"hazelcast:3000,hazelcast2:3000"
hazelcastMapYHazelcast Map configuration"foo-map"

Setup Hazelcast

You can run Hazelcast locally using Docker:

docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701" -p 5701:5701 hazelcast/hazelcast

You can then interact with the server using 127.0.0.1:5701.

The easiest way to install Hazelcast on Kubernetes is by using the Helm chart.

3.15 - In-memory

Detailed documentation on the in-memory state component

The in-memory state store component maintains state in the Dapr sidecar’s memory. This is primarily meant for development purposes. State is not replicated across multiple sidecars and is lost when the Dapr sidecar is restarted.

Component format

To set up the in-memory state store, create a component of type state.in-memory. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.in-memory
  version: v1
  metadata: 
  # Uncomment this if you wish to use In-memory as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

Note: While in-memory does not require any specific metadata for the component to work, spec.metadata is a required field.
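
As a quick sanity check once your app and sidecar are running, you can write and read a value through the Dapr state API. The sketch below assumes the component is named statestore and the Dapr HTTP port is the default 3500:

curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "greeting", "value": "hello" }]'

curl http://localhost:3500/v1.0/state/statestore/greeting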

3.16 - JetStream KV

Detailed information on the JetStream KV state store component

Component format

To set up a JetStream KV state store, create a component of type state.jetstream. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.jetstream
  version: v1
  metadata:
  - name: natsURL
    value: "nats://localhost:4222"
  - name: jwt
    value: "eyJhbGciOiJ...6yJV_adQssw5c" # Optional. Used for decentralized JWT authentication
  - name: seedKey
    value: "SUACS34K232O...5Z3POU7BNIL4Y" # Optional. Used for decentralized JWT authentication
  - name: bucket
    value: "<bucketName>"

Spec metadata fields

FieldRequiredDetailsExample
natsURLYNATS server address URLnats://localhost:4222
jwtNNATS decentralized authentication JWTeyJhbGciOiJ...6yJV_adQssw5c
seedKeyNNATS decentralized authentication seed keySUACS34K232O...5Z3POU7BNIL4Y
bucketYJetStream KV bucket name"<bucketName>"

Create a NATS server

You can run a NATS Server with JetStream enabled locally using Docker:

docker run -d -p 4222:4222 nats:latest -js

You can then interact with the server using the client port: localhost:4222.

Install NATS JetStream on Kubernetes by using Helm:

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats

This installs a single NATS server into the default namespace. To interact with NATS, find the service with: kubectl get svc my-nats.

Creating a JetStream KV bucket

It is necessary to create a key-value bucket; this can easily be done via the NATS CLI.

nats kv add <bucketName>

3.17 - Memcached

Detailed information on the Memcached state store component

Component format

To set up a Memcached state store, create a component of type state.memcached. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.memcached
  version: v1
  metadata:
  - name: hosts
    value: <REPLACE-WITH-COMMA-DELIMITED-ENDPOINTS> # Required. Example: "memcached.default.svc.cluster.local:11211"
  - name: maxIdleConnections
    value: <REPLACE-WITH-MAX-IDLE-CONNECTIONS> # Optional. default: "2"
  - name: timeout
    value: <REPLACE-WITH-TIMEOUT> # Optional. default: "1000"

Spec metadata fields

FieldRequiredDetailsExample
hostsYComma delimited endpoints"memcached.default.svc.cluster.local:11211"
maxIdleConnectionsNThe max number of idle connections. Defaults to "2""3"
timeoutNThe timeout for the calls in milliseconds. Defaults to "1000""1000"

Setup Memcached

You can run Memcached locally using Docker:

docker run --name my-memcache -d memcached

You can then interact with the server using localhost:11211.

The easiest way to install Memcached on Kubernetes is by using the Helm chart:

helm install memcached stable/memcached

This installs Memcached into the default namespace. To interact with Memcached, find the service with: kubectl get svc memcached.

For example, if installing using the example above, the Memcached host address would be:

memcached.default.svc.cluster.local:11211

3.18 - Microsoft SQL Server & Azure SQL

Detailed information on the Microsoft SQL Server state store component

Component format

This state store component can be used with both Microsoft SQL Server and Azure SQL.

To set up this state store, create a component of type state.sqlserver. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.sqlserver
  version: v1
  metadata:
    # Authenticate using SQL Server credentials
    - name: connectionString
      value: |
        Server=myServerName\myInstanceName;Database=myDataBase;User Id=myUsername;Password=myPassword;

    # Authenticate with Microsoft Entra ID (Azure SQL only)
    # "useAzureAD" be set to "true"
    - name: useAzureAD
      value: true
    # Connection string or URL of the Azure SQL database, optionally containing the database
    - name: connectionString
      value: |
        sqlserver://myServerName.database.windows.net:1433?database=myDataBase

    # Other optional fields (listing default values)
    - name: tableName
      value: "state"
    - name: metadataTableName
      value: "dapr_metadata"
    - name: schema
      value: "dbo"
    - name: keyType
      value: "string"
    - name: keyLength
      value: "200"
    - name: indexedProperties
      value: ""
    - name: cleanupIntervalInSeconds
      value: "3600"
   # Uncomment this if you wish to use Microsoft SQL Server as a state store for actors (optional)
   #- name: actorStateStore
   #  value: "true"

If you wish to use SQL Server as an actor state store, append the following to the metadata:

  - name: actorStateStore
    value: "true"

Spec metadata fields

Authenticate using SQL Server credentials

The following metadata options are required to authenticate using SQL Server credentials. This is supported on both SQL Server and Azure SQL.

FieldRequiredDetailsExample
connectionStringYThe connection string used to connect.
If the connection string contains the database, it must already exist. Otherwise, if the database is omitted, a default database named “Dapr” is created.
"Server=myServerName\myInstanceName;Database=myDataBase;User Id=myUsername;Password=myPassword;"

Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure SQL only. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.

FieldRequiredDetailsExample
useAzureADYMust be set to true to enable the component to retrieve access tokens from Microsoft Entra ID."true"
connectionStringYThe connection string or URL of the Azure SQL database, without credentials.
If the connection string contains the database, it must already exist. Otherwise, if the database is omitted, a default database named “Dapr” is created.
"sqlserver://myServerName.database.windows.net:1433?database=myDataBase"
azureTenantIdNID of the Microsoft Entra ID tenant"cd4b2887-304c-47e1-b4d5-65447fdd542b"
azureClientIdNClient ID (application ID)"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"
azureClientSecretNClient secret (application password)"Ecy3XG7zVZK3/vl/a2NSB+a1zXLa8RnMum/IgD0E"

Other metadata options

FieldRequiredDetailsExample
tableNameNThe name of the table to use. Alpha-numeric with underscores. Defaults to "state""table_name"
metadataTableNameNName of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata."dapr_metadata"
keyTypeNThe type of key used. Supported values: "string" (default), "uuid", "integer"."string"
keyLengthNThe max length of key. Ignored if “keyType” is not string. Defaults to "200""200"
schemaNThe schema to use. Defaults to "dbo""dapr","dbo"
indexedPropertiesNList of indexed properties, as a string containing a JSON document.'[{"column": "transactionid", "property": "id", "type": "int"}, {"column": "customerid", "property": "customer", "type": "nvarchar(100)"}]'
actorStateStoreNIndicates that Dapr should configure this component for the actor state store (more information)."true"
cleanupIntervalInSecondsNInterval, in seconds, to clean up rows with an expired TTL. Default: "3600" (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup."1800", "-1"

Create a Microsoft SQL Server/Azure SQL instance

Follow the instructions from the Azure documentation on how to create a SQL database. The database must be created before Dapr consumes it.

In order to setup SQL Server as a state store, you need the following properties:

  • Connection String: The SQL Server connection string. For example: server=localhost;user id=sa;password=your-password;port=1433;database=mydatabase;
  • Schema: The database schema to use (default=dbo). Will be created if it does not exist
  • Table Name: The database table name. Will be created if it does not exist
  • Indexed Properties: Optional properties from JSON data which will be indexed and persisted as individual columns

Create a dedicated user

When connecting with a dedicated user (not sa), these authorizations are required for the user - even when the user is the owner of the desired database schema:

  • CREATE TABLE
  • CREATE TYPE

TTLs and cleanups

This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate after how many seconds the data should be considered “expired”.
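
For example, the following request stores a record that expires after 10 minutes; it assumes a state store component named statestore and the default Dapr HTTP port:

curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "order-1",
          "value": "pending",
          "metadata": { "ttlInSeconds": "600" }
        }
      ]'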

Because SQL Server doesn’t have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered “expired”. “Expired” records are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.

You can set the interval for the deletion of expired records with the cleanupIntervalInSeconds metadata property, which defaults to 3600 seconds (that is, 1 hour).

  • Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupIntervalInSeconds to a smaller value - for example, 300 (300 seconds, or 5 minutes).
  • If you do not plan to use TTLs with Dapr and the SQL Server state store, you should consider setting cleanupIntervalInSeconds to a value <= 0 (e.g. 0 or -1) to disable the periodic cleanup and reduce the load on the database.

The state store does not have an index on the ExpireDate column, which means that each clean up operation must perform a full table scan. If you intend to write to the table with a large number of records that use TTLs, you should consider creating an index on the ExpireDate column. An index makes queries faster, but uses more storage space and slightly slows down writes.

CREATE CLUSTERED INDEX expiredate_idx ON state(ExpireDate ASC)

3.19 - MongoDB

Detailed information on the MongoDB state store component

Component format

To set up a MongoDB state store, create a component of type state.mongodb. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.mongodb
  version: v1
  metadata:
  - name: server
    value: <REPLACE-WITH-SERVER> # Required unless "host" field is set. Example: "server.example.com"
  - name: host
    value: <REPLACE-WITH-HOST> # Required unless "server" field is set. Example: "mongo-mongodb.default.svc.cluster.local:27017"
  - name: username
    value: <REPLACE-WITH-USERNAME> # Optional. Example: "admin"
  - name: password
    value: <REPLACE-WITH-PASSWORD> # Optional.
  - name: databaseName
    value: <REPLACE-WITH-DATABASE-NAME> # Optional. default: "daprStore"
  - name: collectionName
    value: <REPLACE-WITH-COLLECTION-NAME> # Optional. default: "daprCollection"
  - name: writeConcern
    value: <REPLACE-WITH-WRITE-CONCERN> # Optional.
  - name: readConcern
    value: <REPLACE-WITH-READ-CONCERN> # Optional.
  - name: operationTimeout
    value: <REPLACE-WITH-OPERATION-TIMEOUT> # Optional. default: "5s"
  - name: params
    value: <REPLACE-WITH-ADDITIONAL-PARAMETERS> # Optional. Example: "?authSource=daprStore&ssl=true"
  # Uncomment this if you wish to use MongoDB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

Actor state store and transactions support

When using as an actor state store or to leverage transactions, MongoDB must be running in a Replica Set.

If you wish to use MongoDB as an actor store, add this metadata option to your Component YAML:

  - name: actorStateStore
    value: "true"

Spec metadata fields

FieldRequiredDetailsExample
serverY1The server to connect to, when using DNS SRV record"server.example.com"
hostY1The host to connect to"mongo-mongodb.default.svc.cluster.local:27017"
usernameNThe username of the user to connect with (applicable in conjunction with host)"admin"
passwordNThe password of the user (applicable in conjunction with host)"password"
databaseNameNThe name of the database to use. Defaults to "daprStore""daprStore"
collectionNameNThe name of the collection to use. Defaults to "daprCollection""daprCollection"
writeConcernNThe write concern to use"majority"
readConcernNThe read concern to use"majority", "local","available", "linearizable", "snapshot"
operationTimeoutNThe timeout for the operation. Defaults to "5s""5s"
paramsN2Additional parameters to use"?authSource=daprStore&ssl=true"
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

[1] The server and host fields are mutually exclusive. If neither or both are set, Dapr returns an error.

[2] The params field accepts a query string that specifies connection specific options as <name>=<value> pairs, separated by & and prefixed with ?. e.g. to use “daprStore” db as authentication database and enabling SSL/TLS in connection, specify params as ?authSource=daprStore&ssl=true. See the mongodb manual for the list of available options and their use cases.
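
As an illustration, the following metadata excerpt (host, username, and password are placeholders) connects using host-based authentication, uses daprStore as the authentication database, and enables TLS through params:

  - name: host
    value: "mongo-mongodb.default.svc.cluster.local:27017"
  - name: username
    value: "admin"
  - name: password
    value: "password"
  - name: params
    value: "?authSource=daprStore&ssl=true"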

Setup MongoDB

You can run a single MongoDB instance locally using Docker:

docker run --name some-mongo -d -p 27017:27017 mongo

You can then interact with the server at localhost:27017. If you do not specify a databaseName value in your component definition, make sure to create a database named daprStore.

In order to use the MongoDB state store for transactions and as an actor state store, you need to run MongoDB as a Replica Set. Refer to the official documentation for how to create a 3-node Replica Set using Docker.

You can conveniently install MongoDB on Kubernetes using the Helm chart packaged by Bitnami. Refer to the documentation for the Helm chart for deploying MongoDB, both as a standalone server and with a Replica Set (required for using transactions and actors). This installs MongoDB into the default namespace. To interact with MongoDB, find the service with: kubectl get svc mongo-mongodb. For example, if installing using the Helm defaults above, the MongoDB host address would be: mongo-mongodb.default.svc.cluster.local:27017. Follow the on-screen instructions to get the root password for MongoDB. The username is typically admin by default.

TTLs and cleanups

This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate when the data should be considered “expired”.

3.20 - MySQL & MariaDB

Detailed information on the MySQL state store component

Component format

The MySQL state store component allows connecting to both MySQL and MariaDB databases. In this document, "MySQL" refers to both databases.

To set up a MySQL state store, create a component of type state.mysql. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.mysql
  version: v1
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
  - name: schemaName
    value: "<SCHEMA NAME>"
  - name: tableName
    value: "<TABLE NAME>"
  - name: timeoutInSeconds
    value: "30"
  - name: pemPath # Required if pemContents not provided. Path to pem file.
    value: "<PEM PATH>"
  - name: pemContents # Required if pemPath not provided. Pem value.
    value: "<PEM CONTENTS>"    
# Uncomment this if you wish to use MySQL & MariaDB as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

If you wish to use MySQL as an actor store, append the following to the YAML:

  - name: actorStateStore
    value: "true"

Spec metadata fields

FieldRequiredDetailsExample
connectionStringYThe connection string to connect to MySQL. Do not add the schema to the connection stringNon SSL connection: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true", Enforced SSL Connection: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom"
schemaNameNThe schema name to use. Will be created if schema does not exist. Defaults to "dapr_state_store""custom_schema", "dapr_schema"
tableNameNThe table name to use. Will be created if table does not exist. Defaults to "state""table_name", "dapr_state"
timeoutInSecondsNTimeout for all database operations. Defaults to "20""30"
pemPathNFull path to the PEM file to use for enforced SSL Connection required if pemContents is not provided. Cannot be used in K8s environment"/path/to/file.pem", "C:\path\to\file.pem"
pemContentsNContents of PEM file to use for enforced SSL Connection required if pemPath is not provided. Can be used in K8s environment"pem value"
cleanupIntervalInSecondsNInterval, in seconds, to clean up rows with an expired TTL. Default: 3600 (that is 1 hour). Setting this to values <=0 disables the periodic cleanup.1800, -1
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

Setup MySQL

Dapr can use any MySQL instance - containerized, running on your local dev machine, or a managed cloud service.

Run an instance of MySQL. You can run a local instance of MySQL in Docker CE with the following command:

This example does not describe a production configuration because it sets the password in plain text and the user name is left as the MySQL default of “root”.

docker run --name dapr-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest

We can use Helm to quickly create a MySQL instance in our Kubernetes cluster. This approach requires Installing Helm.

  1. Install MySQL into your cluster.

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install dapr-mysql bitnami/mysql
    
  2. Run kubectl get pods to see the MySQL containers now running in your cluster.

  3. Next, we’ll get our password, which is slightly different depending on the OS we’re using:

    • Windows: Run [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($(kubectl get secret --namespace default dapr-mysql -o jsonpath="{.data.mysql-root-password}"))) and copy the outputted password.

    • Linux/MacOS: Run kubectl get secret --namespace default dapr-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode and copy the outputted password.

  4. With the password you can construct your connection string, as sketched below.
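
For example, with the Helm release above, an in-cluster connection string would typically take the following shape; the service name dapr-mysql.default.svc.cluster.local is an assumption based on the release name and the default namespace, and <RETRIEVED-PASSWORD> stands for the password retrieved in the previous step:

  - name: connectionString
    value: "root:<RETRIEVED-PASSWORD>@tcp(dapr-mysql.default.svc.cluster.local:3306)/?allowNativePasswords=true"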

Azure MySQL

If you are using MySQL on Azure see the Azure documentation on SSL database connections, for information on how to download the required certificate.

Non SSL connection

Replace the <CONNECTION STRING> value with your connection string. The connection string is a standard MySQL connection string. For example, "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true".

Enforced SSL connection

If your server requires SSL, your connection string must end with &tls=custom, for example: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom". You must replace the <PEM PATH> with a full path to the PEM file. The connection to MySQL requires a minimum TLS version of 1.2.
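
Putting this together, a component that enforces SSL might use metadata along these lines; user, password, server, and the certificate path are placeholders:

  - name: connectionString
    value: "<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom"
  - name: pemPath
    value: "/path/to/ca.pem"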

TTLs and cleanups

This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate when the data should be considered “expired”.

Because MySQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.

The interval at which the deletion of expired records happens is set with the cleanupIntervalInSeconds metadata property, which defaults to 3600 seconds (that is, 1 hour).

  • Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupIntervalInSeconds to a smaller value, for example 300 (300 seconds, or 5 minutes).
  • If you do not plan to use TTLs with Dapr and the MySQL state store, you should consider setting cleanupIntervalInSeconds to a value <= 0 (e.g. 0 or -1) to disable the periodic cleanup and reduce the load on the database.

3.21 - OCI Object Storage

Detailed information on the OCI Object Storage state store component

Component format

To set up an OCI Object Storage state store, create a component of type state.oci.objectstorage. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.oci.objectstorage
  version: v1
  metadata:
 - name: instancePrincipalAuthentication
   value: <"true" or "false">  # Optional. default: "false" 
 - name: configFileAuthentication
   value: <"true" or "false">  # Optional. default: "false" . Not used when instancePrincipalAuthentication == "true" 
 - name: configFilePath
   value: <REPLACE-WITH-FULL-QUALIFIED-PATH-OF-CONFIG-FILE>  # Optional. No default. Only used when configFileAuthentication == "true" 
 - name: configFileProfile
   value: <REPLACE-WITH-NAME-OF-PROFILE-IN-CONFIG-FILE>  # Optional. default: "DEFAULT" . Only used when configFileAuthentication == "true" 
 - name: tenancyOCID
   value: <REPLACE-WITH-TENANCY-OCID>  # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true" 
 - name: userOCID
   value: <REPLACE-WITH-USER-OCID>  # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true" 
 - name: fingerPrint
   value: <REPLACE-WITH-FINGERPRINT>  # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true" 
 - name: privateKey  # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true" 
   value: |
          -----BEGIN RSA PRIVATE KEY-----
          REPLACE-WITH-PRIVATE-KEY-AS-IN-PEM-FILE
          -----END RSA PRIVATE KEY-----    
 - name: region
   value: <REPLACE-WITH-OCI-REGION>  # Not used when configFileAuthentication == "true" or instancePrincipalAuthentication == "true" 
 - name: bucketName
   value: <REPLACE-WITH-BUCKET-NAME>
 - name: compartmentOCID
   value: <REPLACE-WITH-COMPARTMENT-OCID>

Spec metadata fields

FieldRequiredDetailsExample
instancePrincipalAuthenticationNBoolean to indicate whether instance principal based authentication is used. Default: "false""true" or "false" .
configFileAuthenticationNBoolean to indicate whether identity credential details are provided through a configuration file. Default: "false" Not required nor used when instancePrincipalAuthentication is true."true" or "false" .
configFilePathNFull path name to the OCI configuration file. No default value exists. Not used when instancePrincipalAuthentication is true. Note: the ~/ prefix is not supported."/home/apps/configuration-files/myOCIConfig.txt".
configFileProfileNName of profile in configuration file to use. Default: "DEFAULT" Not used when instancePrincipalAuthentication is true."DEFAULT" or "PRODUCTION" .
tenancyOCIDYThe OCI tenancy identifier. Not required nor used when instancePrincipalAuthentication is true."ocid1.tenancy.oc1..aaaaaaaag7c7sljhsdjhsdyuwe723".
userOCIDYThe OCID for an OCI account (this account requires permissions to access OCI Object Storage). Not required nor used when instancePrincipalAuthentication is true."ocid1.user.oc1..aaaaaaaaby4oyyyuqwy7623yuwe76"
fingerPrintYFingerprint of the public key. Not required nor used when instancePrincipalAuthentication is true."02:91:6c:49:e2:94:21:15:a7:6b:0e:a7:34:e1:3d:1b"
privateKeyYPrivate key of the RSA key pair. Not required nor used when instancePrincipalAuthentication is true."MIIEoyuweHAFGFG2727as+7BTwQRAIW4V"
regionYOCI Region. Not required nor used when instancePrincipalAuthentication is true."us-ashburn-1"
bucketNameYName of the bucket written to and read from (and if necessary created)"application-state-store-bucket"
compartmentOCIDYThe OCID for the compartment that contains the bucket"ocid1.compartment.oc1..aaaaaaaacsssekayyuq7asjh78"

Setup OCI Object Storage

The OCI Object Storage state store needs to interact with Oracle Cloud Infrastructure. The state store supports two different approaches to authentication. One is based on an identity (a user or service account) and the other is instance principal authentication leveraging the permissions granted to the compute instance running the application workload. Note: Resource Principal Authentication - used for resources that are not instances such as serverless functions - is not currently supported.

Dapr applications running on Oracle Cloud Infrastructure - in a compute instance or as a container on Kubernetes - can leverage instance principal authentication. See the OCI documentation on calling OCI Services from instances for more background. In short: the instance needs to be a member of a Dynamic Group, and this Dynamic Group needs to get permissions for interacting with the Object Storage service through IAM policies. In case of such instance principal authentication, specify the property instancePrincipalAuthentication as "true". You do not need to configure the properties tenancyOCID, userOCID, region, fingerPrint and privateKey - these will be ignored if you define values for them.

Identity based authentication interacts with OCI through an OCI account that has permissions to create, read and delete objects through OCI Object Storage in the indicated bucket and that is allowed to create a bucket in the specified compartment if the bucket is not created beforehand. The OCI documentation describes how to create an OCI Account. The interaction by the state store is performed using the public key’s fingerprint and a private key from an RSA Key Pair generated for the OCI account. The instructions for generating the key pair and getting hold of the required information are available in the OCI documentation.

Details for the identity and identity’s credentials to be used for interaction with OCI can be provided directly in the Dapr component properties file - using the properties tenancyOCID, userOCID, fingerPrint, privateKey and region - or can be provided from a configuration file as is common for many OCI related tools (such as CLI and Terraform) and SDKs. In the latter case the exact file name and full path has to be provided through property configFilePath. Note: the ~/ prefix is not supported in the path. A configuration file can contain multiple profiles; the desired profile can be specified through property configFileProfile. If no value is provided, DEFAULT is used as the name for the profile to be used. Note: if the indicated profile is not found, then the DEFAULT profile (if it exists) is used instead. The OCI SDK documentation gives details about the definition of the configuration file.
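
For reference, an OCI configuration file of the kind referred to by configFilePath typically looks like the sketch below; all values are placeholders reused from the examples in this section and must be replaced with your own:

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaby4oyyyuqwy7623yuwe76
fingerprint=02:91:6c:49:e2:94:21:15:a7:6b:0e:a7:34:e1:3d:1b
tenancy=ocid1.tenancy.oc1..aaaaaaaag7c7sljhsdjhsdyuwe723
region=us-ashburn-1
key_file=/home/apps/keys/oci_api_key.pem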

If you wish to create the bucket for Dapr to use, you can do so beforehand. However, the OCI Object Storage state provider will create one - in the specified compartment - for you automatically if it doesn't exist.

In order to setup OCI Object Storage as a state store, you need the following properties:

  • instancePrincipalAuthentication: The flag that indicates if instance principal based authentication should be used.
  • configFileAuthentication: The flag that indicates if the OCI identity credential details are provided through a configuration file. Not used when instancePrincipalAuthentication is true.
  • configFilePath: Full path name to the OCI configuration file. Not used when instancePrincipalAuthentication is true or configFileAuthentication is not true.
  • configFileProfile: Name of profile in configuration file to use. Default: "DEFAULT" Not required nor used when instancePrincipalAuthentication is true or configFileAuthentication is not true. When the specified profile is not found in the configuration file, the DEFAULT profile is used when it exists
  • tenancyOCID: The identifier for the OCI cloud tenancy in which the state is to be stored. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
  • userOCID: The identifier for the account used by the state store component to connect to OCI; this must be an account with appropriate permissions on the OCI Object Storage service in the specified compartment and bucket. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
  • fingerPrint: The fingerprint for the public key in the RSA key pair generated for the account indicated by userOCID. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
  • privateKey: The private key in the RSA key pair generated for the account indicated by userOCID. Not used when instancePrincipalAuthentication is true or configFileAuthentication is true.
  • region: The OCI region - for example us-ashburn-1, eu-amsterdam-1, ap-mumbai-1. Not used when instancePrincipalAuthentication is true
  • bucketName: The name of the bucket on OCI Object Storage in which state will be created. This bucket can exist already when the state store is initialized or it will be created during initialization of the state store. Note that the name of buckets is unique within a namespace
  • compartmentOCID: The identifier of the compartment within the tenancy in which the bucket exists or will be created.

What Happens at Runtime?

Every state entry is represented by an object in OCI Object Storage. The OCI Object Storage state store uses the key property provided in the requests to the Dapr API to determine the name of the object. The value is stored as the (literal) content of the object. Each object is assigned a unique ETag value - whenever it is created or updated (aka overwritten); this is native behavior of OCI Object Storage. The state store assigns a meta data tag to every object it writes; the tag is category and its value is dapr-state-store. This allows the objects created as state for Daprized applications to be identified.

For example, the following operation

curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'

creates the following object:

BucketDirectoryObject NameObject ContentMeta Tags
as specified with bucketName in components.yaml- (root)nihilusdarthcategory: dapr-state-store

Dapr uses a fixed key scheme with composite keys to partition state across applications. For general states, the key format is: App-ID||state key The OCI Object Storage state store maps the first key segment (for App-ID) to a directory within a bucket, using the Prefixes and Hierarchy used for simulating a directory structure as described in the OCI Object Storage documentation.

The following operation therefore (notice the composite key)

curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "myApplication||nihilus",
          "value": "darth"
        }
      ]'

will create the following object:

BucketDirectoryObject NameObject ContentMeta Tags
as specified with bucketName in components.yamlmyApplicationnihilusdarthcategory: dapr-state-store

You will be able to inspect all state stored through the OCI Object Storage state store by inspecting the contents of the bucket through the console, the APIs, CLI or SDKs. By going directly to the bucket, you can prepare state that will be available as state to your application at runtime.

Time To Live and State Expiration

The OCI Object Storage state store supports Dapr’s Time To Live logic that ensures that state cannot be retrieved after it has expired. See this How To on Setting State Time To Live for details.

OCI Object Storage does not have native support for a Time To Live setting. The implementation in this component uses a meta data tag put on each object for which a TTL has been specified. The tag is called expiry-time-from-ttl and it contains a string in ISO date time format with the UTC based expiry time. When state is retrieved through a call to Get, this component checks if it has the expiry-time-from-ttl set and if so it checks whether it is in the past. In that case, no state is returned.

For example, the following operation, which sets a TTL of 120 seconds,

curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "temporary",
          "value": "ephemeral",
          "metadata": {"ttlInSeconds": "120"}}
        }
      ]'

creates the following object:

BucketDirectoryObject NameObject ContentMeta Tags
as specified with bucketName in components.yaml-temporaryephemeralcategory: dapr-state-store, expiry-time-from-ttl: 2022-01-06T08:34:32

The exact value of the expiry-time-from-ttl depends of course on the time at which the state was created and will be 120 seconds later than that moment.

Note that expired state is not removed from the state store by this component. An application operator may decide to run a periodic job that does a form of garbage collection in order to explicitly remove all state that has an expiry-time-from-ttl label with a timestamp in the past.

Concurrency

OCI Object Storage state concurrency is achieved by using ETags. Each object in OCI Object Storage is assigned a unique ETag when it is created or updated (that is, replaced). When the Set and Delete requests for this state store specify the FirstWrite concurrency policy, the request needs to provide the actual ETag value of the state to be written or removed for the request to be successful.
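
For instance, a Set request using first-write concurrency could look like the sketch below; the component name statestore, the key, and the ETag value are illustrative, and the ETag must be the one currently associated with the object:

curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "nihilus",
          "value": "darth",
          "etag": "e81e8926-5a41-4919-8a9b-36c2fbb77a67",
          "options": { "concurrency": "first-write" }
        }
      ]'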

Consistency

OCI Object Storage state does not support Transactions.

Query

OCI Object Storage state does not support the Query API.

3.22 - Oracle Database

Detailed information on the Oracle Database state store component

Component format

Create a component properties YAML file, for example called oracle.yaml (but it could be named anything), paste the following, and replace the <CONNECTION STRING> value with your connection string. The connection string is a standard Oracle Database connection string, composed as "oracle://user/password@host:port/servicename", for example "oracle://demo:demo@localhost:1521/xe".

In case you connect to the database using an Oracle Wallet, you should specify a value for the oracleWalletLocation property, for example: "/home/app/state/Wallet_daprDB/"; this should refer to the local file system directory that contains the file cwallet.sso that is extracted from the Oracle Wallet archive file.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.oracledatabase
  version: v1
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
  - name: oracleWalletLocation
    value: "<FULL PATH TO DIRECTORY WITH ORACLE WALLET CONTENTS >"  # Optional, no default
  - name: tableName
    value: "<NAME OF DATABASE TABLE TO STORE STATE IN >" # Optional, defaults to STATE
  # Uncomment this if you wish to use Oracle Database as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

Spec metadata fields

FieldRequiredDetailsExample
connectionStringYThe connection string for Oracle Database"oracle://user/password@host:port/servicename" for example "oracle://demo:demo@localhost:1521/xe" or for Autonomous Database "oracle://states_schema:State12345pw@adb.us-ashburn-1.oraclecloud.com:1522/k8j2agsqjsw_daprdb_low.adb.oraclecloud.com"
oracleWalletLocationNLocation of the contents of an Oracle Wallet file (required to connect to Autonomous Database on OCI)"/home/app/state/Wallet_daprDB/"
tableNameNName of the database table in which this instance of the state store records the data. Defaults to "STATE""MY_APP_STATE_STORE"
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

What Happens at Runtime?

When the state store component initializes, it connects to the Oracle Database and checks if a table with the name specified with tableName exists. If it does not, it creates this table (with columns Key, Value, Binary_YN, ETag, Creation_Time, Update_Time, Expiration_time).

Every state entry is represented by a record in the database table. The key property provided in the request is used to determine the name of the object stored literally in the KEY column. The value is stored as the content of the object. Binary content is stored as Base64 encoded text. Each object is assigned a unique ETag value whenever it is created or updated.

For example, the following operation

curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'

creates the following records in table STATE:

KEYVALUECREATION_TIMEBINARY_YNETAG
nihilusdarth2022-02-14T22:11:00N79dfb504-5b27-43f6-950f-d55d5ae0894f

Dapr uses a fixed key scheme with composite keys to partition state across applications. For general states, the key format is: App-ID||state key. The Oracle Database state store maps this key in its entirety to the KEY column.

You can easily inspect all state stored with SQL queries against the tableName table, for example the STATE table.
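
For example, the following query (assuming the default STATE table) lists the state entries written by an application with App-ID myApplication:

select key, value, creation_time, expiration_time
from   state
where  key like 'myApplication||%';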

Time To Live and State Expiration

The Oracle Database state store component supports Dapr’s Time To Live logic that ensures that state cannot be retrieved after it has expired. See this How To on Setting State Time To Live for details.

The Oracle Database does not have native support for a Time-To-Live setting. The implementation in this component uses a column called EXPIRATION_TIME to hold the time after which the record is considered expired. The value in this column is set only when a TTL was specified in a Set request. It is calculated as the current UTC timestamp with the TTL period added to it. When state is retrieved through a call to Get, this component checks if it has the EXPIRATION_TIME set and if so, it checks whether it is in the past. In that case, no state is returned.

The following operation:

curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "temporary",
          "value": "ephemeral",
          "metadata": {"ttlInSeconds": "120"}}
        }
      ]'

creates the following object:

KEYVALUECREATION_TIMEEXPIRATION_TIMEBINARY_YNETAG
temporaryephemeral2022-03-31T22:11:002022-03-31T22:13:00N79dfb504-5b27-43f6-950f-d55d5ae0894f

with the EXPIRATION_TIME set to a timestamp 2 minutes (120 seconds) later than the CREATION_TIME.

Note that expired state is not removed from the state store by this component. An application operator may decide to run a periodic job that does a form of garbage collection in order to explicitly remove all state records with an EXPIRATION_TIME in the past. The SQL statement for deleting the expired records:

 delete dapr_state 
 where  expiration_time < SYS_EXTRACT_UTC(SYSTIMESTAMP);

Concurrency

Concurrency in the Oracle Database state store is achieved by using ETags. Each piece of state recorded in the Oracle Database state store is assigned a unique ETag - a generated, unique string stored in the column ETag - when it is created or updated. Note: the column UPDATE_TIME is also updated whenever a Set operation is performed on an existing record.

Only when the Set and Delete requests for this state store specify the FirstWrite concurrency policy does the request need to provide the actual ETag value of the state to be written or removed for the request to be successful. If a different concurrency policy is specified, or none at all, then no check is performed on the ETag value.

Consistency

The Oracle Database state store supports Transactions. Multiple Set and Delete commands can be combined in a request that is processed as a single, atomic transaction.

Note: simple Set and Delete operations are a transaction on their own; when a Set or Delete request returns an HTTP 20X result, the database transaction has been committed successfully.

Query

Oracle Database state store does not currently support the Query API.

Create an Oracle Database and User Schema

  1. Run an instance of Oracle Database. You can run a local instance of Oracle Database in Docker CE with the following command - or of course use an existing Oracle Database:

    docker run -d -p 1521:1521 -e ORACLE_PASSWORD=TheSuperSecret1509! gvenzl/oracle-xe
    

    This example does not describe a production configuration because it sets the password for users SYS and SYSTEM in plain text.

    When the output from the command indicates that the container is running, find the container ID using the docker ps command. Then start a shell session using:

    docker exec -it <container id> /bin/bash
    

    and subsequently run the SQL*Plus client, connecting to the database as the SYS user:

    sqlplus sys/TheSuperSecret1509! as sysdba
    
  2. Create a database schema for state data. Create a new user schema - for example called dapr - for storing state data. Grant this user (schema) privileges for creating a table and storing data in the associated tablespace.

    To create a new user schema in Oracle Database, run the following SQL command:

    create user dapr identified by DaprPassword4239 default tablespace users quota unlimited on users;
    grant create session, create table to dapr;
    
  3. (optional) Create table for storing state records. The Oracle Database state store component checks if the table for storing state already exists in the database user schema it connects to and if it does not, it creates that table. However, instead of having the Oracle Database state store component create the table for storing state records at run time, you can also create the table in advance. That gives you - or the DBA for the database - more control over the physical configuration of the table. This also means you do not have to grant the create table privilege to the user schema.

    Run the following DDL statement to create the table for storing the state in the dapr database user schema :

    CREATE TABLE dapr_state (
    		key varchar2(2000) NOT NULL PRIMARY KEY,
    		value clob NOT NULL,
    		binary_yn varchar2(1) NOT NULL,
    		etag varchar2(50)  NOT NULL,
    		creation_time TIMESTAMP WITH TIME ZONE DEFAULT SYSTIMESTAMP NOT NULL ,
    		expiration_time TIMESTAMP WITH TIME ZONE NULL,
    		update_time TIMESTAMP WITH TIME ZONE NULL
      )
    
  1. Create a free (or paid for) Autonomous Transaction Processing (ATP) or ADW (Autonomous Data Warehouse) instance on Oracle Cloud Infrastructure, as described in the OCI documentation for the always free autonomous database.

    You need to provide the password for user ADMIN. You use this account (initially at least) for database administration activities. You can work in the web-based SQL Developer tool, in its desktop counterpart, or in any of a plethora of database development tools.

  2. Create a schema for state data. Create a new user schema in the Oracle Database for storing state data - for example using the ADMIN account. Grant this new user (schema) privileges for creating a table and storing data in the associated tablespace.

    To create a new user schema in Oracle Database, run the following SQL command:

    create user dapr identified by DaprPassword4239 default tablespace users quota unlimited on users;
    grant create session, create table to dapr;
    
  3. (optional) Create table for storing state records. The Oracle Database state store component checks if the table for storing state already exists in the database user schema it connects to and if it does not, it creates that table. However, instead of having the Oracle Database state store component create the table for storing state records at run time, you can also create the table in advance. That gives you - or the DBA for the database - more control over the physical configuration of the table. This also means you do not have to grant the create table privilege to the user schema.

    Run the following DDL statement to create the table for storing the state in the dapr database user schema:

    CREATE TABLE dapr_state (
    		key varchar2(2000) NOT NULL PRIMARY KEY,
    		value clob NOT NULL,
    		binary_yn varchar2(1) NOT NULL,
    		etag varchar2(50)  NOT NULL,
    		creation_time TIMESTAMP WITH TIME ZONE DEFAULT SYSTIMESTAMP NOT NULL ,
    		expiration_time TIMESTAMP WITH TIME ZONE NULL,
    		update_time TIMESTAMP WITH TIME ZONE NULL
      )
    

3.23 - PostgreSQL

Detailed information on the PostgreSQL state store component

This component allows using PostgreSQL (Postgres) as a state store for Dapr, using the “v2” component. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.postgresql
  # Note: setting "version" to "v2" is required to use the v2 of the component
  version: v2
  metadata:
    # Connection string
    - name: connectionString
      value: "<CONNECTION STRING>"
    # Individual connection parameters - can be used instead to override connectionString parameters
    #- name: host
    #  value: "localhost"
    #- name: hostaddr
    #  value: "127.0.0.1"
    #- name: port
    #  value: "5432"
    #- name: database
    #  value: "my_db"
    #- name: user
    #  value: "postgres"
    #- name: password
    #  value: "example"
    #- name: sslRootCert
    #  value: "/path/to/ca.crt"
    # Timeout for database operations, as a Go duration or number of seconds (optional)
    #- name: timeout
    #  value: 20
    # Prefix for the table where the data is stored (optional)
    #- name: tablePrefix
    #  value: ""
    # Name of the table where to store metadata used by Dapr (optional)
    #- name: metadataTableName
    #  value: "dapr_metadata"
    # Cleanup interval in seconds, to remove expired rows (optional)
    #- name: cleanupInterval
    #  value: "1h"
    # Maximum number of connections pooled by this component (optional)
    #- name: maxConns
    #  value: 0
    # Max idle time for connections before they're closed (optional)
    #- name: connectionMaxIdleTime
    #  value: 0
    # Controls the default mode for executing queries. (optional)
    #- name: queryExecMode
    #  value: ""
    # Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
    #- name: actorStateStore
    #  value: "true"

Spec metadata fields

Authenticate using a connection string

The following metadata options are required to authenticate using a PostgreSQL connection string.

FieldRequiredDetailsExample
connectionStringYThe connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string."host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"

Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

FieldRequiredDetailsExample
hostYThe host name or IP address of the PostgreSQL server"localhost"
hostaddrNThe IP address of the PostgreSQL server (alternative to host)"127.0.0.1"
portYThe port number of the PostgreSQL server"5432"
databaseYThe name of the database to connect to"my_db"
userYThe PostgreSQL user to connect as"postgres"
passwordYThe password for the PostgreSQL user"example"
sslRootCertNPath to the SSL root certificate file"/path/to/ca.crt"
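
For illustration, the metadata section of a component that uses individual connection parameters instead of a connection string might look like the following. This is only a sketch; the host, database, user, and password values are placeholders for your environment.

metadata:
  - name: host
    value: "localhost"
  - name: port
    value: "5432"
  - name: database
    value: "my_db"
  - name: user
    value: "postgres"
  - name: password
    value: "example"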

Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.

FieldRequiredDetailsExample
useAzureADYMust be set to true to enable the component to retrieve access tokens from Microsoft Entra ID."true"
connectionStringYThe connection string for the PostgreSQL database.
This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity. This is often the name of the corresponding principal (for example, the name of the Microsoft Entra ID application). This connection string should not contain any password.
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require"
azureTenantIdNID of the Microsoft Entra ID tenant"cd4b2887-304c-…"
azureClientIdNClient ID (application ID)"c7dd251f-811f-…"
azureClientSecretNClient secret (application password)"Ecy3X…"

Authenticate using AWS IAM

Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. The user specified in the connection string must already exist in the database and must be an AWS IAM-enabled user granted the rds_iam database role. Authentication is based on the AWS authentication configuration file or the provided AccessKey/SecretKey. The AWS authentication token is dynamically rotated before its expiration time.

FieldRequiredDetailsExample
useAWSIAMYMust be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases."true"
connectionStringYThe connection string for the PostgreSQL database.
This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS.
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
awsRegionNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to."us-east-1"
awsAccessKeyNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account"AKIAIOSFODNN7EXAMPLE"
awsSecretKeyNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
awsSessionTokenNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials."TOKEN"

Other metadata options

FieldRequiredDetailsExample
tablePrefixNPrefix for the table where the data is stored. Can optionally have the schema name as prefix, such as public.prefix_"prefix_", "public.prefix_"
metadataTableNameNName of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata. Can optionally have the schema name as prefix, such as public.dapr_metadata"dapr_metadata", "public.dapr_metadata"
timeoutNTimeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s"30s", 30
cleanupIntervalNInterval, as a Go duration or number of seconds, to clean up rows with an expired TTL. Default: 1h (1 hour). Setting this to values <=0 disables the periodic cleanup."30m", 1800, -1
maxConnsNMaximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs."4"
connectionMaxIdleTimeNMax idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose."5m"
queryExecModeNControls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use exec or simple_protocol."simple_protocol"
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

Setup PostgreSQL

  1. Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker with the following command:

    docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
    

    This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of “postgres”.

  2. Create a database for state data.
    You can either use the default “postgres” database or create a new database for storing state data.

    To create a new database in PostgreSQL, run the following SQL command:

    CREATE DATABASE my_dapr;
    

Advanced

Differences between v1 and v2

The PostgreSQL state store v2 was introduced in Dapr 1.13. The pre-existing v1 remains available and is not deprecated.

In the v2 component, the table schema has been changed significantly, with the goal of increasing performance and reliability. Most notably, the value stored by Dapr is now of type BYTEA, which allows faster queries and, in some cases, is more space-efficient than the previously-used JSONB column.
However, due to this change, the v2 component does not support the Dapr state store query APIs.

Also, in the v2 component, ETags are now random UUIDs, which ensures better compatibility with other PostgreSQL-compatible databases, such as CockroachDB.

Because of these changes, v1 and v2 components are not able to read or write data from the same table. At this stage, it’s also impossible to migrate data between the two versions of the component.

Displaying the data in human-readable format

The PostgreSQL v2 component stores the state’s value in the value column, which is of type BYTEA. Most PostgreSQL tools, including pgAdmin, consider the value as binary and do not display it in human-readable form by default.

If you want to inspect the value in the state store, and you know it’s not binary (for example, JSON data), you can have the value displayed in human-readable form using a query like the following:

-- Replace "state" with the name of the state table in your environment
SELECT *, convert_from(value, 'utf-8') FROM state;

TTLs and cleanups

This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate after how many seconds the data should be considered “expired”.

Because PostgreSQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.

You can set the deletion interval of expired records with the cleanupInterval metadata property, which defaults to 3600 seconds (that is, 1 hour).

  • Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupInterval to a smaller value; for example, 5m (5 minutes).
  • If you do not plan to use TTLs with Dapr and the PostgreSQL state store, you should consider setting cleanupInterval to a value <= 0 (for example, 0 or -1) to disable the periodic cleanup and reduce the load on the database.
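
As an example of setting a TTL when saving state, the following request stores a record that expires after two minutes. This is only a sketch; it assumes a Dapr sidecar listening on port 3500 and a state store component named statestore.

curl -X POST -H "Content-Type: application/json" \
  -d '[{ "key": "order1", "value": { "status": "pending" }, "metadata": { "ttlInSeconds": "120" } }]' \
  http://localhost:3500/v1.0/state/statestore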

3.24 - PostgreSQL v1

Detailed information on the PostgreSQL v1 state store component

This component allows using PostgreSQL (Postgres) as a state store for Dapr, using the “v1” component. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.postgresql
  version: v1
  metadata:
    # Connection string
    - name: connectionString
      value: "<CONNECTION STRING>"
    # Individual connection parameters - can be used instead to override connectionString parameters
    #- name: host
    #  value: "localhost"
    #- name: hostaddr
    #  value: "127.0.0.1"
    #- name: port
    #  value: "5432"
    #- name: database
    #  value: "my_db"
    #- name: user
    #  value: "postgres"
    #- name: password
    #  value: "example"
    #- name: sslRootCert
    #  value: "/path/to/ca.crt"
    # Timeout for database operations, as a Go duration or number of seconds (optional)
    #- name: timeout
    #  value: 20
    # Name of the table where to store the state (optional)
    #- name: tableName
    #  value: "state"
    # Name of the table where to store metadata used by Dapr (optional)
    #- name: metadataTableName
    #  value: "dapr_metadata"
    # Cleanup interval in seconds, to remove expired rows (optional)
    #- name: cleanupInterval
    #  value: "1h"
    # Maximum number of connections pooled by this component (optional)
    #- name: maxConns
    #  value: 0
    # Max idle time for connections before they're closed (optional)
    #- name: connectionMaxIdleTime
    #  value: 0
    # Controls the default mode for executing queries. (optional)
    #- name: queryExecMode
    #  value: ""
    # Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
    #- name: actorStateStore
    #  value: "true"

Spec metadata fields

Authenticate using a connection string

The following metadata options are required to authenticate using a PostgreSQL connection string.

FieldRequiredDetailsExample
connectionStringYThe connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string."host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"

Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

FieldRequiredDetailsExample
hostYThe host name or IP address of the PostgreSQL server"localhost"
hostaddrNThe IP address of the PostgreSQL server (alternative to host)"127.0.0.1"
portYThe port number of the PostgreSQL server"5432"
databaseYThe name of the database to connect to"my_db"
userYThe PostgreSQL user to connect as"postgres"
passwordYThe password for the PostgreSQL user"example"
sslRootCertNPath to the SSL root certificate file"/path/to/ca.crt"

Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.

FieldRequiredDetailsExample
useAzureADYMust be set to true to enable the component to retrieve access tokens from Microsoft Entra ID."true"
connectionStringYThe connection string for the PostgreSQL database.
This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password.
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require"
azureTenantIdNID of the Microsoft Entra ID tenant"cd4b2887-304c-…"
azureClientIdNClient ID (application ID)"c7dd251f-811f-…"
azureClientSecretNClient secret (application password)"Ecy3X…"

Authenticate using AWS IAM

Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. The user specified in the connection string must already exist in the database and must be an AWS IAM-enabled user granted the rds_iam database role. Authentication is based on the AWS authentication configuration file or the provided AccessKey/SecretKey. The AWS authentication token is dynamically rotated before its expiration time.

FieldRequiredDetailsExample
useAWSIAMYMust be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases."true"
connectionStringYThe connection string for the PostgreSQL database.
This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS.
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
awsRegionNThe AWS Region where the AWS Relational Database Service is deployed to."us-east-1"
awsAccessKeyNAWS access key associated with an IAM account"AKIAIOSFODNN7EXAMPLE"
awsSecretKeyNThe secret key associated with the access key"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
awsSessionTokenNAWS session token to use. A session token is only required if you are using temporary security credentials."TOKEN"

Other metadata options

FieldRequiredDetailsExample
tableNameNName of the table where the data is stored. Defaults to state. Can optionally have the schema name as prefix, such as public.state"state", "public.state"
metadataTableNameNName of the table Dapr uses to store a few metadata properties. Defaults to dapr_metadata. Can optionally have the schema name as prefix, such as public.dapr_metadata"dapr_metadata", "public.dapr_metadata"
timeoutNTimeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s"30s", 30
cleanupIntervalNInterval, as a Go duration or number of seconds, to clean up rows with an expired TTL. Default: 1h (1 hour). Setting this to values <=0 disables the periodic cleanup."30m", 1800, -1
maxConnsNMaximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs."4"
connectionMaxIdleTimeNMax idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose."5m"
queryExecModeNControls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use exec or simple_protocol."simple_protocol"
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

Setup PostgreSQL

  1. Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker CE with the following command:

    docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
    

    This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of “postgres”.

  2. Create a database for state data.
    You can either use the default “postgres” database or create a new database for storing state data.

    To create a new database in PostgreSQL, run the following SQL command:

    CREATE DATABASE my_dapr;
    

Advanced

TTLs and cleanups

This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate after how many seconds the data should be considered “expired”.

Because PostgreSQL doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.

You can set the deletion interval of expired records with the cleanupInterval metadata property, which defaults to 3600 seconds (that is, 1 hour).

  • Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupInterval to a smaller value; for example, 5m (5 minutes).
  • If you do not plan to use TTLs with Dapr and the PostgreSQL state store, you should consider setting cleanupInterval to a value <= 0 (for example, 0 or -1) to disable the periodic cleanup and reduce the load on the database.

The expiredate column in the state table, where the expiration date for records is stored, does not have an index by default, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is state (the default), you can use this query:

CREATE INDEX expiredate_idx
    ON state
    USING btree (expiredate ASC NULLS LAST);

3.25 - Redis

Detailed information on the Redis state store component

Component format

To set up the Redis state store, create a component of type state.redis. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: <HOST>
  - name: redisPassword # Optional.
    value: <PASSWORD>
  - name: useEntraID
    value: <bool> # Optional. Allowed: true, false.
  - name: enableTLS
    value: <bool> # Optional. Allowed: true, false.
  - name: clientCert
    value: # Optional
  - name: clientKey
    value: # Optional    
  - name: maxRetries
    value: # Optional
  - name: maxRetryBackoff
    value: # Optional
  - name: failover
    value: <bool> # Optional. Allowed: true, false.
  - name: sentinelMasterName
    value: <string> # Optional
  - name: sentinelUsername
    value: # Optional
  - name: sentinelPassword
    value: # Optional
  - name: redeliverInterval
    value: # Optional
  - name: processingTimeout
    value: # Optional
  - name: redisType
    value: # Optional
  - name: redisDB
    value: # Optional
  - name: redisMaxRetries
    value: # Optional
  - name: redisMinRetryInterval
    value: # Optional
  - name: redisMaxRetryInterval
    value: # Optional
  - name: dialTimeout
    value: # Optional
  - name: readTimeout
    value: # Optional
  - name: writeTimeout
    value: # Optional
  - name: poolSize
    value: # Optional
  - name: poolTimeout
    value: # Optional
  - name: maxConnAge
    value: # Optional
  - name: minIdleConns
    value: # Optional
  - name: idleCheckFrequency
    value: # Optional
  - name: idleTimeout
    value: # Optional
  - name: ttlInSeconds
    value: <int> # Optional
  - name: queryIndexes
    value: <string> # Optional
  # Uncomment this if you wish to use Redis as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

If you wish to use Redis as an actor store, append the following to the YAML:

  - name: actorStateStore
    value: "true"

Spec metadata fields

FieldRequiredDetailsExample
redisHostYConnection-string for the redis hostlocalhost:6379, redis-master.default.svc.cluster.local:6379
redisPasswordNPassword for Redis host. No Default. Can be secretKeyRef to use a secret reference"", "KeFg23!"
redisUsernameNUsername for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rule correctly."", "default"
useEntraIDNImplements EntraID support for Azure Cache for Redis. Before enabling this:
  • The redisHost name must be specified in the form of "server:port"
  • TLS must be enabled
Learn more about this setting under Create a Redis instance > Azure Cache for Redis
"true", "false"
enableTLSNIf the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to "false""true", "false"
clientCertNThe content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here"----BEGIN CERTIFICATE-----\nMIIC..."
clientKeyNThe content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here"----BEGIN PRIVATE KEY-----\nMIIE..."
maxRetriesNMaximum number of retries before giving up. Defaults to "3""5", "10"
maxRetryBackoffNMaximum backoff between each retry. Defaults to 2 seconds; "-1" disables backoff.3000000000
failoverNProperty to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See Redis Sentinel Documentation. Defaults to "false""true", "false"
sentinelMasterNameNThe sentinel master name. See Redis Sentinel Documentation"", "mymaster"
sentinelUsernameNUsername for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled"username"
sentinelPasswordNPassword for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled"password"
redeliverIntervalNThe interval between checks for pending messages to redeliver. Defaults to "60s". "0" disables redelivery."30s"
processingTimeoutNThe amount of time a message must be pending before attempting to redeliver it. Defaults to "15s". "0" disables redelivery."30s"
redisTypeNThe type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node"."cluster"
redisDBNDatabase selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0"."0"
redisMaxRetriesNAlias for maxRetries. If both values are set maxRetries is ignored."5"
redisMinRetryIntervalNMinimum backoff for redis commands between each retry. Default is "8ms"; "-1" disables backoff."8ms"
redisMaxRetryIntervalNAlias for maxRetryBackoff. If both values are set maxRetryBackoff is ignored."5s"
dialTimeoutNDial timeout for establishing new connections. Defaults to "5s"."5s"
readTimeoutNTimeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout."3s"
writeTimeoutNTimeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Default is readTimeout."3s"
poolSizeNMaximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU."20"
poolTimeoutNAmount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second."5s"
maxConnAgeNConnection age at which the client retires (closes) the connection. Default is to not close aged connections."30m"
minIdleConnsNMinimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0"."2"
idleCheckFrequencyNFrequency of idle checks made by idle connections reaper. Default is "1m". "-1" disables idle connections reaper."-1"
idleTimeoutNAmount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m". "-1" disables idle timeout check."10m"
ttlInSecondsNAllows specifying a default Time-to-live (TTL) in seconds that will be applied to every state store request unless TTL is explicitly defined via the request metadata.600
queryIndexesNIndexing schemas for querying JSON objectssee Querying JSON objects
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

Setup Redis

Dapr can use any Redis instance: containerized, running on your local dev machine, or a managed cloud service.

A Redis instance is automatically created as a Docker container when you run dapr init

You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires Installing Helm.

  1. Install Redis into your cluster. Note that we’re explicitly setting an image tag to get a version greater than 5, which is what Dapr’s pub/sub functionality requires. If you intend to use Redis as just a state store (and not for pub/sub), you do not have to set the image version.

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install redis bitnami/redis
    
  2. Run kubectl get pods to see the Redis containers now running in your cluster.

  3. Add redis-master:6379 as the redisHost in your redis.yaml file. For example:

        metadata:
        - name: redisHost
          value: redis-master:6379
    
  4. Next, get the Redis password, which is slightly different depending on the OS you’re using:

    • Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which creates a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your Redis password in a text file called password.txt. Copy the password and delete the two files.

    • Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the password from the output.

    Add this password as the redisPassword value in your redis.yaml file. For example:

        metadata:
        - name: redisPassword
          value: lhDOkwTlp0
    
  1. Create an Azure Cache for Redis instance using the official Microsoft documentation.

  2. Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.

    • For the Host name:
      • Navigate to the resource’s Overview page.
      • Copy the Host name value.
    • For your access key:
      • Navigate to Settings > Access Keys.
      • Copy and save your key.
  3. Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.

    • If you’re running a sample, add the host and key to the provided redis.yaml.
    • If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
  4. Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.

    Note: In a production-grade application, follow secret management instructions to securely manage your secrets.

  5. Enable EntraID support:

    • Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
    • Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
  6. Set enableTLS to "true" to support TLS.

Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.
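
Putting these settings together, a component for Azure Cache for Redis with Entra ID enabled might look like the following sketch. The host name is a placeholder for your cache’s FQDN, and 6380 is the TLS port.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: "[your_cache_name].redis.cache.windows.net:6380"
  - name: useEntraID
    value: "true"
  - name: enableTLS
    value: "true"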

Querying JSON objects (optional)

In addition to supporting storing and querying state data as key/value pairs, the Redis state store optionally supports querying of JSON objects to meet more complex querying or filtering requirements. To enable this feature, the following steps are required:

  1. The Redis store must support Redis modules and specifically both Redisearch and RedisJson. If you are deploying and running Redis yourself, load the redisearch and redisjson modules when deploying the Redis service.
  2. Specify queryIndexes entry in the metadata of the component config. The value of the queryIndexes is a JSON array of the following format:
[
  {
    "name": "<indexing name>",
    "indexes": [
      {
        "key": "<JSONPath-like syntax for selected element inside documents>",
        "type": "<value type (supported types: TEXT, NUMERIC)>",
      },
      ...
    ]
  },
  ...
]
  3. When calling the state management API, add the following metadata to the API calls:
  • Save State, Get State, Delete State:
    • add metadata.contentType=application/json URL query parameter to HTTP API request
    • add "contentType": "application/json" pair to the metadata of gRPC API request
  • Query State:
    • add metadata.contentType=application/json&metadata.queryIndexName=<indexing name> URL query parameters to HTTP API request
    • add "contentType" : "application/json" and "queryIndexName" : "<indexing name>" pairs to the metadata of gRPC API request

Consider an example where you store documents like this:

{
  "key": "1",
  "value": {
    "person": {
      "org": "Dev Ops",
      "id": 1036
    },
    "city": "Seattle",
    "state": "WA"
  }
}

The component config file containing the corresponding indexing schema looks like this:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  initTimeout: 1m
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: queryIndexes
    value: |
      [
        {
          "name": "orgIndx",
          "indexes": [
            {
              "key": "person.org",
              "type": "TEXT"
            },
            {
              "key": "person.id",
              "type": "NUMERIC"
            },
            {
              "key": "state",
              "type": "TEXT"
            },
            {
              "key": "city",
              "type": "TEXT"
            }
          ]
        }
      ]

You can now store, retrieve, and query these documents.

Consider the example from the “How-To: Query state” guide. Let’s run it with Redis.

If you are using a self-hosted deployment of Dapr, a Redis instance without the JSON module is automatically created as a Docker container when you run dapr init.

Alternatively, you can create an instance of Redis by running the following command:

docker run -p 6379:6379 --name redis --rm redis

The Redis container that gets created on dapr init, or via the above command, cannot be used with the state store query API on its own. You can run the redislabs/rejson Docker image on a different port (than the one the already installed Redis is using) to work with the query API.

Note: redislabs/rejson has support only for amd64 architecture.

Use the following command to create an instance of Redis compatible with the query API.

docker run -p 9445:9445 --name rejson --rm redislabs/rejson:2.0.6

Follow instructions for Redis deployment in Kubernetes with one extra detail.

When installing Redis Helm package, provide a configuration file that specifies container image and enables required modules:

helm install redis bitnami/redis --set image.tag=6.2 -f values.yaml

where values.yaml looks like:

image:
  repository: redislabs/rejson
  tag: 2.0.6

master:
  extraFlags:
   - --loadmodule
   - /usr/lib/redis/modules/rejson.so
   - --loadmodule
   - /usr/lib/redis/modules/redisearch.so

Follow instructions for Redis deployment in AWS.

Next, start a Dapr application. Refer to this component configuration file, which contains the query indexing schemas. Make sure to modify redisHost to reflect the local forwarding port that redislabs/rejson uses.

dapr run --app-id demo --dapr-http-port 3500 --resources-path query-api-examples/components/redis

Now populate the state store with the employee dataset, so you can then query it later.

curl -X POST -H "Content-Type: application/json" -d @query-api-examples/dataset.json \
  http://localhost:3500/v1.0/state/querystatestore?metadata.contentType=application/json

To make sure the data has been properly stored, you can retrieve a specific object:

curl http://localhost:3500/v1.0/state/querystatestore/1?metadata.contentType=application/json

The result will be:

{
  "city": "Seattle",
  "state": "WA",
  "person": {
    "org": "Dev Ops",
    "id": 1036
  }
}

Now, let’s find all employees in the state of California and sort them by their employee ID in descending order.

This is the query:

{
    "filter": {
        "EQ": { "state": "CA" }
    },
    "sort": [
        {
            "key": "person.id",
            "order": "DESC"
        }
    ]
}

Execute the query with the following command:

curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query1.json \
  'http://localhost:3500/v1.0-alpha1/state/querystatestore/query?metadata.contentType=application/json&metadata.queryIndexName=orgIndx'

The result will be:

{
  "results": [
    {
      "key": "3",
      "data": {
        "person": {
          "org": "Finance",
          "id": 1071
        },
        "city": "Sacramento",
        "state": "CA"
      },
      "etag": "1"
    },
    {
      "key": "7",
      "data": {
        "person": {
          "org": "Dev Ops",
          "id": 1015
        },
        "city": "San Francisco",
        "state": "CA"
      },
      "etag": "1"
    },
    {
      "key": "5",
      "data": {
        "person": {
          "org": "Hardware",
          "id": 1007
        },
        "city": "Los Angeles",
        "state": "CA"
      },
      "etag": "1"
    },
    {
      "key": "9",
      "data": {
        "person": {
          "org": "Finance",
          "id": 1002
        },
        "city": "San Diego",
        "state": "CA"
      },
      "etag": "1"
    }
  ]
}

The query syntax and documentation are available here.

3.26 - RethinkDB

Detailed information on the RethinkDB state store component

Component format

To set up the RethinkDB state store, create a component of type state.rethinkdb. See the how-to guide to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.rethinkdb
  version: v1
  metadata:
  - name: address
    value: <REPLACE-RETHINKDB-ADDRESS> # Required, e.g. 127.0.0.1:28015 or rethinkdb.default.svc.cluster.local:28015
  - name: database
    value: <REPLACE-RETHINKDB-DB-NAME> # Required, e.g. dapr (alpha-numerics only)
  - name: table
    value: # Optional
  - name: username
    value: <USERNAME> # Optional
  - name: password
    value: <PASSWORD> # Optional
  - name: archive
    value: bool # Optional (whether or not store should keep archive table of all the state changes)

If the optional archive metadata is set to true, on each state change the RethinkDB state store also logs state changes, with a timestamp, in the daprstate_archive table. This allows for time-series analysis of the state managed by Dapr.

Spec metadata fields

FieldRequiredDetailsExample
addressYThe address for RethinkDB server"127.0.0.1:28015", "rethinkdb.default.svc.cluster.local:28015"
databaseYThe database to use. Alpha-numerics only"dapr"
tableNThe table name to use"table"
usernameNThe username to connect with"user"
passwordNThe password to connect with"password"
archiveNWhether or not to archive the table"true", "false"

Setup RethinkDB

You can run RethinkDB locally using Docker:

docker run --name rethinkdb -v "$PWD:/rethinkdb-data" -d rethinkdb:latest

To connect to the admin UI:

open "http://$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' rethinkdb):8080"

3.27 - SQLite

Detailed information on the SQLite state store component

This component allows using SQLite 3 as a state store for Dapr.

The component is currently compiled with SQLite version 3.41.2.

Create a Dapr component

Create a file called sqlite.yaml, paste the following, and replace the <CONNECTION STRING> value with your connection string, which is the path to a file on disk.

If you want to also configure SQLite to store actors, add the actorStateStore option as in the example below.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.sqlite
  version: v1
  metadata:
  # Connection string
  - name: connectionString
    value: "data.db"
  # Timeout for database operations, in seconds (optional)
  #- name: timeoutInSeconds
  #  value: 20
  # Name of the table where to store the state (optional)
  #- name: tableName
  #  value: "state"
  # Cleanup interval in seconds, to remove expired rows (optional)
  #- name: cleanupInterval
  #  value: "1h"
  # Set busy timeout for database operations
  #- name: busyTimeout
  #  value: "2s"
  # Uncomment this if you wish to use SQLite as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"

Spec metadata fields

FieldRequiredDetailsExample
connectionStringYThe connection string for the SQLite database. See below for more details."path/to/data.db", "file::memory:?cache=shared"
timeoutNTimeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s"30s", 30
tableNameNName of the table where the data is stored. Defaults to state."state"
metadataTableNameNName of the table used by Dapr to store metadata for the component. Defaults to metadata."metadata"
cleanupIntervalNInterval, as a Go duration, to clean up rows with an expired TTL. Setting this to values <=0 disables the periodic cleanup. Default: 0 (i.e. disabled)"2h", "30m", -1
busyTimeoutNInterval, as a Go duration, to wait in case the SQLite database is currently busy serving another request, before returning a “database busy” error. Default: 2s"100ms", "5s"
disableWALNIf set to true, disables Write-Ahead Logging for journaling of the SQLite database. You should set this to false if the database is stored on a network file system (for example, a folder mounted as a SMB or NFS share). This option is ignored for read-only or in-memory databases."true", "false"
actorStateStoreNConsider this state store for actors. Defaults to "false""true", "false"

The connectionString parameter configures how to open the SQLite database.

  • Normally, this is the path to a file on disk, relative to the current working directory, or absolute. For example: "data.db" (relative to the working directory) or "/mnt/data/mydata.db".
  • The path is interpreted by the SQLite library, so it’s possible to pass additional options to the SQLite driver using “URI options” if the path begins with file:. For example: "file:path/to/data.db?mode=ro" opens the database at path path/to/data.db in read-only mode. Refer to the SQLite documentation for all supported URI options.
  • The special case ":memory:" launches the component backed by an in-memory SQLite database. This database is not persisted on disk, not shared across multiple Dapr instances, and all data is lost when the Dapr sidecar is stopped. When using an in-memory database, Dapr automatically sets the cache=shared URI option.
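
For example, the metadata entry below opens the database in read-only mode using a URI-style connection string; the path is a placeholder, and the commented-out variant shows the in-memory special case.

  - name: connectionString
    value: "file:path/to/data.db?mode=ro"
  # In-memory database: not persisted on disk and not shared across Dapr instances
  #- name: connectionString
  #  value: ":memory:"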

Advanced

TTLs and cleanups

This state store supports Time-To-Live (TTL) for records stored with Dapr. When storing data using Dapr, you can set the ttlInSeconds metadata property to indicate when the data should be considered “expired”.

Because SQLite doesn’t have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered “expired”. Records that are “expired” are not returned to the caller, even if they’re still physically stored in the database. A background “garbage collector” periodically scans the state table for expired rows and deletes them.

The cleanupInterval metadata property sets the expired records deletion interval, which is disabled by default.

  • Longer intervals require less frequent scans for expired rows, but can cause the database to store expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting cleanupInterval to a smaller value, for example 5m.
  • If you do not plan to use TTLs with Dapr and the SQLite state store, you should consider setting cleanupInterval to a value <= 0 (e.g. 0 or -1) to disable the periodic cleanup and reduce the load on the database. This is the default behavior.

The expiration_time column in the state table, where the expiration date for records is stored, does not have an index by default, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is state (the default), you can use this query:

CREATE INDEX idx_expiration_time
  ON state (expiration_time);

Dapr does not automatically vacuum SQLite databases.

Sharing a SQLite database and using networked filesystems

Although you can have multiple Dapr instances accessing the same SQLite database (for example, because your application is scaled horizontally or because you have multiple apps accessing the same state store), there are some caveats you should keep in mind.

SQLite works best when all clients access a database file on the same, locally-mounted disk. Using virtual disks that are mounted from a SAN (Storage Area Network), as is common practice in virtualized or cloud environments, is fine.

However, storing your SQLite database in a networked filesystem (for example via NFS or SMB, but these examples are not an exhaustive list) should be done with care. The official SQLite documentation has a page dedicated to recommendations and caveats for running SQLite over a network.

Given the risk of data corruption, we do not recommend running SQLite over a networked filesystem (such as NFS or SMB) with Dapr in a production environment. However, if you do want to do that, configure your SQLite Dapr component with disableWAL set to true.
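
For example, a component whose database lives on a network share might set disableWAL as follows; the path is a placeholder for your mounted share.

  - name: connectionString
    value: "/mnt/network-share/data.db"
  - name: disableWAL
    value: "true"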

3.28 - Zookeeper

Detailed information on the Zookeeper state store component

Component format

To set up the Zookeeper state store, create a component of type state.zookeeper. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.zookeeper
  version: v1
  metadata:
  - name: servers
    value: <REPLACE-WITH-COMMA-DELIMITED-SERVERS> # Required. Example: "zookeeper.default.svc.cluster.local:2181"
  - name: sessionTimeout
    value: <REPLACE-WITH-SESSION-TIMEOUT> # Required. Example: "5s"
  - name: maxBufferSize
    value: <REPLACE-WITH-MAX-BUFFER-SIZE> # Optional. default: "1048576"
  - name: maxConnBufferSize
    value: <REPLACE-WITH-MAX-CONN-BUFFER-SIZE> # Optional. default: "1048576"
  - name: keyPrefixPath
    value: <REPLACE-WITH-KEY-PREFIX-PATH> # Optional.

Spec metadata fields

FieldRequiredDetailsExample
serversYComma delimited list of servers"zookeeper.default.svc.cluster.local:2181"
sessionTimeoutYThe session timeout value"5s"
maxBufferSizeNThe maximum size of buffer. Defaults to "1048576""1048576"
maxConnBufferSizeNThe maximum size of connection buffer. Defaults to "1048576""1048576"
keyPrefixPathNThe key prefix path in Zookeeper. No default"dapr"

Setup Zookeeper

You can run Zookeeper locally using Docker:

docker run --name some-zookeeper --restart always -d zookeeper

You can then interact with the server using localhost:2181.

The easiest way to install Zookeeper on Kubernetes is by using the Helm chart:

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install zookeeper incubator/zookeeper

This installs Zookeeper into the default namespace. To interact with Zookeeper, find the service with: kubectl get svc zookeeper.

For example, if installing using the example above, the Zookeeper host address would be:

zookeeper.default.svc.cluster.local:2181
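
With the Helm installation above, a filled-in component might look like the following sketch; the component name and session timeout are arbitrary example values.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: zookeeper-store
spec:
  type: state.zookeeper
  version: v1
  metadata:
  - name: servers
    value: "zookeeper.default.svc.cluster.local:2181"
  - name: sessionTimeout
    value: "5s"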

4 - Secret store component specs

The supported secret stores that interface with Dapr

The following table lists secret stores supported by the Dapr secrets building block. Learn how to set up different secret stores for Dapr secrets management.

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

Generic

ComponentMultiple Key-Values Per SecretStatusComponent versionSince runtime version
HashiCorp VaultStablev11.10
Kubernetes secretsStablev11.0
Local environment variablesMultiple Key-Values Per Secret: Not supportedStablev11.9
Local fileStablev11.9

Alibaba Cloud

ComponentMultiple Key-Values Per SecretStatusComponent versionSince runtime version
AlibabaCloud OOS Parameter StoreMultiple Key-Values Per Secret: Not supportedAlphav11.6

Amazon Web Services (AWS)

ComponentMultiple Key-Values Per SecretStatusComponent versionSince runtime version
AWS Secrets ManagerMultiple Key-Values Per Secret: Not supportedBetav11.15
AWS SSM Parameter StoreMultiple Key-Values Per Secret: Not supportedAlphav11.1

Google Cloud Platform (GCP)

ComponentMultiple Key-Values Per SecretStatusComponent versionSince runtime version
GCP Secret ManagerMultiple Key-Values Per Secret: Not supportedAlphav11.0

Microsoft Azure

ComponentMultiple Key-Values Per SecretStatusComponent versionSince runtime version
Azure Key VaultMultiple Key-Values Per Secret: Not supportedStablev11.0

4.1 - AlibabaCloud OOS Parameter Store

Detailed information on the AlibabaCloud OOS Parameter Store - secret store component

Component format

To set up the AlibabaCloud OOS Parameter Store secret store, create a component of type secretstores.alicloud.parameterstore. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: alibabacloudparameterstore
spec:
  type: secretstores.alicloud.parameterstore
  version: v1
  metadata:
  - name: regionId
    value: "[alicloud_region_id]"
  - name: accessKeyId 
    value: "[alicloud_access_key_id]"
  - name: accessKeySecret
    value: "[alicloud_access_key_secret]"
  - name: securityToken
    value: "[alicloud_security_token]"

Spec metadata fields

FieldRequiredDetailsExample
regionIdYThe specific region the AlibabaCloud OOS Parameter Store instance is deployed in"cn-hangzhou"
accessKeyIdYThe AlibabaCloud Access Key ID to access this resource"accessKeyId"
accessKeySecretYThe AlibabaCloud Access Key Secret to access this resource"accessKeySecret"
securityTokenNThe AlibabaCloud Security Token to use"securityToken"

Optional per-request metadata properties

The following optional query parameters can be provided when retrieving secrets from this secret store:

Query ParameterDescription
metadata.version_idVersion for the given secret key
metadata.path(For bulk requests only) The path from the metadata. If not set, defaults to root path (all secrets).
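
For illustration, these parameters can be passed on the Dapr secrets API. The sketch below assumes a Dapr sidecar on port 3500, the component name alibabacloudparameterstore from the example above, and a hypothetical secret named mysecret and path /myapp.

# Get a specific version of a single secret
curl 'http://localhost:3500/v1.0/secrets/alibabacloudparameterstore/mysecret?metadata.version_id=1'

# Bulk request scoped to a hypothetical path
curl 'http://localhost:3500/v1.0/secrets/alibabacloudparameterstore/bulk?metadata.path=/myapp'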

Create an AlibabaCloud OOS Parameter Store instance

Setup AlibabaCloud OOS Parameter Store using the AlibabaCloud documentation: https://www.alibabacloud.com/help/en/doc-detail/186828.html.

4.2 - AWS Secrets Manager

Detailed information on the secret store component

Component format

To set up the AWS Secrets Manager secret store, create a component of type secretstores.aws.secretmanager. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: awssecretmanager
spec:
  type: secretstores.aws.secretmanager
  version: v1
  metadata:
  - name: region
    value: "[aws_region]"
  - name: accessKey
    value: "[aws_access_key]"
  - name: secretKey
    value: "[aws_secret_key]"
  - name: sessionToken
    value: "[aws_session_token]"

Spec metadata fields

FieldRequiredDetailsExample
regionYThe specific AWS region the AWS Secrets Manager instance is deployed in"us-east-1"
accessKeyYThe AWS Access Key to access this resource"key"
secretKeyYThe AWS Secret Access Key to access this resource"secretAccessKey"
sessionTokenNThe AWS session token to use"sessionToken"

Optional per-request metadata properties

The following optional query parameters can be provided when retrieving secrets from this secret store:

Query ParameterDescription
metadata.version_idVersion for the given secret key.
metadata.version_stageVersion stage for the given secret key.
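
For illustration, these parameters can be passed on the Dapr secrets API. The sketch below assumes a Dapr sidecar on port 3500, the component name awssecretmanager from the example above, and a hypothetical secret named mysecret; AWSCURRENT is the standard AWS staging label for the current version.

curl 'http://localhost:3500/v1.0/secrets/awssecretmanager/mysecret?metadata.version_stage=AWSCURRENT'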

Create an AWS Secrets Manager instance

Setup AWS Secrets Manager using the AWS documentation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html.

4.3 - AWS SSM Parameter Store

Detailed information on the AWS SSM Parameter Store - secret store component

Component format

To set up the AWS SSM Parameter Store secret store, create a component of type secretstores.aws.parameterstore. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

See Authenticating to AWS for information about authentication-related attributes.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: awsparameterstore
spec:
  type: secretstores.aws.parameterstore
  version: v1
  metadata:
  - name: region
    value: "[aws_region]"
  - name: accessKey
    value: "[aws_access_key]"
  - name: secretKey
    value: "[aws_secret_key]"
  - name: sessionToken
    value: "[aws_session_token]"
  - name: prefix
    value: "[secret_name]"

Spec metadata fields

FieldRequiredDetailsExample
regionYThe specific AWS region the AWS SSM Parameter Store instance is deployed in"us-east-1"
accessKeyYThe AWS Access Key to access this resource"key"
secretKeyYThe AWS Secret Access Key to access this resource"secretAccessKey"
sessionTokenNThe AWS session token to use"sessionToken"
prefixNAllows you to specify more than one SSM parameter store secret store component."prefix"

Create an AWS SSM Parameter Store instance

Setup AWS SSM Parameter Store using the AWS documentation: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html.
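
Once the component is configured, parameters are retrieved through the standard Dapr secrets API. A minimal sketch, assuming a Dapr sidecar on port 3500, the component name awsparameterstore from the example above, and a hypothetical parameter named myparameter:

curl 'http://localhost:3500/v1.0/secrets/awsparameterstore/myparameter'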

4.4 - Azure Key Vault secret store

Detailed information on the Azure Key Vault secret store component

Component format

To set up the Azure Key Vault secret store, create a component of type secretstores.azure.keyvault.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName # Required
    value: [your_keyvault_name]
  - name: azureEnvironment # Optional, defaults to AZUREPUBLICCLOUD
    value: "AZUREPUBLICCLOUD"
  # See authentication section below for all options
  - name: azureTenantId
    value: "[your_service_principal_tenant_id]"
  - name: azureClientId
    value: "[your_service_principal_app_id]"
  - name: azureCertificateFile
    value : "[pfx_certificate_file_fully_qualified_local_path]"

Authenticating with Microsoft Entra ID

The Azure Key Vault secret store component supports authentication with Microsoft Entra ID only. Before you enable this component:

  1. Read the Authenticating to Azure document.
  2. Create a Microsoft Entra ID application (also called a Service Principal).
  3. Alternatively, create a managed identity for your application platform.

Spec metadata fields

FieldRequiredDetailsExample
vaultNameYThe name of the Azure Key Vault"mykeyvault"
azureEnvironmentNOptional name for the Azure environment if using a different Azure cloud"AZUREPUBLICCLOUD" (default value), "AZURECHINACLOUD", "AZUREUSGOVERNMENTCLOUD", "AZUREGERMANCLOUD"
Auth metadataSee Authenticating to Azure for more information

Additionally, you must provide the authentication fields as explained in the Authenticating to Azure document.

Optional per-request metadata properties

The following optional query parameters can be provided when retrieving secrets from this secret store:

Query ParameterDescription
metadata.version_idVersion for the given secret key.
metadata.maxresults(For bulk requests only) Number of secrets to return, after which the request will be truncated.
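
For illustration, these parameters can be passed on the Dapr secrets API. The sketch below assumes a Dapr sidecar on port 3500, the component name azurekeyvault, a hypothetical secret named mysecret, and a placeholder version identifier.

# Get a specific version of a single secret
curl 'http://localhost:3500/v1.0/secrets/azurekeyvault/mysecret?metadata.version_id=[version_id]'

# Bulk request limited to 10 secrets
curl 'http://localhost:3500/v1.0/secrets/azurekeyvault/bulk?metadata.maxresults=10'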

Example

Prerequisites

  • Azure Subscription
  • Azure CLI
  • jq
  • You are using bash or zsh shell
  • You’ve created a Microsoft Entra ID application (Service Principal) per the instructions in Authenticating to Azure. You will need the following values:
    ValueDescription
    SERVICE_PRINCIPAL_IDThe ID of the Service Principal that you created for a given application

Create an Azure Key Vault and authorize a Service Principal

  1. Set a variable with the Service Principal that you created:
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
  2. Set a variable with the location in which to create all resources:
LOCATION="[your_location]"

(You can get the full list of options with: az account list-locations --output tsv)

  3. Create a Resource Group, giving it any name you’d like:
RG_NAME="[resource_group_name]"
RG_ID=$(az group create \
  --name "${RG_NAME}" \
  --location "${LOCATION}" \
  | jq -r .id)
  4. Create an Azure Key Vault that uses Azure RBAC for authorization:
KEYVAULT_NAME="[key_vault_name]"
az keyvault create \
  --name "${KEYVAULT_NAME}" \
  --enable-rbac-authorization true \
  --resource-group "${RG_NAME}" \
  --location "${LOCATION}"
  5. Using RBAC, assign a role to the Microsoft Entra ID application so it can access the Key Vault.
    In this case, assign the “Key Vault Secrets User” role, which has the “Get secrets” permission over Azure Key Vault.
az role assignment create \
  --assignee "${SERVICE_PRINCIPAL_ID}" \
  --role "Key Vault Secrets User" \
  --scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}"

Other less restrictive roles, like “Key Vault Secrets Officer” and “Key Vault Administrator”, can be used, depending on your application. See Microsoft Docs for more information about Azure built-in roles for Key Vault.
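
Optionally, to have a secret to retrieve later, you can add a test entry to the vault with the Azure CLI. This is just an example; the secret name and value are placeholders, and because the vault uses Azure RBAC your own account needs a role that allows writing secrets (such as "Key Vault Secrets Officer"):

az keyvault secret set \
  --vault-name "${KEYVAULT_NAME}" \
  --name "my-test-secret" \
  --value "my-test-value"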

Configure the component

Using a client secret

To use a client secret, create a file called azurekeyvault.yaml in the components directory. Use the following template, filling in the Microsoft Entra ID application you created:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureClientSecret
    value : "[your_client_secret]"

Using a certificate

If you want to use a certificate saved on the local disk instead, use the following template. Fill in the details of the Microsoft Entra ID application you created:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureCertificateFile
    value : "[pfx_certificate_file_fully_qualified_local_path]"

In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file. Before you start, you need the details of the Microsoft Entra ID application you created.

Using a client secret

  1. Create a Kubernetes secret using the following command:

    kubectl create secret generic [your_k8s_secret_name] --from-literal=[your_k8s_secret_key]=[your_client_secret]
    
    • [your_client_secret] is the application’s client secret as generated above
    • [your_k8s_secret_name] is the secret name in the Kubernetes secret store
    • [your_k8s_secret_key] is the secret key in the Kubernetes secret store
  2. Create an azurekeyvault.yaml component file.

    The component YAML refers to the Kubernetes secret store using the auth property, while secretKeyRef refers to the client secret stored in the Kubernetes secret store.

    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: azurekeyvault
    spec:
      type: secretstores.azure.keyvault
      version: v1
      metadata:
      - name: vaultName
        value: "[your_keyvault_name]"
      - name: azureTenantId
        value: "[your_tenant_id]"
      - name: azureClientId
        value: "[your_client_id]"
      - name: azureClientSecret
        secretKeyRef:
          name: "[your_k8s_secret_name]"
          key: "[your_k8s_secret_key]"
    auth:
      secretStore: kubernetes
    
  3. Apply the azurekeyvault.yaml component:

    kubectl apply -f azurekeyvault.yaml
    

Using a certificate

  1. Create a Kubernetes secret using the following command:

    kubectl create secret generic [your_k8s_secret_name] --from-file=[your_k8s_secret_key]=[pfx_certificate_file_fully_qualified_local_path]
    
    • [pfx_certificate_file_fully_qualified_local_path] is the path of PFX file you obtained earlier
    • [your_k8s_secret_name] is the secret name in the Kubernetes secret store
    • [your_k8s_secret_key] is the secret key in the Kubernetes secret store
  2. Create an azurekeyvault.yaml component file.

    The component YAML refers to the Kubernetes secret store using the auth property, while secretKeyRef refers to the certificate stored in the Kubernetes secret store.

    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: azurekeyvault
    spec:
      type: secretstores.azure.keyvault
      version: v1
      metadata:
      - name: vaultName
        value: "[your_keyvault_name]"
      - name: azureTenantId
        value: "[your_tenant_id]"
      - name: azureClientId
        value: "[your_client_id]"
      - name: azureCertificate
        secretKeyRef:
          name: "[your_k8s_secret_name]"
          key: "[your_k8s_secret_key]"
    auth:
      secretStore: kubernetes
    
  3. Apply the azurekeyvault.yaml component:

    kubectl apply -f azurekeyvault.yaml
    

Using Azure managed identity

  1. Ensure your AKS cluster has managed identity enabled and follow the guide for using managed identities.

  2. Create an azurekeyvault.yaml component file.

    The component yaml refers to a particular KeyVault name. The managed identity you will use in a later step must be given read access to this particular KeyVault instance.

    apiVersion: dapr.io/v1alpha1
    kind: Component
    metadata:
      name: azurekeyvault
    spec:
      type: secretstores.azure.keyvault
      version: v1
      metadata:
      - name: vaultName
        value: "[your_keyvault_name]"
    
  3. Apply the azurekeyvault.yaml component:

    kubectl apply -f azurekeyvault.yaml
    
  4. Create and assign a managed identity at the pod level via either Microsoft Entra ID workload identity or Microsoft Entra ID pod identity:

    Important: While both Microsoft Entra ID pod identity and workload identity are in preview, currently Microsoft Entra ID Workload Identity is planned for general availability (stable state).

  5. After creating a workload identity, give it read permissions:

    • On your desired KeyVault instance
    • In your application deployment, inject the pod identity both:
      • Via a label annotation
      • By specifying the Kubernetes service account associated with the desired workload identity
    apiVersion: v1
    kind: Pod
    metadata:
      name: mydaprdemoapp
      labels:
        aadpodidbinding: $POD_IDENTITY_NAME
    

Using Azure managed identity directly vs. via Microsoft Entra ID workload identity

When using managed identity directly, you can have multiple identities associated with an app, requiring azureClientId to specify which identity should be used.

However, when using managed identity via Microsoft Entra ID workload identity, azureClientId is not necessary and has no effect. The Azure identity to be used is inferred from the service account tied to an Azure identity via the Azure federated identity.

References

4.5 - GCP Secret Manager

Detailed information on the GCP Secret Manager secret store component

Component format

To set up the GCP Secret Manager secret store, create a component of type secretstores.gcp.secretmanager. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcpsecretmanager
spec:
  type: secretstores.gcp.secretmanager
  version: v1
  metadata:
  - name: type
    value: <replace-with-account-type>
  - name: project_id
    value: <replace-with-project-id>
  - name: private_key_id
    value: <replace-with-private-key-id>
  - name: client_email
    value: <replace-with-email>
  - name: client_id
    value: <replace-with-client-id>
  - name: auth_uri
    value: <replace-with-auth-uri>
  - name: token_uri
    value: <replace-with-token-uri>
  - name: auth_provider_x509_cert_url
    value: <replace-with-auth-provider-cert-url>
  - name: client_x509_cert_url
    value: <replace-with-client-cert-url>
  - name: private_key
    value: <replace-with-private-key>

Spec metadata fields

FieldRequiredDetailsExample
project_idYThe project ID associated with this component."project_id"
typeNThe type of the account."service_account"
private_key_idNIf using explicit credentials, this field should contain the private_key_id field from the service account json document"privateKeyId"
private_keyNIf using explicit credentials, this field should contain the private_key field from the service account json. Replace with x509 cert12345-12345
client_emailNIf using explicit credentials, this field should contain the client_email field from the service account json"client@email.com"
client_idNIf using explicit credentials, this field should contain the client_id field from the service account json0123456789-0123456789
auth_uriNIf using explicit credentials, this field should contain the auth_uri field from the service account jsonhttps://accounts.google.com/o/oauth2/auth
token_uriNIf using explicit credentials, this field should contain the token_uri field from the service account jsonhttps://oauth2.googleapis.com/token
auth_provider_x509_cert_urlNIf using explicit credentials, this field should contain the auth_provider_x509_cert_url field from the service account jsonhttps://www.googleapis.com/oauth2/v1/certs
client_x509_cert_urlNIf using explicit credentials, this field should contain the client_x509_cert_url field from the service account jsonhttps://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com

GCP Credentials

Since the GCP Secret Manager component uses the GCP Go Client Libraries, by default it authenticates using Application Default Credentials. This is explained further in the Authenticate to GCP Cloud services using client libraries guide. Also, see how to Set up Application Default Credentials.
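
For local development, one way to provide Application Default Credentials is through the gcloud CLI; the component then only needs the required project_id field, and the explicit credential fields can be omitted. A minimal sketch:

# Writes credentials to the well-known ADC location used by the GCP client libraries
gcloud auth application-default login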

Optional per-request metadata properties

The following optional query parameters can be provided to the GCP Secret Manager component:

Query ParameterDescription
metadata.version_idVersion for the given secret key.

Setup GCP Secret Manager instance

Set up GCP Secret Manager using the GCP documentation: https://cloud.google.com/secret-manager/docs/quickstart.

4.6 - HashiCorp Vault

Detailed information on the HashiCorp Vault secret store component

Create the Vault component

To set up the HashiCorp Vault secret store, create a component of type secretstores.hashicorp.vault. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: vault
spec:
  type: secretstores.hashicorp.vault
  version: v1
  metadata:
  - name: vaultAddr
    value: [vault_address] # Optional. Default: "https://127.0.0.1:8200"
  - name: caCert # Optional. This or caPath or caPem
    value: "[ca_cert]"
  - name: caPath # Optional. This or CaCert or caPem
    value: "[path_to_ca_cert_file]"
  - name: caPem # Optional. This or CaCert or CaPath
    value : "[encoded_ca_cert_pem]"
  - name: skipVerify # Optional. Default: false
    value : "[skip_tls_verification]"
  - name: tlsServerName # Optional.
    value : "[tls_config_server_name]"
  - name: vaultTokenMountPath # Required if vaultToken not provided. Path to token file.
    value : "[path_to_file_containing_token]"
  - name: vaultToken # Required if vaultTokenMountPath not provided. Token value.
    value : "[vault_token]"
  - name: vaultKVPrefix # Optional. Default: "dapr"
    value : "[vault_prefix]"
  - name: vaultKVUsePrefix # Optional. default: "true"
    value: "[true/false]"
  - name: enginePath # Optional. default: "secret"
    value: "secret"
  - name: vaultValueType # Optional. default: "map"
    value: "map"

Spec metadata fields

FieldRequiredDetailsExample
vaultAddrNThe address of the Vault server. Defaults to "https://127.0.0.1:8200""https://127.0.0.1:8200"
caPemNThe inlined contents of the CA certificate to use, in PEM format. If defined, takes precedence over caPath and caCert.See below
caPathNThe path to a folder holding the CA certificate file to use, in PEM format. If the folder contains multiple files, only the first file found will be used. If defined, takes precedence over caCert."path/to/cacert/holding/folder"
caCertNThe path to the CA certificate to use, in PEM format."path/to/cacert.pem"
skipVerifyNSkip TLS verification. Defaults to "false""true", "false"
tlsServerNameNThe name of the server requested during TLS handshake in order to support virtual hosting. This value is also used to verify the TLS certificate presented by Vault server."tls-server"
vaultTokenMountPathYPath to file containing token. Required if vaultToken is not provided."path/to/file"
vaultTokenYToken for authentication within Vault. Required if vaultTokenMountPath is not provided."tokenValue"
vaultKVPrefixNThe prefix in vault. Defaults to "dapr""dapr", "myprefix"
vaultKVUsePrefixNIf false, vaultKVPrefix is forced to be empty. If the value is not given or set to true, vaultKVPrefix is used when accessing the vault. Setting it to false is needed to be able to use the BulkGetSecret method of the store."true", "false"
enginePathNThe engine path in vault. Defaults to "secret""kv", "any"
vaultValueTypeNVault value type. map means to parse the value into map[string]string, text means to use the value as a string. ‘map’ sets the multipleKeyValuesPerSecret behavior. text makes Vault behave as a secret store with name/value semantics. Defaults to "map""map", "text"

Optional per-request metadata properties

The following optional query parameters can be provided to Hashicorp Vault secret store component:

Query ParameterDescription
metadata.version_idVersion for the given secret key.

Setup Hashicorp Vault instance

Set up Hashicorp Vault using the Vault documentation: https://www.vaultproject.io/docs/install/index.html.

For Kubernetes, you can use the Helm Chart: https://github.com/hashicorp/vault-helm.
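
A minimal sketch of installing that chart follows; the release name and dev-mode setting are examples, and dev mode runs an in-memory, unsealed Vault that must not be used in production:

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --set server.dev.enabled=true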

Multiple key-values per secret

HashiCorp Vault supports multiple key-values in a secret. While this behavior is ultimately dependent on the underlying secret engine configured by enginePath, it may change the way you store and retrieve keys from Vault. For instance, multiple key-values per secret is the behavior exposed by the secret engine, which is the default engine configured by the enginePath field.

When retrieving secrets, a JSON payload is returned with the key names as fields and their respective values.

Suppose you add a secret to your Vault setup as follows:

vault kv put secret/dapr/mysecret firstKey=aValue secondKey=anotherValue thirdKey=yetAnotherDistinctValue

In the example above, the secret is named mysecret and it has 3 key-values under it. Observe that the secret is created under a dapr prefix, as this is the default value for the vaultKVPrefix flag. Retrieving it from Dapr would result in the following output:

$ curl http://localhost:3501/v1.0/secrets/my-hashicorp-vault/mysecret
{
  "firstKey": "aValue",
  "secondKey": "anotherValue",
  "thirdKey": "yetAnotherDistinctValue"
}

Notice that the name of the secret (mysecret) is not repeated in the result.

TLS Server verification

The fields skipVerify, tlsServerName, caCert, caPath, and caPem control if and how Dapr verifies the vault server’s certificate while connecting using TLS/HTTPS.

Inline CA PEM caPem

The caPem field value should be the contents of the PEM CA certificate you want to use. Since PEM certificates span multiple lines, defining that value might seem challenging at first, but YAML allows for a few ways of defining multiline values.

Below is one way to define a caPem field.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: vault
spec:
  type: secretstores.hashicorp.vault
  version: v1
  metadata:
  - name: vaultAddr
    value: https://127.0.0.1:8200
  - name: caPem
    value: |-
          -----BEGIN CERTIFICATE-----
          << the rest of your PEM file content goes here, indented appropriately >>
          -----END CERTIFICATE-----

4.7 - HuaweiCloud Cloud Secret Management Service (CSMS)

Detailed information on the HuaweiCloud Cloud Secret Management Service (CSMS) - secret store component

Component format

To set up the HuaweiCloud Cloud Secret Management Service (CSMS) secret store, create a component of type secretstores.huaweicloud.csms. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: huaweicloudcsms
spec:
  type: secretstores.huaweicloud.csms
  version: v1
  metadata:
  - name: region
    value: "[huaweicloud_region]"
  - name: accessKey 
    value: "[huaweicloud_access_key]"
  - name: secretAccessKey
    value: "[huaweicloud_secret_access_key]"

Spec metadata fields

FieldRequiredDetailsExample
regionYThe specific region the HuaweiCloud CSMS instance is deployed in"cn-north-4"
accessKeyYThe HuaweiCloud Access Key to access this resource"accessKey"
secretAccessKeyYThe HuaweiCloud Secret Access Key to access this resource"secretAccessKey"

Optional per-request metadata properties

The following optional query parameters can be provided when retrieving secrets from this secret store:

Query ParameterDescription
metadata.version_idVersion for the given secret key.

Setup HuaweiCloud Cloud Secret Management Service (CSMS) instance

Set up HuaweiCloud Cloud Secret Management Service (CSMS) using the HuaweiCloud documentation: https://support.huaweicloud.com/intl/en-us/usermanual-dew/dew_01_9993.html.

4.8 - Kubernetes secrets

Detailed information on the Kubernetes secret store component

Default Kubernetes secret store component

When Dapr is deployed to a Kubernetes cluster, a secret store with the name kubernetes is automatically provisioned. This pre-provisioned secret store allows you to use the native Kubernetes secret store with no need to author, deploy or maintain a component configuration file for the secret store and is useful for developers looking to simply access secrets stored natively in a Kubernetes cluster.

A custom component definition file for a Kubernetes secret store can still be configured (See below for details). Using a custom definition decouples referencing the secret store in your code from the hosting platform as the store name is not fixed and can be customized, keeping your code more generic and portable. Additionally, by explicitly defining a Kubernetes secret store component you can connect to a Kubernetes secret store from a local Dapr self-hosted installation. This requires a valid kubeconfig file.

Create a custom Kubernetes secret store component

To set up a Kubernetes secret store, create a component of type secretstores.kubernetes. See this guide on how to create and apply a secretstore configuration. See this guide on referencing secrets to retrieve and use the secret with Dapr components.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mycustomsecretstore
spec:
  type: secretstores.kubernetes
  version: v1
  metadata: []

Spec metadata fields

FieldRequiredDetailsExample
defaultNamespaceNDefault namespace to retrieve secrets from. If unset, the namespace must be specified in each request metadata or via environment variable NAMESPACE"default-ns"
kubeconfigPathNThe path to the kubeconfig file. If not specified, the store uses the default in-cluster config value"/path/to/kubeconfig"

Optional per-request metadata properties

The following optional query parameters can be provided to Kubernetes secret store component:

Query ParameterDescription
metadata.namespaceThe namespace of the secret. If not specified, the namespace of the pod is used.
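
For example, assuming a secret named mysecret exists in a namespace called production and the component is named mycustomsecretstore as above (both names are placeholders):

curl "http://localhost:<daprPort>/v1.0/secrets/mycustomsecretstore/mysecret?metadata.namespace=production"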

4.9 - Local environment variables (for Development)

Detailed information on the local environment secret store component

This Dapr secret store component uses locally defined environment variables and does not use authentication.

Component format

To set up the local environment variables secret store, create a component of type secretstores.local.env. Create a file with the following content in your ./components directory:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: envvar-secret-store
spec:
  type: secretstores.local.env
  version: v1
  metadata:
    # - name: prefix
    #   value: "MYAPP_"

Spec metadata fields

FieldRequiredDetailsExample
prefixNIf set, limits operations to environment variables with the given prefix. The prefix is removed from the returned secrets’ names.
The matching is case-insensitive on Windows and case-sensitive on all other operating systems.
"MYAPP_"

Notes

For security reasons, this component cannot be used to access these environment variables:

  • APP_API_TOKEN
  • Any variable whose name begins with the DAPR_ prefix
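
As a quick sketch of how this store behaves, assume an environment variable named MY_APP_PASSWORD is exported before the Dapr sidecar starts and no prefix is configured; the variable name, value, app ID, and port below are placeholders:

export MY_APP_PASSWORD="s3cret"
dapr run --app-id myapp --dapr-http-port 3500 --resources-path ./components

# In another terminal:
curl http://localhost:3500/v1.0/secrets/envvar-secret-store/MY_APP_PASSWORD
# => {"MY_APP_PASSWORD":"s3cret"}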

4.10 - Local file (for Development)

Detailed information on the local file secret store component

This Dapr secret store component reads plain text JSON from a given file and does not use authentication.

Component format

To set up the local file-based secret store, create a component of type secretstores.local.file. Create a file with the following content in your ./components directory:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: local-secret-store
spec:
  type: secretstores.local.file
  version: v1
  metadata:
  - name: secretsFile
    value: [path to the JSON file]
  - name: nestedSeparator
    value: ":"
  - name: multiValued
    value: "false"

Spec metadata fields

FieldRequiredDetailsExample
secretsFileYThe path to the file where secrets are stored"path/to/file.json"
nestedSeparatorNUsed by the store when flattening the JSON hierarchy to a map. Defaults to ":"":"
multiValuedN"true" sets the multipleKeyValuesPerSecret behavior. Allows one level of multi-valued key/value pairs before flattening JSON hierarchy. Defaults to "false""true"

Setup JSON file to hold the secrets

Given the following JSON loaded from secretsFile:

{
    "redisPassword": "your redis password",
    "connectionStrings": {
        "sql": "your sql connection string",
        "mysql": "your mysql connection string"
    }
}

The flag multiValued determines whether the secret store presents a name/value behavior or a multiple key-value per secret behavior.

Name/Value semantics

If multiValued is false, the store loads the JSON file and creates a map with the following key-value pairs:

flattened keyvalue
“redisPassword”"your redis password"
“connectionStrings:sql”"your sql connection string"
“connectionStrings:mysql”"your mysql connection string"

If the multiValued setting is set to true, invoking a GET request on the key connectionStrings results in a 500 HTTP response and an error message. For example:

$ curl http://localhost:3501/v1.0/secrets/local-secret-store/connectionStrings
{
  "errorCode": "ERR_SECRET_GET",
  "message": "failed getting secret with key connectionStrings from secret store local-secret-store: secret connectionStrings not found"
}

This error is expected, since the connectionStrings key is not present, per the table above.

However, requesting the flattened key connectionStrings:sql results in a successful response with the following:

$ curl http://localhost:3501/v1.0/secrets/local-secret-store/connectionStrings:sql
{
  "connectionStrings:sql": "your sql connection string"
}

Multiple key-values behavior

If multiValued is true, the secret store enables multiple key-value per secret behavior:

  • Nested structures after the top level will be flattened.
  • It parses the same JSON file into this table:
keyvalue
“redisPassword”"your redis password"
“connectionStrings”{"mysql":"your mysql connection string","sql":"your sql connection string"}

Notice that in the above table:

  • connectionStrings is now a JSON object with two keys: mysql and sql.
  • The connectionStrings:sql and connectionStrings:mysql flattened keys from the table mapped for name/value semantics are missing.

Invoking a GET request on the key connectionStrings now results in a successful HTTP response similar to the following:

$ curl http://localhost:3501/v1.0/secrets/local-secret-store/connectionStrings
{
  "sql": "your sql connection string",
  "mysql": "your mysql connection string"
}

Meanwhile, requesting the flattened key connectionStrings:sql now returns a 500 HTTP error response with the following:

{
  "errorCode": "ERR_SECRET_GET",
  "message": "failed getting secret with key connectionStrings:sql from secret store local-secret-store: secret connectionStrings:sql not found"
}

Handling deeper nesting levels

Notice that, as stated in the spec metadata fields table, multiValued only handles a single nesting level.

Let’s say you have a local file secret store with multiValued enabled, pointing to a secretsFile with the following JSON content:

{
    "redisPassword": "your redis password",
    "connectionStrings": {
        "mysql": {
          "username": "your mysql username",
          "password": "your mysql password"
        }
    }
}

The contents of the mysql key under connectionStrings have a nesting level greater than 1 and are flattened.

Here is how it would look in memory:

keyvalue
“redisPassword”"your redis password"
“connectionStrings”{ "mysql:username": "your mysql username", "mysql:password": "your mysql password" }

Once again, requesting the key connectionStrings results in a successful HTTP response, but its contents, as shown in the table above, are flattened:

$ curl http://localhost:3501/v1.0/secrets/local-secret-store/connectionStrings
{
  "mysql:username": "your mysql username",
  "mysql:password": "your mysql password"
}

This is useful in order to mimic secret stores like Vault or Kubernetes that return multiple key/value pairs per secret key.

5 - Configuration store component specs

The supported configuration stores that interface with Dapr

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

Generic

ComponentStatusComponent versionSince runtime version
PostgreSQLStablev11.11
RedisStablev11.11

Microsoft Azure

ComponentStatusComponent versionSince runtime version
Azure App ConfigurationAlphav11.9

5.1 - Azure App Configuration

Detailed information on the Azure App Configuration configuration store component

Component format

To set up an Azure App Configuration configuration store, create a component of type configuration.azure.appconfig.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: configuration.azure.appconfig
  version: v1
  metadata:
  - name: host # host should be used when Azure Authentication mechanism is used.
    value: <HOST>
  - name: connectionString # connectionString should not be used when Azure Authentication mechanism is used.
    value: <CONNECTIONSTRING>
  - name: maxRetries
    value: # Optional
  - name: retryDelay
    value: # Optional
  - name: maxRetryDelay
    value: # Optional
  - name: azureEnvironment # Optional, defaults to AZUREPUBLICCLOUD
    value: "AZUREPUBLICCLOUD"
  # See authentication section below for all options
  - name: azureTenantId # Optional
    value: "[your_service_principal_tenant_id]"
  - name: azureClientId # Optional
    value: "[your_service_principal_app_id]"
  - name: azureCertificateFile # Optional
    value : "[pfx_certificate_file_fully_qualified_local_path]"
  - name: subscribePollInterval # Optional
    value: #Optional [Expected format example - 24h]

Spec metadata fields

FieldRequiredDetailsExample
connectionStringY*Connection String for the Azure App Configuration instance. No Default. Can be secretKeyRef to use a secret reference. *Mutually exclusive with host field. *Not to be used when Azure Authentication is usedEndpoint=https://foo.azconfig.io;Id=osOX-l9-s0:sig;Secret=00000000000000000000000000000000000000000000
hostN*Endpoint for the Azure App Configuration instance. No Default. *Mutually exclusive with connectionString field. *To be used when Azure Authentication is usedhttps://dapr.azconfig.io
maxRetriesNMaximum number of retries before giving up. Defaults to 35, 10
retryDelayNRetryDelay specifies the initial amount of delay to use before retrying an operation. The delay increases exponentially with each retry up to the maximum specified by MaxRetryDelay. Defaults to 4 seconds; "-1" disables delay between retries.4s
maxRetryDelayNMaxRetryDelay specifies the maximum delay allowed before retrying an operation. Typically the value is greater than or equal to the value specified in RetryDelay. Defaults to 120 seconds; "-1" disables the limit120s
subscribePollIntervalNsubscribePollInterval specifies the poll interval in nanoseconds for polling the subscribed keys for any changes. This will be updated in the future to Go Time format. Default polling interval is set to 24 hours.24h

Note: either host or connectionString must be specified.

Authenticating with Connection String

Access an App Configuration instance using its connection string, which is available in the Azure portal. Since connection strings contain credential information, you should treat them as secrets and use a secret store.

Authenticating with Microsoft Entra ID

The Azure App Configuration configuration store component also supports authentication with Microsoft Entra ID. Before you enable this component:

  • Read the Authenticating to Azure document.
  • Create a Microsoft Entra ID application (also called a Service Principal).
  • Alternatively, create a managed identity for your application platform.

Set up Azure App Configuration

You need an Azure subscription to set up Azure App Configuration.

  1. Start the Azure App Configuration creation flow. Log in if necessary.

  2. Click Create to kick off deployment of your Azure App Configuration instance.

  3. Once your instance is created, grab the Host (Endpoint) or your Connection string:

    • For the Host: navigate to the resource’s Overview and copy Endpoint.
    • For your connection string: navigate to Settings > Access Keys and copy your Connection string.
  4. Add your host or your connection string to an azappconfig.yaml file that Dapr can apply.

    Set the host key to [Endpoint] or the connectionString key to the values you saved earlier.

Azure App Configuration request metadata

In Azure App Configuration, you can use labels to define different values for the same key. For example, you can define a single key with different values for development and production. You can specify which label to load when connecting to App Configuration.

The Azure App Configuration store component supports the following optional label metadata property:

label: The label of the configuration to retrieve. If not present, the configuration store returns the configuration for the specified key and a null label.

The label can be populated using query parameters in the request URL:

curl "http://localhost:<daprPort>/v1.0/configuration/<store-name>?key=<key name>&metadata.label=<label value>"

5.2 - PostgreSQL

Detailed information on the PostgreSQL configuration store component

Component format

To set up a PostgreSQL configuration store, create a component of type configuration.postgresql.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: configuration.postgresql
  version: v1
  metadata:
    # Connection string
    - name: connectionString
      value: "host=localhost user=postgres password=example port=5432 connect_timeout=10 database=config"
    # Name of the table which holds configuration information
    - name: table
      value: "[your_configuration_table_name]" 
    # Individual connection parameters - can be used instead to override connectionString parameters
    #- name: host
    #  value: "localhost"
    #- name: hostaddr
    #  value: "127.0.0.1"
    #- name: port
    #  value: "5432"
    #- name: database
    #  value: "my_db"
    #- name: user
    #  value: "postgres"
    #- name: password
    #  value: "example"
    #- name: sslRootCert
    #  value: "/path/to/ca.crt"
    # Timeout for database operations, in seconds (optional)
    #- name: timeoutInSeconds
    #  value: 20
    # Name of the table where to store the state (optional)
    #- name: tableName
    #  value: "state"
    # Name of the table where to store metadata used by Dapr (optional)
    #- name: metadataTableName
    #  value: "dapr_metadata"
    # Cleanup interval in seconds, to remove expired rows (optional)
    #- name: cleanupIntervalInSeconds
    #  value: 3600
    # Maximum number of connections pooled by this component (optional)
    #- name: maxConns
    #  value: 0
    # Max idle time for connections before they're closed (optional)
    #- name: connectionMaxIdleTime
    #  value: 0
    # Controls the default mode for executing queries. (optional)
    #- name: queryExecMode
    #  value: ""
    # Uncomment this if you wish to use PostgreSQL as a state store for actors (optional)
    #- name: actorStateStore
    #  value: "true"

Spec metadata fields

Authenticate using a connection string

The following metadata options are required to authenticate using a PostgreSQL connection string.

FieldRequiredDetailsExample
connectionStringYThe connection string for the PostgreSQL database. See the PostgreSQL documentation on database connections for information on how to define a connection string."host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"

Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

FieldRequiredDetailsExample
hostYThe host name or IP address of the PostgreSQL server"localhost"
hostaddrNThe IP address of the PostgreSQL server (alternative to host)"127.0.0.1"
portYThe port number of the PostgreSQL server"5432"
databaseYThe name of the database to connect to"my_db"
userYThe PostgreSQL user to connect as"postgres"
passwordYThe password for the PostgreSQL user"example"
sslRootCertNPath to the SSL root certificate file"/path/to/ca.crt"

Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials (“service principal”) and Managed Identity.

FieldRequiredDetailsExample
useAzureADYMust be set to true to enable the component to retrieve access tokens from Microsoft Entra ID."true"
connectionStringYThe connection string for the PostgreSQL database.
This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity; this is often the name of the corresponding principal (e.g. the name of the Microsoft Entra ID application). This connection string should not contain any password.
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require"
azureTenantIdNID of the Microsoft Entra ID tenant"cd4b2887-304c-…"
azureClientIdNClient ID (application ID)"c7dd251f-811f-…"
azureClientSecretNClient secret (application password)"Ecy3X…"

Authenticate using AWS IAM

Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the rds_iam database role. Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided. The AWS authentication token is dynamically rotated before its expiration time.

FieldRequiredDetailsExample
useAWSIAMYMust be set to true to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases."true"
connectionStringYThe connection string for the PostgreSQL database.
This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS.
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
awsRegionNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘region’ instead. The AWS Region where the AWS Relational Database Service is deployed to."us-east-1"
awsAccessKeyNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘accessKey’ instead. AWS access key associated with an IAM account"AKIAIOSFODNN7EXAMPLE"
awsSecretKeyNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘secretKey’ instead. The secret key associated with the access key"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
awsSessionTokenNThis maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use ‘sessionToken’ instead. AWS session token to use. A session token is only required if you are using temporary security credentials."TOKEN"

Other metadata options

FieldRequiredDetailsExample
tableYTable name for configuration information, must be lowercased.configtable
timeoutNTimeout for operations on the database, as a Go duration. Integers are interpreted as number of seconds. Defaults to 20s"30s", 30
maxConnsNMaximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs."4"
connectionMaxIdleTimeNMax idle time before unused connections are automatically closed in the connection pool. By default, there’s no value and this is left to the database driver to choose."5m"
queryExecModeNControls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use exec or simple_protocol."simple_protocol"

Set up PostgreSQL as Configuration Store

  1. Start the PostgreSQL Database

  2. Connect to the PostgreSQL database and set up a configuration table with the following schema:

    FieldDatatypeNullableDetails
    KEYVARCHARNHolds "Key" of the configuration attribute
    VALUEVARCHARNHolds Value of the configuration attribute
    VERSIONVARCHARNHolds version of the configuration attribute
    METADATAJSONYHolds Metadata as JSON
    CREATE TABLE IF NOT EXISTS table_name (
      KEY VARCHAR NOT NULL,
      VALUE VARCHAR NOT NULL,
      VERSION VARCHAR NOT NULL,
      METADATA JSON
    );
    
  3. Create a TRIGGER on configuration table. An example function to create a TRIGGER is as follows:

    CREATE OR REPLACE FUNCTION notify_event() RETURNS TRIGGER AS $$
        DECLARE 
            data json;
            notification json;
    
        BEGIN
    
            IF (TG_OP = 'DELETE') THEN
                data = row_to_json(OLD);
            ELSE
                data = row_to_json(NEW);
            END IF;
    
            notification = json_build_object(
                              'table',TG_TABLE_NAME,
                              'action', TG_OP,
                              'data', data);
            PERFORM pg_notify('config',notification::text);
            RETURN NULL; 
        END;
    $$ LANGUAGE plpgsql;
    
  4. Create the trigger with data encapsulated in the field labeled as data:

    notification = json_build_object(
      'table',TG_TABLE_NAME,
      'action', TG_OP,
      'data', data
    );
    
  5. The channel name passed to pg_notify should be used when subscribing for configuration notifications.

  6. Since this is a generic trigger function, map it to the configuration table:

    CREATE TRIGGER config
    AFTER INSERT OR UPDATE OR DELETE ON configtable
        FOR EACH ROW EXECUTE PROCEDURE notify_event();
    
  7. In the subscribe request, add an additional metadata field with the key pgNotifyChannel and set its value to the same channel name passed to pg_notify. In the example above, it should be set to config, as shown in the sketch below.
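
Putting steps 5 through 7 together, a subscribe request against this store might look like the following sketch; the store name, key, and Dapr port are placeholders, while config matches the channel used by the trigger above:

curl "http://localhost:<daprPort>/v1.0/configuration/<store-name>/subscribe?key=<key name>&metadata.pgNotifyChannel=config"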

5.3 - Redis

Detailed information on the Redis configuration store component

Component format

To set up the Redis configuration store, create a component of type configuration.redis. See this guide on how to create and apply a configuration store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: configuration.redis
  version: v1
  metadata:
  - name: redisHost
    value: <address>:6379
  - name: redisPassword
    value: **************
  - name: useEntraID
    value: "true"
  - name: enableTLS
    value: <bool>

Spec metadata fields

FieldRequiredDetails
redisHostYThe Redis host address
redisPasswordNThe Redis password
redisUsernameNUsername for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and you have created the ACL rule correctly.
enableTLSNIf the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to "false"
clientCertNThe content of the client certificate, used for Redis instances that require client-side certificates. Must be used with clientKey and enableTLS must be set to true. It is recommended to use a secret store as described here
clientKeyNThe content of the client private key, used in conjunction with clientCert for authentication. It is recommended to use a secret store as described here
failoverNProperty to enable failover configuration. Needs sentinelMasterName to be set. Defaults to "false"
sentinelMasterNameNThe Sentinel master name. See Redis Sentinel Documentation
sentinelUsernameNUsername for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled
sentinelPasswordNPassword for Redis Sentinel. Applicable only when “failover” is true, and Redis Sentinel has authentication enabled
redisTypeNThe type of Redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for Redis cluster mode. Defaults to "node".
redisDBNDatabase selected after connecting to Redis. If "redisType" is "cluster", this option is ignored. Defaults to "0".
redisMaxRetriesNMaximum number of times to retry commands before giving up. Default is to not retry failed commands.
redisMinRetryIntervalNMinimum backoff for Redis commands between each retry. Default is "8ms"; "-1" disables backoff.
redisMaxRetryIntervalNMaximum backoff for Redis commands between each retry. Default is "512ms";"-1" disables backoff.
dialTimeoutNDial timeout for establishing new connections. Defaults to "5s".
readTimeoutNTimeout for socket reads. If reached, Redis commands fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout.
writeTimeoutNTimeout for socket writes. If reached, Redis commands fail with a timeout instead of blocking. Default is readTimeout.
poolSizeNMaximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU.
poolTimeoutNAmount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second.
maxConnAgeNConnection age at which the client retires (closes) the connection. Default is to not close aged connections.
minIdleConnsNMinimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0".
idleCheckFrequencyNFrequency of idle checks made by idle connections reaper. Default is "1m". "-1" disables idle connections reaper.
idleTimeoutNAmount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m". "-1" disables idle timeout check.

Setup Redis

Dapr can use any Redis instance: containerized, running on your local dev machine, or a managed cloud service.

A Redis instance is automatically created as a Docker container when you run dapr init

You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.

  1. Install Redis into your cluster. Note that we’re explicitly setting an image tag to get a version greater than 5, which is what Dapr’s pub/sub functionality requires. If you intend to use Redis as just a state store (and not for pub/sub), you do not have to set the image version.

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install redis bitnami/redis --set image.tag=6.2
    
  2. Run kubectl get pods to see the Redis containers now running in your cluster.

  3. Add redis-master:6379 as the redisHost in your redis.yaml file. For example:

        metadata:
        - name: redisHost
          value: redis-master:6379
    
  4. Next, get the Redis password, which is slightly different depending on the OS you’re using:

    • Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which creates a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your redis password in a text file called password.txt. Copy the password and delete the two files.

    • Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the outputted password.

    Add this password as the redisPassword value in your redis.yaml file. For example:

        metadata:
        - name: redisPassword
          value: lhDOkwTlp0
    
  1. Create an Azure Cache for Redis instance using the official Microsoft documentation.

  2. Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.

    • For the Host name:
      • Navigate to the resource’s Overview page.
      • Copy the Host name value.
    • For your access key:
      • Navigate to Settings > Access Keys.
      • Copy and save your key.
  3. Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.

    • If you’re running a sample, add the host and key to the provided redis.yaml.
    • If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
  4. Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.

    Note: In a production-grade application, follow secret management instructions to securely manage your secrets.

  5. Enable EntraID support:

    • Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
    • Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
  6. Set enableTLS to "true" to support TLS.

Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.
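
Once the component is applied, configuration items can be read through the Dapr configuration API. A minimal sketch, where the store name, key, and Dapr port are placeholders:

curl "http://localhost:<daprPort>/v1.0/configuration/<store-name>?key=<key name>"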

6 - Lock component specs

The supported locks that interface with Dapr

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

Generic

ComponentStatusComponent versionSince runtime version
RedisAlphav11.8

6.1 - Redis

Detailed information on the Redis lock component

Component format

To set up the Redis lock, create a component of type lock.redis. See this guide on how to create a lock.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: lock.redis
  version: v1
  metadata:
  - name: redisHost
    value: <HOST>
  - name: redisPassword #Optional.
    value: <PASSWORD>
  - name: useEntraID
    value: <bool> # Optional. Allowed: true, false.
  - name: enableTLS
    value: <bool> # Optional. Allowed: true, false.
  - name: failover
    value: <bool> # Optional. Allowed: true, false.
  - name: sentinelMasterName
    value: <string> # Optional
  - name: maxRetries
    value: # Optional
  - name: maxRetryBackoff
    value: # Optional
  - name: redeliverInterval
    value: # Optional
  - name: processingTimeout
    value: # Optional
  - name: redisType
    value: # Optional
  - name: redisDB
    value: # Optional
  - name: redisMaxRetries
    value: # Optional
  - name: redisMinRetryInterval
    value: # Optional
  - name: redisMaxRetryInterval
    value: # Optional
  - name: dialTimeout
    value: # Optional
  - name: readTimeout
    value: # Optional
  - name: writeTimeout
    value: # Optional
  - name: poolSize
    value: # Optional
  - name: poolTimeout
    value: # Optional
  - name: maxConnAge
    value: # Optional
  - name: minIdleConns
    value: # Optional
  - name: idleCheckFrequency
    value: # Optional
  - name: idleTimeout
    value: # Optional

Spec metadata fields

FieldRequiredDetailsExample
redisHostYConnection string for the redis hostlocalhost:6379, redis-master.default.svc.cluster.local:6379
redisPasswordNPassword for Redis host. No Default. Can be secretKeyRef to use a secret reference"", "KeFg23!"
redisUsernameNUsername for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly."", "default"
useEntraIDNImplements EntraID support for Azure Cache for Redis. Before enabling this:
  • The redisHost name must be specified in the form of "server:port"
  • TLS must be enabled
Learn more about this setting under Create a Redis instance > Azure Cache for Redis
"true", "false"
enableTLSNIf the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to "false""true", "false"
maxRetriesNMaximum number of retries before giving up. Defaults to 35, 10
maxRetryBackoffNMaximum backoff between each retry. Defaults to 2 seconds; "-1" disables backoff.3000000000
failoverNEnable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See Redis Sentinel Documentation. Defaults to "false""true", "false"
sentinelMasterNameNThe sentinel master name. See Redis Sentinel Documentation"mymaster"
redeliverIntervalNThe interval between checking for pending messages for redelivery. Defaults to "60s". "0" disables redelivery."30s"
processingTimeoutNThe amount of time a message must be pending before attempting to redeliver it. Defaults to "15s". "0" disables redelivery."30s"
redisTypeNThe type of redis. There are two valid values, one is "node" for single node mode, the other is "cluster" for redis cluster mode. Defaults to "node"."cluster"
redisDBNDatabase selected after connecting to redis. If "redisType" is "cluster" this option is ignored. Defaults to "0"."0"
redisMaxRetriesNAlias for maxRetries. If both values are set maxRetries is ignored."5"
redisMinRetryIntervalNMinimum backoff for redis commands between each retry. Default is "8ms"; "-1" disables backoff."8ms"
redisMaxRetryIntervalNAlias for maxRetryBackoff. If both values are set maxRetryBackoff is ignored."5s"
dialTimeoutNDial timeout for establishing new connections. Defaults to "5s"."5s"
readTimeoutNTimeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to "3s", "-1" for no timeout."3s"
writeTimeoutNTimeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Default is readTimeout."3s"
poolSizeNMaximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU"20"
poolTimeoutNAmount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second."5s"
maxConnAgeNConnection age at which the client retires (closes) the connection. Default is to not close aged connections."30m"
minIdleConnsNMinimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to "0"."2"
idleCheckFrequencyNFrequency of idle checks made by idle connections reaper. Default is "1m". "-1" disables idle connections reaper."-1"
idleTimeoutNAmount of time after which the client closes idle connections. Should be less than server’s timeout. Default is "5m". "-1" disables idle timeout check."10m"

Setup Redis

Dapr can use any Redis instance: containerized, running on your local dev machine, or a managed cloud service.

A Redis instance is automatically created as a Docker container when you run dapr init

You can use Helm to quickly create a Redis instance in your Kubernetes cluster. This approach requires installing Helm.

  1. Install Redis into your cluster. Note that we’re explicitly setting an image tag to get a version greater than 5, which is what Dapr’s pub/sub functionality requires. If you intend to use Redis as just a state store (and not for pub/sub), you do not have to set the image version.

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install redis bitnami/redis --set image.tag=6.2
    
  2. Run kubectl get pods to see the Redis containers now running in your cluster.

  3. Add redis-master:6379 as the redisHost in your redis.yaml file. For example:

        metadata:
        - name: redisHost
          value: redis-master:6379
    
  4. Next, get the Redis password, which is slightly different depending on the OS you’re using:

    • Windows: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64, which creates a file with your encoded password. Next, run certutil -decode encoded.b64 password.txt, which will put your redis password in a text file called password.txt. Copy the password and delete the two files.

    • Linux/MacOS: Run kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode and copy the outputted password.

    Add this password as the redisPassword value in your redis.yaml file. For example:

        metadata:
        - name: redisPassword
          value: lhDOkwTlp0
    
  1. Create an Azure Cache for Redis instance using the official Microsoft documentation.

  2. Once your instance is created, grab the Host name (FQDN) and your access key from the Azure portal.

    • For the Host name:
      • Navigate to the resource’s Overview page.
      • Copy the Host name value.
    • For your access key:
      • Navigate to Settings > Access Keys.
      • Copy and save your key.
  3. Add your key and your host name to a redis.yaml file that Dapr can apply to your cluster.

    • If you’re running a sample, add the host and key to the provided redis.yaml.
    • If you’re creating a project from the ground up, create a redis.yaml file as specified in the Component format section.
  4. Set the redisHost key to [HOST NAME FROM PREVIOUS STEP]:6379 and the redisPassword key to the key you saved earlier.

    Note: In a production-grade application, follow secret management instructions to securely manage your secrets.

  5. Enable EntraID support:

    • Enable Entra ID authentication on your Azure Redis server. This may take a few minutes.
    • Set useEntraID to "true" to implement EntraID support for Azure Cache for Redis.
  6. Set enableTLS to "true" to support TLS.

Note: useEntraID assumes that either your UserPrincipal (via AzureCLICredential) or the SystemAssigned managed identity has the RedisDataOwner role permission. If a user-assigned identity is used, you need to specify the azureClientID property.
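
For reference, locks are acquired through Dapr's distributed lock API, which is currently in alpha. A hedged sketch, assuming a component named lockstore and the default Dapr HTTP port; the resource ID, owner, and expiry values are placeholders:

curl -X POST http://localhost:3500/v1.0-alpha1/lock/lockstore \
  -H "Content-Type: application/json" \
  -d '{"resourceId": "my-resource", "lockOwner": "my-app-instance-1", "expiryInSeconds": 60}'

# Release the lock when done (same resourceId and lockOwner):
curl -X POST http://localhost:3500/v1.0-alpha1/unlock/lockstore \
  -H "Content-Type: application/json" \
  -d '{"resourceId": "my-resource", "lockOwner": "my-app-instance-1"}'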

7 - Cryptography component specs

The supported cryptography components that interface with Dapr

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

Using the Dapr cryptography engine

ComponentStatusComponent versionSince runtime version
JSON Web Key Sets (JWKS)Alphav11.11
Kubernetes secretsAlphav11.11
Local storageAlphav11.11

Microsoft Azure

ComponentStatusComponent versionSince runtime version
Azure Key VaultAlphav11.11

7.1 - Azure Key Vault

Detailed information on the Azure Key Vault cryptography component

Component format

A Dapr crypto.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: crypto.azure.keyvault
  metadata:
  - name: vaultName
    value: mykeyvault
  # See authentication section below for all options
  - name: azureTenantId
    value: ${{AzureKeyVaultTenantId}}
  - name: azureClientId
    value: ${{AzureKeyVaultServicePrincipalClientId}}
  - name: azureClientSecret
    value: ${{AzureKeyVaultServicePrincipalClientSecret}}

Authenticating with Microsoft Entra ID

The Azure Key Vault cryptography component supports authentication with Microsoft Entra ID only. Before you enable this component:

  1. Read the Authenticating to Azure document.
  2. Create a Microsoft Entra ID application (also called a Service Principal).
  3. Alternatively, create a managed identity for your application platform.

Spec metadata fields

FieldRequiredDetailsExample
vaultNameYAzure Key Vault name"mykeyvault"
Auth metadataYSee Authenticating to Azure for more information
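
When the Dapr sidecar runs on Azure with a managed identity, the client secret can be omitted entirely. The sketch below is a hedged variant that assumes a user-assigned managed identity is attached to the workload; with a system-assigned identity, azureClientId can be dropped as well.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: crypto.azure.keyvault
  metadata:
  - name: vaultName
    value: mykeyvault
  # Assumption: a user-assigned managed identity is used; set its client ID here
  - name: azureClientId
    value: "<CLIENT ID OF THE USER-ASSIGNED IDENTITY>"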

7.2 - JSON Web Key Sets (JWKS)

Detailed information on the JWKS cryptography component

Component format

The purpose of this component is to load keys from a JSON Web Key Set (RFC 7517). These are JSON documents that contain 1 or more keys as JWK (JSON Web Key); they can be public, private, or shared keys.

This component supports loading a JWKS:

  • From a local file; in this case, Dapr watches for changes to the file on disk and reloads it automatically.
  • From an HTTP(S) URL, which is periodically refreshed.
  • By passing the actual JWKS in the jwks metadata property, as a string (optionally, base64-encoded).

A Dapr crypto.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: jwks
spec:
  type: crypto.dapr.jwks
  version: v1
  metadata:
    # Example 1: load JWKS from file
    - name: "jwks"
      value: "fixtures/crypto/jwks/jwks.json"
    # Example 2: load JWKS from an HTTP(S) URL
    # Only "jwks" is required
    - name: "jwks"
      value: "https://example.com/.well-known/jwks.json"
    - name: "requestTimeout"
      value: "30s"
    - name: "minRefreshInterval"
      value: "10m"
    # Example 3: include the actual JWKS
    - name: "jwks"
      value: |
        {
          "keys": [
            {
              "kty": "RSA",
              "use": "sig",
              "kid": "…",
              "n": "…",
              "e": "…",
              "issuer": "https://example.com"
            }
          ]
        }
    # Example 3b: include the JWKS base64-encoded
    - name: "jwks"
      value: |
        eyJrZXlzIjpbeyJ…

Spec metadata fields

FieldRequiredDetailsExample
jwksYPath to the JWKS documentLocal file: "fixtures/crypto/jwks/jwks.json"
HTTP(S) URL: "https://example.com/.well-known/jwks.json"
Embedded JWKS: {"keys": […]} (can be base64-encoded)
requestTimeoutNTimeout for network requests when fetching the JWKS document from an HTTP(S) URL, as a Go duration. Default: “30s”"5s"
minRefreshIntervalNMinimum interval to wait before subsequent refreshes of the JWKS document from an HTTP(S) source, as a Go duration. Default: “10m”"1h"

Cryptography building block

7.3 - Kubernetes Secrets

Detailed information on the Kubernetes secret cryptography component

Component format

The purpose of this component is to load the Kubernetes secret named after the key name.

A Dapr crypto.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: crypto.dapr.kubernetes.secrets
  version: v1
  metadata: []

Spec metadata fields

FieldRequiredDetailsExample
defaultNamespaceNDefault namespace to retrieve secrets from. If unset, the namespace must be specified for each key, as namespace/secretName/key"default-ns"
kubeconfigPathNThe path to the kubeconfig file. If not specified, the component uses the default in-cluster config value"/path/to/kubeconfig"
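
As an illustration, the hypothetical Secret below stores a PEM-encoded key under the data key mykey1. Assuming the component accepts the same key formats as the local storage component (PEM, JWK, or raw key data), it could then be referenced as default-ns/crypto-keys/mykey1, or simply crypto-keys/mykey1 when defaultNamespace is set.

apiVersion: v1
kind: Secret
metadata:
  name: crypto-keys        # hypothetical secret name
  namespace: default-ns
type: Opaque
data:
  # Base64-encoded key material, for example a PEM-encoded private key
  mykey1: <BASE64-ENCODED KEY>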

Cryptography building block

7.4 - Local storage

Detailed information on the local storage cryptography component

Component format

The purpose of this component is to load keys from a local directory.

The component accepts the name of a folder as input and loads keys from there. Each key is in its own file, and when users request a key with a given name, Dapr loads the file with that name.

Supported file formats:

  • PEM with public and private keys (supports: PKCS#1, PKCS#8, PKIX)
  • JSON Web Key (JWK) containing a public, private, or symmetric key
  • Raw key data for symmetric keys

A Dapr crypto.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mycrypto
spec:
  type: crypto.dapr.localstorage
  version: v1
  metadata:
    - name: path
      value: /path/to/folder/

Spec metadata fields

FieldRequiredDetailsExample
pathYFolder containing the keys to be loaded. When loading a key, the name of the key is used as the name of the file in this folder./path/to/folder

Example

Let’s say you’ve set path=/mnt/keys, which contains the following files:

  • /mnt/keys/mykey1.pem
  • /mnt/keys/mykey2

When using the component, you can reference the keys as mykey1.pem and mykey2.

Cryptography building block

8 - Conversation component specs

The supported conversation components that interface with Dapr

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

Amazon Web Services (AWS)

ComponentStatusComponent versionSince runtime version
AWS BedrockAlphav11.15

Generic

ComponentStatusComponent versionSince runtime version
AnthropicAlphav11.15
DeepSeekAlphav11.15
GoogleAIAlphav11.16
HuggingfaceAlphav11.15
MistralAlphav11.15
OllamaAlphav11.16
OpenAIAlphav11.15

8.1 - Anthropic

Detailed information on the Anthropic conversation component

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: anthropic
spec:
  type: conversation.anthropic
  metadata:
  - name: key
    value: "mykey"
  - name: model
    value: claude-3-5-sonnet-20240620
  - name: cacheTTL
    value: 10m

Spec metadata fields

FieldRequiredDetailsExample
keyYAPI key for Anthropic."mykey"
modelNThe Anthropic LLM to use. Defaults to claude-3-5-sonnet-20240620claude-3-5-sonnet-20240620
cacheTTLNA time-to-live value for a prompt cache to expire. Uses Golang duration format.10m
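
Rather than hard-coding the API key, you can load it from a secret store using Dapr’s usual secretKeyRef convention. The sketch below assumes a hypothetical secret named anthropic-secrets containing an apiKey entry.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: anthropic
spec:
  type: conversation.anthropic
  metadata:
  - name: key
    secretKeyRef:
      name: anthropic-secrets # hypothetical secret
      key: apiKey
  - name: model
    value: claude-3-5-sonnet-20240620
auth:
  secretStore: <SECRET_STORE_NAME> # on Kubernetes, omit to use Kubernetes secrets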

8.2 - AWS Bedrock

Detailed information on the AWS Bedrock conversation component

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: awsbedrock
spec:
  type: conversation.aws.bedrock
  metadata:
  - name: endpoint
    value: "http://localhost:4566"
  - name: model
    value: amazon.titan-text-express-v1
  - name: cacheTTL
    value: 10m

Spec metadata fields

FieldRequiredDetailsExample
endpointNAWS endpoint for the component to use and connect to emulators. Not recommended for production AWS use.http://localhost:4566
modelNThe LLM to use. Defaults to Bedrock’s default provider model from Amazon.amazon.titan-text-express-v1
cacheTTLNA time-to-live value for a prompt cache to expire. Uses Golang duration format.10m

Authenticating AWS

Instead of using a key parameter, AWS Bedrock authenticates using Dapr’s standard method of IAM or static credentials. Learn more about authenticating with AWS.
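
As a rough sketch only, static credentials are typically supplied through the common AWS metadata fields used by other Dapr AWS components; treat the field names below as assumptions and prefer IAM roles where possible.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: awsbedrock
spec:
  type: conversation.aws.bedrock
  metadata:
  - name: region    # assumed common Dapr AWS auth field
    value: "us-east-1"
  - name: accessKey # assumed; omit when relying on IAM roles
    value: "<AWS ACCESS KEY ID>"
  - name: secretKey # assumed; omit when relying on IAM roles
    value: "<AWS SECRET ACCESS KEY>"
  - name: model
    value: amazon.titan-text-express-v1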

8.3 - DeepSeek

Detailed information on the DeepSeek conversation component

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: deepseek
spec:
  type: conversation.deepseek
  metadata:
  - name: key
    value: mykey
  - name: maxTokens
    value: 2048

Spec metadata fields

FieldRequiredDetailsExample
keyYAPI key for DeepSeek.mykey
maxTokensNThe maximum number of tokens for each request.2048

8.4 - Local Testing

Detailed information on the echo conversation component used for local testing

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: echo
spec:
  type: conversation.echo
  version: v1

8.5 - GoogleAI

Detailed information on the GoogleAI conversation component

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: googleai
spec:
  type: conversation.googleai
  metadata:
  - name: key
    value: mykey
  - name: model
    value: gemini-1.5-flash
  - name: cacheTTL
    value: 10m

Spec metadata fields

FieldRequiredDetailsExample
keyYAPI key for GoogleAI.mykey
modelNThe GoogleAI LLM to use. Defaults to gemini-1.5-flash.gemini-2.0-flash
cacheTTLNA time-to-live value for a prompt cache to expire. Uses Golang duration format.10m

8.6 - Huggingface

Detailed information on the Huggingface conversation component

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: huggingface
spec:
  type: conversation.huggingface
  metadata:
  - name: key
    value: mykey
  - name: model
    value: meta-llama/Meta-Llama-3-8B
  - name: cacheTTL
    value: 10m

Spec metadata fields

FieldRequiredDetailsExample
keyYAPI key for Huggingface.mykey
modelNThe Huggingface LLM to use. Defaults to meta-llama/Meta-Llama-3-8B.meta-llama/Meta-Llama-3-8B
cacheTTLNA time-to-live value for a prompt cache to expire. Uses Golang duration format.10m

8.7 - Mistral

Detailed information on the Mistral conversation component

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mistral
spec:
  type: conversation.mistral
  metadata:
  - name: key
    value: mykey
  - name: model
    value: open-mistral-7b
  - name: cacheTTL
    value: 10m

Spec metadata fields

FieldRequiredDetailsExample
keyYAPI key for Mistral.mykey
modelNThe Mistral LLM to use. Defaults to open-mistral-7b.open-mistral-7b
cacheTTLNA time-to-live value for a prompt cache to expire. Uses Golang duration format.10m

8.8 - Ollama

Detailed information on the Ollama conversation component

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ollama
spec:
  type: conversation.ollama
  metadata:
  - name: model
    value: llama3.2:latest
  - name: cacheTTL
    value: 10m

Spec metadata fields

FieldRequiredDetailsExample
modelNThe Ollama LLM to use. Defaults to llama3.2:latest.phi4:latest
cacheTTLNA time-to-live value for a prompt cache to expire. Uses Golang duration format.10m

8.9 - OpenAI

Detailed information on the OpenAI conversation component

Component format

A Dapr conversation.yaml component file has the following structure:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: openai
spec:
  type: conversation.openai
  metadata:
  - name: key
    value: mykey
  - name: model
    value: gpt-4-turbo
  - name: endpoint
    value: 'https://api.openai.com/v1'
  - name: cacheTTL
    value: 10m
  # - name: apiType # Optional
  #   value: `azure`
  # - name: apiVersion # Optional
  #   value: '2025-01-01-preview'

Spec metadata fields

FieldRequiredDetailsExample
keyYAPI key for OpenAI.mykey
modelNThe OpenAI LLM to use. Defaults to gpt-4-turbo.gpt-4-turbo
endpointNCustom API endpoint URL for OpenAI API-compatible services. If not specified, the default OpenAI API endpoint is used. Required when apiType is set to azure.https://api.openai.com/v1, https://example.openai.azure.com/
cacheTTLNA time-to-live value for a prompt cache to expire. Uses Golang duration format.10m
apiTypeNSpecifies the API provider type. Required when using a provider that does not follow the default OpenAI API endpoint conventions.azure
apiVersionNThe API version to use. Required when the apiType is set to azure.2025-04-01-preview
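
For instance, pointing the component at an Azure OpenAI resource combines the endpoint, apiType, and apiVersion fields from the table above. The resource URL and model deployment name below are placeholders.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: openai
spec:
  type: conversation.openai
  metadata:
  - name: key
    value: "<AZURE OPENAI API KEY>"
  - name: model
    value: "<YOUR MODEL DEPLOYMENT NAME>"
  - name: endpoint
    value: "https://example.openai.azure.com/"
  - name: apiType
    value: "azure"
  - name: apiVersion
    value: "2025-01-01-preview"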

9 - Name resolution provider component specs

The supported name resolution providers to enable Dapr service invocation

The following components provide name resolution for the service invocation building block.

Name resolution components are configured via the Dapr Configuration.

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

Generic

ComponentStatusComponent versionSince runtime version
HashiCorp ConsulAlphav11.2
SQLiteAlphav11.13

Kubernetes

ComponentStatusComponent versionSince runtime version
KubernetesStablev11.0

Self-Hosted

ComponentStatusComponent versionSince runtime version
mDNSStablev11.0

9.1 - HashiCorp Consul

Detailed information on the HashiCorp Consul name resolution component

Configuration format

HashiCorp Consul is set up within the Dapr Configuration.

Within the config, add a nameResolution spec and set the component field to "consul".

If you are using the Dapr sidecar to register your service to Consul then you will need the following configuration:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "consul"
    configuration:
      selfRegister: true

If Consul service registration is managed externally from Dapr, you need to ensure that the Dapr-to-Dapr internal gRPC port is added to the service metadata under DAPR_PORT (this key is configurable) and that the Consul service ID matches the Dapr app ID. You can then omit selfRegister from the config above.

Behaviour

On init, the Consul component either validates the connection to the configured (or default) agent or registers the service if configured to do so. The name resolution interface does not cater for an “on shutdown” pattern, so consider this when using Dapr to register services to Consul, as it does not deregister services.

The component resolves target apps by filtering healthy services and looks for a DAPR_PORT in the metadata (key is configurable) in order to retrieve the Dapr sidecar port. Consul service.meta is used over service.port so as to not interfere with existing Consul estates.

Spec configuration fields

The configuration spec is fixed to v1.3.0 of the Consul API

FieldRequiredTypeDetailsExamples
ClientN*api.ConfigConfigures client connection to the Consul agent. If blank it will use the sdk defaults, which in this case is just an address of 127.0.0.1:850010.0.4.4:8500
QueryOptionsN*api.QueryOptionsConfigures query used for resolving healthy services, if blank it will default to UseCache:trueUseCache: false, Datacenter: "myDC"
ChecksN[]*api.AgentServiceCheckConfigures health checks if/when registering. If blank it will default to a single health check on the Dapr sidecar health endpointSee sample configs
TagsN[]stringConfigures any tags to include if/when registering services- "dapr"
MetaNmap[string]stringConfigures any additional metadata to include if/when registering servicesDAPR_METRICS_PORT: "${DAPR_METRICS_PORT}"
DaprPortMetaKeyNstringThe key used for getting the Dapr sidecar port from Consul service metadata during service resolution, it will also be used to set the Dapr sidecar port in metadata during registration. If blank it will default to DAPR_PORT"DAPR_TO_DAPR_PORT"
SelfRegisterNboolControls if Dapr will register the service to Consul. The name resolution interface does not cater for an “on shutdown” pattern so please consider this if using Dapr to register services to Consul as it will not deregister services. If blank it will default to falsetrue
AdvancedRegistrationN*api.AgentServiceRegistrationGives full control of service registration through configuration. If configured the component will ignore any configuration of Checks, Tags, Meta and SelfRegister.See sample configs

Sample configurations

Basic

The minimum configuration needed is the following:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "consul"

Registration with additional customizations

By enabling SelfRegister, it is then possible to customize the checks, tags, and meta:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "consul"
    configuration:
      client:
        address: "127.0.0.1:8500"
      selfRegister: true
      checks:
        - name: "Dapr Health Status"
          checkID: "daprHealth:${APP_ID}"
          interval: "15s"
          http: "http://${HOST_ADDRESS}:${DAPR_HTTP_PORT}/v1.0/healthz"
        - name: "Service Health Status"
          checkID: "serviceHealth:${APP_ID}"
          interval: "15s"
          http: "http://${HOST_ADDRESS}:${APP_PORT}/health"
      tags:
        - "dapr"
        - "v1"
        - "${OTHER_ENV_VARIABLE}"
      meta:
        DAPR_METRICS_PORT: "${DAPR_METRICS_PORT}"
        DAPR_PROFILE_PORT: "${DAPR_PROFILE_PORT}"
      daprPortMetaKey: "DAPR_PORT"
      queryOptions:
        useCache: true
        filter: "Checks.ServiceTags contains dapr"

Advanced registration

Configuring the advanced registration gives you full control over setting all the Consul properties possible when registering.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "consul"
    configuration:
      client:
          address: "127.0.0.1:8500"
      selfRegister: false
      queryOptions:
        useCache: true
      daprPortMetaKey: "DAPR_PORT"
      advancedRegistration:
        name: "${APP_ID}"
        port: ${APP_PORT}
        address: "${HOST_ADDRESS}"
        check:
          name: "Dapr Health Status"
          checkID: "daprHealth:${APP_ID}"
          interval: "15s"
          http: "http://${HOST_ADDRESS}:${DAPR_HTTP_PORT}/v1.0/healthz"
        meta:
          DAPR_METRICS_PORT: "${DAPR_METRICS_PORT}"
          DAPR_PROFILE_PORT: "${DAPR_PROFILE_PORT}"
        tags:
          - "dapr"

Setup HashiCorp Consul

HashiCorp offers in-depth guides on how to set up Consul for different hosting models. Check out the self-hosted guide here.

HashiCorp offers in-depth guides on how to set up Consul for different hosting models. Check out the Kubernetes guide here.

9.2 - Kubernetes DNS

Detailed information on the Kubernetes DNS name resolution component

Configuration format

Generally, Kubernetes DNS name resolution is configured automatically in Kubernetes mode by Dapr. There is no configuration needed to use Kubernetes DNS as your name resolution provider unless some overrides are necessary for the Kubernetes name resolution component.

In the scenario that an override is required, within a Dapr Configuration CRD, add a nameResolution spec and set the component field to "kubernetes". The other configuration fields can be set as needed in a configuration map, as seen below.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "kubernetes"
    configuration:
      clusterDomain: "cluster.local"  # Mutually exclusive with the template field
      template: "{{.ID}}-{{.Data.region}}.internal:{{.Port}}" # Mutually exclusive with the clusterDomain field

Behaviour

The component resolves target apps by using the Kubernetes cluster’s DNS provider. You can learn more in the Kubernetes docs.

Spec configuration fields

FieldRequiredTypeDetailsExamples
clusterDomainNstringThe cluster domain to be used for resolved addresses. This field is mutually exclusive with the template field.cluster.local
templateNstringA template string to be parsed when addresses are resolved using text/template. The template will be populated by the fields in the ResolveRequest struct. This field is mutually exclusive with the clusterDomain field.{{.ID}}-{{.Data.region}}.{{.Namespace}}.internal:{{.Port}}

9.3 - mDNS

Detailed information on the mDNS name resolution component

Configuration format

Multicast DNS (mDNS) is configured automatically in self-hosted mode by Dapr. There is no configuration needed to use mDNS as your name resolution provider.

Behaviour

The component resolves target apps by using the host system’s mDNS service. You can learn more about mDNS here.

Troubleshooting

In some cloud provider virtual networks, such as Microsoft Azure, mDNS is not available. Use an alternate provider such as HashiCorp Consul instead.

On some enterprise-managed systems, mDNS may be disabled on macOS if a network filter/proxy is configured. Check with your IT department if mDNS is disabled and you are unable to use service invocation locally.

Spec configuration fields

Not applicable, as mDNS is configured by Dapr when running in self-hosted mode.

9.4 - SQLite

Detailed information on the SQLite name resolution component

As an alternative to mDNS, the SQLite name resolution component can be used for running Dapr on single-node environments and for local development scenarios. Dapr sidecars that are part of the cluster store their information in a SQLite database on the local machine.

Configuration format

Name resolution is configured via the Dapr Configuration.

Within the Configuration YAML, set the spec.nameResolution.component property to "sqlite", then pass configuration options in the spec.nameResolution.configuration dictionary.

This is the basic example of a Configuration resource:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "sqlite"
    version: "v1"
    configuration:
      connectionString: "/home/user/.dapr/nr.db"

Spec configuration fields

When using the SQLite name resolver component, the spec.nameResolution.configuration dictionary contains these options:

FieldRequiredTypeDetailsExamples
connectionStringYstringThe connection string for the SQLite database. Normally, this is the path to a file on disk, relative to the current working directory, or absolute."nr.db" (relative to the working directory), "/home/user/.dapr/nr.db"
updateIntervalNGo duration (as a string)Interval for active Dapr sidecars to update their status in the database, which is used as healthcheck.
Smaller intervals reduce the likelihood of stale data being returned if an application goes offline, but increase the load on the database.
Must be at least 1s greater than timeout. Values with fractions of seconds are truncated (for example, 1500ms becomes 1s). Default: 5s
"2s"
timeoutNGo duration (as a string).
Must be at least 1s.
Timeout for operations on the database. Integers are interpreted as number of seconds. Defaults to 1s"2s", 2
tableNameNstringName of the table where the data is stored. If the table does not exist, the table is created by Dapr. Defaults to hosts."hosts"
metadataTableNameNstringName of the table used by Dapr to store metadata for the component. If the table does not exist, the table is created by Dapr. Defaults to metadata."metadata"
cleanupIntervalNGo duration (as a string)Interval to remove stale records from the database. Default: 1h (1 hour)"10m"
busyTimeoutNGo duration (as a string)Interval to wait in case the SQLite database is currently busy serving another request, before returning a “database busy” error. This is an advanced setting.
busyTimeout controls how locking works in SQLite. With SQLite, writes are exclusive, so every time any app is writing the database is locked. If another app tries to write, it waits up to busyTimeout before returning the “database busy” error. However the timeout setting controls the timeout for the entire operation. For example if the query “hangs”, after the database has acquired the lock (so after busy timeout is cleared), then timeout comes into effect. Default: 800ms (800 milliseconds)
"100ms"
disableWALNboolIf set to true, disables Write-Ahead Logging for journaling of the SQLite database. This is for advanced scenarios onlytrue, false
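
A fuller sketch that sets the optional fields from the table above to (mostly) their documented defaults could look like this; adjust the values to your environment.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "sqlite"
    version: "v1"
    configuration:
      connectionString: "/home/user/.dapr/nr.db"
      updateInterval: "5s"   # must be at least 1s greater than timeout
      timeout: "2s"
      tableName: "hosts"
      metadataTableName: "metadata"
      cleanupInterval: "1h"
      busyTimeout: "800ms"
      disableWAL: false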

10 - Middleware component specs

List of all the supported middleware components that can be injected into Dapr’s processing pipeline.

The following table lists middleware components supported by Dapr. Learn how to customize processing pipelines and set up middleware components.

Table headers to note:

HeaderDescriptionExample
StatusComponent certification statusAlpha
Beta
Stable
Component versionThe version of the componentv1
Since runtime versionThe version of the Dapr runtime when the component status was set or updated1.11

HTTP

ComponentDescriptionStatusComponent version
OAuth2 Authorization Grant flowEnables the OAuth2 Authorization Grant flow on a Web APIAlphav1
OAuth2 Client Credentials Grant flowEnables the OAuth2 Client Credentials Grant flow on a Web APIAlphav1
OpenID ConnectVerifies a Bearer Token using OpenID Connect on a Web APIStablev1
Rate limitRestricts the maximum number of allowed HTTP requests per secondStablev1
Rego/OPA PoliciesApplies Rego/OPA Policies to incoming Dapr HTTP requestsAlphav1
Router AliasUse Router Alias to map arbitrary HTTP routes to valid Dapr API endpointsAlphav1
RouterCheckerUse RouterChecker middleware to block invalid http request routingAlphav1
SentinelUse Sentinel middleware to guarantee the reliability and resiliency of your applicationAlphav1
UppercaseConverts the body of the request to uppercase letters (demo)Stablev1
WasmUse Wasm middleware in your HTTP pipelineAlphav1

10.1 - Bearer

Use bearer middleware to secure HTTP endpoints by verifying bearer tokens

The bearer HTTP middleware verifies a Bearer Token using OpenID Connect on a Web API, without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.

Component format

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: bearer-token
spec:
  type: middleware.http.bearer
  version: v1
  metadata:
    - name: audience
      value: "<your token audience; i.e. the application's client ID>"
    - name: issuer
      value: "<your token issuer, e.g. 'https://accounts.google.com'>"

    # Optional values
    - name: jwksURL
      value: "<JWKS URL, e.g. 'https://accounts.google.com/.well-known/openid-configuration'>"

Spec metadata fields

FieldRequiredDetailsExample
audienceYThe audience expected in the tokens. Usually, this corresponds to the client ID of your application that is created as part of a credential hosted by an OpenID Connect platform.
issuerYThe issuer authority, which is the value expected in the issuer claim in the tokens."https://accounts.google.com"
jwksURLNAddress of the JWKS (JWK Set containing the public keys for verifying tokens). If empty, will try to fetch the URL set in the OpenID Configuration document <issuer>/.well-known/openid-configuration."https://accounts.google.com/.well-known/openid-configuration"

Common values for issuer include:

  • Auth0: https://{domain}, where {domain} is the domain of your Auth0 application
  • Microsoft Entra ID: https://login.microsoftonline.com/{tenant}/v2.0, where {tenant} should be replaced with the tenant ID of your application, as a UUID
  • Google: https://accounts.google.com
  • Salesforce (Force.com): https://login.salesforce.com
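
For example, to verify tokens issued by Microsoft Entra ID, the audience is your application’s client ID and the issuer uses the tenant-specific URL from the list above; the values below are placeholders.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: bearer-token
spec:
  type: middleware.http.bearer
  version: v1
  metadata:
    - name: audience
      value: "<YOUR APPLICATION'S CLIENT ID>"
    - name: issuer
      value: "https://login.microsoftonline.com/<TENANT ID>/v2.0"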

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: bearer-token
      type: middleware.http.bearer

10.2 - OAuth2

Use OAuth2 middleware to secure HTTP endpoints

The OAuth2 HTTP middleware enables the OAuth2 Authorization Code flow on a Web API without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.

Component format

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2
spec:
  type: middleware.http.oauth2
  version: v1
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "https://www.googleapis.com/auth/userinfo.email"
  - name: authURL
    value: "https://accounts.google.com/o/oauth2/v2/auth"
  - name: tokenURL
    value: "https://accounts.google.com/o/oauth2/token"
  - name: redirectURL
    value: "http://dummy.com"
  - name: authHeaderName
    value: "authorization"
  - name: forceHTTPS
    value: "false"

Spec metadata fields

FieldDetailsExample
clientIdThe client ID of your application that is created as part of a credential hosted by an OAuth-enabled platform
clientSecretThe client secret of your application that is created as part of a credential hosted by an OAuth-enabled platform
scopesA list of space-delimited, case-sensitive strings of scopes which are typically used for authorization in the application"https://www.googleapis.com/auth/userinfo.email"
authURLThe endpoint of the OAuth2 authorization server"https://accounts.google.com/o/oauth2/v2/auth"
tokenURLThe endpoint used by the client to obtain an access token by presenting its authorization grant or refresh token"https://accounts.google.com/o/oauth2/token"
redirectURLThe URL of your web application that the authorization server should redirect to once the user has authenticated"https://myapp.com"
authHeaderNameThe authorization header name to forward to your application"authorization"
forceHTTPSIf true, enforces the use of TLS/SSL"true","false"

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2

10.3 - OAuth2 client credentials

Use OAuth2 client credentials middleware to secure HTTP endpoints

The OAuth2 client credentials HTTP middleware enables the OAuth2 Client Credentials flow on a Web API without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.

Component format

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2clientcredentials
spec:
  type: middleware.http.oauth2clientcredentials
  version: v1
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "https://www.googleapis.com/auth/userinfo.email"
  - name: tokenURL
    value: "https://accounts.google.com/o/oauth2/token"
  - name: headerName
    value: "authorization"

Spec metadata fields

FieldDetailsExample
clientIdThe client ID of your application that is created as part of a credential hosted by an OAuth-enabled platform
clientSecretThe client secret of your application that is created as part of a credential hosted by an OAuth-enabled platform
scopesA list of space-delimited, case-sensitive strings of scopes which are typically used for authorization in the application"https://www.googleapis.com/auth/userinfo.email"
tokenURLThe endpoint used by the client to obtain an access token by presenting its authorization grant or refresh token"https://accounts.google.com/o/oauth2/token"
headerNameThe authorization header name to forward to your application"authorization"
endpointParamsQuerySpecifies additional parameters for requests to the token endpointtrue
authStyleOptionally specifies how the endpoint wants the client ID & client secret sent. See the table of possible values below0

Possible values for authStyle

ValueMeaning
1Sends the “client_id” and “client_secret” in the POST body as application/x-www-form-urlencoded parameters.
2Sends the “client_id” and “client_secret” using HTTP Basic Authorization. This is an optional style described in the OAuth2 RFC 6749 section 2.3.1.
0Means to auto-detect which authentication style the provider wants by trying both ways and caching the successful way for the future.
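
As a sketch, forcing HTTP Basic Authorization for the token request adds authStyle to the component definition above; component metadata values are passed as strings.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2clientcredentials
spec:
  type: middleware.http.oauth2clientcredentials
  version: v1
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "https://www.googleapis.com/auth/userinfo.email"
  - name: tokenURL
    value: "https://accounts.google.com/o/oauth2/token"
  - name: headerName
    value: "authorization"
  - name: authStyle
    value: "2" # send client_id and client_secret using HTTP Basic Authorization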

Dapr configuration

To be applied, the middleware must be referenced in a configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: oauth2clientcredentials
      type: middleware.http.oauth2clientcredentials

10.4 - Apply Open Policy Agent (OPA) policies

Use middleware to apply Open Policy Agent (OPA) policies on incoming requests

The Open Policy Agent (OPA) HTTP middleware applies OPA Policies to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.

Component format

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-policy
spec:
  type: middleware.http.opa
  version: v1
  metadata:
    # `includedHeaders` is a comma-separated set of case-insensitive headers to include in the request input.
    # Request headers are not passed to the policy by default. Include to receive incoming request headers in
    # the input
    - name: includedHeaders
      value: "x-my-custom-header, x-jwt-header"

    # `defaultStatus` is the status code to return for denied responses
    - name: defaultStatus
      value: 403

    # `readBody` controls whether the middleware reads the entire request body in-memory and makes it
    # available for policy decisions.
    - name: readBody
      value: "false"

    # `rego` is the Open Policy Agent policy to evaluate. Required.
    # The policy package must be http and the policy must set data.http.allow
    - name: rego
      value: |
        package http

        default allow = true

        # Allow may also be an object and include other properties

        # For example, if you wanted to redirect on a policy failure, you could set the status code to 301 and set the location header on the response:
        allow = {
            "status_code": 301,
            "additional_headers": {
                "location": "https://my.site/authorize"
            }
        } {
            not jwt.payload["my-claim"]
        }

        # You can also allow the request and add additional headers to it:
        allow = {
            "allow": true,
            "additional_headers": {
                "x-my-claim": my_claim
            }
        } {
            my_claim := jwt.payload["my-claim"]
        }
        jwt = { "payload": payload } {
            auth_header := input.request.headers["Authorization"]
            [_, jwt] := split(auth_header, " ")
            [_, payload, _] := io.jwt.decode(jwt)
        }

You can prototype and experiment with policies using the official OPA playground. For example, you can find the example policy above here.

Spec metadata fields

FieldDetailsExample
regoThe Rego policy languageSee above
defaultStatusThe status code to return for denied responses"https://accounts.google.com", "https://login.salesforce.com"
readBodyIf set to true (the default value), the body of each request is read fully in-memory and can be used to make policy decisions. If your policy doesn’t depend on inspecting the request body, consider disabling this (setting to false) for significant performance improvements."false"
includedHeadersA comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input"x-my-custom-header, x-jwt-header"

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: my-policy
      type: middleware.http.opa

Input

This middleware supplies an HTTPRequest as input.

HTTPRequest

The HTTPRequest input contains all the relevant information about an incoming HTTP Request.

type Input struct {
  request HTTPRequest
}

type HTTPRequest struct {
  // The request method (e.g. GET,POST,etc...)
  method string
  // The raw request path (e.g. "/v2/my-path/")
  path string
  // The path broken down into parts for easy consumption (e.g. ["v2", "my-path"])
  path_parts string[]
  // The raw query string (e.g. "?a=1&b=2")
  raw_query string
  // The query broken down into keys and their values
  query map[string][]string
  // The request headers
  // NOTE: By default, no headers are included. You must specify what headers
  // you want to receive via `spec.metadata.includedHeaders` (see above)
  headers map[string]string
  // The request scheme (e.g. http, https)
  scheme string
  // The request body
  body string
}

Result

The policy must set data.http.allow with either a boolean value, or an object value with an allow boolean property. A true allow will allow the request, while a false value will reject the request with the status specified by defaultStatus. The following policy, with defaults, demonstrates a 403 - Forbidden for all requests:

package http

default allow = false

which is the same as:

package http

default allow = {
  "allow": false
}

Changing the rejected response status code

When rejecting a request, you can override the status code that gets returned. For example, if you wanted to return a 401 instead of a 403, you could do the following:

package http

default allow = {
  "allow": false,
  "status_code": 401
}

Adding response headers

To redirect, add headers and set the status_code to the returned result:

package http

default allow = {
  "allow": false,
  "status_code": 301,
  "additional_headers": {
    "Location": "https://my.redirect.site"
  }
}

Adding request headers

You can also set additional headers on the allowed request:

package http

default allow = false

allow = { "allow": true, "additional_headers": { "X-JWT-Payload": payload } } {
  not input.path[0] == "forbidden"
  // Where `jwt` is the result of another rule
  payload := base64.encode(json.marshal(jwt.payload))
}

Result structure

type Result bool
// or
type Result struct {
  // Whether to allow or deny the incoming request
  allow bool
  // Overrides denied response status code; Optional
  status_code int
  // Sets headers on allowed request or denied response; Optional
  additional_headers map[string]string
}

10.5 - Rate limiting

Use rate limit middleware to limit requests per second

The rate limit HTTP middleware allows restricting the maximum number of allowed HTTP requests per second. Rate limiting can protect your application from Denial of Service (DoS) attacks. DoS attacks can be initiated by malicious 3rd parties but also by bugs in your software (a.k.a. a “friendly fire” DoS attack).

Component format

In the following definition, the maximum requests per second are set to 10:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ratelimit
spec:
  type: middleware.http.ratelimit
  version: v1
  metadata:
  - name: maxRequestsPerSecond
    value: 10

Spec metadata fields

FieldDetailsExample
maxRequestsPerSecondThe maximum requests per second by remote IP.
The component looks at the X-Forwarded-For and X-Real-IP headers to determine the caller’s IP.
10

Once the limit is reached, the requests will fail with HTTP Status code 429: Too Many Requests.

Alternatively, the max concurrency setting can be used to rate-limit applications and applies to all traffic, regardless of remote IP, protocol, or path.
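
For reference, a minimal Kubernetes Deployment sketch that applies the max concurrency setting through the dapr.io/app-max-concurrency annotation might look like the following; the app name, image, and port are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
        dapr.io/app-max-concurrency: "10"
    spec:
      containers:
      - name: node
        image: nodeapp:latest # placeholder image
        ports:
        - containerPort: 3000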

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: ratelimit
      type: middleware.http.ratelimit

10.6 - Router alias http request routing

Use router alias middleware to alias arbitrary http routes to Dapr endpoints

The router alias HTTP middleware component allows you to convert arbitrary HTTP routes arriving at Dapr into valid Dapr API endpoints.

Component format

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: routeralias 
spec:
  type: middleware.http.routeralias
  version: v1
  metadata:
    # String containing a JSON-encoded or YAML-encoded dictionary
    # Each key in the dictionary is the incoming path, and the value is the path it's converted to
    - name: "routes"
      value: |
        {
          "/mall/activity/info": "/v1.0/invoke/srv.default/method/mall/activity/info",
          "/hello/activity/{id}/info": "/v1.0/invoke/srv.default/method/hello/activity/info",
          "/hello/activity/{id}/user": "/v1.0/invoke/srv.default/method/hello/activity/user"
        }

In the example above, an incoming HTTP request for /mall/activity/info?id=123 is transformed into /v1.0/invoke/srv.default/method/mall/activity/info?id=123.

Spec metadata fields

FieldDetailsExample
routesString containing a JSON-encoded or YAML-encoded dictionary. Each key in the dictionary is the incoming path, and the value is the path it’s converted to.See example above
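
Since the routes value may also be YAML-encoded, the same mapping from the example above can be written as follows.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: routeralias
spec:
  type: middleware.http.routeralias
  version: v1
  metadata:
    # Same dictionary as above, YAML-encoded instead of JSON-encoded
    - name: "routes"
      value: |
        "/mall/activity/info": "/v1.0/invoke/srv.default/method/mall/activity/info"
        "/hello/activity/{id}/info": "/v1.0/invoke/srv.default/method/hello/activity/info"
        "/hello/activity/{id}/user": "/v1.0/invoke/srv.default/method/hello/activity/user"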

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: routeralias 
      type: middleware.http.routeralias

10.7 - RouterChecker http request routing

Use routerchecker middleware to block invalid http request routing

The RouterChecker HTTP middleware component uses regular expressions (regexp) to check the validity of HTTP request routing, preventing invalid routes from entering the Dapr cluster. In turn, the RouterChecker component filters out bad requests and reduces noise in the telemetry and log data.

Component format

The RouterChecker applies a set of rules to the incoming HTTP request. You define these rules in the component metadata using regular expressions. In the following example, the HTTP request RouterChecker is set to validate all request messages against the ^[A-Za-z0-9/._-]+$ regex.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: routerchecker 
spec:
  type: middleware.http.routerchecker
  version: v1
  metadata:
  - name: rule
    value: "^[A-Za-z0-9/._-]+$"

The above definition would result in the following PASS/FAIL cases:

PASS /v1.0/invoke/demo/method/method
PASS /v1.0/invoke/demo.default/method/method
PASS /v1.0/invoke/demo.default/method/01
PASS /v1.0/invoke/demo.default/method/METHOD
PASS /v1.0/invoke/demo.default/method/user/info
PASS /v1.0/invoke/demo.default/method/user_info
PASS /v1.0/invoke/demo.default/method/user-info

FAIL /v1.0/invoke/demo.default/method/cat password
FAIL /v1.0/invoke/demo.default/method/" AND 4210=4210 limit 1
FAIL /v1.0/invoke/demo.default/method/"$(curl

Spec metadata fields

FieldDetailsExample
rulethe regexp expression to be used by the HTTP request RouterChecker^[A-Za-z0-9/._-]+$

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: routerchecker 
      type: middleware.http.routerchecker

10.8 - Sentinel fault-tolerance middleware component

Use Sentinel middleware to guarantee the reliability and resiliency of your application

Sentinel is a powerful fault-tolerance component that takes “flow” as the breakthrough point and covers multiple fields including flow control, traffic shaping, concurrency limiting, circuit breaking, and adaptive system protection to guarantee the reliability and resiliency of microservices.

The Sentinel HTTP middleware enables Dapr to facilitate Sentinel’s powerful abilities to protect your application. You can refer to Sentinel Wiki for more details on Sentinel.

Component format

In the following definition, the flow control rule limits the POST:/v1.0/invoke/nodeapp/method/neworder resource to a threshold of 10 requests per second:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: sentinel
spec:
  type: middleware.http.sentinel
  version: v1
  metadata:
  - name: appName
    value: "nodeapp"
  - name: logDir
    value: "/var/tmp"
  - name: flowRules
    value: >-
      [
        {
          "resource": "POST:/v1.0/invoke/nodeapp/method/neworder",
          "threshold": 10,
          "tokenCalculateStrategy": 0,
          "controlBehavior": 0
        }
      ]

Spec metadata fields

FieldDetailsExample
appNamethe name of current running servicenodeapp
logDirthe log directory path/var/tmp/sentinel
flowRulesjson array of sentinel flow control rulesflow control rule
circuitBreakerRulesjson array of sentinel circuit breaker rulescircuit breaker rule
hotSpotParamRulesjson array of sentinel hotspot parameter flow control ruleshotspot rule
isolationRulesjson array of sentinel isolation rulesisolation rule
systemRulesjson array of sentinel system rulessystem rule

Once the limit is reached, the request will return HTTP Status code 429: Too Many Requests.

A special note on the resource field in each rule’s definition: in Dapr, it follows this format:

POST/GET/PUT/DELETE:Dapr HTTP API Request Path

All concrete HTTP API information can be found from Dapr API Reference. In the above sample config, the resource field is set to POST:/v1.0/invoke/nodeapp/method/neworder.

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  httpPipeline:
    handlers:
      - name: sentinel
        type: middleware.http.sentinel

10.9 - Uppercase request body

Test your HTTP pipeline is functioning with the uppercase middleware

The uppercase HTTP middleware converts the body of the request to uppercase letters and is used for testing that the pipeline is functioning. It should only be used for local development.

Component format

The following definition converts the content of the request body to uppercase:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: uppercase
spec:
  type: middleware.http.uppercase
  version: v1

This component has no metadata to configure.

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: uppercase
      type: middleware.http.uppercase

10.10 - Wasm

Use Wasm middleware in your HTTP pipeline

WebAssembly is a way to safely run code compiled in other languages. Runtimes execute WebAssembly Modules (Wasm), which are most often binaries with a .wasm extension.

The Wasm HTTP middleware allows you to manipulate an incoming request or serve a response with custom logic compiled to a Wasm binary. In other words, you can extend Dapr using external files that are not pre-compiled into the daprd binary. Dapr embeds wazero to accomplish this without CGO.

Wasm binaries are loaded from a URL. For example, the URL file://rewrite.wasm loads rewrite.wasm from the current directory of the process. On Kubernetes, see How to: Mount Pod volumes to the Dapr sidecar to configure a filesystem mount that can contain Wasm modules. It is also possible to fetch the Wasm binary from a remote URL. In this case, the URL must point exactly to one Wasm binary. For example:

  • http://example.com/rewrite.wasm, or
  • https://example.com/rewrite.wasm.

Component format

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: wasm
spec:
  type: middleware.http.wasm
  version: v1
  metadata:
  - name: url
    value: "file://router.wasm"
  - name: guestConfig
    value: {"environment":"production"}

Spec metadata fields

Minimally, a user must specify a Wasm binary that implements the http-handler ABI. How to compile this is described later.

FieldDetailsRequiredExample
urlThe URL of the resource including the Wasm binary to instantiate. The supported schemes include file://, http://, and https://. The path of a file:// URL is relative to the Dapr process unless it begins with /.truefile://hello.wasm, https://example.com/hello.wasm
guestConfigAn optional configuration passed to Wasm guests. Users can pass an arbitrary string to be parsed by the guest code.falseenvironment=production,{"environment":"production"}

Dapr configuration

To be applied, the middleware must be referenced in configuration. See middleware pipelines.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: wasm
      type: middleware.http.wasm

Note: WebAssembly middleware uses more resources than native middleware. This means you can hit resource constraints faster than with the same logic in native code. For production usage, control max concurrency.

Generating Wasm

This component lets you manipulate an incoming request or serve a response with custom logic compiled using the http-handler Application Binary Interface (ABI). The handle_request function receives an incoming request and can manipulate it or serve a response as necessary.

To compile your Wasm, you must compile source using a http-handler compliant guest SDK such as TinyGo.

Here’s an example in TinyGo:

package main

import (
	"strings"

	"github.com/http-wasm/http-wasm-guest-tinygo/handler"
	"github.com/http-wasm/http-wasm-guest-tinygo/handler/api"
)

func main() {
	handler.HandleRequestFn = handleRequest
}

// handleRequest implements a simple HTTP router.
func handleRequest(req api.Request, resp api.Response) (next bool, reqCtx uint32) {
	// If the URI starts with /host, trim it and dispatch to the next handler.
	if uri := req.GetURI(); strings.HasPrefix(uri, "/host") {
		req.SetURI(uri[5:])
		next = true // proceed to the next handler on the host.
		return
	}

	// Serve a static response
	resp.Headers().Set("Content-Type", "text/plain")
	resp.Body().WriteString("hello")
	return // skip the next handler, as we wrote a response.
}

If using TinyGo, compile as shown below and set the spec metadata field named “url” to the location of the output (for example, file://router.wasm):

tinygo build -o router.wasm -scheduler=none --no-debug -target=wasi router.go

Wasm guestConfig example

Here is an example of how to use guestConfig to pass configurations to Wasm. In Wasm code, you can use the function handler.Host.GetConfig defined in guest SDK to get the configuration. In the following example, the Wasm middleware parses the executed environment from JSON config defined in the component.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: wasm
spec:
  type: middleware.http.wasm
  version: v1
  metadata:
  - name: url
    value: "file://router.wasm"
  - name: guestConfig
    value: {"environment":"production"}

Here’s an example in TinyGo:

package main

import (
	"encoding/json"
	"github.com/http-wasm/http-wasm-guest-tinygo/handler"
	"github.com/http-wasm/http-wasm-guest-tinygo/handler/api"
)

type Config struct {
	Environment string `json:"environment"`
}

func main() {
	// get config bytes, which is the value of guestConfig defined in the component.
	configBytes := handler.Host.GetConfig()
	
	config := Config{}
	json.Unmarshal(configBytes, &config)
	handler.Host.Log(api.LogLevelInfo, "Config environment: "+config.Environment)
}